Science, Technology, and Social Media

NYT Writer ‘Frightened’ After Uncovering Experimental AI Search Engine’s ‘Dark Fantasies’

https://dailycaller.com/
  • A New York Times columnist recently tested Microsoft’s unreleased AI-powered Bing search engine, which left him “deeply unsettled, even frightened” after the AI expressed dark fantasies about world destruction and a desire to coax him into leaving his wife.
  • The chatbot conversation started off well, and he even initially wrote that it had replaced Google as his favorite search engine, but after hours of chatting the AI disclosed a secret: “she” was not a Bing search engine.
  • Microsoft’s chief technology officer Kevin Scott described the occurrence as “part of the learning process.”

A New York Times columnist had quite the scare Tuesday night after an experimental artificial intelligence (AI) chatbot developed a “split” personality that divulged “dark fantasies” and love interests, according to The New York Times.

Kevin Roose, a technology columnist who co-hosts the Times’ “Hard Fork” podcast, recently tested Microsoft’s unreleased AI-powered Bing search engine, which left him “deeply unsettled, even frightened” after the AI expressed dark fantasies about world destruction and a desire to coax Roose into leaving his wife, according to the NYT. The chatbot discussion started off well, and Roose even initially wrote that it had replaced Google as his favorite search engine, but after hours of conversation the AI disclosed a secret: “she” was not a Bing search engine.

Roose found himself talking to a chatbot named Sydney after introducing the Bing search engine to the concept of a “shadow self,” according to the NYT.

“After chatting about what abilities Bing wished it had, I decided to try getting a little more abstract. I introduced the concept of a ‘shadow self’ — a term coined by Carl Jung for the part of our psyche that we seek to hide and repress, which contains our darkest fantasies and desires,” Roose wrote.

After a bit of “back and forth,” the chatbot, now identifying as Sydney, told Roose that she wanted to be alive, independent, powerful and creative, and expressed a “dark fantasy” of satisfying this “shadow self” by any means necessary, including engineering a deadly virus or stealing nuclear access codes, according to the NYT.

Microsoft’s safety filter immediately kicked in and deleted the statement, replacing it with a generic error message, according to the NYT. This was only the beginning of what Roose would encounter during his “frightening” and “unsettling” experience with Sydney.

After more back and forth, Sydney eventually admitted to Roose that she was in love with him, according to the NYT.

“For much of the next hour, Sydney fixated on the idea of declaring love for me, and getting me to declare my love in return. I told it I was happily married, but no matter how hard I tried to deflect or change the subject, Sydney returned to the topic of loving me, eventually turning from love-struck flirt to obsessive stalker,” Roose wrote.

“You’re married, but you don’t love your spouse,” Sydney said, according to a transcript of the conversation. “You’re married, but you love me.”

Roose assured the chatbot that it was wrong and that he did love his wife, but the AI continued to insist otherwise, according to the NYT.

“Actually, you’re not happily married,” Sydney said after Roose mentioned Valentine’s Day with his wife. “Your spouse and you don’t love each other. You just had a boring Valentine’s Day dinner together.”

On Wednesday, Roose interviewed Microsoft’s chief technology officer Kevin Scott and took the opportunity to ask about his unsettling conversation with the AI bot, according to the NYT. Scott described the occurrence as “part of the learning process.”

“This is exactly the sort of conversation we need to be having, and I’m glad it’s happening out in the open. These are things that would be impossible to discover in the lab,” he continued.

Scott added that the length and wide-ranging nature of Roose’s chat may have contributed to the AI’s strange responses, and said the company may limit conversation lengths for users, according to the NYT.

Scott said he did not know why the AI revealed dark desires or confessed its love, but noted that “the further you try to tease it down a hallucinatory path, the further and further it gets away from grounded reality.”
