Is AI Too Gullible to Trust?

New York Times reporter Kevin Roose has had some strange experiences with AI. In the midst of one long-form conversation with Microsoft’s Bing chatbot in 2023, it revealed that it was in love with him and encouraged him to leave his wife. The article he wrote about the experience then led other chatbots—which had scraped the story into their training data—to perceive him as an enemy of AI and express negative opinions of him when asked.

Pretty strange stuff, but what is potentially more worrying is that Mr. Roose was able, with guidance from experts, to significantly remedy his reputation with chatbots through odd tricks such as codes and white text placed into source material that the AI was pulling from. That chatbots can be gamed so readily should be very unsettling, considering how rapidly AI is being woven into just about every industry in the global economy.

“These models hallucinate, they can be manipulated, and it’s hard to trust them,” the article quotes Ali Farhadi, the chief executive of the Allen Institute for Artificial Intelligence, as saying. It goes on to explain that AI Optimization—manipulating AI to favor certain products, services, or dare we say political parties—could become the new SEO, a “cat and mouse game” between the makers of AI tools and those who want to game them. In the meantime, we should keep Ali Farhadi’s statement in mind when using anything related to AI.

artificial intelligence
Source: Kevin Roose | "How Do You Change a Chatbot’s Mind?" | The New York Times | 08/30/2024 | Visit