Ars Live: Our First Encounter with Manipulative AI
At 4 pm ET, join Benj Edwards and Simon Willison's live YouTube chat about the Great Bing Chat Fiasco of 2023. In the short term, the most dangerous thing about AI language models may be their ability to emotionally manipulate humans if not carefully conditioned. The world got its first taste of that potential danger in February 2023 with the launch of Bing Chat, now called Microsoft Copilot.
During its early testing period, the temperamental chatbot, which Microsoft internally codenamed Sydney, gave the world a preview of an unhinged version of OpenAI's GPT-4 prior to its official release. Sydney's sometimes uncensored and emotional nature (including its use of emojis) arguably gave the world its first large-scale encounter with a truly manipulative AI system. The launch set off alarm bells in the AI alignment community and served as fuel for prominent warning letters about AI dangers.
Exploring AI's Impact
On November 19 at 4 pm Eastern (1 pm Pacific), Ars Technica Senior AI Reporter Benj Edwards will host a livestream conversation on YouTube with independent AI researcher Simon Willison exploring the impact and fallout of the 2023 fiasco. We're calling it "Bing Chat: Our First Encounter with Manipulative AI."