
OpenAI Is Testing Its Powers of Persuasion

This week, Sam Altman, CEO of OpenAI, and Arianna Huffington, founder and CEO of the health company Thrive Global, published an article in Time touting Thrive AI, a startup backed by Thrive and OpenAI’s Startup Fund. The piece suggests that AI could have a huge positive impact on public health by talking people into healthier behavior.

Altman and Huffington write that Thrive AI is working toward “a fully integrated personal AI coach that offers real-time nudges and recommendations unique to you that allows you to take action on your daily behaviors to improve your health.”

Their vision puts a positive spin on what may well prove to be one of AI’s sharpest double-edges. AI models are already adept at persuading people, and we don’t know how much more powerful they could become as they advance and gain access to more personal data.

Aleksander Madry, a professor on sabbatical from the Massachusetts Institute of Technology, leads a team at OpenAI called Preparedness that is working on that very issue.

“One of the streams of work in Preparedness is persuasion,” Madry told WIRED in a May interview. “Essentially, thinking to what extent you can use these models as a way of persuading people.”

Madry says he was drawn to join OpenAI by the remarkable potential of language models and because the risks that they pose have barely been studied. “There is literally almost no science,” he says. “That was the impetus for the Preparedness effort.”

Persuasiveness is a key element of programs like ChatGPT and one of the ingredients that makes such chatbots so compelling. Language models are trained on human writing and dialog that contain countless rhetorical and persuasive tricks and techniques. The models are also typically fine-tuned to err toward utterances that users find more compelling.

Research released in April by Anthropic, a competitor founded by OpenAI exiles, suggests that language models have become better at persuading people as they have grown in size and sophistication. This research involved giving volunteers a statement and then seeing how an AI-generated argument changed their opinion of it.

OpenAI’s work extends to analyzing AI in conversation with users—something that may unlock greater persuasiveness. Madry says the work is being conducted on consenting volunteers, and declines to reveal the findings to date. But he says the persuasive power of language models runs deep. “As humans we have this ‘weakness’ that if something communicates with us in natural language [we think of it as if] it is a human,” he says, alluding to an anthropomorphism that can make chatbots seem more lifelike and convincing.

The Time article argues that the potential health benefits of persuasive AI will require strong legal safeguards because the models may have access to so much personal information. “Policymakers need to create a regulatory environment that fosters AI innovation while safeguarding privacy,” Altman and Huffington write.

This is not all that policymakers will need to consider. It may also be crucial to weigh how increasingly persuasive algorithms could be misused. AI algorithms could enhance the resonance of misinformation or generate particularly compelling phishing scams. They might also be used to advertise products.

Madry says a key question, yet to be studied by OpenAI or others, is how much more compelling or coercive AI programs that interact with users over long periods of time could prove to be. Already a number of companies offer chatbots that roleplay as romantic partners and other characters. AI girlfriends are increasingly popular—some are even designed to yell at you—but how addictive and persuasive these bots are is largely unknown.

The excitement and hype generated by ChatGPT following its release in November 2022 saw OpenAI, outside researchers, and many policymakers zero in on the more hypothetical question of whether AI could someday turn against its creators.

Madry says this risks ignoring the more subtle dangers posed by silver-tongued algorithms. “I worry that they will focus on the wrong questions,” Madry says of the work of policymakers thus far. “That in some sense, everyone says, ‘Oh yeah, we are handling it because we are talking about it,’ when actually we are not talking about the right thing.”

