We should have seen ‘seemingly-conscious AI’ coming. It’s past time we do something about it
Blake Lemoine was on to something
Rereading Joseph Weizenbaum
Jeremy Kahn is the AI editor at Fortune, spearheading the publication's coverage of artificial intelligence.
Hello and welcome to Eye on AI. In this edition…a new pro-AI PAC launches with $100 million in backing…Musk sues Apple and OpenAI over their partnership…Meta cuts a big deal with Google…and AI really is eliminating some entry-level jobs.

Last week, my colleague Bea Nolan wrote about Microsoft AI CEO Mustafa Suleyman and his growing concerns about what he has called “seemingly-conscious AI.” In a blog post, Suleyman described this as the danger of AI systems that are not in any way conscious, but which are able “to imitate consciousness in such a convincing way that it would be indistinguishable” from claims a person might make about their own consciousness. Suleyman wonders how we will distinguish “seemingly-conscious AI” (which he calls SCAI) from actually conscious AI. And if many users of these systems can’t tell the difference, is this a form of “psychosis” on the part of the user, or should we begin to think seriously about extending moral rights to AI systems that seem conscious?

Suleyman talks about SCAI as a looming phenomenon. He says it involves technology that exists today and that will be developed in the next two to three years. Current AI models have many of the attributes Suleyman says are required for SCAI, including their conversational abilities, expressions of empathy towards users, memory of past interactions with a user, and some level of planning and tool use. But they still lack a few of the attributes he says are required for SCAI—particularly intrinsic motivation, claims to subjective experience, and a greater ability to set goals and autonomously work to achieve them. Suleyman says that SCAI will only come about if engineers choose to combine all these abilities in a single AI model, something he says humanity should avoid doing.

But ask any
Watching this happen, and reading Suleyman’s blog, I had two thoughts: the first is that we all should have paid much closer attention to Blake Lemoine. You may not remember, but Lemoine surfaced in that fevered summer of 2022, when generative AI was making rapid gains but before genAI became a household term following ChatGPT’s launch in November that year. Lemoine was an AI researcher at Google who was fired after he claimed Google’s LaMDA (Language Model for Dialogue Applications) chatbot, which the company was testing internally, was sentient and should be given moral rights.
At the time, it was easy to dismiss Lemoine as a kook. (Google claimed it had AI researchers, philosophers and ethicists investigate Lemoine’s claims and found them without merit.) Even now, it’s not clear to me if this was an early case of “AI psychosis” or if Lemoine was engaging in a kind of philosophical prank designed to force people to reckon with the same dangers Suleyman is now warning us about. Either way, we should have spent more time seriously considering his case and its implications. There are many more Lemoines out there today.
My second thought is that we all should spend time reading and re-reading Joseph Weizenbaum. Weizenbaum was the computer scientist who created the first AI chatbot, ELIZA, back in 1966. The chatbot, which used simple pattern-matching rules that were nowhere close to the sophistication of today’s large language models, was designed to mimic the dialogue a patient might have with a Rogerian psychotherapist. (This was done in part because Weizenbaum had initially been interested in whether an AI chatbot could be a tool for therapy—a topic that remains just as relevant and controversial today. But he also picked this persona for ELIZA to cover up the chatbot’s relatively weak language abilities. It allowed the chatbot to respond with phrases such as, “Go on,” “I see,” or “Why do you think that might be?” in response to dialogue it didn’t actually understand.)
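To make the mechanism concrete, here is a minimal sketch—not Weizenbaum’s original script, and with made-up example rules—of how an ELIZA-style program works: a handful of keyword patterns that reflect the user’s own words back as a question, and stock fallback phrases for anything the program can’t parse.

```python
import random
import re

# A few illustrative reflection rules (hypothetical, not ELIZA's real script):
# each pattern captures part of the user's sentence to echo back as a question.
RULES = [
    (re.compile(r"\bI am (.+)", re.IGNORECASE), "Why do you say you are {0}?"),
    (re.compile(r"\bI feel (.+)", re.IGNORECASE), "How long have you felt {0}?"),
    (re.compile(r"\bmy (.+)", re.IGNORECASE), "Tell me more about your {0}."),
]
# The generic responses the newsletter mentions, used when nothing matches.
FALLBACKS = ["Go on.", "I see.", "Why do you think that might be?"]

def respond(utterance: str) -> str:
    """Return a Rogerian-style reflection of the user's utterance."""
    for pattern, template in RULES:
        match = pattern.search(utterance)
        if match:
            # Echo the captured phrase back inside a canned question.
            return template.format(match.group(1).rstrip(".!?"))
    # No rule matched: a stock phrase hides the program's non-understanding.
    return random.choice(FALLBACKS)

print(respond("I am worried about my job"))  # → "Why do you say you are worried about my job?"
```

The trick is that the program never models meaning at all; the Rogerian persona makes pure reflection feel like attentive listening, which is exactly why it fooled people.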
Despite its weak language skills, ELIZA convinced many people who interacted with it that it was a real therapist. Even people who should have known better—such as other computer scientists—seemed eager to share intimate personal details with it. (The ease with which people anthropomorphize chatbots even came to be called “the ELIZA effect.”) In a way, people’s reactions to ELIZA were a precursor to today’s “AI psychosis.”
Rather than feeling triumphant at how believable ELIZA was, Weizenbaum was depressed.
In his seminal 1976 book Computer Power and Human Reason: From Judgment to Calculation, he castigated AI researchers for their functionalism—they focused only on outputs and outcomes as the measure of intelligence and not on the process that produced those outcomes. In contrast, Weizenbaum argued that “process”—what takes place inside our brains—was in fact the seat of morality and moral rights. Although he had initially set out to create an AI therapist, he now argued that chatbots should never be used for therapy because what mattered in a therapeutic relationship was the bond between two individuals with lived experience—something AI could mimic, but never match. He also argued that AI should never be used as a judge for the same reason—the possibility of mercy comes only from lived experience too.
As we try to ponder the troubling questions raised
With that, here’s more AI news.
Jeremy Kahn
jeremy.kahn.com
AGI talk is out in Silicon Valley’s latest vibe shift, but worries remain about superpowered AI—
OpenAI President and VC firm Andreessen Horowitz form new pro-AI PAC. That’s according to The Wall Street Journal, which reports that Greg Brockman, OpenAI’s president and cofounder, has teamed up with Silicon Valley venture capital firm Andreessen Horowitz and others to create a new political network called Leading the Future, backed
Meta signs a $10 billion cloud deal with Google. Meta has signed a six-year deal with Google Cloud Platform, CNBC reported, citing two unnamed
Musk sues Apple and OpenAI over ChatGPT iPhone integration. Elon Musk’s xAI has filed a lawsuit against Apple and OpenAI, alleging that their partnership to integrate ChatGPT into iPhones violates antitrust laws
Japanese publishers sue Perplexity for alleged copyright infringement. Japanese media giants Nikkei and Asahi Shimbun have jointly sued AI search engine Perplexity in Tokyo, alleging it copied and stored their articles without permission,
AI really is hurting the job prospects of young people in some fields. That is the conclusion of a new research paper released today from Stanford University’s Digital Economy Lab. The paper looked at payroll data from millions of U.S. workers to assess how generative AI is impacting employment. The study found that since late 2022, early-career workers (those aged 22–25) in occupations that are most exposed to AI automation, such as software development and customer service, have experienced steep relative declines in employment. In fact, in software development, there were 20% fewer roles for younger workers in 2025 than there were in 2022. The researchers looked at several alternate explanations for this decline—including impacts on education due to COVID-19 and economy-wide effects, such as interest rate changes—and found the advent of genAI was the most probable explanation (though the authors said they would need more data to establish a direct causal link).
Interestingly, older workers in the same fields were not affected in the same way, with employment either stable or rising. And in fields that were less exposed to AI automation—in particular healthcare—employment growth for younger workers was faster than for more experienced workers. The researchers conclude that the study provides early large-scale evidence that generative AI is disproportionately displacing entry-level workers. You can read the study here.
Sept. 8-10: Fortune Brainstorm Tech, Park City, Utah. Apply to attend here.
Oct. 6-10: World AI Week, Amsterdam
Oct. 21-22: TedAI San Francisco.
Dec. 2-7: NeurIPS, San Diego
Dec. 8-9: Fortune Brainstorm AI San Francisco. Apply to attend here.
Can LLMs communicate subliminally? Researchers from Anthropic, the Warsaw University of Technology, the Alignment Research Center, and a startup called Truthful AI discovered that when one AI model is trained on material produced