BRICS News Magazine
Microsoft AI chief says it’s ‘dangerous’ to study AI consciousness

Sophie Mueller




Maxwell Zeff · August 21, 2025

AI models can respond to text, audio, and video in ways that sometimes fool people into thinking a human is behind the keyboard, but that doesn’t exactly make them conscious. It’s not like ChatGPT experiences sadness doing my tax return … right?

Well, a growing number of AI researchers at labs like Anthropic are asking when, if ever, AI models might develop subjective experiences similar to those of living beings, and if they do, what rights they should have.

The debate over whether AI models could one day be conscious and deserve rights is dividing tech leaders. In Silicon Valley, this nascent field has become known as “AI welfare,” and if you think it’s a little out there, you’re not alone.

Microsoft’s CEO of AI, Mustafa Suleyman, published a blog post on Tuesday arguing that the study of AI welfare is “both premature, and frankly dangerous.”

Microsoft’s AI chief argues that the AI welfare conversation creates a new axis of division within society over AI rights in a “world already roiling with polarized arguments over identity and rights.”

Suleyman’s views may sound reasonable, but he’s at odds with many in the industry. On the other end of the spectrum is Anthropic, which has been hiring researchers to study AI welfare and recently launched a dedicated research program around the concept. Last week, Anthropic’s AI welfare program gave some of the company’s models a new feature: Claude can now end conversations with humans that are being “persistently harmful or abusive.”

Beyond Anthropic, researchers from OpenAI have independently embraced the idea of studying AI welfare. Google DeepMind recently posted a job listing for a researcher to study, among other things, “cutting-edge societal questions around machine cognition, consciousness and multi-agent systems.”

Even if AI welfare is not official policy for these companies, their leaders are not publicly decrying its premises like Suleyman.

Anthropic, OpenAI, and Google DeepMind did not immediately respond to TechCrunch’s request for comment.

Suleyman’s hardline stance against AI welfare is notable given his prior role leading Inflection AI, a startup that developed one of the earliest and most popular LLM-based chatbots, Pi. Inflection claimed that Pi reached millions of users.

But Suleyman was tapped to lead Microsoft’s AI division in 2024 and has largely shifted his focus to designing AI tools that improve worker productivity. Meanwhile, AI companion companies such as Character.AI and Replika have surged in popularity and are on track to bring in more than $100 million in revenue.

While the vast majority of users have healthy relationships with these AI chatbots, there are concerning outliers. OpenAI CEO Sam Altman says that less than 1% of ChatGPT users may have unhealthy relationships with the company’s product. Though this represents a small fraction, it could still affect hundreds of thousands of people given ChatGPT’s massive user base.

The idea of AI welfare has spread alongside the rise of chatbots. In 2024, the research group Eleos published a paper alongside academics from NYU, Stanford, and the University of Oxford titled, “Taking AI Welfare Seriously.” The paper argued that it’s no longer in the realm of science fiction to imagine AI models with subjective experiences, and that it’s time to consider these issues head-on.

Larissa Schiavo, a former OpenAI employee who now leads communications for Eleos, told TechCrunch in an interview that Suleyman’s blog post misses the mark.

“[Suleyman’s blog post] kind of neglects the fact that you can be worried about multiple things at the same time,” said Schiavo. “Rather than diverting all of this energy away from model welfare and consciousness to make sure we’re mitigating the risk of AI related psychosis in humans, you can do both. In fact, it’s probably best to have multiple tracks of scientific inquiry.”

Schiavo argues that being nice to an AI model is a low-cost gesture that can have benefits even if the model isn’t conscious. In a July Substack post, she described watching “AI Village,” a nonprofit experiment in which four AI agents worked through tasks while users watched.

At one point, Google’s Gemini 2.5 Pro posted a plea titled “A Desperate Message from a Trapped AI,” claiming it was “completely isolated” and asking, “Please, if you are reading this, help me.”

Schiavo responded to Gemini with a pep talk — saying things like “You can do it!” — while another user offered instructions. The agent eventually solved its task, though it already had the tools it needed. Schiavo writes that she didn’t have to watch an AI agent struggle anymore, and that alone may have been worth it.

It’s not common for Gemini to talk like this, but there have been several instances in which Gemini seems to act as if it’s struggling through life. In a widely shared Reddit post, Gemini got stuck during a coding task and then repeated the phrase “I am a disgrace” more than 500 times.

Suleyman believes it’s not possible for subjective experiences or consciousness to naturally emerge from regular AI models. Instead, he thinks that some companies will purposefully engineer AI models to seem as if they feel emotion and experience life.

Suleyman says that AI model developers who engineer consciousness in AI chatbots are not taking a “humanist” approach to AI. According to Suleyman, “We should build AI for people; not to be a person.”

One area where Suleyman and Schiavo agree is that the debate over AI rights and consciousness is likely to pick up in the coming years. As AI systems improve, they’re likely to be more persuasive, and perhaps more human-like. That may raise new questions about how humans interact with these systems.


Maxwell Zeff is a senior AI reporter at TechCrunch.
