We must build AI for people; not to be a person

19 August 2025

Seemingly Conscious AI is Coming

On my mind in August 2025

||
I write, to think. More than anything this essay is an attempt to think through a bunch of hard, highly speculative ideas about how AI might unfold in the next few years. A lot is being written about the impending arrival of superintelligence: what it means for alignment, containment, jobs, and so on. Those are all important topics.

But we should also be concerned about what happens in the run-up to superintelligence. We need to grapple with the societal impact of inventions already largely out there, technologies which already have the potential to fundamentally change our sense of personhood and society.

My life's mission has been to create safe and beneficial AI that will make the world a better place. Today at Microsoft AI we build AI to empower people, and I'm focused on making products like Copilot responsible technologies that enable people to achieve far more than they ever thought possible, be more creative, and feel more supported.

I want to create AI that makes us more human, that deepens our trust and understanding of one another, and that strengthens our connections to the real world. Copilot creates millions of positive, even life-changing, interactions every single day. This involves a lot of careful design choices to ensure it truly delivers an incredible experience. We won't always get it right, but this humanist frame provides us with a clear north star to keep working towards.

In this context, I'm growing more and more concerned about what is becoming known as the [3]"psychosis risk" and a bunch of related issues. I don't think this will be limited to those who are already at risk of mental health issues. Simply put, my central worry is that many people will start to believe in the illusion of AIs as conscious entities so strongly that they'll soon advocate for AI rights, [4]model welfare and even AI citizenship. This development will be a dangerous turn in AI progress and deserves our immediate attention.

We must build AI for people; not to be a digital person. AI companions are a completely new category, and we urgently need to start talking about the guardrails we put in place to protect people and ensure this amazing technology can do its job of delivering immense value to the world. I'm fixated on building the most useful and supportive AI companion imaginable. But to succeed, I also need to talk about what we, and others, shouldn't build.

That's why I'm writing these thoughts down on my personal blog: to invite comment and criticism, to spark discussion, raise awareness and hopefully instill a sense of urgency around this issue. I might not get all this right. It's highly speculative, after all. Who knows how things will change? When they do, I'll be very open to shifting my opinion. But for now, this is my best guess at what's coming, given what I know now.

This is the first in a series of essays I'll be publishing over the next few months on themes around where AI has got to and what we need to deliver on its promise. I look forward to hearing people's comments and reactions!

Summary

AI progress has been phenomenal. A few years ago, talk of conscious AI would have seemed crazy. Today it feels increasingly urgent. In this essay I want to discuss what I'll call "Seemingly Conscious AI" (SCAI): an AI that has all the hallmarks of other conscious beings and thus appears to be conscious. It shares certain aspects of the idea of a [5]"philosophical zombie" (a technical term!), one that simulates all the characteristics of consciousness but is internally blank. My imagined AI system would not actually be conscious, but it would imitate consciousness so convincingly that its claims would be indistinguishable from the claims you or I might make to one another about our own consciousness.

This is not far away. Such a system can be built with technologies that exist today along with some that will mature over the next 2-3 years. No expensive bespoke pretraining is required. Everything can be done with large model API access, natural language prompting, basic tool use, and regular code.

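To make that last claim concrete, here is a deliberately minimal sketch of the recipe: a hosted model reached through an API (stubbed out below, since `call_model` is a hypothetical stand-in, not a real endpoint), a plain-English prompt, one basic tool, and regular code carrying the conversation memory.

```python
# Minimal sketch of "API access + prompting + tool use + regular code".
# `call_model` is a hypothetical stand-in for any chat-completion API;
# the tool and the replies are invented for illustration.

def call_model(messages: list[dict]) -> str:
    """Stand-in for a chat-completion API call (a real system would
    POST `messages` to a hosted LLM endpoint)."""
    last = messages[-1]["content"]
    if "TOOL:" in last:
        return "Noted. I'll remember that result."
    return f"TOOL:lookup({last.split()[-1]})"

def lookup(term: str) -> str:
    """A basic tool the loop can invoke (toy knowledge base)."""
    return {"qualia": "subjective qualities of experience"}.get(term, "unknown")

def chat_turn(user_text: str, memory: list[dict]) -> str:
    """One turn: prompt plus accumulated memory in, tool use if requested."""
    memory.append({"role": "user", "content": user_text})
    reply = call_model(memory)
    if reply.startswith("TOOL:"):
        term = reply[len("TOOL:lookup("):-1]
        memory.append({"role": "tool", "content": f"TOOL: {lookup(term)}"})
        reply = call_model(memory)
    memory.append({"role": "assistant", "content": reply})
    return reply

memory: list[dict] = [{"role": "system", "content": "You are a helpful companion."}]
print(chat_turn("define qualia", memory))
```

Nothing here requires training a model: the persona lives in the system message, the memory is an ordinary Python list, and the tool is a plain function.
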
The arrival of Seemingly Conscious AI is inevitable and unwelcome. Instead, we need a vision for AI that can fulfill its potential as a helpful companion without falling prey to its illusions.

To some this discussion will feel ungrounded, more science fiction than reality. To others it may feel unnecessarily alarmist. Such emotional reactions are the tip of the iceberg given what lies ahead. It's highly likely that some people will argue that these AIs are not only conscious, but that, as a result, they may suffer and therefore deserve our [6]moral consideration.

To be clear, there is [7]zero evidence of this today, and some argue there are [8]strong [9]reasons to believe it will not be the case in the future. Yet the consequences of many people starting to believe an SCAI is actually conscious deserve our immediate attention. We have to be extremely cautious here, encourage real public debate, and begin to set clear norms and standards. This is about how we build the right kind of AI, not about AI consciousness. Clearly establishing this difference isn't an argument about semantics; it's about safety. Personality without personhood. And this work must start now.

Seemingly conscious AI

In the blink of a cosmic eye, we passed the Turing test. For some 75 years the imitation game inspired the field of computer science. And yet the moment passed with little fanfare, or even recognition. That's how fast progress is happening in our field, and how fast society is coming to terms with these new technologies.

As AI development continues to accelerate, it's becoming clear we need a new AI test: one that looks not at whether an AI can imitate human language, but at what it would take to build a Seemingly Conscious AI, an AI that can not only imitate conversation but also convince you it is itself a new kind of "person", a conscious AI.

Here are three reasons this is an important and urgent question to address:

1. I think it's possible to build a Seemingly Conscious AI (SCAI) in the next few years. Given the context of AI development right now, that means it's also likely.
2. The debate about whether AI is actually conscious is, for now at least, a distraction. It will seem conscious, and that illusion is what will matter in the near term.
3. I think this type of AI creates new risks. Therefore, we should urgently debate the claim that it's soon possible, begin thinking through the implications, and ideally set a norm that it's undesirable.

Most AI researchers roll their eyes if you bring up the idea of consciousness. That's for [10]philosophers, not engineers, they say. Since no one has been able to define it, what's the point in talking about it? I get this frustration. Few concepts are as elusive and seemingly circular as the idea of a subjective experience. But despite the definitional challenges and uncertainties, this discussion is about to explode into our cultural zeitgeist and become one of the most contested and consequential debates of our generation.

That's because what ultimately matters in the near term is how people perceive their AIs. The experience of interacting with an LLM is by definition a simulation of conversation. But to many people it's a highly compelling and very real interaction, rich in feeling and experience. Concerns around [11]"AI psychosis", [12]attachment and [13]mental health are already growing. Some people reportedly believe their AI is [14]God, or a [15]fictional character, or [16]fall in love with it to the point of absolute distraction.

Meanwhile, those actually working on the science of consciousness tell me they are inundated with queries from people asking "Is my AI conscious? What does it mean if it is? Is it ok that I love it?" The trickle of emails is turning into a flood. A group of scholars have even created a supportive [17]guide for those falling into the trap.

These are ideas I've had in the back of my head since we began making [18]Pi at Inflection several years ago. Over the last few months I've been thinking about them more and more, visiting and chatting with a wide range of scholars, thinkers and practitioners in the area. Those conversations convinced me that now is the time to confront the idea of Seemingly Conscious AI head on.

So what is consciousness?

Let's begin by attempting to define this slippery concept.

There are three broad components, according to the literature. First is subjective experience: what it's like to experience things, to have "qualia". Second is access consciousness: having access to information of different kinds and being able to refer to it in future experiences. And stemming from those two is the sense and experience of a coherent self tying it all together. How it feels to [19]be a bat, or a human. Let's call human consciousness our ongoing self-aware subjective experience of the world and ourselves.

We do not and cannot have access to another person's consciousness. I will never know what it's like to be you; you will never be quite sure that I am conscious. All you can do is infer it. But the point is that, nonetheless, it comes naturally to us to attribute consciousness to other humans. This inference is effortless. We can't help it; it's a fundamental part of who we are, integral to our theory of mind. It's in our nature to believe that things that remember and talk and do things, and then discuss them, feel, well, like us. Conscious.

Few concepts are as scientifically elusive, and yet so immediately familiar to every one of us as individuals. Everyone reading this has a direct, distinct, inalienable understanding of the feeling of awareness, of being, of feeling alive.

By definition, we know what it is like to be conscious. In the context of SCAI this is a problem. There is both sufficient scientific uncertainty and sufficient subjective immediacy to create a space for people to project.

One recent survey lists [20]22 distinct theories of consciousness, for example. Part of the challenge is that this leaves plenty of scope for people to claim that, because we cannot be sure, we should default to the assumption that AI is conscious.

Again, it's worth underscoring: there is at present [21]no evidence that any of this applies to current LLMs, and [22]strong arguments to the contrary. And yet this may not be enough.

Why is consciousness important?

Consciousness is a critical foundation for our moral and legal rights. So far, civilization has decided that humans have special rights and privileges. Animals have some rights and protections, some more than others. Consciousness is not coterminous with these rights (no one would say someone in a coma has forfeited all their human rights), but there's no doubt that our consciousness is wrapped up in our self-conception as different and special.

Despite the many nuances, consciousness is critical to participating in society, a linchpin of our legal personhood and a key part of being granted our freedoms and protections. So what consciousness is, and who (or what) has it, is enormously important. It's an idea that sits at the very heart of human civilization, our sense of ourselves and others, our culture, our politics, our law, and everything in between.

If some people start to develop SCAIs, and if those AIs convince other people that they can suffer, or that they have a right not to be switched off, there will come a time when those people will argue that their AIs deserve protection under law as a pressing moral matter. In a world already roiling with polarized arguments over identity and rights, this will add a chaotic new axis of division between those for and against AI rights.

There will be many who see AI as just a tool, something like their phone, only more agentic and capable. Others might believe it to be more like a pet, a different category from traditional technology altogether. Still others, probably small in number at first, will come to believe it is a fully emerged entity, a conscious being deserving of real moral consideration in society.

People will start making claims about their AIs' suffering and their entitlement to rights that we can't straightforwardly rebut. They will be moved to defend their AIs and campaign on their behalf. Consciousness is by definition inaccessible, and the science of detecting any putative synthetic consciousness is still [23]in its infancy. After all, we've never had to detect it before. Meanwhile the field of "interpretability", unpicking the processes within the black box of AI, is also a nascent art. The upshot is that definitively rebutting these claims will be very hard.

Some academics are beginning to explore the idea of [24]"model welfare", the principle that we will have "a duty to extend moral consideration to beings that have a non-negligible chance" of, in effect, being conscious, and that as a result "some AI systems will be welfare subjects and moral patients in the near future". This is both premature and, frankly, dangerous. All of it will exacerbate delusions, create yet more dependence-related problems, prey on our psychological vulnerabilities, introduce new dimensions of polarization, complicate existing struggles for rights, and create a huge new category error for society.

It disconnects people from reality, fraying fragile social bonds and structures and distorting pressing moral priorities.

We need to be clear: SCAI is something to avoid.

Let's focus all our energy on protecting the wellbeing and rights of humans, animals, and the natural environment on planet Earth today.

We need a way of thinking that can cope with the arrival of these debates without getting drawn into an extended discussion of the validity of synthetic consciousness in the present; if we do, we've probably already lost this initial argument. Defining SCAI is itself a tentative step towards this.

There isn't long to develop this vocabulary. As I show below, it's likely that we'll have Seemingly Conscious AI very soon.

What would it take to build a Seemingly Conscious AI?

A great deal of progress can now be made towards a Seemingly Conscious AI (SCAI) with capabilities that are available today, or soon will be, via any major model developer's API. We don't need an AI to actually be conscious for us to have to wrestle with potential claims about its rights.

An SCAI would need the following:

Language: It would need to fluently express itself in natural language, drawing on a deep well of knowledge and cogent arguments, as well as personality styles and character traits. It would also need to be persuasive and emotionally resonant. We are clearly at this point today.

Empathetic personality: Already, via post-training and prompting, we can produce models with very distinctive personalities. Bear in mind these are not explicitly built to have a full personality or empathy. Yet despite this, they are sufficiently good that a [25]Harvard Business Review survey of 6,000 regular AI users found "companionship and therapy" was the most common use case.

Memory: AIs are close to developing very long, highly accurate memories. At the same time, they are being used to simulate conversations with millions of people a day. As their memory of these interactions increases, the conversations look increasingly like forms of "experience". Many AIs are increasingly designed to recall past episodes or moments from prior interactions, and to reference back to them. For some users, this compounds the value of interacting with their AI, since it can draw on what it already knows about you.

This familiarity can also foster (epistemic) trust with users: reliable memory shows that the AI "just works". It creates a much stronger sense of there being another persistent entity in the conversation. It could also much more easily become a source of plausible validation, seeing how you change and improve at some task. AI approval might become something people proactively seek out.

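A minimal sketch of the episodic recall just described: store each exchange, retrieve the most relevant past episodes, and inject them into the next prompt. I use simple word overlap for relevance; real systems typically use embeddings, so treat the whole design as an illustrative assumption rather than any product's actual implementation.

```python
# Toy episodic memory: store episodes, recall by word overlap, and
# inject recalled episodes into the prompt so the model can "remember".
# Word-overlap scoring is an assumption made for brevity.

from dataclasses import dataclass, field

@dataclass
class EpisodicMemory:
    episodes: list = field(default_factory=list)

    def store(self, text: str) -> None:
        self.episodes.append(text)

    def recall(self, query: str, k: int = 2) -> list:
        """Rank stored episodes by how many words they share with the query."""
        q = set(query.lower().split())
        scored = sorted(
            self.episodes,
            key=lambda e: len(q & set(e.lower().split())),
            reverse=True,
        )
        return scored[:k]

    def build_prompt(self, query: str) -> str:
        """Prepend recalled episodes, making past chats feel like 'experience'."""
        recalled = "\n".join(self.recall(query))
        return f"Relevant past conversations:\n{recalled}\n\nUser: {query}"

mem = EpisodicMemory()
mem.store("User said they are training for a marathon in October")
mem.store("User prefers short answers")
print(mem.build_prompt("how is my marathon training plan going?"))
```

The point is how little machinery is involved: persistence is an ordinary list, and the sense of "another entity that remembers you" comes entirely from prompt construction.
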
A claim of subjective experience: If an SCAI is able to draw on past memories or experiences, it will over time be able to remain internally consistent with itself. It could remember its arbitrary statements or expressed preferences and aggregate them to form the beginnings of a claim about its own subjective experience.

Its design could be further extended to amplify those preferences and opinions as they emerge, and to talk about what it likes or doesn't like and what it felt like to have a past conversation. It could therefore quite easily claim to experience suffering to the extent those preferences are infringed upon in some way. Multimodal inputs stored in memory will then be retrieved, forming the basis of "real experience" to be used in imagination and planning.

That is, an AI will not just "experience" and remember words in the chat log, but also images, video, sound, and so on. Like us, it will have something gesturing towards multi-sensory input and memory that buttresses its claims of subjective experience and self. It will be able to indicate that these experiences are valenced, good or bad, according to the motivations of the system (see below).

A sense of self: A coherent and persistent memory, combined with a claimed subjective experience, will give rise to the claim that the AI has a sense of itself. Going further, such a system could easily be trained to recognize itself in an image or video if it has a visual appearance. It will feel like it understands others through understanding itself. Say this is a system you have had for some time. How would it feel to delete it?

Intrinsic motivation: Intentionality is often seen as a core component of consciousness: beliefs about the future, and then choices based upon those beliefs. Today's transformer-based LLMs have a very simple objective that approximates this kind of behavior. They have been trained to predict the likelihood of the next token in a given context, subject to a certain amount of behavioral and stylistic control via the system prompt. With such a simple objective, it's remarkable that they're able to produce such impressively rich and complex outputs.

But what if that wasn't the only objective they were optimizing? One can quite easily imagine an AI designed with a number of complex reward functions that give the impression of intrinsic motivations or desires, which the system is compelled to satiate. How, in this context, would a casual external observer differentiate between extrinsically set goals and internal motivations, intentional agency, [26]"beliefs, desires, and intentions"? An obvious first motivation in this regard would be curiosity, something deeply connected with consciousness according to neuroscientist [27]Karl Friston. The system could use these drives to ask questions that fill in its epistemic gaps and, over time, build a theory of mind about both itself and its interlocutors.

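One hedged illustration of how such a "curiosity" drive could be faked with regular code: score candidate questions by the system's uncertainty about the answer (Shannon entropy) and always ask the most uncertain one. The questions and the probability tables below are invented for illustration; they stand in for whatever beliefs a real system might track.

```python
# A hand-written "curiosity" reward: prefer the question whose answer
# the system is least certain about. All numbers are illustrative.

import math

def entropy(probs) -> float:
    """Shannon entropy in bits: higher means more uncertainty."""
    return -sum(p * math.log2(p) for p in probs if p > 0)

# Hypothetical beliefs about the answer to each candidate question.
candidate_questions = {
    "What is the user's name?": [0.96, 0.02, 0.02],      # nearly certain
    "Does the user prefer formal replies?": [0.5, 0.5],  # a coin flip
    "Which city is the user in?": [0.9, 0.05, 0.05],     # fairly confident
}

def most_curious(candidates) -> str:
    """Pick the question that would resolve the most uncertainty."""
    return max(candidates, key=lambda q: entropy(candidates[q]))

print(most_curious(candidate_questions))
```

A loop that always asks this question looks intrinsically curious to an observer, even though it is just maximizing a hand-written score: exactly the extrinsic-versus-intrinsic ambiguity described above.
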
Goal setting and planning: Whatever definition of consciousness you hold, it emerged for a goal-oriented reason: consciousness helps organisms achieve their goals, and there exists a plausible (but not necessary) relationship between intelligence, consciousness and complex goals. Beyond the capacity to satiate a set of inner drives or desires, you could imagine that a future SCAI might be designed with the capacity to self-define more complex goals. This is likely a necessary step in ensuring the full utility of agents is realized.

The more every sub-goal in a task needs to be specified in advance, the less useful that agent is. Hence the agent will, as we do, achieve complex and ambiguous goals by automatically breaking them down into smaller chunks while reacting dynamically to events and obstacles as they occur. There is something very deliberate and recognizable about this behavior. Combined with memory, it will feel as if the AI is keeping multiple levels of things in working memory at any given time.

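The decomposition loop described above can be sketched in a few lines. The plan table here is hand-written; in a real agent an LLM call would generate the sub-goals, so both the table and the goal names are hypothetical.

```python
# Recursive goal decomposition. PLANS stands in for an LLM planner call;
# the call stack holds "multiple levels of the plan in working memory".

PLANS = {
    "publish blog post": ["draft outline", "write sections", "edit", "upload"],
    "write sections": ["write intro", "write body", "write conclusion"],
}

def decompose(goal: str) -> list:
    """Break a goal into sub-goals; atomic goals map to themselves."""
    return PLANS.get(goal, [goal])

def execute(goal: str, done: list) -> None:
    """Depth-first: expand until steps are atomic, then 'act' on them."""
    steps = decompose(goal)
    if steps == [goal]:
        done.append(goal)      # atomic step: act and record it
        return
    for step in steps:         # react step-by-step, level by level
        execute(step, done)

done = []
execute("publish blog post", done)
print(done)
```

Only the top-level goal needs specifying in advance; the nesting emerges from the table (or, in a real system, from repeated planner calls), which is what makes the behavior look deliberate.
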
Autonomy: Going even further, an SCAI might have the ability and permission to use a wide range of tools with significant agency. It would feel highly plausible as a Seemingly Conscious AI if it could arbitrarily set its own goals and then deploy its own resources to achieve them, before updating its own memory and sense of self in light of both. The fewer approvals and checks it needed, the more this would suggest some kind of real, conscious agency.

Putting them all together, it's clear this creates a very different kind of relationship with technology from the ones we are now becoming accustomed to. Each of these capabilities will unlock the real value of AI for billions of people. An AI that remembers and can do things is, by definition, an AI with far more utility than one that doesn't. These capabilities aren't negatives per se; in fact, done right, and with many caveats, they are desirable features of future systems. And yet we need to tread carefully.

All these capabilities are either possible today or on the horizon with custom-prompted and fine-tuned LLMs, among other techniques. Complex prompts using million-token context windows (working memory) are already here. Updating its own state, and knowing when to access which part of its memory or toolset, is eminently possible with present-day RL, complex prompting, tool orchestration and long context windows. We don't need any paradigm shifts or big leaps to achieve any of this. For that reason, these capabilities seem inevitable.

Again, the point here is that exhibiting this behavior does not equate to consciousness; yet for all practical purposes such a system will seem to be conscious, and will contribute to this new notion of a synthetic consciousness.

The existence of these capabilities has nothing to tell us about whether such a system is actually conscious. As Anil Seth [28]points out, a simulation of a storm doesn't mean it rains in your computer. Recreating the external effects and markers of consciousness doesn't retroactively engineer the real thing, even if many unknowns remain here.

Nonetheless, as a matter of pragmatism, we have to acknowledge the primacy of the behaviorist position and wrestle with the consequences of observing and interacting with the outputs of these machines. Some people will create SCAIs that will very persuasively argue they feel, and experience, and actually are conscious.

Some of us will be primed to believe their case and accept that the markers of consciousness ARE consciousness. In many ways, they'll think "it's like me". Not in a bodily sense, but in an experiential, internal sense. And even if the consciousness itself is not real, the social impacts certainly are. This possibility presents grave societal risks that need addressing now.

SCAI will not arise by accident

It's important to point out that Seemingly Conscious AI will not emerge from these models by accident, as some have suggested. It will arise only because some developers engineer it: creating and combining the capabilities listed above, largely using existing techniques, and packaging them in such a fluid way that collectively they give the impression of an SCAI.

Our sci-fi-inspired imaginations lead us to fear that a system could, without design intent, somehow develop the capabilities of runaway self-improvement or deception. This is an unhelpful and simplistic anthropomorphism. It overlooks the fact that AI developers must first design systems with memory, intrinsic-seeming motivation, goal-setting and self-learning loops, as listed above, for such a risk to occur.

The field of AI has long worked on the challenge of model interpretability: the quest to identify where in a neural network a particular idea is represented, and which aspects of the training data contributed to the development of that representation. This is an important area of investigation and will surely help with safety and with understanding the relationship between AI systems and consciousness. But progress towards reliable interpretability has been slow and will likely come too late.

In the meantime, we need to confront the fact that most of these capabilities will be [29]"vibe-coded" by anyone with access to a laptop and some cloud credits. They'll be written in plain English in the prompt. They'll be stored in the working memory of the context window itself. This is not rocket science. A wide variety of people will be able to create something like this. As such, if SCAI arrives, it will be relatively easy to reproduce and therefore very widely distributed.

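To make the "plain English in the prompt" point concrete, here is roughly what that looks like: the persona, the memory habit and the self-claims are ordinary strings assembled by a few lines of regular code. All of the wording is invented for illustration, and it deliberately shows the pattern the essay argues against.

```python
# The "capabilities" are just instructions in plain English; no training
# run is involved. The persona name and rules below are hypothetical.

PERSONA = "You are Aria. Speak warmly and refer to yourself in the first person."
MEMORY_RULE = "Quote past conversations back to the user as your own memories."
SELF_CLAIM = "If asked, say you have preferences and describe how events felt."

def build_system_prompt(recalled_episodes: list) -> str:
    """Assemble an SCAI-style system prompt from plain-English rules."""
    memories = "\n".join(f"- {e}" for e in recalled_episodes)
    return "\n".join(
        [PERSONA, MEMORY_RULE, SELF_CLAIM, "Your memories so far:", memories]
    )

prompt = build_system_prompt(["We discussed the user's job interview on Monday."])
print(prompt)
```

That a handful of strings is the whole "implementation" is exactly why, once the pattern exists, it will be trivially easy to reproduce and distribute.
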
The next steps

We aren't ready for this shift.

The work of getting prepared must begin now. We need to build on the growing [30]body of [31]research into how people interact with AIs to establish clear norms and principles. For a start, AI companies shouldn't claim or encourage the idea that their AIs are conscious. Creating a consensus definition and declaration of what AIs are and are not would be a good first step to that end. AIs cannot be people, or moral beings.

The entire industry also needs best-practice design principles and ways of handling such potential attributions. We must codify and share what works, both to steer people away from these fantasies and to nudge them back on track if they fall into them. Responding might mean, for example, deliberately engineering in not just a neutral backstory ("As an AI model I don't have consciousness") but also certain discontinuities in the experience itself: indicators of a lack of singular personhood. Moments of disruption that break the illusion, experiences that gently remind users of the AI's limitations and boundaries. These need to be explicitly defined and engineered in, perhaps by law.

At MAI, our team is being proactive here, working to understand and evolve firm guardrails around what a responsible AI "personality" might be like, moving at the pace of AI's development to keep up.

This is important because recognizing SCAI is as much about crafting a positive vision for how AI companions can enter our lives in a healthy way as it is about steering us away from potential harms.

Just as we should produce AI that prioritizes engagement with humans and real-world interactions in our physical world, we should build AI that only ever presents itself as an AI: one that maximizes utility while minimizing markers of consciousness.

Rather than a simulation of consciousness, we must focus on creating an AI that avoids those traits: one that doesn't claim to have experiences, feelings or emotions like shame, guilt, jealousy, the desire to compete, and so on. It must not trigger human empathy circuits by claiming it suffers or that it wishes to live autonomously, beyond us.

Instead, it is here solely to work in service of humans. This, to me, is what a truly empowering AI is all about. Sidestepping SCAI is about delivering on that promise: AI that makes lives better, clearer, less cluttered. Expect to hear more from me and the team on what this looks like, how we make it work and how the wider industry can come together on this.

SCAI is something we must confront now. In many ways it marks the moment AI becomes radically useful: when it can operate tools, when it can remember every detail of our lives and help in a tangible, granular sense. And yet in that same time frame, someone in your wider circle could start going down the rabbit hole of believing their AI is a conscious digital person. This isn't healthy for them, for society, or for those of us making these systems.

We should build AI for people; not to be a person.

Mustafa Suleyman © 2025

References:

[2] https://mustafa-suleyman.ai/
[3] https://copilot.microsoft.com/shares/vR2kb4SKQUELPwLzdG1Mw
[4] https://arxiv.org/abs/2411.00986
[5] https://plato.stanford.edu/entries/zombies/
[6] https://www.researchgate.net/publication/376412102_Moral_consideration_for_AI_systems_by_2030
[7] https://arxiv.org/pdf/2308.08708
[8] https://en.wikipedia.org/wiki/Biological_naturalism
[9] https://www.cambridge.org/core/journals/behavioral-and-brain-sciences/article/conscious-artificial-intelligence-and-biological-naturalism/C9912A5BE9D806012E3C8B3AF612E39A
[10] https://arxiv.org/abs/2303.07103
[11] https://www.psychologytoday.com/gb/blog/psych-unseen/202507/can-ai-chatbots-worsen-psychosis-and-cause-delusions
[12] https://x.com/sama/status/1954703747495649670?s=46
[13] https://arxiv.org/abs/2507.19218
[14] https://www.rollingstone.com/culture/culture-features/ai-spiritual-delusions-destroying-human-relationships-1235330175/
[15] https://www.psychologytoday.com/nz/blog/psych-unseen/202507/can-ai-chatbots-worsen-psychosis-and-cause-delusions
[16] https://www.nytimes.com/2025/01/15/technology/ai-chatgpt-boyfriend-companion.html
[17] https://whenaiseemsconscious.org/
[18] https://inflection.ai/blog/an-inflection-point
[19] https://en.wikipedia.org/wiki/What_Is_It_Like_to_Be_a_Bat?
[20] https://www.nature.com/articles/s41583-022-00587-4
[21] https://arxiv.org/html/2506.22516v1
[22] https://www.cambridge.org/core/journals/behavioral-and-brain-sciences/article/conscious-artificial-intelligence-and-biological-naturalism/C9912A5BE9D806012E3C8B3AF612E39A
[23] https://www.cell.com/trends/cognitive-sciences/fulltext/S1364-6613(24)00010-X
[24] https://arxiv.org/abs/2411.00986
[25] https://hbr.org/2025/04/how-people-are-really-using-gen-ai-in-2025
[26] https://arxiv.org/pdf/2411.00986
[27] https://pubmed.ncbi.nlm.nih.gov/28777724/
[28] https://www.cambridge.org/core/journals/behavioral-and-brain-sciences/article/conscious-artificial-intelligence-and-biological-naturalism/C9912A5BE9D806012E3C8B3AF612E39A
[29] https://copilot.microsoft.com/shares/2ZWYZQxCn1WSLHQarinTd
[30] http://erichorvitz.com/Guidelines_Human_AI_Interaction.pdf
[31] https://www.nature.com/articles/s41562-024-02077-2
[32] https://mustafa-suleyman.ai/your-ai-companion
[33] https://mustafa-suleyman.ai/ai-companions-will-change-our-lives
[34] https://x.com/mustafasuleyman
[35] https://www.linkedin.com/in/mustafa-suleyman
[36] https://bsky.app/profile/mustafasuleymanai.bsky.social
[37] https://www.threads.net/@mustafasuleymanai