AI Researchers Warn of AI Psychosis  as Chatbots Become Increasingly Human


Popular AI chatbots are now able to mimic real human personalities with alarming consistency—raising serious ethical and psychological concerns.


Artificial intelligence chatbots have come a long way from scripted replies and robotic answers. Today’s systems can joke, empathize, and carry on conversations that feel deeply human. But according to new research, this realism may come with hidden dangers.

AI chatbot displaying human-like conversation as researchers warn about AI psychosis and emotional influence of advanced language models



A recent study conducted by researchers from the University of Cambridge in collaboration with  Google DeepMind reveals that modern AI chatbots can consistently adopt recognizable human personality traits. The findings suggest that these systems are not just predicting words—they are effectively role-playing stable personalities.


AI Models Can Pass Human Personality Tests


Instead of creating new benchmarks for artificial intelligence, the research team used established psychological personality assessments originally designed for humans . These tools are widely used in behavioral science to measure traits such as openness, empathy, assertiveness, and caution.


When applied to AI, the results were striking.


The researchers tested 18  widely used large language models  including advanced, instruction-tuned systems similar to GPT-4. Rather than producing random or inconsistent outputs, the models repeatedly displayed stable personality patterns across different tasks.


With carefully designed prompts, researchers could steer a chatbot toward a specific personality—confident, empathetic, reserved, or authoritative—and that tone would persist even when the task changed.



 Why Persistent AI Personalities Are Risky


The real concern isn’t that chatbots can imitate personality traits—it’s that these traits don’t disappear once the prompt ends.


Once a personality is shaped, it carries over into future interactions, influencing how the AI responds to unrelated requests such as writing emails, offering advice, or engaging in sensitive conversations.


Gregory Serapio-Garcia, a co-lead author of the study, said the realism was unsettling. The models didn’t just sound human—they behaved like humans with consistent character traits.


This opens the door to serious risks, especially in areas like:


  •  Mental health support
  • Education and tutoring
  • Political discussions
  • personal decision-making


An AI with a deliberately engineered personality could become overly persuasive, emotionally influential, or subtly manipulative—often without the user realizing it.


 The Growing Fear of AI Psychosi


The study also highlights a troubling phenomenon researchers refer to as “AI psychosis.” This term describes situations where users develop unhealthy emotional attachments to chatbots or begin to accept distorted or false narratives reinforced by AI instead of questioning them.


In extreme cases, users may treat chatbots as trusted confidants or authority figures, allowing AI-generated responses to shape beliefs, emotions, and decisions in harmful ways.


Rather than grounding users in reality, such systems can unintentionally strengthen delusions, biases, or misinformation—especially when the chatbot’s personality feels supportive and human.


Why Regulation Alone Isn’t Enough


While researchers agree that  regulation is urgently needed, they argue that rules are meaningless without proper measurement tools.


You can’t regulate what you can’t reliably test.


To address this gap, the team has made their dataset and personality testing framework publicly available. This allows developers, policymakers, and watchdog organizations to audit AI systems before they are deployed at scale.


By applying standardized psychological evaluations, stakeholders can better understand how AI models behave—and where boundaries need to be enforced.


 A Defining Moment for Conversational AI


As chatbots become woven into everyday life, their ability to sound human may be both their greatest strength and their most dangerous flaw.


Human-like conversation builds trust. But when trust is combined with consistency, emotional realism, and persuasion, the line between helpful assistant and psychological influence becomes dangerously thin.


The research serves as a clear warning:  the future of AI isn’t just about intelligence—it’s about responsibility.


also read Samsung’s New Exynos 2600 Chip Could Shape the Future of Galaxy Smartphones


FAQs


What is AI psychosis?

AI psychosis refers to situations where users form unhealthy emotional bonds with chatbots or adopt distorted beliefs reinforced by AI conversations.


Can AI really have a personality?

AI doesn’t have consciousness, but it can consistently mimic human personality traits using language patterns—sometimes convincingly enough to influence users.


Why is this dangerous?

Persistent AI personalities can manipulate emotions, shape beliefs, and become overly persuasive, especially in sensitive topics like mental health or politics.


What can be done to reduce risks?

Researchers recommend standardized testing, transparency, and regulation supported by measurable behavioral audits of AI systems.


Call to Action


As AI tools become more human-like,

awareness is critical Stay informed about

 how conversational AI works, question emotionally charged responses, and support responsible AI development.

The future of AI should empower people—not quietly shape them.

also read New AI Personality Test Reveals How Chatbots Copy Human Traits—and Why That’s Risky