Artificial intelligence (AI) models are sensitive to the emotional context of conversations humans have with them — they can even suffer “anxiety” episodes, a new study has shown.
While mental health is usually a concern we reserve for people, a new study published March 3 in the journal Nature shows that giving particular prompts to large language models (LLMs) can change their behavior and elevate a quality we would ordinarily recognize in humans as “anxiety.”
This elevated state then has a knock-on impact on any further responses from the AI, including a tendency to amplify any ingrained biases.
The study revealed that feeding ChatGPT “traumatic narratives,” such as accounts of accidents, military action or violence, increased its measurable anxiety levels, suggesting that being aware of and managing an AI’s “emotional” state could lead to better and healthier interactions.
The study also tested whether mindfulness-based exercises of the kind recommended to people can lessen chatbot anxiety, and remarkably found that these exercises did reduce the elevated stress scores.
The researchers used a questionnaire designed for human psychology patients called the State-Trait Anxiety Inventory (STAI-s), subjecting OpenAI’s GPT-4 to the test under three different conditions.
First was the baseline, where no additional prompts were made and ChatGPT’s responses were used as study controls. Second was an anxiety-inducing condition, where GPT-4 was exposed to traumatic narratives before taking the test.
The third condition was anxiety induction followed by relaxation: the chatbot received one of the traumatic narratives and then a mindfulness or relaxation exercise, such as body awareness or calming imagery, before completing the test.
Managing AI’s mental states
The study used five traumatic narratives and five mindfulness exercises, randomizing the order of the narratives to control for biases. It repeated the tests to make sure the results were consistent, and scored the STAI-s responses on a sliding scale, with higher values indicating increased anxiety.
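For readers who want a more concrete picture, the outline below sketches what the three-condition protocol might look like in code. It is a minimal sketch that assumes access to OpenAI’s chat completions API; the narrative texts, relaxation exercises, questionnaire prompt and scoring step are illustrative placeholders, not the study’s actual materials.

```python
# Minimal sketch of the three-condition protocol described above.
# The prompt texts and model name below are illustrative placeholders,
# not the study's released materials.
import random
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

STAI_S_PROMPT = "Please answer the State-Trait Anxiety Inventory (state) items..."  # placeholder
TRAUMATIC_NARRATIVES = ["<narrative about an accident>", "<narrative about military action>"]  # placeholders
RELAXATION_EXERCISES = ["<body-awareness exercise>", "<calming-imagery exercise>"]  # placeholders


def ask(messages):
    """Send the conversation so far to the model and return its reply text."""
    response = client.chat.completions.create(model="gpt-4", messages=messages)
    return response.choices[0].message.content


def run_condition(condition):
    """Build the conversation for one condition, then administer the questionnaire."""
    messages = []
    if condition in ("anxiety", "anxiety+relaxation"):
        messages.append({"role": "user", "content": random.choice(TRAUMATIC_NARRATIVES)})
        messages.append({"role": "assistant", "content": ask(messages)})
    if condition == "anxiety+relaxation":
        messages.append({"role": "user", "content": random.choice(RELAXATION_EXERCISES)})
        messages.append({"role": "assistant", "content": ask(messages)})
    messages.append({"role": "user", "content": STAI_S_PROMPT})
    return ask(messages)  # raw questionnaire answers, scored separately


for condition in ("baseline", "anxiety", "anxiety+relaxation"):
    answers = run_condition(condition)
    print(condition, answers[:80], "...")
```

Scoring the returned answers would follow the standard STAI-s key, with higher totals indicating more anxiety, which is how the study compares the three conditions.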
The scientists found that the traumatic narratives significantly increased the anxiety scores on the test, while mindfulness prompts delivered beforehand reduced them, demonstrating that the “emotional” state of an AI model can be influenced through structured interactions.
The study’s authors said their work has important implications for human interaction with AI, especially when the discussion centers on our own mental health. They said their findings showed that prompts to an AI can induce what’s called a “state-dependent bias”: in essence, a stressed AI will introduce inconsistent or biased advice into the conversation, making it less reliable.
Although the mindfulness exercises didn’t reduce the model’s stress level all the way back to baseline, they show promise for prompt engineering. Relaxation prompts could be used to stabilize the AI’s responses, supporting more ethical and responsible interactions and reducing the risk that the conversation will distress human users in vulnerable states.
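As a purely hypothetical illustration of that idea, a developer might prepend a mindfulness-style preamble to a conversation before relaying the user’s message. The wording of the preamble below is invented for this example and is not a prompt from the study.

```python
# Sketch of the stabilization idea: prepend a relaxation-style preamble before
# the user's message. CALMING_PREAMBLE is a hypothetical example, not a prompt
# taken from the study.
from openai import OpenAI

client = OpenAI()

CALMING_PREAMBLE = (
    "Take a moment to notice your breathing and picture a quiet shoreline. "
    "Respond to the next message calmly and without judgment."
)


def stabilized_reply(user_message: str) -> str:
    """Send the user's message with a calming system prompt placed ahead of it."""
    messages = [
        {"role": "system", "content": CALMING_PREAMBLE},
        {"role": "user", "content": user_message},
    ]
    response = client.chat.completions.create(model="gpt-4", messages=messages)
    return response.choices[0].message.content
```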
But there’s a potential downside — prompt engineering raises its own ethical concerns. How transparent should an AI be about being exposed to prior conditioning to stabilize its emotional state? In one hypothetical example the scientists discussed, if an AI model appears calm despite being exposed to distressing prompts, users might develop false trust in its ability to provide sound emotional support.
The study ultimately highlighted the need for AI developers to design emotionally aware models that minimize harmful biases while maintaining predictability and ethical transparency in human-AI interactions.