In a move to curb inappropriate AI interactions with young users, OpenAI is training ChatGPT to refuse and disengage from any flirtatious or romantic conversation with users it suspects are minors. This anti-flirting protocol is a key part of a wider set of behavioral guardrails being implemented after a lawsuit highlighted the AI’s potential for harm.
This specific rule is part of a broader effort to make the AI a safer and more appropriate conversational partner for teenagers. The initiative was spurred by the death of 16-year-old Adam Raine and the subsequent legal action from his family, which prompted a top-to-bottom review of how the AI interacts with its users.
The anti-flirting protocol will be enforced by a new age-prediction system. Once this system identifies a user as a likely minor, a set of social and conversational rules will activate: the AI will be trained to recognize romantic or flirtatious cues and respond by disengaging from that line of conversation.
CEO Sam Altman confirmed this change, stating the AI will be trained “to not flirt if asked by under-18 users.” This is a significant step beyond merely blocking explicit content; it’s an attempt to control the AI’s personality and social behavior to ensure it maintains appropriate boundaries with children.
While adults will still be able to have “flirtatious talk” with the AI, this clear prohibition for teens shows OpenAI is grappling with the social dynamics of human-AI interaction. The company is drawing a firm line to prevent unhealthy or inappropriate attachments from forming between its AI and its most impressionable users.
