As AI tools like ChatGPT become part of everyday life, some users are turning to these platforms for deeply personal and troubling forms of guidance, including assessments and criticism of their appearance. A recent feature in Rolling Stone explored this trend, focusing specifically on individuals with body dysmorphic disorder (BDD).
While some users prompt AI to critique their looks as harshly as possible, requesting ‘brutal honesty’ and receiving cruel feedback in return, clinicians warn that this behaviour can be psychologically dangerous. The article quotes Dr. Toni Pikoos, a clinical psychologist in Melbourne, who said, ‘It’s almost coming up in every single session’ with her BDD clients, many of whom are now asking AI to evaluate their attractiveness or suggest cosmetic improvements.
AI’s perceived neutrality makes it especially risky in this context. ‘A chatbot… doesn’t have anything to gain,’ Pikoos noted, which can lead users to believe its feedback is inherently more objective than that of a human. In reality, these bots often ‘reflect back the person’s experience,’ echoing users’ insecurities rather than offering impartial insight.
This new form of reassurance-seeking plays into common BDD compulsions, with chatbots becoming an inexhaustible source of validation or criticism. As Kitty Newman, Managing Director of the BDD Foundation, notes: ‘We know that individuals with BDD are very vulnerable to harmful use of AI… The high levels of shame with BDD make it easier for sufferers to engage online than in person.’
There are also growing concerns about privacy and future commercial use. As users reveal their insecurities and upload personal photos, questions arise about whether this data could one day be used to target them with advertisements for cosmetic products or procedures.
This overview is a summary of the key points from a longer article originally published by Rolling Stone.