Mind has launched a major inquiry into AI and mental health in response to a Guardian investigation that exposed how Google’s AI Overviews occasionally delivered “very dangerous” medical guidance. Through a year-long commission, the mental health charity, which operates across England and Wales, will assess the risks, safeguards, and regulatory needs as artificial intelligence becomes more entwined with the lives of millions worldwide who experience mental health challenges.
This groundbreaking inquiry, the first of its kind on a global scale, will bring together leading physicians and mental health professionals, alongside people with lived experience, healthcare providers, policymakers, and technology companies. Mind aims to help shape a safer digital mental health landscape by advocating for robust regulation, clear standards, and effective safeguards.
The initiative follows the Guardian’s findings that AI Overviews could present false or misleading health information to users. These AI-generated summaries appear above traditional search results and reach around 2 billion people each month on the world’s most visited site.
In response to the Guardian’s reporting, Google removed AI Overviews for certain medical searches, though not universally. Dr. Sarah Hughes, Mind’s chief executive, warned that dangerously incorrect mental health advice continued to reach the public and, in some cases, could put lives at risk.
Hughes emphasized that AI holds tremendous potential to improve the lives of those coping with mental health issues, expand access to support, and bolster public services. However, this potential will only be realized if AI is developed and deployed with safeguards that match the level of risk involved.
“The issues highlighted by the Guardian’s report are among the reasons we’re launching Mind’s AI and mental health commission,” she explained. “We want to scrutinize the risks, opportunities, and safeguards needed as AI becomes more deeply woven into daily life. Our goal is to ensure innovation does not come at the expense of wellbeing, and to place people with lived mental health experience at the heart of shaping the future of digital support.”
Google has defended its AI Overviews, describing them as “helpful” and “reliable,” noting that they provide concise snapshots of essential information. Yet the Guardian’s investigation found instances where AI Overviews delivered inaccurate health information, covering topics from cancer and liver disease to women’s health and mental health conditions.
Experts highlighted that certain AI Overviews for conditions such as psychosis and eating disorders offered “very dangerous advice” and could be incorrect, harmful, or discourage people from seeking professional help.
The Guardian also reported that Google downplayed safety warnings about AI-generated medical guidance, raising additional concerns about user safety.
Hughes stressed that vulnerable individuals were receiving “dangerously incorrect guidance on mental health,” including advice that could deter people from seeking treatment or reinforce stigma, with the gravest outcomes potentially endangering lives. She called for information that is safe, accurate, and evidence-based, not untested technology dressed up as certainty.
The commission, slated to run for a year, will gather evidence on how AI interacts with mental health and create an open space in which the experiences of people with mental health conditions are seen, recorded, and understood.
Rosie Weatherley, Mind’s information content manager, noted that while searching for mental health information online wasn’t flawless before AI Overviews, it generally pointed users toward credible health sites and helpful next steps, including nuanced content, lived experiences, case studies, quotes, social context, and ongoing support options.
AI Overviews tended to replace that richness with a clinically worded, definitive-sounding summary that offers quick clarity but obscures where the information comes from and how far it can be trusted. It is a persuasive trade-off, but not a responsible one, according to Weatherley.
A Google spokesperson reiterated that the company invests heavily in the quality of AI Overviews, especially on health topics, and that most results are accurate. They also said that for queries indicating potential user distress, the system attempts to surface relevant local crisis resources. They cautioned that they cannot comment on the specific examples cited by the Guardian without reviewing them.