
Google is leaning further into mental health, but not by hiring more therapists or expanding access to real-world care. Instead, it’s banking on artificial intelligence.
Today, the search giant announced two initiatives focused on using AI to support mental health treatment. The idea, according to Google, is to help more people receive support, with a particular focus on low- and middle-income countries where care is often lacking. But as with most AI announcements lately, this raises more questions than it answers.
The first effort is a so-called “field guide” for mental health organizations. Developed alongside Grand Challenges Canada and the McKinsey Health Institute, the guide is meant to help groups scale up evidence-based interventions using AI. That includes everything from personalizing support and training clinicians to improving data collection and streamlining workflows.
The second project is more research-heavy. Google for Health and DeepMind are teaming up with the UK-based Wellcome Trust on a multi-year AI research program. The focus? Finding better ways to measure and treat anxiety, depression, and psychosis using machine learning. Google even hints the work could yield “novel medications.”
On paper, it sounds helpful. Hundreds of millions of people worldwide live with untreated mental health conditions. But critics will likely note the elephant in the room: technology firms aren’t mental health providers. And some might argue that this approach prioritizes scalable software over the messy, expensive reality of human care.
It’s hard to ignore who’s driving this push. McKinsey, a consulting firm with a history of recommending cost-cutting and optimization strategies, isn’t exactly known for compassionate care. DeepMind, meanwhile, is more famous for mastering Go and folding proteins than for understanding the complexities of trauma, mood disorders, or psychotic episodes.
There are also privacy concerns. Personalizing mental health support requires collecting sensitive data. Even with the best intentions, involving companies with large advertising businesses or past data scandals can make patients uneasy. Would you trust an algorithm trained by Google to manage your depression or evaluate your psychosis?
Some experts may welcome these tools as helpful supplements, especially in places with few resources. But there’s a growing unease that AI is being positioned as a substitute, not a support system. When the cost of therapy is high and providers are scarce, AI looks like a tempting shortcut for policymakers and insurers.
It’s also unclear how AI will handle nuance. Mental illness isn’t a spreadsheet. No model, no matter how advanced, can fully understand a person’s culture, environment, or inner life. Chatbots trained on therapy scripts might sound supportive, but that doesn’t mean they actually help.
If there’s any silver lining, it’s that these initiatives include external partners like the Wellcome Trust. That could introduce some much-needed oversight. But users, especially those in vulnerable mental states, deserve more than vague promises and glossy presentations.
In the rush to put AI into everything, it’s worth asking: are we actually improving care, or just repackaging it as code?