The arc of Google’s “What People Suggest” feature, from high-profile launch to unannounced removal, encapsulates the challenges of deploying AI in health search. The tool, which used AI to organize community health advice from online discussions, has been confirmed as discontinued by Google and by three informed insiders. The circumstances of its removal have prompted questions about accountability.
Launched at Google’s “The Check Up” health event in New York, the feature was introduced with genuine enthusiasm by then-chief health officer Karen DeSalvo. She described it as a meaningful way to connect health seekers with real-world insights from people managing similar conditions. The AI sorted online discussions into thematic health summaries for easy consumption.
Google’s stated rationale for the removal, simplifying the search page with no safety concerns involved, was undercut when the company pointed to a blog post as its public notice of the change: the post made no reference to the feature. The inconsistency has drawn criticism from health and digital rights commentators.
The removal comes amid an ongoing controversy over Google’s AI-generated health content. An investigation found that AI Overviews were serving false medical information to billions of users each month. While Google partially addressed the problem by pulling some medical AI Overviews, inaccurate health AI on the platform remains a broader, unresolved issue.
Google’s next health event is expected to feature optimistic announcements about AI and global health. The company’s credibility in this space, however, will be difficult to maintain unless it can demonstrate that it handles both successes and failures with equal transparency. The story of “What People Suggest” is a missed opportunity to demonstrate that capacity.
From Fanfare to Silence: Google’s AI Medical Peer Advice Feature Has Been Dropped