Google Scraps AI Health Feature That Crowdsourced Amateur Medical Advice
In a quiet but significant move, Google has discontinued a controversial artificial intelligence (AI) search feature that provided users with crowdsourced medical advice from strangers around the world. The feature, called “What People Suggest,” was designed to show insights and tips from people who claimed to have similar health conditions. Growing concerns about misinformation and potential risks to users’ health, however, had placed the feature under increasing scrutiny amid a broader debate over AI-generated health content.
Although the company says the removal is part of a broader redesign of its search interface, the decision comes at a time when technology companies are facing mounting criticism over how artificial intelligence is used to provide medical information to the public.
The Vision Behind “What People Suggest”
Google introduced the “What People Suggest” feature in March last year during its health-focused event, “The Check Up,” held in New York. The goal was to use artificial intelligence to collect and organize advice and experiences shared by people online who were dealing with similar medical issues.
For example, someone searching for information about arthritis might see summarized insights from individuals discussing how they exercise or manage pain with the condition. The AI system would analyze online discussions, categorize them into themes, and present them in a simplified format within search results.
At the time of the launch, Karen DeSalvo, who served as Google’s chief health officer, explained the reasoning behind the feature. She said that while people rely on expert medical sources, they also value hearing about real-life experiences from others facing similar health challenges.
The feature was initially introduced on mobile devices in the United States and was promoted as an example of how artificial intelligence could help transform healthcare information accessibility worldwide.
Concerns Over Medical Misinformation
Despite the ambitious vision behind the feature, critics quickly raised concerns about the risks associated with presenting crowdsourced medical advice through AI systems. Unlike professional medical guidance, information shared by individuals online may not be verified, scientifically accurate, or safe for others to follow.
Health experts have long warned that misleading medical advice circulating on social media and online forums can lead people to make harmful decisions about their health. When such information is amplified or summarized by AI tools, it can appear more credible than it actually is.
The removal of “What People Suggest” comes as Google is already under scrutiny for another AI feature known as Google AI Overviews. This system generates quick summaries of information directly within search results using artificial intelligence.
According to reports, these AI-generated summaries are shown to approximately 2 billion users every month, making them one of the most widely distributed AI-generated information tools in the world.
Investigation Raises Alarm
Earlier this year, an investigation by The Guardian revealed that some AI-generated health summaries displayed in Google’s search results contained false or misleading medical advice. Independent health experts warned that such information could potentially put users at risk.
In response to these findings, Google initially defended its system, stating that the AI summaries linked to reliable sources and encouraged users to seek professional medical advice.
However, just days later, the company removed AI-generated summaries for certain medical search queries, although the feature remains active for many others.
The controversy intensified the debate about the role of artificial intelligence in delivering medical information to the public. Experts argue that even small inaccuracies in health advice can have serious consequences if users rely on them for decision-making.
Google’s Explanation for the Removal
A spokesperson for Google confirmed that the “What People Suggest” feature has been discontinued. However, the company stated that the decision was part of a broader effort to simplify the design and functionality of its search page.
According to Google, the removal was not related to safety concerns or quality issues with the feature itself.
Despite this explanation, insiders familiar with the decision have reportedly described the feature as effectively “dead,” suggesting the company has no immediate plans to revive it.
The quiet removal reflects a growing awareness within the technology industry that integrating artificial intelligence into sensitive areas such as healthcare requires extreme caution.
The Growing Debate Over AI in Healthcare Information
The controversy highlights a broader challenge facing major technology companies. Artificial intelligence has the potential to transform how people access health information by summarizing complex research, identifying patterns in medical data, and helping individuals better understand symptoms or treatments.
However, the same technology can also amplify inaccurate information if it relies on unreliable sources or poorly moderated online discussions.
Healthcare professionals emphasize that medical guidance should come from trained experts rather than anonymous internet users. Even personal experiences shared with good intentions may not apply safely to others with the same condition.
As AI becomes more integrated into everyday tools like search engines, companies must carefully balance innovation with responsibility.
A Cautious Future for AI-Powered Health Advice
Google’s decision to remove “What People Suggest” reflects the growing scrutiny tech companies face when deploying artificial intelligence in sensitive areas. While AI promises to make information more accessible and personalized, its use in health-related contexts remains controversial.
The episode serves as a reminder that technology alone cannot replace professional medical expertise. For millions of users who turn to search engines for health advice, accuracy and reliability remain essential.
As artificial intelligence continues to evolve, companies like Google will likely face increasing pressure to ensure that AI-powered tools prioritize safety, transparency, and trust—especially when people’s health is involved.