Meta’s Muse Spark AI Raises Health Data Privacy and Liability Flags

3 min read · Sources: Wired

Meta’s new Muse Spark AI asks for raw health data and delivers unreliable medical advice.

Why it matters: The widespread use of AI for health data raises significant regulatory risk. Legal teams must track both privacy compliance and potential liability as AI models analyze sensitive medical information.

  • Muse Spark, launched April 2026, requests detailed health metrics, lab results, and personal medical data.
  • Tests revealed the AI dispenses incorrect advice, including unsafe dietary tips and flawed risk assessments.
  • Experts warn of AI hallucinations and privacy risk, given the lack of clarity on data use and retention.
  • Scrutiny is mounting under GDPR and HIPAA, with US and EU regulators actively investigating AI health tools.

Meta debuted Muse Spark in April 2026, positioning the artificial intelligence model as an analyzer of user-submitted health data—including lab results, vital statistics, and specific medical histories. Initial tests by Wired raised alarms: the system delivered inaccurate, potentially unsafe medical advice, such as recommending unsubstantiated dietary changes and failing to flag warning signs for serious conditions.

  • Meta says data entry is voluntary and requires explicit user consent, but privacy experts highlight the opaque handling of sensitive information and potential for secondary use or data sharing without sufficient safeguards.
  • GDPR Article 9 and HIPAA both set high bars for processing health data. Lawyers are questioning whether Muse Spark’s consent mechanisms and data management meet these standards—especially as health data is far more strictly regulated than typical consumer information.
  • Dr. John Torous, Director of Digital Psychiatry at Beth Israel Deaconess Medical Center, stresses the risk: “AI models can hallucinate or generate plausible-sounding nonsense, which is dangerous when people’s health is at stake.”
  • According to Caitlin Frazier, privacy attorney: “There are serious questions about where that information goes and how it’s protected.”

US and EU regulators have stepped up oversight of AI health tools. The FTC pursued at least two enforcement cases involving misused health data in 2025, and scrutiny is increasing for new models like Muse Spark. Dr. Karen DeSalvo, former National Coordinator for Health IT, notes: “It’s not appropriate for these systems to provide medical advice. They’re simply not qualified.”

For legal teams, the key concern is cross-jurisdictional compliance. Details on data retention, sharing, and legal exposure remain unclear—and any processing misstep could trigger significant liability or regulatory action.

By the numbers:

  • April 2026 — Muse Spark launched
  • 2 — FTC enforcement actions in 2025 involving misused health data
  • GDPR Article 9 — Prohibits processing health data unless an exception applies, such as explicit consent

Yes, but: Details on Meta’s internal data retention and use practices for Muse Spark remain undisclosed, and no enforcement action has yet been brought directly against the product.

What’s next: Legal and compliance teams should monitor emerging investigations and prepare for potential cross-border enforcement tied to AI health data use.