Stanford Law Warns: AI Can't 'Forget' Genomic Data, Raising Privacy Stakes

Sources: Stanford Law

Stanford Law reports AI is rendering the right to be forgotten unenforceable for genomic data.

Why it matters: Legal and technology leaders face heightened privacy risks because individuals' information cannot easily be removed from AI models trained on unique genomic data, posing compliance and ethical challenges. As healthcare AI expands, gaps in regulatory protections could expose organizations to liability and erode consumer trust.

  • Genomic data is nearly impossible to fully anonymize, making individuals easily identifiable.
  • AI models embed data into complex weights, hampering efforts to delete specific genomic information.
  • Machine unlearning techniques exist but remain limited and can degrade AI performance.
  • US privacy laws like HIPAA and GINA lack meaningful 'right to be forgotten' provisions for health or genomic data.

The 'right to be forgotten' (RTBF) gives individuals the ability to request deletion of their personal data, a right enshrined in the EU's General Data Protection Regulation (GDPR) and similar privacy laws. But as Stanford Law details, applying this right to genomic data used in artificial intelligence systems is nearly impossible in practice.

  • Genomic data is uniquely identifying. Research has shown that 99.98% of Americans can be re-identified from as few as 15 demographic attributes, making true anonymization difficult (see the uniqueness sketch after this list).
  • AI systems don't just store data; they learn from it, encoding patterns from training records into model weights. After training, it's exceedingly difficult to trace and remove a given individual's genomic data without affecting the entire model, according to industry analysis.
  • Even advanced approaches, like "machine unlearning," are still experimental and often degrade AI performance.
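
To see why so few attributes suffice, consider a toy uniqueness check. The sketch below is a minimal, hypothetical Python example on synthetic data (the column names and distributions are invented for illustration, not drawn from Stanford Law's analysis): it counts what share of records are the only one with a given combination of quasi-identifiers, the mechanism behind the 99.98% figure.

```python
# Toy re-identification check: what fraction of records is uniquely pinned
# down by a given combination of quasi-identifier columns?
# Synthetic data; column names are hypothetical, for illustration only.
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)
n = 10_000
df = pd.DataFrame({
    "zip3": rng.integers(100, 999, n),             # 3-digit ZIP prefix
    "birth_year": rng.integers(1940, 2010, n),
    "sex": rng.choice(["F", "M"], n),
    "blood_type": rng.choice(["O", "A", "B", "AB"], n),
})

def unique_fraction(frame: pd.DataFrame, cols: list[str]) -> float:
    """Share of records that are the only one with their value combination."""
    sizes = frame.groupby(cols).size()   # records per attribute combination
    return sizes.eq(1).sum() / len(frame)

for k in range(1, len(df.columns) + 1):
    cols = list(df.columns[:k])
    print(f"{k} attributes {cols}: {unique_fraction(df, cols):.1%} unique")
```

Each added attribute multiplies the number of possible combinations, so uniqueness climbs quickly; real demographic data is far more skewed than this uniform toy, which is why 15 attributes are enough in practice.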

Current US laws offer limited recourse: HIPAA focuses on confidentiality and GINA bars genetic discrimination, but neither provides a mechanism for data subjects to demand deletion. This gap leaves organizations exposed as regulatory scrutiny mounts and patient trust wavers alongside AI adoption in healthcare.

As Stanford Law puts it: “The right to be forgotten is becoming increasingly difficult to enforce in practice for genomic data trained on Artificial Intelligence (AI) models.”

Legal, privacy, and compliance teams must confront this challenge as the EU's AI Act and evolving domestic regulations bring data governance to the forefront.

By the numbers:

  • 68% — Share of global consumers concerned about online privacy.
  • 57% — Proportion agreeing that AI is a threat to their data.
  • 99.98% — Americans who can be identified with just 15 demographic variables.

Yes, but: Machine unlearning promises targeted data removal, yet current techniques remain immature and risk lowering model quality; one representative approach is sketched below.
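
For flavor, one published family of techniques, shard-based "exact" unlearning in the spirit of SISA (Bourtoule et al., 2021), trains an ensemble over disjoint data shards so that deleting a record only requires retraining the shard that held it. The minimal sketch below uses synthetic stand-in data and scikit-learn; it is an illustrative assumption, not the method referenced by Stanford Law.

```python
# Shard-based "exact" unlearning sketch (in the spirit of SISA,
# Bourtoule et al. 2021). Synthetic stand-in data, not real genomes.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X = rng.normal(size=(3000, 20))              # stand-in feature matrix
y = (X[:, 0] + X[:, 1] > 0).astype(int)      # stand-in labels

N_SHARDS = 5
shards = list(np.array_split(np.arange(len(X)), N_SHARDS))  # disjoint index shards
models = [LogisticRegression(max_iter=1000).fit(X[idx], y[idx]) for idx in shards]

def predict(X_query: np.ndarray) -> np.ndarray:
    """Majority vote over the shard models."""
    votes = np.stack([m.predict(X_query) for m in models])
    return (votes.mean(axis=0) >= 0.5).astype(int)

def unlearn(record_id: int) -> None:
    """Remove one record's influence by retraining only its shard."""
    for s, idx in enumerate(shards):
        if record_id in idx:
            shards[s] = idx[idx != record_id]
            models[s] = LogisticRegression(max_iter=1000).fit(
                X[shards[s]], y[shards[s]]
            )
            return

unlearn(42)  # record 42 no longer influences any trained parameters
print(predict(X[:5]))
```

The trade-off is visible in the design: more, smaller shards make deletion cheap, but each sub-model sees less data, which is exactly the quality risk the article flags.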

What's next: Growing AI adoption in healthcare will sharpen regulatory focus on genomic data privacy and could prompt legislative updates.