    Strategic Analysis

    CNIL-HAS AI Healthcare Guide: The Regulator's Maximum Standards That Foreshadow the Future for All Regulated Professions

    Analysis of the HAS-CNIL working document open for public consultation until April 16, 2026. 12 fact sheets, 3 requirement levels, and striking parallels with attorney-client privilege.

    March 12, 2026 · 18 min read

    On March 5, 2026, France's CNIL (National Commission on Informatics and Liberty) and HAS (National Authority for Health) launched a public consultation on an unprecedented draft guide titled "Supporting the Proper Use of Artificial Intelligence Systems in Healthcare Settings." This working document, the product of a multidisciplinary working group co-led by both institutions, establishes a demanding and detailed framework for the use of AI in healthcare.

    The initiative comes amid already massive AI deployment in health. According to the French Hospital Federation, 65% of public health facilities already use AI systems in their operations. In response to this rapid adoption, France's data protection regulator signals maximum vigilance over the protection of personal health data.

    A deeper analysis reveals a regulatory stance that extends far beyond healthcare alone. The CNIL's requirements foreshadow the rigorous regulation of AI across all regulated professions handling sensitive data — particularly the legal profession, where professional secrecy constitutes a fundamental pillar comparable to medical confidentiality.


    1. Genesis of the guide: an unprecedented public consultation open until April 16, 2026

    An exemplary institutional collaboration

    The guide results from an unprecedented collaboration between the CNIL and HAS, as part of the integration of AI-specific criteria into the 6th cycle of health facility certification. Healthcare structures must now demonstrate mastery of the AI systems they deploy.

    HAS established a multidisciplinary working group specifically co-led with the CNIL to integrate data protection concerns from the outset. This approach reflects a fundamental shift: data protection is no longer an afterthought but a structural principle that must inform the entire design and deployment of AI systems.

    Two complementary objectives

    The first objective is to clarify the applicable legal and regulatory framework — navigating the overlap between the GDPR, the EU AI Act, and medical device regulations.

    The second objective is to establish best-practice recommendations for deployment that is compliant, ethical, and secure — going beyond legal obligations to offer concrete operational practices drawn from real-world experience.

    Open and inclusive consultation

    The consultation, open from March 5 to April 16, 2026, invites input from public and private health facilities, independent practitioners, patient associations, and AI system providers alike — signaling a commitment to co-constructing the regulatory framework.

    → Regulatory context: The EU AI Act: What Lawyers Need to Know


    2. Structure of the guide: 12 fact sheets covering the full AI lifecycle

    Ten lifecycle-specific fact sheets

    The guide contains ten fact sheets covering the entire journey from AI system acquisition to decommissioning, including procurement, initial deployment, production launch, routine use, continuous monitoring, maintenance, regular evaluation, and replacement.

    Two cross-cutting structural sheets

    The governance sheet recommends that every healthcare structure establish dedicated AI governance, led by senior management and integrated into overall strategy. The primary task: a dynamic inventory of AI systems, updated at least annually.
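    The recommended inventory could be modeled as a simple structured record with an annual-review check. This is a minimal illustrative sketch: the field names and the `AISystemRecord` class are assumptions for illustration, not a schema prescribed by the guide.

```python
from dataclasses import dataclass
from datetime import date, timedelta

@dataclass
class AISystemRecord:
    """Hypothetical inventory entry; fields are illustrative, not from the guide."""
    name: str
    vendor: str
    risk_level: str          # e.g. "high-risk" under the AI Act
    ce_marked: bool
    last_review: date

    def review_overdue(self, max_age_days: int = 365) -> bool:
        """Flag entries not re-examined within the recommended annual cycle."""
        return date.today() - self.last_review > timedelta(days=max_age_days)

# A dynamic inventory is then just a list that governance reviews periodically.
inventory = [
    AISystemRecord("triage-assistant", "ExampleVendor", "high-risk", True, date(2023, 1, 10)),
]
overdue = [r.name for r in inventory if r.review_overdue()]
```

    Even this toy structure makes the guide's point concrete: the inventory must be dynamic, i.e. revisited on a schedule, not a one-off spreadsheet.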

    The generative AI sheet addresses the specifics of LLMs that can create content (text, code, images) and raise unique data protection concerns due to the vast datasets required for their development.

    Article 26 of the AI Act: an on-site duty of vigilance

    A critical legal clarification: high-risk AI systems must obtain a CE marking. However, this CE marking does not exempt deployers from their obligation of continuous local monitoring under Article 26 of the AI Act. Even a certified system requires active oversight by the deploying organization.


    3. Key recommendations: three progressive levels of requirements

    Three maturity levels

    Recommendations are structured across three distinct levels:

    • "Standard" recommendations form the minimum compliance baseline every actor should meet
    • "Advanced" recommendations offer improvement pathways for organizations seeking to exceed minimum requirements
    • "Systematic reflexes" define red lines that must never be crossed

    CSR and organizational re-examination

    The guide recommends a structured CSR approach. Responsible AI deployment goes beyond formal regulatory compliance to include reflection on societal and environmental impacts.

    Introducing an AI system requires re-examining: responsibility (who is accountable for errors?), patient information, result interpretation, and the human-technology balance.

    → Applicable methodology: 8 Steps to a Successful AI Project


    4. The regulator's extreme scrutiny: CNIL positions analyzed

    Maximum vigilance on health data

    The CNIL has supported several healthcare AI projects: decision-support algorithms for ICU admission (GENIALLY), cardiac decompensation prediction (HYDRO 1 and HYDRO 2), and stroke detection (PREDISTROKE and AI-STROKE).

    Mandatory impact assessment

    The CNIL details how to conduct a Data Protection Impact Assessment (DPIA), which is presumed mandatory for providers and deployers of high-risk AI systems. The GDPR's DPIA requirement complements the fundamental rights impact assessment under Article 27 of the AI Act, creating synergy between the two regulatory texts.

    Generative AI recommendations

    The CNIL recommends prioritizing on-premise deployment when the use case involves personal data or sensitive documentation. For cloud solutions, a data processing agreement with the host and AI provider is necessary.

    Coordinated European enforcement

    The CNIL actively coordinates with European peers through the EDPB. Tech giants can no longer play jurisdictions against each other to escape oversight.

    → Applicable checklist: The 12-Point GDPR Checklist Before Adopting Legal AI


    5. Parallels with the legal profession: the same data protection challenges

    Medical and legal professional secrecy: comparable protections

    Medical confidentiality (Article L.1110-4, French Public Health Code) finds its direct equivalent in attorney-client privilege (Article 66-5, Law of December 31, 1971). Both are protected under Article 226-13 of the French Criminal Code: one year imprisonment and a €15,000 fine.

    The French National Bar Council (CNB) published its first practical guide on generative AI in September 2024. The primary principle: never submit data covered by professional secrecy to a generative AI without prior pseudonymization.
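    The CNB's pseudonymization principle can be sketched as a substitution step that runs before any prompt leaves the firm, with a local mapping kept to re-identify the answer. This is a deliberately minimal illustration, not a compliant implementation: real pseudonymization requires entity recognition and human review, not just a substitution table, and the function names here are assumptions.

```python
import re

def pseudonymize(text: str, identifiers: list[str]) -> tuple[str, dict[str, str]]:
    """Replace known client identifiers with neutral placeholders (illustrative only)."""
    mapping = {}
    for i, ident in enumerate(identifiers, start=1):
        placeholder = f"[PARTY_{i}]"
        mapping[placeholder] = ident
        text = re.sub(re.escape(ident), placeholder, text)
    return text, mapping

def reidentify(text: str, mapping: dict[str, str]) -> str:
    """Restore original identifiers in the model's answer, locally."""
    for placeholder, ident in mapping.items():
        text = text.replace(placeholder, ident)
    return text

safe, key = pseudonymize("Dupont SA sues Martin.", ["Dupont SA", "Martin"])
# safe == "[PARTY_1] sues [PARTY_2]."
```

    The essential design point matches the CNB's rule: the mapping never leaves the firm, so the AI provider only ever sees placeholders.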

    The CNIL-CNB partnership renewed in July 2025

    On July 17, 2025, the CNIL and CNB renewed their partnership for joint awareness-raising and training on data protection amid the AI boom.

    "This renewed partnership with the National Bar Council is part of a long-term approach to addressing the concrete challenges posed by digital technology and AI within law firms." — Marie-Laure Denis, CNIL President

    The same risks of breach

    In 2024, a lawyer was suspended for storing client files on Google Drive. The disciplinary board ruled that the lack of European guarantees constituted a direct breach of professional secrecy under the 1971 law.

    According to a CNB survey (2025), 68% of French law firms use AI tools, but 42% fear confidentiality breaches due to insecure data transfers. More concerning: 67% of firms with fewer than 5 lawyers lack the technical skills to evaluate tool reliability.

    The EU AI Act: a common framework for all sectors

    The AI Act timeline applies uniformly:

    Deadline | Obligation
    February 2, 2025 | Prohibition of unacceptable-risk AI practices
    August 2, 2025 | Rules for general-purpose AI models
    August 2, 2026 | All provisions applicable, including administration of justice
    August 2, 2027 | High-risk AI systems in Annex I (medical devices)
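    These milestones can be captured programmatically, for instance to check which obligations already bind an organization on a given date. A small sketch (the function name is an assumption; the dates are the AI Act milestones listed above):

```python
from datetime import date

# AI Act milestones, as listed in the timeline above.
MILESTONES = {
    date(2025, 2, 2): "Prohibition of unacceptable-risk AI practices",
    date(2025, 8, 2): "Rules for general-purpose AI models",
    date(2026, 8, 2): "All provisions applicable, including administration of justice",
    date(2027, 8, 2): "High-risk AI systems in Annex I (medical devices)",
}

def obligations_in_force(on: date) -> list[str]:
    """Return the obligations whose deadline has passed on a given date."""
    return [o for d, o in sorted(MILESTONES.items()) if d <= on]
```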

    → Full analysis: The EU AI Act: What Lawyers Need to Know

    Sovereign solutions

    The CNIL recommends EU-hosted solutions. For legal professionals, key verification points include: ISO 27001 certification, DPO presence, DPIA completion, non-reuse clauses, Privacy by Design and Privacy by Default.
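    For firms vetting several candidate tools, these verification points lend themselves to a simple checklist evaluation. A hedged sketch, where the criterion labels paraphrase the points above and the `assess` helper is purely illustrative:

```python
# Criterion labels paraphrase the verification points above (illustrative).
CRITERIA = [
    "EU hosting",
    "ISO 27001 certification",
    "DPO appointed",
    "DPIA completed",
    "Non-reuse clause for client data",
    "Privacy by Design / by Default",
]

def assess(vendor_answers: dict[str, bool]) -> list[str]:
    """Return the criteria a candidate tool fails to meet (unanswered counts as failed)."""
    return [c for c in CRITERIA if not vendor_answers.get(c, False)]

gaps = assess({"EU hosting": True, "ISO 27001 certification": True})
```

    Treating an unanswered criterion as a failure mirrors the regulator's posture: the burden of demonstrating compliance sits with the vendor, not the firm.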

    → Practical guide: How to Choose Your Legal AI in 5 Steps


    Conclusion: a framework foreshadowing the future for all regulated professions

    The HAS-CNIL guide establishes an exceptionally demanding regulatory framework. Its twelve fact sheets, structured across three recommendation levels, cover the full AI lifecycle. The requirement for dedicated governance, system inventorying, and continuous monitoring signals unprecedented regulatory vigilance.

    The parallel with the legal profession is natural. The renewed CNIL-CNB partnership in July 2025 confirms the convergence of challenges. The progressive enforcement of the AI Act, with its application to the administration of justice from August 2026, confirms that all regulated professions must meet the same rigorous standards.

    The HAS-CNIL guide is not just a healthcare framework. It foreshadows the future of AI regulation for all regulated professions. Legal professionals who anticipate these developments now are best positioned for the future.


    About Gaius: Our team of legal AI trainers supports lawyers and in-house counsel in their digital transformation. Find our analyses and training programs at www.gaius-tech.com and on our LinkedIn page.