    Strategic Analysis

    European Commission's AI Code of Practice: How Content Marking Rules Will Transform Obligations for Regulated Professions

    Analysis of the second draft Code of Practice (March 2026). Provider and deployer obligations, standardized AI icon, and the editorial exception for lawyers.

    March 12, 2026 · 16 min read

    Update of March 11, 2026. The European Commission has published the second draft of its Code of Practice on the transparency of AI-generated content, a crucial technical document that specifies how Article 50 of the EU AI Act will be implemented in practice. This 36-page text, developed by two expert working groups, establishes the operational standards that providers and deployers of generative AI systems must meet before the regulation takes effect on August 2, 2026.


    A two-tier regulatory architecture

    The Code of Practice adopts a differentiated approach based on actors' positions in the AI value chain.

    Section 1: Obligations for providers of generative AI systems

    The first section addresses providers of AI systems capable of generating audio, image, video, or text content. These technical obligations ensure that system outputs are marked in a machine-readable format and detectable as artificially generated or manipulated.

    The mandatory multi-layer approach

    The Code requires a multi-level marking strategy, recognizing that no single technique currently satisfies all four requirements of Article 50(2) — effectiveness, interoperability, robustness, and reliability. Providers must implement at minimum:

    1. Digitally signed metadata: cryptographically secured information indicating AI origin, including an interoperable identifier and access to detection tools
    2. Imperceptible watermark: marker directly embedded in the content (except very short texts), serving as a robust backup to metadata
    3. Digital fingerprinting or logging (optional): additional verification mechanism with strict privacy and GDPR compliance guarantees

    Measurable quality requirements

    The Code establishes precise technical criteria:

    • Reliability: low false positive and negative rates across content of varying lengths and entropies
    • Robustness: resistance to compression, cropping, paraphrasing, and adversarial attacks
    • Interoperability: open standards, public watermark encoding, shared European repository
    • Effectiveness: detection mechanisms accessible to authorities and fact-checking organizations

    Accessible detection mechanisms

    Providers must make free interfaces (API or web tool) available to deployers, end users, and legitimate third parties to verify AI-generated content. These tools must be hosted in the EU and GDPR-compliant.
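    The Code does not prescribe a specific API shape, so the following sketch only illustrates the kind of request/response exchange such a free verification interface might involve. The endpoint URL and JSON fields are assumptions for illustration:

    ```python
    import json
    from urllib import request

    # Hypothetical verification endpoint; the Code requires only that providers
    # expose *some* free API or web tool, hosted in the EU and GDPR-compliant.
    VERIFY_URL = "https://provider.example.eu/api/v1/verify"

    def build_request(content_sha256: str) -> request.Request:
        """Build a verification query for a piece of content, by hash."""
        body = json.dumps({"content_sha256": content_sha256}).encode()
        return request.Request(
            VERIFY_URL, data=body,
            headers={"Content-Type": "application/json"},
        )

    def parse_response(raw: bytes) -> bool:
        """Return True if the provider reports the content as AI-generated."""
        return bool(json.loads(raw).get("ai_generated", False))
    ```

    Querying by content hash rather than uploading the content itself is one way a provider could keep such a tool GDPR-friendly, since no personal data needs to transit the service.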


    Section 2: Obligations for deployers of AI systems

    The second section addresses deployers using AI systems to generate or manipulate deepfakes or text published on matters of public interest.

    The standardized "AI" icon proposal

    The Code proposes development of a uniform European icon displaying "AI" in capital letters, potentially supplemented by short text ("Generated with AI", "Made by AI", "Manipulated with AI"). Mandatory design characteristics:

    • Letters of equal vertical dimension
    • Minimum contrast ratio of 4.5:1 with the background
    • Proportional scaling when resized
    • Clarity and distinguishability for all, including vulnerable users
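    The 4.5:1 minimum matches the WCAG 2.x contrast ratio for normal text, which is computed from the relative luminance of the two sRGB colors. A minimal implementation of that standard formula, applied to the icon requirement:

    ```python
    def _luminance(rgb: tuple) -> float:
        """WCAG 2.x relative luminance of an sRGB color (0-255 channels)."""
        def channel(c: int) -> float:
            c = c / 255
            return c / 12.92 if c <= 0.03928 else ((c + 0.055) / 1.055) ** 2.4
        r, g, b = (channel(c) for c in rgb)
        return 0.2126 * r + 0.7152 * g + 0.0722 * b

    def contrast_ratio(fg: tuple, bg: tuple) -> float:
        """Contrast ratio between two colors, from 1:1 up to 21:1."""
        lighter, darker = sorted((_luminance(fg), _luminance(bg)), reverse=True)
        return (lighter + 0.05) / (darker + 0.05)

    def icon_contrast_ok(fg: tuple, bg: tuple) -> bool:
        """Check the Code's proposed 4.5:1 minimum for the AI icon."""
        return contrast_ratio(fg, bg) >= 4.5
    ```

    Black on white yields the maximum ratio of 21:1, while a light gray icon on a white background fails the 4.5:1 threshold, which is exactly the kind of low-visibility marking the requirement is designed to rule out.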

    The Code also envisions an optional "second layer" of interactivity: hovering or clicking the icon would reveal detailed information about what was AI-generated or manipulated.

    Placement requirements by modality

    • Real-time video: icon displayed consistently throughout exposure, or disclaimer at the start and at regular intervals
    • Non-real-time video: icon at the start, repeated at regular intervals (long videos) or consistently (short videos)
    • Image: icon placed from first exposure, clearly distinguishable from the image itself
    • Audio: audible natural-language disclaimer at the beginning (short content) or at beginning, intermediate points, and end (long formats)
    • Text: icon in a consistent position (above, near the title, or in the colophon)

    Proportionate regime for artistic works

    For deepfake content that is part of an "obviously artistic, creative, satirical or fictional" work, disclosure applies in a manner that "does not impede the display or enjoyment of the work." The Code permits placement in corners during credits or contextual disclosure.


    Impact on regulated professions

    Legal professions: the editorial exception

    For lawyers and legal professionals using generative AI tools, the disclosure obligation applies to documents intended for publication or dissemination. However, the editorial exception under Commitment 4 allows exemption from labeling by demonstrating:

    • Effective human review of the generated content
    • Exercise of editorial control over the publication
    • Clear identification of a person assuming editorial responsibility

    This exception preserves the traditional model of professional liability while recognizing AI's role as an assistance tool.
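    Firms wishing to rely on the exception will need to evidence those three conditions internally. A minimal sketch of what such a compliance record could look like; the class name, fields, and check are illustrative assumptions, not a structure mandated by the Code:

    ```python
    from dataclasses import dataclass
    from datetime import date

    @dataclass
    class EditorialReviewRecord:
        """Hypothetical internal record evidencing the Commitment 4
        editorial exception for an AI-assisted publication."""
        document_id: str
        reviewed_by: str          # person who performed the human review
        responsible_editor: str   # person assuming editorial responsibility
        review_date: date
        human_review_complete: bool

        def qualifies_for_exception(self) -> bool:
            # All three conditions from the draft Code must be evidenced:
            # effective human review, editorial control, identified editor.
            return (self.human_review_complete
                    and bool(self.reviewed_by)
                    and bool(self.responsible_editor))
    ```

    Keeping such records per document also anticipates the documentation and compliance processes the Code's exemptions will require of regulated professions.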

    → Learn more: CNB Guide on Generative AI for Lawyers

    Medical professions

    Letters, reports, or health recommendations generated by AI must be clearly identified as such, unless subject to full medical review and the practitioner assumes professional responsibility. AI-manipulated medical images for public presentation are also covered.

    Media and information professions

    The Code guarantees that marking and detection "must in no case affect media freedom, editorial independence, and the protection of journalistic sources." AI-assisted content that has been reviewed and validated by the editorial team is not subject to labeling.

    → Complementary analysis: The EU AI Act: What Lawyers Need to Know


    Timeline and finalization process

    The Code follows a tight schedule imposed by the AI Act's entry into force:

    • March 30, 2026 (10:00 PM CET): deadline for stakeholder feedback via EUSurvey
    • March 2026: stakeholder consultation meetings
    • Before August 2, 2026: publication of the final Code

    The Commission specifically seeks feedback on technical implementation considerations, terminological definitions, and the development of the EU icon and interactive second layer. A task force will be established post-publication to finalize the standardized European icon.


    Analysis: technical ambition meets operational pragmatism

    Strengths

    The Code stands out for its technical granularity (contrast ratios, error rates, attack types to test), differentiated proportionality (SMEs, artistic works, media), its future-proofing approach allowing alternative techniques, and its multi-stakeholder process incorporating hundreds of participants' feedback.

    Persistent challenges

    The implementation complexity of the multi-layer approach imposes significant technical costs, particularly for small providers. Without mature technical standards, risks of fragmentation and heterogeneous implementations reducing interoperability remain. The definition of "obviously artistic work" remains vague, and regulated professions will need new documentation and compliance processes to benefit from exemptions.


    Conclusion: toward algorithmic transparency "by design"

    This second draft marks a decisive step in realizing Europe's ambition for trustworthy AI. By requiring technical transparency from system design and clear disclosure at the point of public exposure, the EU aims to preserve informational ecosystem integrity.

    For regulated professions, this Code fundamentally transforms transparency obligations: where professional ethics historically required responsibility for content produced, EU regulation now adds an obligation to trace the tool used to produce it. This evolution will require adapting internal processes, training teams, and potentially revising client service contracts.

    The final Code's publication, expected before the August 2, 2026 application date, will deliver the definitive signal on Europe's position regarding this major AI governance challenge.


    About Gaius: Our team of legal AI trainers supports lawyers and legal professionals in their digital transformation. Find our analyses and training programs at www.gaius-tech.com and on our LinkedIn page.