The legal text, decoded

Article 50 explained

A plain-language breakdown of every sub-article in the EU AI Act's transparency chapter — what each provision requires, who it applies to, and what "robust, durable, machine-readable marking" actually means.

Overview

What Article 50 covers

Article 50 of Regulation (EU) 2024/1689, the EU AI Act, imposes transparency obligations on providers and deployers of AI systems. It is part of the Act's broader goal of ensuring that AI-generated content is clearly identifiable. The article comprises seven paragraphs; the first five carry the substantive transparency obligations, while the final two address the interplay with other requirements and codes of practice. Enforcement begins on 2 August 2026, with fines of up to €15 million or 3% of worldwide annual turnover, whichever is higher.

Sub-article by sub-article

The five pillars of Article 50

Article 50(1)

Human-AI interaction disclosure

What it says: Providers must ensure that AI systems intended to interact directly with natural persons are designed and developed so that those persons are informed they are interacting with an AI system, unless this is obvious to a reasonably well-informed, observant and circumspect person given the circumstances and context of use.

Who it affects: Chatbot providers, virtual assistant developers, AI customer service platforms — any system where a user might mistake the AI for a human.

Technical implication: This is primarily a UI/UX obligation (e.g. displaying "You are chatting with an AI"), not a content-marking requirement. Capture does not address this sub-article directly, as it is a product-design concern.

Article 50(2) — core obligation

Machine-readable marking of synthetic content

What it says: Providers of AI systems that generate synthetic audio, image, video, or text content must ensure such content is marked in a machine-readable format and is detectable as artificially generated or manipulated. The technical solutions must be effective, interoperable, robust, and reliable, as far as technically feasible.

Who it affects: Every generative AI provider — image generators, video synthesis tools, large language models generating public-facing text, AI music generators, and voice synthesis platforms.

Technical implication: This is the provision that requires embedded, machine-readable provenance metadata. C2PA Content Credentials are the most widely adopted open standard for meeting the "interoperable, robust" requirement. The draft Code of Practice further specifies that a multi-layered approach is necessary.

Capture's answer: C2PA credentials embedded at generation time (Layer 1), plus ERC-7053 on-chain registration (Layer 2). One API call, two layers, full compliance.
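
A minimal sketch of what this two-layer flow can look like in code. Everything here is illustrative: the manifest shape follows C2PA's published assertion labels and the IPTC digital source type vocabulary, but the registration endpoint, payload fields, and the acme-imagegen identifier are hypothetical stand-ins, not Capture's actual API.

    // Illustrative two-layer marking flow. The manifest structure follows
    // C2PA's c2pa.actions assertion; the registration endpoint and payload
    // are hypothetical, not Capture's actual API.
    import { createHash } from "node:crypto";
    import { readFile } from "node:fs/promises";

    async function markAndRegister(filePath: string): Promise<void> {
      const bytes = await readFile(filePath);

      // Layer 1: the C2PA manifest to embed at generation time
      // (in practice, via a C2PA signing SDK or the generator itself).
      const manifest = {
        claim_generator: "acme-imagegen/2.1", // hypothetical AI system ID
        assertions: [
          {
            label: "c2pa.actions",
            data: {
              actions: [
                {
                  action: "c2pa.created",
                  digitalSourceType:
                    "http://cv.iptc.org/newscodes/digitalsourcetype/trainedAlgorithmicMedia",
                },
              ],
            },
          },
        ],
      };

      // Layer 2: register the content hash on-chain (ERC-7053-style commit)
      // so provenance survives even if the embedded metadata is stripped.
      const contentHash = createHash("sha256").update(bytes).digest("hex");

      await fetch("https://api.example-provenance.io/v1/commits", {
        method: "POST",
        headers: { "content-type": "application/json" },
        body: JSON.stringify({ assetHash: contentHash, manifest }),
      });
    }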

Article 50(3)

Emotion recognition and biometric categorisation

What it says: Deployers of emotion recognition systems or biometric categorisation systems must inform natural persons exposed to such systems of their operation, and must process personal data in accordance with the GDPR.

Who it affects: Companies deploying facial sentiment analysis, emotion-based advertising targeting, or biometric categorisation in public or workplace settings.

Technical implication: This is a consent-and-disclosure obligation. It requires informing subjects and processing data lawfully — not content marking. Capture does not address this sub-article, as it pertains to data processing practices rather than content provenance.

Article 50(4) — deepfakes

Deepfake labelling obligation

What it says: Deployers of AI systems that generate or manipulate image, audio, or video content constituting a "deep fake" must disclose that the content has been artificially generated or manipulated. This obligation extends to any entity that makes such content publicly available, though for evidently artistic, creative, satirical, or fictional works the disclosure may be limited so that it does not hamper display or enjoyment of the work.

Who it affects: Media organisations, social media platforms, content creators, marketing agencies — anyone who produces or distributes realistic synthetic media depicting real people or events.

Technical implication: The deepfake label must be both human-visible and machine-readable. A C2PA c2pa.actions assertion carrying a c2pa.created or c2pa.edited action, combined with claim generator information identifying the AI tool, satisfies the machine-readable element.

Capture's answer: The C2PA claim generator field identifies the AI system. The ERC-7053 on-chain attestation provides a tamper-evident public record. Together, they prove both the synthetic origin and the chain of custody.
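
As a sketch of the verifier side of 50(4), a platform can derive the human-visible label from the machine-readable layer. The readC2paManifest helper below is a hypothetical stand-in for a real C2PA reader SDK; the assertion labels and action names are C2PA's own.

    // Decide whether content needs an "AI-generated" badge by inspecting
    // the C2PA actions assertion. `readC2paManifest` is a hypothetical
    // stand-in for a real C2PA reader SDK.
    type C2paAction = { action: string; digitalSourceType?: string };
    type C2paManifest = {
      claimGenerator: string;
      assertions: { label: string; data: { actions?: C2paAction[] } }[];
    };

    declare function readC2paManifest(bytes: Uint8Array): C2paManifest | null;

    function needsDeepfakeLabel(bytes: Uint8Array): boolean {
      const manifest = readC2paManifest(bytes);
      if (!manifest) return false; // no embedded provenance present

      const actions = manifest.assertions
        .filter((a) => a.label === "c2pa.actions")
        .flatMap((a) => a.data.actions ?? []);

      // c2pa.created with a trained-algorithmic-media source type, or a
      // c2pa.edited action, signals generated or manipulated content.
      return actions.some(
        (a) =>
          (a.action === "c2pa.created" &&
            a.digitalSourceType?.includes("trainedAlgorithmicMedia")) ||
          a.action === "c2pa.edited"
      );
    }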

Article 50(5) — durability

Robust, durable, machine-readable format

What it says: The information referred to in paragraphs 1 to 4 must be provided in a clear and distinguishable manner, at the latest at the time of first interaction or exposure, and must conform to applicable accessibility requirements.

Who it affects: All entities covered by 50(1)–50(4). This sub-article sets the standard for how and when disclosures must reach the people concerned.

Technical implication: Read together with 50(2)'s requirement that technical solutions be "robust and reliable", this is where the durability bar is set. The draft Code of Practice interprets the combined requirements as demanding marking that survives common transformations: screenshots, social media re-uploads, format conversions, and metadata stripping.

Capture's answer: C2PA metadata is the in-file layer. When it is stripped, the ERC-7053 content hash on the Numbers Mainnet remains discoverable — any verifier can recover the full provenance chain from the file's fingerprint alone. This is the durability the regulation demands.
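
A sketch of that hash-based recovery path, assuming an ethers.js client and an ERC-7053-style commit registry on an EVM chain. The event signature, registry address, and RPC parameters are illustrative, not the Numbers Mainnet's actual interface.

    // Recover provenance from nothing but the file bytes: hash the file,
    // then query the on-chain commit log. The event follows the general
    // shape of an ERC-7053 commit registry; exact names are illustrative.
    import { createHash } from "node:crypto";
    import { readFile } from "node:fs/promises";
    import { Contract, JsonRpcProvider, id } from "ethers";

    const COMMIT_ABI = [
      "event Commit(address indexed recordOperator, string indexed assetCid, string commitData)",
    ];

    async function lookupCommits(
      filePath: string,
      rpcUrl: string,
      registryAddress: string
    ) {
      const bytes = await readFile(filePath);
      const assetId = createHash("sha256").update(bytes).digest("hex");

      const provider = new JsonRpcProvider(rpcUrl);
      const registry = new Contract(registryAddress, COMMIT_ABI, provider);

      // Indexed string topics are stored as keccak256 hashes, so the
      // filter takes the hash of the content identifier.
      return registry.queryFilter(registry.filters.Commit(null, id(assetId)));
    }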

The multi-layer mandate

Why single-layer marking fails

The draft Code of Practice released by the European Commission in January 2026 explicitly states: "no single active marking technique suffices to meet the requirements of robustness and reliability." This has major implications for compliance strategies.

Marking techniques and their limitations
Technique           | Survives screenshot? | Survives re-upload? | Machine-readable? | Sufficient alone?
Visible watermark   | Partially            | Partially           | No                | No
Invisible watermark | No                   | Often stripped      | Yes               | No
C2PA metadata       | Stripped             | Often stripped      | Yes               | No
C2PA + ERC-7053     | Yes (via hash)       | Yes (via hash)      | Yes               | Yes

Timeline

Key dates for Article 50 compliance

January 2026

European Commission publishes the first draft Code of Practice for AI content provenance, establishing the multi-layer framework.

June 2026

Final Code of Practice expected. This becomes the de facto compliance benchmark that regulators and auditors will reference.

2 August 2026

Article 50 enforcement begins. National supervisory authorities can issue fines of up to €15M or 3% of global turnover.

2027 and beyond

First enforcement actions expected. Companies without a multi-layer marking system face regulatory scrutiny, reputational risk, and potential fines.

Penalties

What non-compliance costs

€15M: maximum administrative fine
3%: of worldwide annual turnover
Whichever is higher: the regulation applies the larger amount
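
A worked example of the "whichever is higher" rule, with illustrative turnover figures:

    // The cap is the greater of €15M and 3% of worldwide annual turnover.
    const maxFineEur = (turnoverEur: number): number =>
      Math.max(15_000_000, 0.03 * turnoverEur);

    maxFineEur(200_000_000);   // €15M (3% would only be €6M)
    maxFineEur(1_000_000_000); // €30M (3% exceeds the €15M floor)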

Beyond fines — reputational risk

Non-compliance with Article 50 goes beyond financial penalties. Companies found distributing unlabelled AI-generated content face public enforcement notices, potential class-action litigation, and loss of enterprise customer trust. Early compliance is both a legal shield and a competitive advantage.

Frequently asked

Article 50 questions

What is the purpose of Article 50 in the EU AI Act?

Article 50 establishes transparency obligations for providers and deployers of AI systems. Its primary aim is to ensure that AI-generated or AI-manipulated content is always identifiable by both humans and machines, preventing undisclosed synthetic media from eroding public trust.

Which organisations must comply with Article 50?

Article 50 applies to two groups. First, providers of AI systems that generate synthetic audio, image, video, or text content. Second, deployers (companies that use AI systems in their workflows) who distribute AI-generated content publicly. Both face penalties for non-compliance.

Does Article 50 apply to AI-generated text as well as images and video?

Yes. Article 50(2) covers all synthetic content types, including text, audio, images, and video. However, there are exceptions: AI-generated text that has undergone human review or editorial control, where a person or organisation holds editorial responsibility for its publication, and AI systems that perform only an assistive or standard-editing function.

What is the draft Code of Practice and is it legally binding?

The European Commission published a draft Code of Practice for AI content provenance in January 2026, with updates expected through mid-2026. While the Code of Practice itself is voluntary, it provides the reference framework that regulators will use to assess whether organisations meet Article 50 obligations. Adopting it is strongly recommended.

What does the multi-layer requirement mean in practice?

The draft Code of Practice states that no single active marking technique suffices. In practice, this means combining at least two independent provenance signals — typically embedded C2PA metadata within the file, plus a durable secondary signal such as on-chain registration that survives metadata stripping.
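
A compact sketch of that cascade, reusing simplified versions of the hypothetical helpers from the sketches above:

    // Multi-layer resolution: accept the embedded manifest when present,
    // fall back to the on-chain commit log when metadata was stripped.
    // Both helpers are hypothetical stand-ins with simplified signatures.
    declare function readC2paManifest(bytes: Uint8Array): object | null;
    declare function lookupCommits(sha256Hex: string): Promise<object[]>;

    async function resolveProvenance(bytes: Uint8Array, sha256Hex: string) {
      const manifest = readC2paManifest(bytes); // Layer 1: in-file C2PA
      if (manifest) return { layer: "c2pa", manifest };

      const commits = await lookupCommits(sha256Hex); // Layer 2: on-chain
      if (commits.length > 0) return { layer: "erc7053", commits };

      return { layer: "none" }; // no recoverable provenance signal
    }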

How does Article 50 interact with the GDPR?

Article 50 complements the GDPR rather than replacing it. Provenance metadata must not expose personal data of end users. Capture's implementation hashes content for on-chain registration without storing personal data, ensuring both Article 50 and GDPR compliance simultaneously.
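
A sketch of why this holds: only a one-way digest of the media ever needs to leave the client. The helper below is illustrative, not Capture's actual pipeline.

    // Only a one-way content hash is registered on-chain; the media and
    // any personal data it contains never leave the client.
    import { createHash } from "node:crypto";

    function toOnChainPayload(media: Buffer): { assetHash: string } {
      // SHA-256 cannot be reversed into the content, so publishing the
      // digest discloses no personal data from the file itself.
      return { assetHash: createHash("sha256").update(media).digest("hex") };
    }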

Are there exemptions for small companies or startups?

Article 50 does not provide a blanket SME exemption for content marking. However, the broader AI Act does include lighter requirements for certain low-risk systems from small providers. For content marking under Article 50, any provider or deployer generating synthetic content that reaches the EU must comply regardless of company size.

What happens if an AI model is open-source — who is responsible?

The AI Act's open-source exemption expressly excludes systems that fall under Article 50, so a provider that places an open-source generative system on the EU market remains subject to the marking obligation in 50(2). Separately, the deployer who uses the model to generate and publish synthetic content bears the disclosure obligations in 50(4). In practice, the entity that releases content to the public is the most visible compliance point.

Ready to comply?

See exactly how Capture maps to each Article 50 requirement, or start a free proof-of-concept.