Sub-article by sub-article
The five pillars of Article 50
Article 50(1)
Human-AI interaction disclosure
What it says: Providers must ensure that AI systems intended
to interact directly with natural persons are designed and developed in such a way
that those persons are informed they are interacting with an AI system — unless this is
obvious from the circumstances and context of use.
Who it affects: Chatbot providers, virtual assistant developers,
AI customer service platforms — any system where a user might mistake the AI for
a human.
Technical implication: This is primarily a UI/UX obligation (e.g. displaying
"You are chatting with an AI"), not a content-marking requirement. Capture does not
address this sub-article directly, as it is a product-design concern.
Article 50(2) — core obligation
Machine-readable marking of synthetic content
What it says: Providers of AI systems that generate synthetic audio,
image, video, or text content must ensure such content is marked in a machine-readable
format and is detectable as artificially generated or manipulated. The technical
solution must be effective, interoperable, robust, and reliable — as far as technically feasible.
Who it affects: Every generative AI provider — image generators,
video synthesis tools, large language models generating public-facing text, AI music
generators, and voice synthesis platforms.
Technical implication: This is the provision that requires embedded,
machine-readable provenance metadata. C2PA content credentials satisfy the
"interoperable, robust" requirement. The draft Code of Practice further specifies that
a multi-layered approach is necessary.
Capture's answer: C2PA credentials embedded at generation time (Layer 1),
plus ERC-7053 on-chain registration (Layer 2). One API call, two layers, full compliance.
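The two-layer approach can be sketched in a few lines. This is an illustrative shape only: the `register_provenance` helper and the record layout are hypothetical, not Capture's actual API, and the real flow would sign and embed a full C2PA manifest rather than a stub.

```python
import hashlib
import json

def register_provenance(content: bytes, generator: str) -> dict:
    """Sketch of a two-layer marking record: an in-file C2PA manifest
    reference (Layer 1) plus an on-chain entry keyed by content hash
    (Layer 2). The record shape is illustrative, not a real API."""
    content_hash = hashlib.sha256(content).hexdigest()
    return {
        "layer1_manifest": {                 # embedded in the file itself
            "claim_generator": generator,
            "assertions": ["c2pa.actions"],
        },
        "layer2_onchain": {                  # ERC-7053-style registration
            "content_hash": content_hash,
            "registry": "Numbers Mainnet",
        },
    }

record = register_provenance(b"synthetic image bytes", "example-gen/1.0")
print(json.dumps(record, indent=2))
```

The point of the sketch is the pairing: the same content hash that anchors the on-chain record is independent of the embedded manifest, which is what makes the second layer useful when the first is stripped.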
Article 50(3)
Emotion recognition and biometric categorisation
What it says: Deployers of emotion recognition systems or biometric
categorisation systems must inform natural persons exposed to such systems of their
operation, and must process personal data in accordance with the GDPR.
Who it affects: Companies deploying facial sentiment analysis,
emotion-based advertising targeting, or biometric categorisation in public or
workplace settings.
Technical implication: This is a consent-and-disclosure obligation.
It requires informing subjects and processing data lawfully — not content marking.
Capture does not address this sub-article, as it pertains to data processing practices
rather than content provenance.
Article 50(4) — deepfakes
Deepfake labelling obligation
What it says: Deployers of AI systems that generate or manipulate
image, audio, or video content constituting a "deep fake" must disclose that the content
has been artificially generated or manipulated. This obligation extends to any entity
that makes such content publicly available.
Who it affects: Media organisations, social media platforms,
content creators, marketing agencies — anyone who produces or distributes
realistic synthetic media depicting real people or events.
Technical implication: The deepfake label must be both human-visible
and machine-readable. A C2PA c2pa.actions assertion containing a
c2pa.created or c2pa.edited action, combined with
the manifest's claim generator field, satisfies the machine-readable element.
Capture's answer: The C2PA generator claim identifies
the AI system. The ERC-7053 on-chain attestation provides a tamper-evident public record.
Together, they prove both the synthetic origin and the chain of custody.
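As a concrete sketch, the machine-readable portion of such a label might look like the following actions assertion. The field names (`c2pa.actions`, `c2pa.created`, `digitalSourceType` with the IPTC trainedAlgorithmicMedia term) follow the public C2PA specification; the surrounding manifest plumbing (claim signing, embedding into the asset) is omitted here.

```python
import json

# Sketch of a C2PA-style actions assertion recording that the asset
# was created by a trained algorithm. This is only the assertion
# payload, not a complete signed manifest.
actions_assertion = {
    "label": "c2pa.actions",
    "data": {
        "actions": [
            {
                "action": "c2pa.created",
                "digitalSourceType": (
                    "http://cv.iptc.org/newscodes/digitalsourcetype/"
                    "trainedAlgorithmicMedia"
                ),
            }
        ]
    },
}

print(json.dumps(actions_assertion, indent=2))
```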
Article 50(5) — durability
Robust, durable, machine-readable format
What it says: The information referred to in paragraphs 1 to 4
must be provided to the natural persons concerned in a clear and distinguishable
manner, at the latest at the time of first interaction or exposure, and must
conform to applicable accessibility requirements.
Who it affects: All entities covered by 50(1)–50(4) — this
sub-article sets the quality standard for how marking must be implemented.
Technical implication: Read together with 50(2)'s requirement that
marking be robust and reliable as far as technically feasible, this sub-article sets
the durability bar. The draft Code of Practice interprets these provisions as requiring
marking that survives common transformations — screenshots, social media
re-uploads, format conversions, and metadata stripping.
Capture's answer: C2PA metadata is the in-file layer. When it is
stripped, the ERC-7053 content hash on the Numbers Mainnet remains discoverable —
any verifier can recover the full provenance chain from the file's fingerprint alone.
This is the durability the regulation demands.