Generative AI Watermarking Named One of Top 10 Emerging Technologies

A new World Economic Forum report¹ has identified generative AI watermarking as one of the top 10 emerging technologies of 2025.

For 13 years, this annual report has aimed to shine a spotlight on breakthrough technologies that have the potential both to make the critical leap from scientific discovery to real-world application and to help societies adapt and thrive in the face of complex challenges.

Generative AI watermarking has been identified as one such technology, particularly for its potential to address two of the world’s current key risks: misinformation and disinformation.

Two contributors to the report – Katharine Daniell, School of Cybernetics, Australian National University, and Andrew Maynard, School for the Future of Innovation in Society, Arizona State University – described the technology and its application.

The technology consists of embedding invisible markers in AI-generated content – including text, images, audio and video – to verify authenticity and help trace the content’s origin.

The authors described how, as AI-generated content becomes increasingly hard to differentiate from content created without AI, there has been a surge in innovative watermarking technologies designed to help combat misinformation, protect intellectual property, counter academic dishonesty, and promote trust in digital content.

Watermarking techniques aim to subtly alter generative AI outputs without noticeably impacting their quality. Text-based watermark technologies, such as Google DeepMind’s SynthID technology, take advantage of the fact that there are thousands of words in a given language that can be randomly substituted by other words.

They work by including a narrow and specific subset of such words throughout AI-generated text that seems natural but is distinct from the more random word choices a human writer might make. This results in an AI-specific textual ‘fingerprint’.
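The idea can be sketched in a few lines of Python. This is an illustrative toy scheme, not SynthID’s actual algorithm: it keys a pseudorandom ‘green’ subset of the vocabulary to the previous word, so a generator that favours green words leaves a statistical fingerprint that only a detector holding the same key can measure. The key, vocabulary and hashing scheme here are all assumptions made for the example.

```python
import hashlib

def green_list(prev_word, vocab, key="demo-key", fraction=0.5):
    """Pseudorandomly pick a keyed 'green' subset of the vocabulary,
    seeded by the previous word. A detector holding the same key can
    recompute exactly the same subset."""
    greens = set()
    for word in vocab:
        digest = hashlib.sha256(f"{key}:{prev_word}:{word}".encode()).digest()
        if digest[0] < 256 * fraction:  # keep roughly `fraction` of words
            greens.add(word)
    return greens

def watermark_score(text, vocab, key="demo-key", fraction=0.5):
    """Fraction of words drawn from their step's green list.
    Watermarked text scores near 1.0; ordinary text hovers near `fraction`."""
    words = text.split()
    if len(words) < 2:
        return 0.0
    hits = sum(1 for prev, word in zip(words, words[1:])
               if word in green_list(prev, vocab, key, fraction))
    return hits / (len(words) - 1)

# Demo: a 'generator' that always prefers green words.
vocab = [f"word{i}" for i in range(64)]
generated = ["word0"]
for _ in range(30):
    greens = sorted(green_list(generated[-1], vocab))
    generated.append(greens[0])
score = watermark_score(" ".join(generated), vocab)
```

Note that detection needs only the key and the vocabulary, not access to the model itself – which is what makes such fingerprints checkable by third parties.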

Image and video watermark technologies include introducing imperceptible changes at the pixel level that can survive edits like resizing and compression and that can only be seen by a machine, or embedding hidden patterns in generated output that only a machine can extract.
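The ‘hidden pattern only a machine can extract’ idea can be sketched with a toy least-significant-bit watermark on a grayscale image, represented here as a plain Python list of 0–255 pixel values. This is deliberately simplified: unlike the production systems the article describes (e.g. SynthID for images, or Meta’s VideoSeal), a plain LSB mark would not survive resizing or compression.

```python
def embed_watermark(pixels, bits):
    """Write each watermark bit into the least significant bit of a pixel.
    Each marked pixel changes by at most 1 - invisible to the eye."""
    if len(bits) > len(pixels):
        raise ValueError("image too small for watermark")
    out = list(pixels)
    for i, bit in enumerate(bits):
        out[i] = (out[i] & ~1) | bit  # clear low bit, then set it to `bit`
    return out

def extract_watermark(pixels, n_bits):
    """Read back the low bit of the first n_bits pixels."""
    return [p & 1 for p in pixels[:n_bits]]

# Demo: a tiny 4x4 'image' flattened to 16 pixel values.
image = [200, 201, 17, 54, 90, 91, 128, 33,
         64, 65, 250, 3, 12, 180, 77, 99]
mark = [1, 0, 1, 1, 0, 0, 1, 0]
stamped = embed_watermark(image, mark)
recovered = extract_watermark(stamped, len(mark))
```

Robust schemes spread the mark across many pixels in the frequency domain rather than fixed positions, which is why they can survive the edits mentioned above.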

Gaining traction

The authors described how watermarking AI-generated content gained traction in 2022, as models like ChatGPT became more popular. By 2023, major AI companies, including OpenAI, Google and Meta, committed to watermarking under regulatory pressure.

A breakthrough came in 2024 when Google DeepMind open-sourced SynthID. Simultaneously, Meta introduced VideoSeal, a watermarking system for AI-generated videos.

Leading AI companies are now increasingly integrating watermarking into their platforms. Google, for instance, is embedding SynthID into AI-generated images, text and videos across its services. Meta is applying invisible watermarks and metadata tags to AI-generated content on Facebook, Instagram and Threads.

Despite this progress, though, widespread use of AI watermarking faces challenges, advised the authors. Simple modifications to AI-generated outputs can still disrupt detection. Users can attempt to remove or forge watermarks, either by cropping images and video where watermarks are embedded in a specific location, or by adjusting text (and even using AI-based watermark removers).

Uneven adoption also presents risks: without universal industry standards, inconsistent implementation may weaken effectiveness. There are also substantial ethical concerns around misuse, such as falsely labelling real content as AI-generated, and around false positives, where erroneous accusations of covertly using AI can have unintended consequences, especially in cases related to academic integrity.

To be successful, these technologies will need to be accompanied by equally sophisticated governance and use guidelines, the authors advised. China has introduced regulations requiring the watermarking of generated content, and other regions, such as the EU, are also developing responses to manage the security and authenticity of digital content.

The Coalition for Content Provenance and Authenticity, a group of leading companies in media and AI, is also leading the development of technical standards for certifying the source of media content – standards that, the authors noted, regulators would struggle to produce with the same depth.

Watermarking has proven a fertile area for start-ups globally with different technological approaches, they concluded.

¹ www.weforum.org/publications/top-10-emerging-technologies-of-2025/