SynthID is a watermarking technology developed by Google DeepMind that embeds imperceptible, machine-detectable signals into AI-generated content across text, images, audio, and video. The watermarks are designed to survive common edits such as compression, cropping, filtering, and noise, enabling verification of synthetic content origin without degrading output quality. 1)
SynthID uses dual neural networks trained together — one for watermark injection and one for detection — optimized for imperceptibility and robustness across different media types. 2)
SynthID embeds watermarks directly into pixel values during the image generation process (for example, via diffusion models like Imagen). The modifications are subtle enough to be invisible to the human eye but create patterns detectable by the trained detection network. The watermark persists through JPEG compression, color filters, rotation, cropping, and screenshots. Detection outputs a confidence level indicating the likelihood that an image was generated with SynthID. 3) 4)
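SynthID's actual image watermark is produced by a trained neural network and its details are proprietary, but the core idea of an imperceptible additive signal plus a keyed detector that yields a confidence score can be sketched with a fixed pseudorandom pattern. Everything below is illustrative, not SynthID's algorithm:

```python
import numpy as np

# Illustrative only: SynthID's real watermark comes from a trained
# injection network, not a fixed pseudorandom pattern. This sketch shows
# the principle of an invisible additive pattern plus a correlation
# detector that reports a score (higher = more likely watermarked).

def make_pattern(shape, key):
    rng = np.random.default_rng(key)
    return rng.choice([-1.0, 1.0], size=shape)

def embed(image, key, strength=1.0):
    # Shift each pixel by about one gray level: invisible to the eye.
    marked = image.astype(np.float64) + strength * make_pattern(image.shape, key)
    return np.clip(marked, 0, 255)

def detect(image, key):
    # Correlate the zero-mean image with the secret pattern.
    pattern = make_pattern(image.shape, key)
    centered = image.astype(np.float64) - image.mean()
    return (centered * pattern).mean()  # near 0 unmarked, positive marked

rng = np.random.default_rng(0)
img = rng.integers(0, 256, size=(64, 64)).astype(np.float64)
marked = embed(img, key=42)
print(detect(marked, key=42) > detect(img, key=42))  # True
```

A real system must also survive compression and geometric edits, which is why SynthID trains the injector and detector jointly rather than relying on simple correlation.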
For video content, SynthID applies frame-by-frame pixel-level watermarking similar to its image approach. Because every frame carries the watermark, detection remains possible even after trimming, compression, or frame-level edits. 5)
SynthID converts the audio signal to a spectrogram (a visual representation of the sound wave), embeds the watermark in the spectrogram, then reconverts it to a waveform. The watermark withstands noise addition, trimming, and lossy compression. 6)
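The transform-embed-invert pipeline described above can be sketched with a plain FFT round trip. This is a conceptual illustration only (SynthID uses a trained network and a proper spectrogram, not a per-frame FFT with a keyed bias), and the frame size and strength values are arbitrary:

```python
import numpy as np

# Conceptual sketch of spectrogram-domain embedding, not SynthID's
# method: split the waveform into frames, transform each frame to the
# frequency domain, apply a tiny keyed modification, transform back.

FRAME = 256

def embed_audio(signal, key, strength=1e-3):
    rng = np.random.default_rng(key)
    out = []
    for start in range(0, len(signal) - FRAME + 1, FRAME):
        spec = np.fft.rfft(signal[start:start + FRAME])
        # Nudge each frequency bin by a tiny keyed +/- amount.
        bias = rng.choice([-1.0, 1.0], size=spec.shape)
        out.append(np.fft.irfft(spec + strength * bias, n=FRAME))
    return np.concatenate(out)

t = np.linspace(0, 1, 4096, endpoint=False)
audio = np.sin(2 * np.pi * 440 * t)           # 440 Hz test tone
marked = embed_audio(audio, key=7)
print(np.max(np.abs(marked - audio)) < 0.01)  # True: perturbation is tiny
```

Working in the frequency domain is what lets the signal survive operations like lossy compression, which preserve perceptually important spectral content even as they discard waveform detail.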
For text, SynthID modifies the next-token probability scores during LLM generation using a pseudorandom g-function. The system uses context hashing, secret keys, and a “tournament” selection mechanism where candidate tokens compete based on their likelihood plus a watermark bias signal. This creates statistical patterns in the generated text that are detectable via key-based verification without degrading the quality of the output. 7) 8)
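A heavily simplified, single-layer version of this tournament can be sketched as follows. The real system runs multiple tournament layers with per-layer keys and operates on the model's full sampling distribution; the hash-based g-function and all names here are illustrative:

```python
import hashlib
import random

# Toy single-layer sketch of SynthID-style tournament sampling.
# g() is a keyed pseudorandom bit over (context, key, token); tokens
# with g == 1 tend to win, which is the detectable statistical bias.

def g(context, key, token):
    h = hashlib.sha256(f"{context}|{key}|{token}".encode()).digest()
    return h[0] & 1

def tournament_select(candidates, context, key, rng):
    """candidates: list of (token, likelihood) sampled from the model.
    Pairs compete: higher g-value wins; ties go to higher likelihood."""
    pool = list(candidates)
    while len(pool) > 1:
        rng.shuffle(pool)
        winners = []
        for a, b in zip(pool[::2], pool[1::2]):
            ga, gb = g(context, key, a[0]), g(context, key, b[0])
            if ga != gb:
                winners.append(a if ga > gb else b)
            else:
                winners.append(a if a[1] >= b[1] else b)
        if len(pool) % 2:        # odd candidate advances unopposed
            winners.append(pool[-1])
        pool = winners
    return pool[0][0]

rng = random.Random(0)
cands = [("cat", 0.4), ("dog", 0.3), ("fox", 0.2), ("owl", 0.1)]
choice = tournament_select(cands, context="the quick", key="secret", rng=rng)
print(choice)  # the winner is biased toward tokens with g == 1
```

Detection works in reverse: with the secret key, a verifier recomputes g-values over the text and tests whether they are skewed toward 1 more often than chance would allow.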
Key configuration parameters include:
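As a hedged illustration, the parameter names below follow the open-sourced Hugging Face SynthID Text implementation (`SynthIDTextWatermarkingConfig`); treat them as assumptions to verify against the current Transformers documentation, and the values shown are arbitrary examples:

```python
# Hedged sketch: parameter names taken from the open-sourced Hugging
# Face SynthID Text implementation; values are arbitrary examples.
watermark_config = {
    "keys": [654, 400, 836, 123, 340, 443, 597, 160],  # secret watermarking keys
    "ngram_len": 5,                # context window (in tokens) hashed per step
    "sampling_table_size": 2**16,  # size of the pseudorandom g-value table
    "sampling_table_seed": 0,      # seed used to build that table
    "context_history_size": 1024,  # recent contexts tracked to avoid reuse
}
print(sorted(watermark_config))
```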
SynthID was developed by Google DeepMind and launched in beta for images via Google Cloud's Vertex AI in 2023. 10) The technology expanded to cover text, audio (via Lyria), and video (via Veo) by 2024. The SynthID Text research was published in Nature, with large-scale validation across nearly 20 million Gemini model responses demonstrating that the approach works at production scale. 11)
SynthID is integrated into Google's AI products including Gemini (text), Imagen (images), Veo (video), and Lyria (audio), with a detector portal available for verification. 12)
In October 2024, Google DeepMind open-sourced SynthID Text through Hugging Face Transformers (v4.46.0+), making text watermarking available for integration into any LLM pipeline without modification to the underlying model. 13) The implementation works as a logits processor compatible with any model using the standard generate() interface. 14)
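The logits-processor contract that makes this model-agnostic can be illustrated without Transformers installed: a processor is a callable that receives the token ids generated so far plus the next-token scores and returns adjusted scores. The class and biasing rule below are toy stand-ins, not the actual SynthID processor shipped in Transformers:

```python
import hashlib

# Minimal illustration of the logits-processor interface that SynthID
# Text plugs into via generate(). The keyed-hash "green token" bias here
# is a toy stand-in for SynthID's tournament-based g-function.

class ToyWatermarkLogitsProcessor:
    def __init__(self, key, ngram_len=4, bias=2.0):
        self.key, self.ngram_len, self.bias = key, ngram_len, bias

    def _green(self, context, token):
        # Keyed hash of (recent context, candidate token) -> 0 or 1.
        h = hashlib.sha256(f"{self.key}|{context}|{token}".encode()).digest()
        return h[0] & 1

    def __call__(self, input_ids, scores):
        # Same contract as a Transformers logits processor: adjust the
        # next-token scores given the sequence generated so far.
        context = tuple(input_ids[-self.ngram_len:])
        return [s + self.bias * self._green(context, tok)
                for tok, s in enumerate(scores)]

proc = ToyWatermarkLogitsProcessor(key="secret")
scores = [0.0] * 8                      # uniform scores, 8-token vocab
adjusted = proc(input_ids=[3, 1, 4, 1, 5], scores=scores)
print([round(s, 1) for s in adjusted])  # keyed subset of tokens gets +2.0
```

Because the watermark lives entirely in this score-adjustment step, any model exposing the standard generate() interface can be watermarked without retraining or modifying its weights.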
The tool is also available through Google's Responsible GenAI Toolkit and on GitHub. 15) Image, audio, and video watermarking components remain proprietary to Google products as of early 2026. 16)
SynthID and C2PA represent complementary approaches to content authenticity: