• Charles Hoskinson proposed "generative AI-proof watermarking" to combat AI deep fakes, which will soon be indistinguishable from real life.
• He suggested storing a verifiable "chain of evidence" on a blockchain to authenticate visual content.
• Hoskinson warned that nation-states could weaponize deep fakes as instruments of polarization, accelerating a shift from "seeing is believing" to assuming everything is fake.
AI Deep Fakes
AI deep fakes are synthetic videos and audio recordings that are becoming indistinguishable from real footage. During a recent AMA, Input Output CEO Charles Hoskinson was asked about using blockchain technology to authenticate visual content, and he acknowledged that a verification system is needed to combat this emerging trend. Hoskinson predicts that within the next 12 to 24 months, AI will make deep fake video and audio indistinguishable from real life, gradually shifting the prevailing "seeing is believing" attitude toward one that assumes everything is fake.
Propagandized Use by Nation-States
Hoskinson expressed concern that nation-states could weaponize this technology as an instrument of polarization, producing and distributing deep fakes for their own gain. As a countermeasure, he proposed using the immutable properties of blockchains to store a verifiable "chain of evidence" for authentication purposes, a scheme he called "generative AI-proof watermarking".
Verification System Needed
According to Hoskinson, the only way out of this situation is verified information and verified content, which such watermarking systems stored on blockchains can provide. Anchoring authentic records on a blockchain would ensure their integrity and thwart attempts to falsify data or images for propaganda or other malicious purposes against individuals or organizations.
"Seeing Is Believing" Attitude
The prevailing attitude toward verifying information assumes that seeing something makes it legitimate. As advances in AI rapidly change how we distinguish true from false, reliable forms of verification will become increasingly important to protect against malicious actors who could leverage this new technology for their own gain.
Generative AI-Proof Watermarking
Generative AI-proof watermarking could offer an effective solution: authentic evidence is stored on a blockchain where it cannot be tampered with, preserving its accuracy and validity even against powerful adversarial threats such as maliciously deployed AI algorithms.
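The "chain of evidence" idea can be sketched as follows: hash content at capture time, append the hash to a tamper-evident log, and later check candidate media against that log. This is a minimal illustration only, not Hoskinson's or Cardano's actual design; the `Ledger` class and its methods are hypothetical stand-ins for real on-chain transactions.

```python
import hashlib
import json

def sha256_hex(data: bytes) -> str:
    # Content fingerprint: any change to the bytes changes the hash.
    return hashlib.sha256(data).hexdigest()

class Ledger:
    """Toy append-only ledger. Each entry commits to the previous one,
    mimicking a blockchain's tamper-evidence; a real system would anchor
    these entries in on-chain transactions."""

    def __init__(self):
        self.entries = []

    def register(self, content: bytes, source: str) -> dict:
        prev = self.entries[-1]["entry_hash"] if self.entries else "0" * 64
        record = {
            "content_hash": sha256_hex(content),
            "source": source,
            "prev_entry": prev,
        }
        # Hash the whole record so later tampering with any field is detectable.
        record["entry_hash"] = sha256_hex(
            json.dumps(record, sort_keys=True).encode()
        )
        self.entries.append(record)
        return record

    def verify(self, content: bytes) -> bool:
        # True only if this exact content was registered earlier.
        h = sha256_hex(content)
        return any(e["content_hash"] == h for e in self.entries)

ledger = Ledger()
original = b"raw camera footage bytes"
ledger.register(original, source="newsroom-camera-01")

print(ledger.verify(original))               # authentic: has a provenance record
print(ledger.verify(b"deep-faked footage"))  # fake: no provenance record
```

Under this scheme a deep fake fails verification not because it "looks" fake, but because no trusted party ever registered its hash, which is the shift from inspecting content to inspecting provenance that the article describes.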