Knowledge management programs rely on accurate information inputs, and those inputs include images as well as text and numerical data.
Seeking to combat the spread of increasingly sophisticated doctored images and videos, the prominent camera manufacturers Nikon, Sony, and Canon are building authentication capabilities directly into their latest professional-grade mirrorless cameras.
The tamper-resistant digital signatures embedded in images will include metadata such as capture time, date, location, and device details that can be used to verify authenticity. Nikon plans to target the feature at photojournalists and other media creators, while Sony and Canon also have firmware updates and new models set for release in 2024.
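To make the mechanism concrete, here is a minimal sketch of what in-camera signing could look like, assuming an Ed25519 key pair provisioned in the device at manufacture. The manufacturers' actual signature format is not described in this article, so the manifest fields, their values, and the sign_capture helper below are purely illustrative.

```python
# Minimal sketch of in-camera signing, assuming an Ed25519 key pair provisioned
# in the device at manufacture. Field names and values are illustrative only;
# the camera makers' actual signature payload is not specified here.
import hashlib
import json

from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey


def sign_capture(image_bytes: bytes, private_key: Ed25519PrivateKey) -> dict:
    """Bind capture metadata to a hash of the image and sign the bundle."""
    manifest = {
        "image_sha256": hashlib.sha256(image_bytes).hexdigest(),
        "capture_time": "2024-01-15T09:30:00Z",              # hypothetical values
        "gps": {"lat": 35.68, "lon": 139.69},
        "device": {"make": "ExampleCam", "model": "X-1", "serial": "0001"},
    }
    payload = json.dumps(manifest, sort_keys=True).encode()
    signature = private_key.sign(payload)
    return {"manifest": manifest, "signature": signature.hex()}


# In practice the signed manifest would be embedded in the image file's metadata;
# any later change to the pixels or to a manifest field invalidates the signature.
device_key = Ed25519PrivateKey.generate()
credentials = sign_capture(b"...raw sensor bytes...", device_key)
```

Signing a hash of the pixels rather than the full file keeps the signed manifest small while still ensuring that any pixel-level edit breaks verification.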
The efforts come as hyper-realistic deepfakes have gone viral globally, testing the judgment of both content producers and consumers. An alliance of news outlets and tech firms has already introduced a free web verification tool called Verify that checks credentials when available.
Nikon, Sony, and Canon have adopted a common standard for their digital signatures; together the trio controls around 90% of the digital camera market. Verify flags images as having “No Content Credentials” if they were created by artificial intelligence or have been manipulated.
Sony’s upcoming firmware update will bring authentication capabilities to three existing full-frame mirrorless cameras popular with professionals. The company is exploring expanding support to videos as well. Once images have credentials, Sony servers can confirm they’re not AI-generated when transmitting to newsrooms and other clients.
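Neither Sony nor the Verify alliance has published its verification internals. The sketch below, continuing the hypothetical manifest format from the earlier example, simply shows how a receiving service could return the “No Content Credentials” status whenever a signature is missing, the pixels no longer match the manifest, or the signature fails to validate.

```python
# Verification-side sketch matching the hypothetical manifest above. A production
# service would also validate certificate chains and revocation lists; this only
# checks that the pixels match the manifest and the manifest matches the signature.
import hashlib
import json

from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PublicKey


def check_credentials(image_bytes: bytes, credentials: dict | None,
                      public_key: Ed25519PublicKey) -> str:
    if credentials is None:
        return "No Content Credentials"      # missing: AI-generated or stripped
    manifest = credentials["manifest"]
    if manifest["image_sha256"] != hashlib.sha256(image_bytes).hexdigest():
        return "No Content Credentials"      # pixels no longer match the manifest
    payload = json.dumps(manifest, sort_keys=True).encode()
    try:
        public_key.verify(bytes.fromhex(credentials["signature"]), payload)
    except InvalidSignature:
        return "No Content Credentials"      # manifest fields were altered
    return "Verified capture"
```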
Canon is targeting similar features for cameras launching in 2024 and is also developing secured video functionality. The company established an anti-deepfake project team in 2019 and maintains partnerships with academic institutes focused on data integrity.
The capacity for creating synthetic media is scaling rapidly. Researchers recently revealed a new generative AI technique, called a latent consistency model, that can reportedly generate 700,000 fake images daily.
Beyond cameras, technology giants are unveiling their own protections. Google launched an invisible digital watermarking system for AI-generated images in August. Intel and Hitachi are also working on authentication technologies for online media and identity verification, respectively.
With the reach of cameras and social platforms enabling manipulation threats to span geographies and industries, embedding credentials directly within capture devices offers an appealing solution. Nikon, Sony, and Canon’s pivot represents hardware taking up the mantle alongside the algorithmic and blockchain-based detection methods emerging in software.
The traction of these efforts will depend partly on adoption by news publishers, social networks, and other distribution channels. But with deepfakes eroding public trust, the imaging titans have picked an opportune time to marshal their market dominance in support of image forensics.