As AI image-editing apps proliferate, hardware and software developers are building tools to help users confirm an image's legitimacy from the point of capture.

On Wednesday, Leica revealed that its new M11-P will be the first camera able to apply Content Credentials at the moment of capture.

Adobe, Microsoft, and other companies identify AI-generated or AI-altered images by attaching metadata known as Content Credentials. Extending that verification to the camera itself, however, is seen as an essential step in the fight against deepfakes.
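Conceptually, a content credential binds a cryptographic hash of the image to signed metadata at capture time, so any later edit invalidates the signature. The sketch below is a toy illustration of that idea, not the actual C2PA format (which embeds signed manifests in the file and uses public-key certificates); the key name and field layout here are invented for the example.

```python
import hashlib
import hmac
import json

# Toy illustration only: real Content Credentials use public-key
# certificates and an embedded manifest, not a shared-secret HMAC.
SECRET_KEY = b"camera-device-key"  # hypothetical device key

def attach_provenance(image_bytes: bytes, source: str) -> dict:
    """Create a provenance record binding metadata to the image's hash."""
    digest = hashlib.sha256(image_bytes).hexdigest()
    record = {"sha256": digest, "source": source}
    payload = json.dumps(record, sort_keys=True).encode()
    record["signature"] = hmac.new(SECRET_KEY, payload, hashlib.sha256).hexdigest()
    return record

def verify_provenance(image_bytes: bytes, record: dict) -> bool:
    """Check that the image matches the record and the signature is intact."""
    if hashlib.sha256(image_bytes).hexdigest() != record["sha256"]:
        return False  # image bytes were altered after signing
    payload = json.dumps(
        {k: record[k] for k in ("sha256", "source")}, sort_keys=True
    ).encode()
    expected = hmac.new(SECRET_KEY, payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, record["signature"])

photo = b"...raw image bytes..."
cred = attach_provenance(photo, "camera-capture")
print(verify_provenance(photo, cred))            # True: untouched image
print(verify_provenance(photo + b"edit", cred))  # False: image was modified
```

The point the example makes is the same one the article does: verification is only as strong as the chain of custody, which is why building the signing step into the camera hardware matters.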

On Tuesday, Qualcomm announced that the Snapdragon 8 Gen 3, the company's latest high-end smartphone processor, incorporates Truepic technology to enable similar image tagging for both camera-captured and AI-generated photos.

On Wednesday, Google said its “About this image” feature, which shows how a photo was captured and edited and when it first appeared in Google’s search results, is now live. That can be useful in breaking-news situations, when outdated photos are frequently recirculated.

The M11-P, from high-end camera maker Leica, costs around $10,000; smartphones, by contrast, are the primary source of user-generated photos.

Qualcomm and Truepic's announcement could reach many more people, but their approach depends on users, phone makers, and app developers all choosing to use the image-verification feature.

Ideally, all iOS and Android camera apps would include content authentication, which Qualcomm senior vice president Alex Katouzian told Axios he expects to happen in the coming years. “They’re going to do the right thing, I believe,” he said.

Keep in mind that this is not a perfect solution. Dana Rao, Adobe's general counsel and chief trust officer, said the key is getting enough photos certified that people become wary of unauthenticated ones.

Rao noted that Canon, Nikon, and Sony do not yet have the technology in their cameras, although all three are part of the Content Authenticity Initiative, which backs the Content Credentials standard Leica uses.

Rao added that the White House's call for labeling AI-generated photos and videos has given the effort added momentum.

Between the lines: As AI-powered photo editing tools become more widely available, content authentication grows more important. For instance, the Magic Editor on Google's new Pixel smartphones is a standout feature that makes it simple to move objects and people around in photos. Adobe and other companies are promoting similar features.

What they’re saying: Truepic CEO Jeff McGregor stated in a statement, “We believe deploying the provenance open standard on-device is one of the most significant breakthroughs toward a more authentic internet and will be the model moving forward.”

Widespread adoption of such authentication technologies could help, for example, in verifying images from the ongoing Israel-Gaza war.

“The problem isn’t that deepfakes are everywhere,” Rao said. “Doubt permeates everything. People are unsure of who to believe these days.”
