Intel Unveils ‘World’s First’ Real-Time Deepfake Detector

Intel has presented what it claims is the world’s first real-time deepfake detector. FakeCatcher is said to have a 96% accuracy rate and works by analyzing blood flow in video pixels using a technique called photoplethysmography (PPG).

Ilke Demir, a senior research scientist at Intel Labs, designed FakeCatcher in collaboration with Umur Ciftci of the State University of New York at Binghamton. The real-time detector uses Intel hardware and software, runs on a server, and interfaces through a web-based platform.

FakeCatcher differs from most deep learning-based detectors, which examine raw data for signs of manipulation; instead, it looks for authentic clues in real videos. Demir’s method is based on PPG, a technique that measures the amount of light absorbed or reflected by blood vessels in living tissue. When the heart pumps blood, the veins change color, and FakeCatcher picks up these subtle signals to determine whether a video is real or fake.
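The idea can be illustrated with a minimal sketch (not Intel’s code): average the color of a skin patch over consecutive video frames, and the tiny, pulse-synchronized fluctuation that PPG relies on shows up as a periodic signal. The patch coordinates, frame rate, and green-channel choice below are illustrative assumptions.

```python
import numpy as np

def extract_ppg_signal(frames, region):
    """Average the green channel over a skin patch for each frame.

    frames: iterable of HxWx3 uint8 RGB arrays (video frames).
    region: (y0, y1, x0, x1) bounding box of a skin patch, e.g. a cheek.
    Returns a 1-D signal whose small fluctuations track blood-volume changes.
    """
    y0, y1, x0, x1 = region
    signal = np.array([frame[y0:y1, x0:x1, 1].mean() for frame in frames])
    # Subtract a moving average so only the pulse-like fluctuation remains.
    baseline = np.convolve(signal, np.ones(15) / 15, mode="same")
    return signal - baseline

# Synthetic demo: 300 frames at 30 fps with a faint 1.2 Hz "pulse" in one patch.
rng = np.random.default_rng(0)
frames = []
for t in range(300):
    frame = np.full((128, 128, 3), 128.0)
    frame[40:80, 30:70, 1] += 2.0 * np.sin(2 * np.pi * 1.2 * t / 30)  # color change from blood flow
    frame += rng.normal(0, 0.5, frame.shape)                          # sensor noise
    frames.append(frame.clip(0, 255).astype(np.uint8))

print(extract_ppg_signal(frames, (40, 80, 30, 70))[:10])
```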

Speaking to VentureBeat, Demir said that FakeCatcher is unique because PPG signals “have not previously been applied to the deep fake problem.” The detector collects these signals from 32 locations on the face; algorithms then translate them into spatiotemporal maps, which are used to decide whether a video is real or fake.
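Based only on that description, the data flow could look roughly like the hypothetical sketch below: per-region PPG signals are stacked into a regions-by-time map, and a classifier judges the map. The stand-in correlation “classifier,” the 300-frame clip length, and the 0.3 threshold are assumptions for illustration; Intel’s actual classifier is not described in the article.

```python
import numpy as np

N_REGIONS = 32      # facial locations sampled for PPG signals (per the article)
N_FRAMES = 300      # length of the analyzed clip; an assumption for this sketch

def build_spatiotemporal_map(region_signals):
    """Stack per-region PPG signals into a 2-D map (regions x time).

    region_signals: list of N_REGIONS 1-D arrays of length N_FRAMES.
    Each row is normalized so the classifier sees relative fluctuations,
    not absolute brightness.
    """
    ppg_map = np.stack(region_signals)                    # shape: (32, 300)
    ppg_map -= ppg_map.mean(axis=1, keepdims=True)
    ppg_map /= ppg_map.std(axis=1, keepdims=True) + 1e-8
    return ppg_map

def classify(ppg_map):
    """Stand-in for a learned classifier: real faces show correlated,
    periodic signals across regions; synthesized faces usually do not."""
    # Average pairwise correlation between region signals as a crude "liveness" score.
    corr = np.corrcoef(ppg_map)
    score = (corr.sum() - np.trace(corr)) / (corr.size - len(corr))
    return "real" if score > 0.3 else "fake"   # threshold is illustrative only

# Demo with synthetic signals: correlated sinusoids stand in for genuine blood flow.
rng = np.random.default_rng(1)
t = np.arange(N_FRAMES) / 30.0
pulse = np.sin(2 * np.pi * 1.2 * t)
real_like = [pulse + rng.normal(0, 0.3, N_FRAMES) for _ in range(N_REGIONS)]
fake_like = [rng.normal(0, 1.0, N_FRAMES) for _ in range(N_REGIONS)]

print(classify(build_spatiotemporal_map(real_like)))   # expected: real
print(classify(build_spatiotemporal_map(fake_like)))   # expected: fake
```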

Deepfake videos are a growing threat around the world. According to Gartner, companies will spend an estimated $188 billion on cybersecurity solutions. Current detection applications typically require video to be uploaded for analysis, and results can take hours.

Intel says the detector could be leveraged by social media platforms to prevent users from uploading harmful deepfakes, while news organizations could use it to prevent the unintended publication of fake videos.


Deepfakes have targeted prominent political figures and celebrities. Last month, a doctored TikTok video went viral that made it appear as if Joe Biden sang the children’s song “Baby Shark” instead of the national anthem.

Efforts to detect deepfakes have also run into issues related to racial bias in the datasets used to train them. According to a 2021 study from the University of Southern California, some detectors showed as much as a 10.7% difference in error rate between racial groups.
