Intel has presented what it claims is the world’s first real-time deepfake detector. FakeCatcher is said to have a 96% accuracy rate and works by analyzing blood flow in video pixels using photoplethysmography (PPG).
Ilke Demir, a senior research scientist at Intel Labs, designed FakeCatcher in collaboration with Umur Ciftci of the State University of New York at Binghamton. The real-time detector uses Intel hardware and software, runs on a server, and interfaces through a web-based platform.
FakeCatcher differs from most deep learning-based detectors in that it looks for authentic clues in real videos rather than examining raw data for signs of forgery. Demir’s method is based on PPG, a technique that measures the amount of light absorbed or reflected by blood vessels in living tissue. As the heart pumps blood, the veins change color, and FakeCatcher picks up these signals to determine whether a video is fake.
Speaking to VentureBeat, Demir said that FakeCatcher is unique because PPG signals “have not previously been applied to the deep fake problem.” The detector collects these signals from 32 locations on the face, translates them with algorithms into spatiotemporal maps, and then decides whether a video is real or fake.
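To make the pipeline concrete, here is a minimal sketch of the general idea of extracting PPG-like traces from face regions and stacking them into a spatiotemporal map. This is not Intel’s implementation: the function `ppg_traces`, the use of the green channel as a pulse proxy, and the rectangular regions are all illustrative assumptions.

```python
import numpy as np

def ppg_traces(frames, regions):
    """Crude PPG proxy: mean green-channel intensity per face region
    per frame. frames: (T, H, W, 3) uint8 video; regions: list of
    (y0, y1, x0, x1) boxes (FakeCatcher reportedly uses 32 locations).
    This is an illustrative sketch, not Intel's method."""
    traces = np.empty((len(regions), len(frames)))
    for r, (y0, y1, x0, x1) in enumerate(regions):
        for t, frame in enumerate(frames):
            # Green channel is most sensitive to blood-volume changes.
            traces[r, t] = frame[y0:y1, x0:x1, 1].mean()
    # Normalize each trace to zero mean, unit variance so a downstream
    # classifier sees pulse-shaped variation, not lighting or skin tone.
    traces = (traces - traces.mean(axis=1, keepdims=True)) / (
        traces.std(axis=1, keepdims=True) + 1e-8)
    return traces  # a (regions x time) spatiotemporal map

# Toy demo: 8 horizontal strips over a random 64x64 "video" of 30 frames.
rng = np.random.default_rng(0)
frames = rng.integers(0, 256, size=(30, 64, 64, 3), dtype=np.uint8)
regions = [(8 * i, 8 * i + 8, 0, 64) for i in range(8)]
m = ppg_traces(frames, regions)
print(m.shape)  # (8, 30)
```

In a real system, the region-by-time map would be fed to a trained classifier that separates the coherent, periodic signals of genuine blood flow from the incoherent residue a generator leaves behind.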
Deepfake videos are a growing threat around the world. According to Gartner, companies will spend an estimated $188 billion on cybersecurity solutions. Current detection applications typically require video to be uploaded for analysis, and results can take hours.
Intel says the detector could be leveraged by social media platforms to prevent users from uploading harmful deepfakes, while news organizations could use it to prevent the unintended publication of fake videos.
Deepfakes have targeted prominent political figures and celebrities. Last month, a viral altered TikTok video made it appear that Joe Biden sang the children’s song “Baby Shark” instead of the national anthem.
Efforts to detect deepfakes have also run into issues related to racial bias in the datasets used to train them. According to a 2021 study from the University of Southern California, some detectors showed as much as a 10.7% difference in error rate by racial group.