Intel’s deepfake detector tested on real and fake videos::We tested Intel’s new tool, “FakeCatcher”, on videos of Donald Trump and Joe Biden - with mixed results.
There is no such thing, nor can there ever be such a thing, as an accurate deepfake detector.
Yes, the entire process of training the models used in deepfakes requires building a detector in the first place… and then beating it. That's what adversarial means: keep training the generator until the detector can no longer distinguish real from fake.
What's novel here is that they used a hypothetically orthogonal loss metric: a blood-flow signal that the deepfake model's discriminator isn't actually looking at. That signal could still be a latent variable in the real models, though, so generators might be learning to replicate it anyway. But apparently not, since their detector worked when given sufficient resolution.
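The adversarial loop described above can be sketched as a toy 1-D GAN in plain NumPy. Everything here is illustrative (the data distribution, the linear generator, the logistic discriminator, the learning rate), not anyone's actual deepfake pipeline; the point is just the alternating "train the detector, then beat it" structure:

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(t):
    # clip logits to avoid overflow in exp
    return 1.0 / (1.0 + np.exp(-np.clip(t, -30, 30)))

def real_batch(n):
    # stand-in for "real" data: samples from N(3, 1)
    return rng.normal(3.0, 1.0, n)

a, b = 1.0, 0.0   # generator: z -> a*z + b
w, c = 0.1, 0.0   # discriminator: x -> sigmoid(w*x + c)
lr = 0.05

for step in range(2000):
    n = 64
    z = rng.normal(0.0, 1.0, n)
    x_real, x_fake = real_batch(n), a * z + b

    # discriminator step: push D(real) -> 1, D(fake) -> 0
    d_real, d_fake = sigmoid(w * x_real + c), sigmoid(w * x_fake + c)
    gw = (-(1 - d_real) * x_real + d_fake * x_fake).mean()
    gc = (-(1 - d_real) + d_fake).mean()
    w -= lr * gw
    c -= lr * gc

    # generator step: push D(fake) -> 1, i.e. fool the detector
    d_fake = sigmoid(w * (a * z + b) + c)
    ga = (-(1 - d_fake) * w * z).mean()
    gb = (-(1 - d_fake) * w).mean()
    a -= lr * ga
    b -= lr * gb

print(f"generator output mean drifted to about {b:.2f} (real data mean is 3.0)")
```

The generator only ever sees the data through the discriminator's gradient, which is why any cue the discriminator *doesn't* measure (like the blood-flow signal) can survive in the fakes.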
This is the best summary I could come up with:
In March last year a video appeared to show President Volodymyr Zelensky telling the people of Ukraine to lay down their arms and surrender to Russia.
It was a pretty obvious deepfake - a type of fake video that uses artificial intelligence to swap faces or create a digital version of someone.
Central to the system is a technique called Photoplethysmography (PPG), which detects changes in blood flow.
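Video-based PPG boils down to tracking tiny periodic brightness changes in skin pixels caused by the pulse. A toy version of the idea in NumPy (synthetic frames with a made-up frame rate and heart rate, nothing like Intel's actual FakeCatcher internals):

```python
import numpy as np

fps = 30.0       # assumed frame rate
heart_hz = 1.2   # synthetic pulse, ~72 bpm
t = np.arange(0, 10, 1 / fps)  # 10 seconds of "frames"

rng = np.random.default_rng(1)
# Synthetic face region: an 8x8 green-channel patch whose mean
# brightness pulses slightly with blood flow, plus pixel noise.
frames = (
    0.5
    + 0.01 * np.sin(2 * np.pi * heart_hz * t)[:, None, None]
    + 0.005 * rng.normal(size=(t.size, 8, 8))
)

# 1. Spatially average each frame -> one raw PPG sample per frame
signal = frames.mean(axis=(1, 2))
# 2. Remove the DC offset and window to reduce spectral leakage
signal = (signal - signal.mean()) * np.hanning(signal.size)
# 3. The dominant frequency is the estimated pulse
freqs = np.fft.rfftfreq(signal.size, d=1 / fps)
power = np.abs(np.fft.rfft(signal)) ** 2
bpm = 60.0 * freqs[power.argmax()]
print(f"estimated pulse: {bpm:.0f} bpm")
```

A detector built on this only needs the recovered signal to look physiologically plausible and spatially coherent across the face, which is why low-resolution or heavily compressed video (as in the article's test) degrades it.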
Earlier this year the BBC tested Clearview AI’s facial recognition system, using our own pictures.
Although the power of the tech was impressive, it was also clear that the more pixelated the picture, and the more side-on the face in the photo, the harder it was for the programme to successfully identify someone.
This includes a "wild" test, for which the company has put together 140 fake videos and their real counterparts.
I’m a bot and I’m open source!
oof. Needs more work it seems.