I'm no expert, but I don't think a software/algorithmic arms race is the right approach to thwarting "deepfakes."
Back in middle school, I first started using Photoshop to make funny pictures. My friends and I would trade weird mashup images we shopped together.
In high school, I wouldn't put a picture online without airbrushing and teeth whitening.
Sometime since then, machine learning software has started shopping pictures together better than I ever could. And faking voices. And pornographic videos. And... who knows what else.
I've always thought the solution to this would be some blockchain thing. Because this is maybe one legitimate use case for blockchain - you put together a digital provenance record for digital media. A chain of custody linking something from someone's phone camera to the moment it's distributed online.
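The chain-of-custody idea can be sketched with nothing fancier than hashing: each record stores a fingerprint of the media plus the hash of the previous record, so any edit anywhere breaks every link after it. This is a minimal sketch, not any real provenance system; the record layout and the "actor" field are made up for illustration.

```python
import hashlib
import json

def record(media_bytes, actor, prev_hash):
    """One link in a hypothetical chain of custody for a media file.
    The `actor` field and record layout are invented for this sketch."""
    entry = {
        "media_sha256": hashlib.sha256(media_bytes).hexdigest(),
        "actor": actor,
        "prev": prev_hash,
    }
    # Hash the whole record so later links also commit to this one.
    entry_hash = hashlib.sha256(
        json.dumps(entry, sort_keys=True).encode()
    ).hexdigest()
    return entry, entry_hash

photo = b"raw sensor bytes from a phone camera"
capture, h1 = record(photo, "phone-camera", prev_hash=None)
upload, h2 = record(photo, "news-site-cdn", prev_hash=h1)

# Anyone can re-walk the chain: if a single byte of the photo changes,
# the fingerprints stop matching from that point forward.
tampered, _ = record(photo + b"!", "phone-camera", prev_hash=None)
print(capture["media_sha256"] == upload["media_sha256"])    # same file
print(tampered["media_sha256"] == capture["media_sha256"])  # edited file
```

Whether the records live on a blockchain or in a plain database is almost beside the point; the linking is what gives you the tamper-evidence.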
Or maybe another solution is a "certificate authority" sort of thing. You use public-private key cryptography: artists, photographers, or whoever sign media with their private keys, and various authority organizations track which public keys belong to whom.
Photo credit - AWS Security Blog
Then, when people interact with the media, data about the source is displayed and/or tracked. You just have to trust that the authorities themselves aren't faking it.
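The sign-and-verify flow above can be sketched in a few lines. This uses the third-party `cryptography` package's Ed25519 primitives as a stand-in; the "registered key" framing is my assumption about how an authority would fit in, not any existing system.

```python
# Requires the third-party `cryptography` package (pip install cryptography).
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
from cryptography.exceptions import InvalidSignature

# A photographer generates a keypair; the public half would be registered
# with some (hypothetical) authority that vouches for their identity.
private_key = Ed25519PrivateKey.generate()
public_key = private_key.public_key()

photo = b"raw image bytes straight off the camera"
signature = private_key.sign(photo)

# A viewer's app looks up the registered public key and checks the signature.
try:
    public_key.verify(signature, photo)
    print("verified: signed by the registered key")
except InvalidSignature:
    print("warning: signature does not match")

# An edited copy of the same photo fails verification.
try:
    public_key.verify(signature, photo + b" airbrushed")
    print("verified")
except InvalidSignature:
    print("warning: edited media, signature invalid")
```

The cryptography here is the easy part; the hard part is the trust chain from "this key" to "this person," which is exactly where the certificate-authority analogy comes in.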
All things considered... I don't know what to do for sure.
But a long-term solution I never considered for this "deepfakes" problem is training more software to detect fakes instead of make them.
Yet that's what the "experts" seem to propose.
Photo credit - The Observer
To detect fakes, there have to be giveaways that what you're looking at or listening to is counterfeit. But the deepfake models/algorithms/software won't stop improving.
Which means your detection models/algorithms/software need to improve too. And as this cycles back and forth, it strikes me as an arms race of sorts.
Should deepfakes be fought with more neural networks? I don't think so, but what do I know.
The one thing for certain is... the future will be a weird place.