We’re in the midst of a presidential election. Along with the campaigning, speeches and crowds comes the inevitable increase in disinformation. I hate it. It gets in the way of an honest national discussion on the issues that engage us and divide us.
And I particularly hate it when it comes from abroad. But what happens today isn’t like the 1950s, when the Soviet Union ran disinformation campaigns through obscure left-leaning Italian newspapers. It planted fake news in them, which would then be picked up by the Austrian press, then the German press, then appear in English or French newspapers. Eventually a U.S. daily or two would publish the story as fact.
That’s a long way from what goes on today. Now thousands of bots can blitz American social media in minutes.
And new technology is raising the stakes even higher. I call it “disinformation on steroids.” It’s the use of machine learning to create hoax videos. But these are not your garden variety “cheapfake” videos we’re all used to. Most of those are pretty obvious and don’t need specialized expertise to produce.
These new ones are “deepfakes.” They often depict famous people. The video looks like them and sounds like them, but it’s completely synthesized. Unlike cheapfakes, these videos are much harder to detect (take a look at these 10 examples). And that makes them much more dangerous.
Because they’re getting easier and cheaper to produce, they’re also proliferating. A recent study found more than 145,000 examples online so far this year. That’s nine times more than last year.
This goes way beyond hacking emails or the crude manipulation of cheapfake videos. Deepfakes are generated by artificial intelligence (AI). And they can continue to learn and improve.
Earlier this month, Microsoft launched a detector tool in the hopes of helping find disinformation aimed at November’s U.S. election. It also warned that, “The fact that [deepfakes are] generated by AI that can continue to learn makes it inevitable that they will beat conventional detection technology.”
But big names like Microsoft aren’t the only ones trying to address this critical issue. Startups are getting involved too. Sentinel is developing a detection platform for identifying deepfakes. Founder and CEO Johannes Tammekänd says that “we already reached the point where somebody can’t say with 100% certainty if a video is a deepfake or not.”
“Nobody has a very good method of how to detect those,” he adds, “unless the video is somehow ‘cryptographically’ verifiable… or unless somebody has the original video from multiple angles.”
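Tammekänd’s point about videos being “cryptographically” verifiable can be illustrated with a minimal sketch. The idea is simple: a publisher records a cryptographic fingerprint of the original footage at release time, and anyone can later check a circulating copy against it. Any alteration to the file changes the fingerprint. (The byte strings below are stand-ins for real video data; real systems such as content-provenance standards also add digital signatures and metadata, which this sketch omits.)

```python
import hashlib

def fingerprint(data: bytes) -> str:
    """Return a SHA-256 hex digest that fingerprints the content."""
    return hashlib.sha256(data).hexdigest()

# The publisher records the fingerprint of the original footage at release time.
original = b"...raw video bytes..."        # stand-in for real video data
published_digest = fingerprint(original)

# Later, anyone can check a circulating copy against the published digest.
circulating = b"...raw video bytes..."     # unmodified copy
tampered = b"...doctored video bytes..."   # altered copy

print(fingerprint(circulating) == published_digest)  # True: content matches
print(fingerprint(tampered) == published_digest)     # False: content was altered
```

Note what this does and doesn’t buy you: it proves a copy matches a known original, but it can’t flag a deepfake that was never matched against a published original in the first place, which is why Tammekänd adds the caveat about needing the original video.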
This is a serious threat. I guarantee it: if technology can be used to influence political outcomes, public policy and — especially — who comes to power, it will be used for such ends.
Deepfakes jeopardize the legitimacy of our elections. Tammekänd (who’s Estonian, by the way) is worried about this too. “Imagine,” he says, “Joe Biden saying ‘I have cancer, don’t vote for me.’ That video goes viral.”
And the technology to do this, he points out ominously, is already here.
I fear for our democracy and the integrity of our electoral system. There’s no “if” here, only “when.” But perhaps there’s a sliver of good news: this technology is just new and time-consuming enough that this presidential election may escape an onslaught of deepfake disinformation.
Then again, I may be overly optimistic. The Washington Post fears a deepfake bomb could be dropped during November and December — a “delicate period,” it says, “when poll workers are counting mail-in ballots.”
I think it’s unlikely that the world’s governments will be able to effectively prevent deepfakes. It would also be a mistake to turn to the goliath tech companies like Facebook or Google. It would be very expensive for them to develop their own deepfake detection. Sure, they could afford it. But the incentives aren’t there.
It will be up to tech-savvy startups… like Sentinel. It just raised $1.35 million in a seed round. I believe this is just the beginning. There will be other impressive but very small companies raising early-round funds. I’ll be on the lookout for them. And hopefully I’ll recommend one or two to my First Stage Investor members.
The technology created by those startups is going to be critical in winning the battle against future deepfake disinformation campaigns. If we can support a couple of the best ones, it would be good for us — both as investors and as citizens.