After the Russian state murdered his company’s lead lawyer in Russia, Sergei Magnitsky, Bill Browder became a sworn enemy of Vladimir Putin. Browder was the driver behind the passage of the Magnitsky Act in 2012, which has squeezed the Russian oligarchs ever since – shrinking their global footprint of yachts and mansions, imposing genuinely effective economic sanctions, and freezing the cash and assets of their businesses. Putin, of course, took notice – and has been surveilling Browder and, where possible, trying to take him out for good.
So, considering the experience and resources available to him to protect himself, his family, and his business and activist communications against the Russian threat directed at him, Browder’s recent appearances on a few media outlets recounting a live deepfake video call he experienced are, well, alarming – and worth a post here to raise your organization’s risk awareness. At the end of the post we include What’s Next?, a section with some formative directives on what you and your organization can begin to do to guard against a threat most experts predict will get exponentially worse as generative AI techniques are further democratized.
Browder recently shared his experience in an episode of WBUR’s On Point – “Reality wars: Deepfakes and national security” – which also includes a longer conversation about deepfakes with:
Hany Farid, professor at the University of California, Berkeley’s Schools of Information and Electrical Engineering and Computer Sciences. He specializes in digital forensics, generative AI and deepfakes.
Jamil Jaffer, founder and executive director of the National Security Institute at the Antonin Scalia Law School at George Mason University. Venture partner with Paladin Capital Group, which invests in dual-use national security technologies.
Bill Browder, head of the Global Magnitsky Justice Campaign.
Wil Corvey, program manager for the Semantic Forensics (SemaFor) program at the Defense Advanced Research Projects Agency (DARPA), which aims to develop technologies to detect and analyze deepfakes.
On Point Host MEGHNA CHAKRABARTI: As Bill Browder says, as a result of his constant criticism of Vladimir Putin, he has had to protect every aspect of his life, his physical safety, his financial safety, even his digital safety. Browder told us he’s always on guard against any way in real life or online that Putin might get to him.
But he’s also still criticizing the Russian regime, and most recently, he’s been vocally supporting sanctions against Russia for its attack on Ukraine. So just a few weeks ago, Browder told us he wasn’t surprised at all to get an email that seemed to come from former Ukrainian President Petro Poroshenko, asking if Browder would schedule a call to talk about sanctions.
BILL BROWDER: And so that seemed like a perfectly appropriate approach. The Ukrainians are very interested in sanctions against Russia. And so, I asked one of my team members to check it out, make sure it’s legit, and then schedule it. I guess in the rush of things that were going on that week, this person didn’t actually do anything other than call the number on the email. The person seemed very pleasant and reasonable. The call was scheduled, and I joined the call a little bit late.
I’m on like 10 minutes after it started because of some transportation issues, and apparently before I joined there was an individual who showed up on the screen saying, I’m the simultaneous translator. I’m going to be translating for former President Poroshenko. And there’s an image of the Petro Poroshenko as I know him to look like. And he starts talking. It was odd because everybody else, as they were talking, you could see them talking.
And he was talking, and there was this weird delay, which I attributed to the simultaneous translation. It was as if you’re watching some type of foreign film that was dubbed in. So, you know, you’re watching the person’s lips move, and it doesn’t correspond with the words coming out of their mouth. Then it started getting a little odd. The Ukrainians, of course, are under fire, under attack by the Russians. And this fellow who portrayed himself as Petro Poroshenko started to ask the question, “Don’t you think it would be better if we released some of the Russian oligarchs from sanctions if they were to give us a little bit of money?”
And it just seemed completely odd. And I gave the answer which I would give in any public setting. And I said, “No, I think the oligarchs should be punished to the full extent of the sanctions.” And then he did something even stranger, which is he said, “Well, what do others think on this call?” And that’s a very unusual thing. If it’s sort of principal to principal, people don’t usually ask the principal’s aides what they think of the situation.
But my colleagues then chimed in and said various things, and I didn’t think that it wasn’t Poroshenko. I just thought, what an unimpressive guy. All these crazy and unhelpful ideas he’s coming up with. No wonder he’s no longer president. That was my first reaction. And then it got really weird. And as the call was coming to an end, he said, “I’d like to play the Ukrainian national anthem, and will you please put your hands on your heart?”
And again, we weren’t convinced it wasn’t Petro Poroshenko. And so, we all put our hands on our heart. Listening to the Ukrainian national anthem, I had some reaction that maybe this wasn’t for real, but there he was this Petro Poroshenko guy. Then the final moment that I knew that this was a trick was when he put on some rap song, in Ukrainian, that I don’t know what it said. And asked us to continue putting our hands on our hearts. And at that point, it was obvious that we had been tricked into some kind of deepfake.
Well, this was done by the Russians. Why would the Russians do this? Well, the Russians have been trying to discredit me for a long time, in every different possible way. And I think what they were hoping to do is to get me in some type of setting where I would say something differently than I had said publicly.
I’ve been under attack. Under death threat, under a kidnapping threat by the Russians since the Magnitsky Act was passed in 2012. And so the fact that they’ve actually penetrated my defenses is very worrying. The fact that we didn’t pick it up is extremely worrying. And I think thankfully, I mean, in a certain way, this is a very cheap lesson. Because nobody was hurt, nobody was killed, nobody was kidnapped. You know, we all just looked a little stupid. And I’m glad they taught me this lesson because since then, we’ve dramatically heightened our vigilance and our security. Maybe we’ve just gotten too relaxed, but we aren’t anymore.
CHAKRABARTI: Bill Browder, a prominent critic of the Russian government. Now, Browder also told us that he and his staff finally confirmed that the call was indeed a deepfake when they took a much closer look at the email – the one supposedly from Poroshenko – to see where it came from. It turns out they traced the email back to a domain in Russia that had only recently been created.
So, Browder’s experience once again raises the question of what happens when deepfakes move from the realm of putting words in a celebrity’s mouth into the realm of governments using deepfakes against each other. (2)
The following are some insightful suggestions from Project Liberty:
Some believe that within a few years, up to 90 percent of online content could be synthetically generated. While generative AI has the potential to democratize access to creative tools and expand economic livelihoods for creators and entrepreneurs, the sheer volume of synthetic media could also erode trust in video or audio recordings—and in the news more generally, which is why we’ll need to leverage a whole suite of solutions to fight back:
As the number of deepfakes continues to grow, so will the number of tools and approaches to detect and regulate them. The development of responsible technology can match the development of technology used to mislead, but it will require equipping citizens, journalists, and lawmakers with the tools they need to stay ahead of the curve.
An article from the MIT Sloan School of Management – “Deepfakes, explained” – featured the following interesting recommendations from Henry Ajder, head of threat intelligence, and his team at deepfake detection company Deeptrace:
“Deeptrace takes the approach championed by WITNESS Program Director Samuel Gregory: Don’t panic. Prepare.
“When it comes to securing business processes, you’ve got to identify the avenues where risks are most apparent,” Ajder said. “Maybe that is your telecom infrastructure in the company, maybe it’s the kind of video conferencing software you use.”
Recommendations include:
Test your deepfake-spotting skills.
Experiment with the MIT Media Lab’s artificial intelligence tool Deep Angel.
Watch: In Event of Moon Disaster.
Read ‘The biggest threat of deepfakes isn’t the deepfakes themselves’ at MIT Technology Review.
Read The State of deepfakes, a 2019 report from Deeptrace.” (1)
For more OODA Loop News Briefs and Original Analysis, see Deepfakes | OODA Loop.
https://oodaloop.com/archive/2022/03/24/you-be-the-judge-deepfakes-enter-the-information-warfare-ecosystem/
https://oodaloop.com/archive/2019/02/27/securing-ai-four-areas-to-focus-on-right-now/