
Every Business Leader Should Know About the Recent Deep Fake Experience of Bill Browder

After the murder by the Russian state of his company’s lead lawyer in Russia, Sergei Magnitsky, Bill Browder became a sworn enemy of Vladimir Putin.  Browder was the driving force behind the passage of the Magnitsky Act in 2012, which has pinched the Russian oligarchs ever since – shrinking their global footprint of yachts and mansions, imposing genuinely effective economic sanctions, and freezing the cash and assets of their businesses.  Putin, of course, took notice – and has been surveilling Browder and, if at all possible, trying to take him out for good.

So, considering his experience and the resources available to him to protect himself, his family, and his business and activist communications against the Russian threat directed at him, Browder’s recent appearances on a few media outlets recounting a live deep fake video call he experienced are, well, alarming – and worth a post here to raise your organization’s risk awareness. We include What Next?, a section at the end of the post with some formative directives on what you and your organization can begin to do to guard against what most experts predict will get exponentially worse with the further democratization of generative AI techniques.

Background:  Three Recent Deepfakes in the U.S.

Bill Browder’s Live Deep Fake Video Call

Browder recently shared his experience in an episode of WBUR’s On Point (linked above) – “Reality wars: Deepfakes and national security” – which also includes a longer conversation about deep fakes with:

Guests

Hany Farid, professor at the University of California, Berkeley’s Schools of Information and Electrical Engineering and Computer Sciences. He specializes in digital forensics, generative AI and deepfakes.

Jamil Jaffer, founder and executive director of the National Security Institute at the Antonin Scalia Law School at George Mason University. Venture partner with Paladin Capital Group, which invests in dual-use national security technologies.

Also Featured

Bill Browder, head of the Global Magnitsky Justice Campaign.

Wil Corvey, program manager for the Semantic Forensics (SemaFor) program at the Defense Advanced Research Projects Agency (DARPA), which aims to develop technologies to detect and analyze deepfakes.

On Point Host MEGHNA CHAKRABARTI: As Bill Browder says, as a result of his constant criticism of Vladimir Putin, he has had to protect every aspect of his life, his physical safety, his financial safety, even his digital safety. Browder told us he’s always on guard against any way in real life or online that Putin might get to him.

But he’s also still criticizing the Russian regime, and most recently, he’s been vocally supporting sanctions against Russia for its attack on Ukraine. So just a few weeks ago, Browder told us he wasn’t surprised at all to get an email that seemed to come from former Ukrainian President Petro Poroshenko, asking if Browder would schedule a call to talk about sanctions.

BILL BROWDER: And so that seemed like a perfectly appropriate approach. The Ukrainians are very interested in sanctions against Russia. And so, I asked one of my team members to check it out, make sure it’s legit, and then schedule it. I guess in the rush of things that were going on that week, this person didn’t actually do anything other than call the number on the email. The person seemed very pleasant and reasonable. The call was scheduled, and I joined the call a little bit late.

I’m on like 10 minutes after it started because of some transportation issues, and apparently before I joined there was an individual who showed up on the screen saying, I’m the simultaneous translator. I’m going to be translating for former President Poroshenko. And there’s an image of the Petro Poroshenko as I know him to look like. And he starts talking. It was odd because everybody else, as they were talking, you could see them talking.

And he was talking, and there was this weird delay, which I attributed to the simultaneous translation. It was as if you’re watching some type of foreign film that was dubbed in. So, you know, you’re watching the person’s lips move, and it doesn’t correspond with the words coming out of their mouth. Then it started getting a little odd. The Ukrainians, of course, are under fire, under attack by the Russians. And this fellow who portrayed himself as Petro Poroshenko started to ask the question, “Don’t you think it would be better if we released some of the Russian oligarchs from sanctions if they were to give us a little bit of money?”

And it just seemed completely odd. And I gave the answer which I would give in any public setting. And I said, “No, I think the oligarchs should be punished to the full extent of the sanctions.” And then he did something even stranger, which is he said, “Well, what do others think on this call?” And that’s a very unusual thing. If it’s sort of principal to principal, people don’t usually ask the principal’s aides what they think of the situation.

But my colleagues then chimed in and said various things, and I didn’t think that it wasn’t Poroshenko. I just thought, what an unimpressive guy. All these crazy and unhelpful ideas he’s coming up with. No wonder he’s no longer president. That was my first reaction. And then it got really weird. And as the call was coming to an end, he said, “I’d like to play the Ukrainian national anthem, and will you please put your hands on your heart?”

And again, we weren’t convinced it wasn’t Petro Poroshenko. And so, we all put our hands on our heart. Listening to the Ukrainian national anthem, I had some reaction that maybe this wasn’t for real, but there he was this Petro Poroshenko guy. Then the final moment that I knew that this was a trick was when he put on some rap song, in Ukrainian, that I don’t know what it said. And asked us to continue putting our hands on our hearts. And at that point, it was obvious that we had been tricked into some kind of deepfake.

Well, this was done by the Russians. Why would the Russians do this? Well, the Russians have been trying to discredit me for a long time, in every different possible way. And I think what they were hoping to do is to get me in some type of setting where I would say something differently than I had said publicly.

I’ve been under attack. Under death threat, under a kidnapping threat by the Russians since the Magnitsky Act was passed in 2012. And so the fact that they’ve actually penetrated my defenses is very worrying. The fact that we didn’t pick it up is extremely worrying. And I think thankfully, I mean, in a certain way, this is a very cheap lesson. Because nobody was hurt, nobody was killed, nobody was kidnapped. You know, we all just looked a little stupid. And I’m glad they taught me this lesson because since then, we’ve dramatically heightened our vigilance and our security. Maybe we’ve just gotten too relaxed, but we aren’t anymore.

CHAKRABARTI: Bill Browder, a prominent critic of the Russian government. Now, Browder also told us that he and his staff finally confirmed that the call was indeed a deepfake when they took a much closer look at where the email, supposedly from Poroshenko, came from. Turns out they traced the email back to a domain in Russia that had only recently been created.

So, Browder’s experience raises the question once again about what happens when deep fakes move from the realm of saying a thing that a celebrity never said, and into the realm of governments using deepfakes against each other. (2)

What Next?

How we can respond

The following are some really insightful suggestions from Project Liberty:

Some believe that within a few years, up to 90 percent of online content could be synthetically generated. While generative AI has the potential to democratize access to creative tools and expand economic livelihoods for creators and entrepreneurs, the sheer volume of synthetic media could also erode trust in video or audio recordings—and in the news more generally, which is why we’ll need to leverage a whole suite of solutions to fight back:

  • More fact-checkers: In Taiwan, a group of fact-checking nonprofits uses tools developed by tech companies to find and debunk disinformation.
  • Partnerships with tech firms: Google has trained more than 100 Taiwanese government officials and legislative and campaign staff on how to use tools to detect deep fakes and disinformation.
  • Cryptographic signatures: The Coalition for Content Provenance and Authenticity (C2PA) is an open technical standard providing publishers, creators, and consumers the ability to trace the origin of different types of media. It’s being used to create a signature on a piece of media to prove its legitimacy.
  • Watermarking: Embedding a digital watermark in videos can help people trace the origin of the video.
  • Regulation: Governments have been slow to respond to deepfakes, but earlier this year China rolled out rules requiring deepfakes to have the subject’s consent and include digital signatures or watermarks. In the US, both Texas and California have laws banning deepfakes.
  • Media literacy for the public: Developing the public’s media literacy so they can detect truth from fiction is critical. MIT has created a free online course for the public.
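To make the cryptographic-signature idea above concrete: the core property C2PA relies on is tamper evidence – any change to the media bytes invalidates the signature. The real C2PA standard uses certificate-backed signatures embedded in a manifest; the sketch below is only a minimal illustration of the underlying principle, using a keyed HMAC over the raw bytes. The key, variable names, and media bytes here are all hypothetical placeholders.

```python
import hashlib
import hmac

def sign_media(media_bytes: bytes, key: bytes) -> str:
    """Produce a provenance signature (hex digest) over the media bytes."""
    return hmac.new(key, media_bytes, hashlib.sha256).hexdigest()

def verify_media(media_bytes: bytes, key: bytes, signature: str) -> bool:
    """Recompute the signature and compare in constant time."""
    expected = sign_media(media_bytes, key)
    return hmac.compare_digest(expected, signature)

# Hypothetical publisher key and media payload, for illustration only
publisher_key = b"example-shared-key"
video = b"...raw video bytes..."

sig = sign_media(video, publisher_key)
assert verify_media(video, publisher_key, sig)             # untampered media verifies
assert not verify_media(video + b"x", publisher_key, sig)  # any edit breaks the signature
```

The design point is that verification depends on every byte of the media: a single altered frame changes the digest, so a consumer with the publisher’s verification material can detect tampering even without seeing the original.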

As the number of deepfakes continues to grow, so will the number of tools and approaches to detect and regulate them. The development of responsible technology can match the development of technology used to mislead, but it will require equipping citizens, journalists, and lawmakers with the tools they need to stay ahead of the curve.

What can a business leader do to protect their company and employees against deepfakes?

An article from the MIT Sloan School of Management – “Deepfakes, explained” – featured the following interesting recommendations from Henry Ajder, head of threat intelligence, and his team at deepfake detection company Deeptrace:

“Deeptrace takes the approach championed by WITNESS Program Director Samuel Gregory: Don’t panic. Prepare.

“When it comes to securing business processes, you’ve got to identify the avenues where risks are most apparent,” Ajder said. “Maybe that is your telecom infrastructure in the company, maybe it’s the kind of video conferencing software you use.”

Recommendations include:

  • Consider using semantic passwords for conversations, or a secret question you ask or answer at the start of a call.
  • If you have a voice authentication service or biometric security features, ask those providers whether their tools are up to date.
  • Educate your employees. Explain that deepfake attacks might become more frequent and there is no magic formula for detecting them.
  • “Interrogate your security infrastructure,” Ajder said. “Understand where weak spots may be, prepare and see where technological solutions can fit into that infrastructure to secure at critical points.”
  • In [Modulate CEO and co-founder Mike] Pappas’ mind, it’s everyone’s responsibility to protect against malicious deep fakes.
  • “The social answer is we all build an immune system,” he said. “We start asking ourselves questions: Who is the person presenting this image to me? Where did it come from? What is evident, what is actually authentic? Having that general demeanor of asking these questions certainly helps.”
  • [Matt Groh, a research assistant with the Affective Computing Group at the MIT Media Lab] said people can defend themselves against deep fakes using their own intuition and intellect.
  • “You have to be a little skeptical, you have to double-check and be thoughtful,” Groh said. “It’s actually kind of nice: It forces us to become more human because the only way to counteract these kinds of things is to really embrace who we are as people.”
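The first recommendation above – a semantic password or secret question agreed out of band – can be operationalized so that the secret itself is never stored. A minimal sketch, assuming both parties pre-share a phrase over a trusted channel: keep only a salted hash of the agreed answer and compare in constant time at the start of a call. The phrase and function names are hypothetical, not from the MIT Sloan article.

```python
import hashlib
import secrets

def enroll(answer: str) -> tuple[bytes, bytes]:
    """Store a salted hash of the agreed answer, never the answer itself."""
    salt = secrets.token_bytes(16)
    digest = hashlib.pbkdf2_hmac("sha256", answer.lower().encode(), salt, 100_000)
    return salt, digest

def check(answer: str, salt: bytes, digest: bytes) -> bool:
    """Verify a spoken answer against the stored salted hash."""
    candidate = hashlib.pbkdf2_hmac("sha256", answer.lower().encode(), salt, 100_000)
    return secrets.compare_digest(candidate, digest)

# Hypothetical pre-agreed phrase, exchanged in person or over a trusted channel
salt, digest = enroll("blue heron at dawn")

assert check("Blue Heron at Dawn", salt, digest)  # case-insensitive match passes
assert not check("wrong phrase", salt, digest)    # an impersonator fails the check
```

Note the trade-off: a deepfaked caller can clone a voice and face but cannot answer a question whose answer was never transmitted digitally, which is exactly the gap this check exploits.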

Ready to go deeper?

Test your deepfake-spotting skills.

Experiment with the MIT Media Lab’s artificial intelligence tool Deep Angel.

Watch: In Event of Moon Disaster.

Read ‘The biggest threat of deepfakes isn’t the deepfakes themselves’ at MIT Technology Review.

Read The State of deepfakes, a 2019 report from Deeptrace.” (1)

For more OODA Loop News Briefs and Original Analysis, see Deepfakes | OODA Loop.

https://oodaloop.com/archive/2022/03/24/you-be-the-judge-deepfakes-enter-the-information-warfare-ecosystem/

https://oodaloop.com/archive/2019/02/27/securing-ai-four-areas-to-focus-on-right-now/


About the Author

Daniel Pereira

Daniel Pereira is research director at OODA. He is a foresight strategist, creative technologist, and an information communication technology (ICT) and digital media researcher with 20+ years of experience directing public/private partnerships and strategic innovation initiatives.