
On February 5, an article came out about a new service that fraudsters can use to circumvent document verification with instantly generated deepfake IDs.

A New Era of Technical Capability

First, it’s incredible that these types of complex, digital documents can be produced at the click of a button with nothing more than artificial intelligence. We’re truly living in a new era of technical capability.

And to be sure, the deepfakes that are generated are very good. While there are minor visual issues that could be distinguished by the naked eye, for the most part it is as good as the fake IDs from your friendly neighborhood fake ID manufacturer — at a much lower cost and effort than ever before.

But fake IDs are nothing new. For decades, fake IDs have been produced in massive quantities to help people under 21 get into bars and nightclubs in the U.S. And those fake IDs have typically been good enough to fool a bouncer who looks at thousands of IDs a month. If not, there wouldn’t be a market for them in the first place.

Why This Identity Fraud is Different

So why is this new identity fraud concerning? Well, for one, the scale. High-quality fake IDs have always required time and money, so they were not typically a scalable attack vector for identity fraud.

Second, the prevalence of online document verification has exploded, particularly for opening accounts at financial institutions, where getting it wrong can be very bad. A handful of fraudulent account openings is business as usual for many financial institutions, where there has always been a tolerable level of identity fraud above zero. But at this new scale, fraudsters can multiply their impact dramatically.

What Do We Do Now?

First, we need to acknowledge that document verification, which was always hard, is about to get a whole lot harder. This is just the beginning of this trend of broadening access to identity fraud tooling. 

Second, we need to revisit our assumptions about what acceptable looks like for digital identity verification.

To date, a huge proportion of ID verification that happens online is still at least partially powered by humans. If you have ever waited more than 15 seconds to get a result after uploading your ID, there was a human on the other side manually checking that ID. 

That is completely obsolete starting today. Humans will not be able to easily distinguish between deepfakes and real IDs. If bouncers can’t do it when they’re looking at or touching the fake IDs, what chance does some manual reviewer have looking at an image on a screen?

The most obvious identity fraud vectors are no longer the ones you need to be worried about. Why would you manually alter an ID or take a photocopy of it if you can just use a service to create a deepfake that is visually indistinguishable from a real one? Why take that risk as a fraudster? 

We need to rethink our toolchain to catch the really scary identity fraud — the scalable, technology-powered kind.

The Path Forward

It’s time to fight fire with fire. Organizations need technology and big data to solve this next generation of digital identity verification problems, because humans are not the answer. 

Here’s how that works:

  • Look at the full digital identity picture rather than examining each piece individually 
  • Leverage a defense-in-depth approach 

In security, you never rely on a single protection to keep bad actors at bay. Instead, you think about how you can create many layers of defense such that no single failure can cause full system failure.
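To make the layering idea concrete, here is a minimal sketch of a defense-in-depth decision, where several independent checks each contribute a risk signal and no single layer decides the outcome alone. All names, thresholds, and the combination logic here are illustrative assumptions, not Socure's actual product or API:

```python
# Minimal illustrative sketch of defense-in-depth: independent checks each
# emit a risk signal, and the decision combines them so that no single
# layer failing (or passing) determines the final outcome by itself.

from dataclasses import dataclass
from typing import List

@dataclass
class Signal:
    name: str
    risk: float  # 0.0 = clearly safe, 1.0 = near-certain fraud

def layered_decision(signals: List[Signal],
                     reject_threshold: float = 0.9,
                     review_threshold: float = 0.5) -> str:
    """Combine layers: any near-certain layer blocks; broad suspicion escalates."""
    if not signals:
        return "review"  # no evidence either way -> escalate to review
    worst = max(s.risk for s in signals)
    avg = sum(s.risk for s in signals) / len(signals)
    if worst >= reject_threshold:
        return "reject"   # one high-confidence layer is enough to block
    if avg >= review_threshold:
        return "review"   # many mildly suspicious layers add up
    return "accept"

# Example: each layer is only mildly suspicious, so the request passes.
signals = [
    Signal("document_visual_check", 0.2),
    Signal("device_reputation", 0.1),
    Signal("phone_identity_match", 0.3),
]
print(layered_decision(signals))  # → accept
```

The point of this shape is exactly the one above: a deepfake that defeats the visual check still has to survive the device, phone, and history layers, so one failed defense does not mean full system failure.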

The Answer: Predictive Document Verification

With Predictive DocV, we’ve packaged this all up into a single product. Before you’ve uploaded your first image, we have a huge amount of knowledge about the user who is going through the process: 

  • Have we seen them before? 
  • What devices have they used? 
  • Is their information in the source of truth datasets? 
  • Does the phone number they’re using correlate to their digital identity? 
  • Are they likely to be a synthetic identity? 
  • Are they likely to be a third-party fraudster? 

Once we’ve answered those questions, we collect your identity documents and compare them to a selfie. In addition to our liveness and image-recapture models, we look for inconsistencies in the data presented. This includes spotting minor visual flaws in the photos, as well as checking the submitted data for consistency against the data extracted from the document. 
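One of those consistency checks can be sketched very simply: compare what the user typed against what was extracted from the document (e.g., by OCR) and flag fields that disagree. This is an illustrative assumption about how such a check might look, not Socure's implementation:

```python
# Illustrative sketch: flag fields where the user-submitted data disagrees
# with the data extracted from the identity document.

def field_mismatches(submitted: dict, extracted: dict) -> list:
    """Return the names of fields present in both sources that disagree,
    after normalizing case and whitespace."""
    def norm(value) -> str:
        return " ".join(str(value).strip().lower().split())
    return [field for field in submitted
            if field in extracted
            and norm(submitted[field]) != norm(extracted[field])]

submitted = {"name": "Jane Q. Public", "dob": "1990-04-01"}
extracted = {"name": "JANE Q. PUBLIC", "dob": "1990-04-02"}  # OCR result
print(field_mismatches(submitted, extracted))  # → ['dob']
```

A mismatch like the one above would not be a verdict on its own; it would simply be one more signal feeding the layered decision described earlier.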

And we’re doing all of this with an incredible user experience and a response time of under 2 seconds.

While this recent news is surprising for many, it’s not at all surprising to us at Socure. We’ve been anticipating this day, planning for it, and executing to prepare for this new reality. Welcome to the new age.

See our Predictive DocV in action by scheduling a live demo here.

Eric Levine

Eric Levine is the SVP, Head of Document Verification at Socure. Prior to that role, he was the Co-Founder and CEO at Berbix where they built privacy-first document verification products and were acquired by Socure in 2023. Prior to founding Berbix, he led the Engineering Trust & Safety team at Airbnb where he built many of the early components of the Airbnb identity systems.