There’s a well-known adage: “You don’t have to run faster than the bear. You have to run faster than the guy next to you.”
When it comes to financial institutions protecting their portfolios from identity fraud, the most practical path is to assess exposure based on the risk level of each credit, loan, or investment product before it launches, and then apply the appropriate machine learning-based identity verification and fraud controls to mitigate that risk. FIs will never outrun the proverbial bear, but they can deploy deterrents that persuade fraudsters to look for an easier target elsewhere.
This was the premise of a recent webinar in which I participated as a panelist: “Banking and Payments: Balancing Growth with Security.” The webinar was moderated by Elena Kozhemyakina, founder of Fintech4Funds. My co-panelists included Ian Mitchell, head of financial services fraud management at PwC, and Maya Parbhoe, CEO at Ourox.
We opened the webinar by discussing how fraud and financial crime vulnerabilities have always existed, but the COVID-19 pandemic, widespread job losses, the massive shift to digital onboarding, and the resulting relief in the form of unemployment and stimulus funds have exacerbated the problem. In the rush to serve consumers as quickly and digitally as possible, these new digital products have experienced waves of fraud.
One new and prevalent threat we're seeing, which manifested with PPP stimulus loans, is synthetic identity fraud. A scammer starts with a Social Security number, sometimes stolen from a child or an inmate, and builds a synthetic identity around it using other fabricated data elements. We talked about how some fraudsters are even applying for Social Security numbers for these fake identities. The combination of these factors often produces a very real-looking person who frequently receives account approval.
A Recipe for Success
The good news is that with the proper controls, both traditional fraud and synthetic ID fraud risk can be mitigated. As an example, we brainstormed what this might look like for a startup offering consumer loans, beginning with traditional identity verification for each new customer.
At onboarding, you would want to gather as much intelligence as you can invisibly, such as device ID and IP address. With those elements, you can monitor behavioral biometrics, such as how the applicant interacts with your enrollment form. Once they've completed the form, you'll have a telephone number, email address, and other data points that can be verified. Questions you can answer include: Is that the applicant's actual email address? How long has it existed? Is it associated with fraud?
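To make the signal-gathering step concrete, here is a minimal sketch of how the invisible signals and email checks described above might be represented and evaluated. The field names, thresholds, and flag labels are illustrative assumptions, not any vendor's actual schema or policy.

```python
from dataclasses import dataclass

# Hypothetical container for signals gathered invisibly at onboarding.
# Field names (device_id, email_age_days, email_fraud_hits) are
# illustrative assumptions, not a real product's schema.
@dataclass
class OnboardingSignals:
    device_id: str
    ip_address: str
    email: str
    email_age_days: int    # how long the address has existed
    email_fraud_hits: int  # prior fraud reports tied to the address

def email_risk_flags(signals: OnboardingSignals) -> list:
    """Return risk flags derived from the email checks described above:
    the age of the address and its association with known fraud."""
    flags = []
    if signals.email_age_days < 30:  # illustrative threshold
        flags.append("new_email_address")
    if signals.email_fraud_hits > 0:
        flags.append("email_linked_to_fraud")
    return flags

applicant = OnboardingSignals(
    device_id="dev-123",
    ip_address="203.0.113.7",
    email="new.user@example.com",
    email_age_days=12,
    email_fraud_hits=0,
)
print(email_risk_flags(applicant))  # ['new_email_address']
```

In practice, each flag would feed a broader risk model rather than trigger a decision on its own.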
Finally, after the above steps, you apply smart machine learning and challenge applicants dynamically. In some cases, the person may appear very safe, but your product is risky enough that you may want to employ a step-up method to be sure the applicant is who they say they are. You might add a document verification solution that also incorporates a passive one-time password: when the link is sent, the phone receiving the password can be evaluated at the same time. Robust risk assessment confirming the user is low risk, layered with this additional verification, makes riskier product offerings less risky.
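The dynamic-challenge idea above can be sketched as a small routing function: even a low-risk applicant is stepped up to document verification with a one-time password when the product itself is high risk. The thresholds, tier names, and outcome labels are assumptions for illustration, not a production decision policy.

```python
# Illustrative step-up routing. identity_risk is assumed to be a model
# score in [0, 1] (higher = riskier); product_risk is an assumed tier.
def decide_verification(identity_risk: float, product_risk: str) -> str:
    if identity_risk >= 0.7:
        # Too risky to auto-approve regardless of product.
        return "decline_or_manual_review"
    if product_risk == "high":
        # Step-up: document verification plus a passive one-time
        # password, even for applicants who look safe.
        return "doc_verification_with_otp"
    return "auto_approve"

print(decide_verification(0.1, "high"))  # doc_verification_with_otp
print(decide_verification(0.1, "low"))   # auto_approve
```

The key design point is that the challenge depends on both the applicant's risk and the product's risk, so the same applicant can see different friction on different products.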
Roadmap of Cooperation
This is where the industry is going, but there are other developments in the marketplace that could make for even more robust workflows. We talked about the emergence of eCBSV, a new service offered through the Social Security Administration (SSA), which allows an approved entity, with the applicant's consent, to verify whether the applicant's name, date of birth, and Social Security number match the SSA's records. This has huge implications for quashing synthetic identities.
The other opportunity we discussed is how the financial services industry can come together to share richer data that can be fed into a machine learning framework and deployed appropriately. The idea would be to create a consortium: a single shared data view across banks and other financial services providers, assembled not for competitive purposes but to fight financial crime. That would yield a more holistic picture, not just for fraud and AML separately, but as a truly convergent model built on a construct that has existed for a long time and would only get better with scale. At that point, the focus shifts to how that machine learning is applied. It would be powerful because we tend to think in silos today, when we could instead leverage advanced analytics across shared data to mitigate this risk.
Whether your organization offers credit, money transfer, investment, or other types of financial products, you could suffer significant losses due to synthetic identity fraud, as we discussed during the webinar. At Socure, we apply a multi-layered approach that looks beyond PII elements and leverages advanced analytics and diverse, deep data sets to gain conviction on the applicant's identity. Deploying machine learning to detect synthetic IDs creates efficiencies and avoids manual reviews and human errors, while optimizing speed and customer experience.
Socure’s Sigma Synthetic Fraud solution tackles synthetic ID fraud through feature engineering and data source analysis. It uses both supervised and unsupervised machine learning models to derive a common definition of synthetic identity fraud, upon which Socure developed classification models that achieve an area under the ROC curve (AUC) of 97.3%, with an auto-fraud capture rate of 90% or higher in the riskiest 3% of users. And when your product falls into a risky category where additional vetting is warranted, Socure’s fully automated, omnichannel document verification service, DocV, provides greater clarity. When the comprehensive suite of ID+ solutions is combined, Socure supports auto-decisioning rates of up to 98%.
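For readers unfamiliar with the "capture rate in the riskiest N%" metric cited above, here is a minimal sketch of how it is computed from model scores and fraud labels. The scores and labels below are made up for illustration and have no relation to Socure's reported results.

```python
# Compute the fraction of all fraud captured within the riskiest
# top_frac of applicants, ranked by model risk score.
def capture_rate_at_top(scores, labels, top_frac):
    """scores: model risk scores (higher = riskier);
    labels: 1 = fraud, 0 = legitimate;
    top_frac: fraction of the population to inspect (e.g. 0.03)."""
    ranked = sorted(zip(scores, labels), key=lambda p: p[0], reverse=True)
    cutoff = max(1, int(len(ranked) * top_frac))
    caught = sum(label for _, label in ranked[:cutoff])
    total = sum(labels)
    return caught / total if total else 0.0

# Toy example: 10 applicants, 3 of whom are fraud.
scores = [0.95, 0.90, 0.40, 0.30, 0.20, 0.10, 0.05, 0.02, 0.01, 0.005]
labels = [1,    1,    0,    1,    0,    0,    0,    0,    0,    0]
# The riskiest 20% (top 2 scores) catch 2 of the 3 fraud cases.
print(capture_rate_at_top(scores, labels, 0.2))  # 0.6666666666666666
```

A high capture rate in a small top fraction is what makes auto-decisioning practical: most fraud is concentrated where the model looks hardest, so the rest of the population can flow through with minimal friction.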
For more information on how Socure can mitigate traditional identity fraud and synthetic identity fraud in your digital customer ecosystem, contact firstname.lastname@example.org.