
Recently, Socure attended the inaugural ACAMS FinTech Regulatory Summit, where we explored the duality of leading technologies that can be both threats and solutions in the tracking and prevention of financial crimes. On one hand, technologies such as Artificial Intelligence (AI) can be leveraged in new crimes that were previously unimaginable. On the other hand, innovation can (and must) be harnessed to investigate and prevent those new crimes.

I think of it as new methods and new mediums for money laundering as well as financial fraud.

When it comes to money laundering, Internet-based industries are a treasure trove of opportunity. Online video gaming, cryptocurrency, online banking, and peer-to-peer lending can all be abused for the placement and conversion of dirty money. Those Combo Bonus Points! and digital gift cards have enough agreed-upon value to be considered viable forms of currency. Additionally, their markets are generally unregulated and can be easily compromised and/or gamed (no pun intended) by bad actors. Regulators (and the public) need our cooperation to embed the best monitoring and detection safeguards into these platforms to make them safer places for digital life.

AI tools are especially advantageous in perpetrating money laundering and financial fraud because imitating genuine, honest human behavior is a guiding tenet of fraud. Thanks to massive data breaches, online services, and the Internet of Things, there are rich repositories of data available to train AI models that get continuously better at impersonating human images and interactions. One of the ACAMS speakers even asserted that labeled training data is the new gold for fraudsters wanting to leverage AI. As a result, it’s now possible to employ deep fakes and metadata scraping, and even to turn generative adversarial networks on each other in an unremitting tit for tat between tech as innovation and tech as fraud.
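To make that adversarial dynamic concrete, here is a minimal, purely illustrative sketch of how a generative adversarial network works: a generator learns to mimic a target distribution while a discriminator learns to tell real samples from generated ones, and each improves in response to the other. This is my own toy example, not anything shown at ACAMS or used at Socure; it assumes PyTorch is installed and uses a simple synthetic distribution as the "real" data.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)

# "Real" data the generator tries to imitate: a simple 1-D distribution.
def real_batch(n=128):
    return torch.randn(n, 1) * 1.5 + 4.0

generator = nn.Sequential(nn.Linear(8, 32), nn.ReLU(), nn.Linear(32, 1))
discriminator = nn.Sequential(nn.Linear(1, 32), nn.ReLU(), nn.Linear(32, 1))
loss_fn = nn.BCEWithLogitsLoss()
g_opt = torch.optim.Adam(generator.parameters(), lr=1e-3)
d_opt = torch.optim.Adam(discriminator.parameters(), lr=1e-3)

for step in range(2000):
    # Discriminator turn: learn to label real samples 1 and generated samples 0.
    real = real_batch()
    fake = generator(torch.randn(128, 8)).detach()
    d_loss = (loss_fn(discriminator(real), torch.ones(128, 1)) +
              loss_fn(discriminator(fake), torch.zeros(128, 1)))
    d_opt.zero_grad()
    d_loss.backward()
    d_opt.step()

    # Generator turn: learn to make the discriminator call its fakes "real".
    fake = generator(torch.randn(128, 8))
    g_loss = loss_fn(discriminator(fake), torch.ones(128, 1))
    g_opt.zero_grad()
    g_loss.backward()
    g_opt.step()

samples = generator(torch.randn(4096, 8))
print("generated mean/std:", samples.mean().item(), samples.std().item())
```

The same loop, scaled up and pointed at faces instead of numbers, is essentially what powers the deep fakes discussed below.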

In our industry, deep fakes are a dire threat to digital identity because AI software can create images, video, and audio for “nearly flawless” fake identity documents and spearphishing attacks. Have you ever tried to pick out the AI-generated fake faces at http://www.whichfaceisreal.com/? It’s a vivid demonstration of the prowess of deep fake tech. On the flip side of this problem, AI-based identity verification technology can be harnessed to fight back, with a high level of effectiveness, against the tide of both inauthentic digital identities and forged physical documents. Here at Socure, we’ve responded to these threats by merging the expertise of our AI/ML models in verifying (or disproving) digital identity with data from our document verification solution, which judges the authenticity of identity documents. Did I say yet that it’s an unremitting tit for tat … oh yes, I did. Well, I meant it.
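As a purely illustrative sketch of what combining those two signals can look like, the snippet below fuses a hypothetical digital-identity score with a hypothetical document-authenticity score into a single decision. The names, thresholds, and logic are mine for illustration; they are not Socure’s actual models or API.

```python
from dataclasses import dataclass

@dataclass
class VerificationResult:
    identity_score: float   # 0.0 (no match) .. 1.0 (strong match) from an identity model
    document_score: float   # 0.0 (likely forged) .. 1.0 (likely genuine) from a document check

def decide(result: VerificationResult,
           identity_threshold: float = 0.8,
           document_threshold: float = 0.9) -> str:
    """Accept only when both the digital identity and the physical document look
    genuine; clearly bad signals are rejected, and everything else goes to review."""
    if (result.identity_score >= identity_threshold
            and result.document_score >= document_threshold):
        return "accept"
    if result.identity_score < 0.3 or result.document_score < 0.3:
        return "reject"
    return "manual_review"

print(decide(VerificationResult(identity_score=0.92, document_score=0.95)))  # accept
print(decide(VerificationResult(identity_score=0.92, document_score=0.20)))  # reject
```

The point of pairing the signals is that a fraudster now has to beat two independent checks at once, which is exactly the kind of friction the tit-for-tat demands.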

Another example of emerging tech that can cut both ways, as a medium for fraud and as a tool to counteract it, is the biohacking of DNA synthesizers. On one hand, the noise emitted by these machines is a form of metadata that can be misappropriated for harm or repurposed as a counter-terrorism tool. Researchers have applied machine learning to recordings of DNA printers to ascertain which sounds correlate with which chemical processes, and then to reverse engineer the gene sequence being synthesized. This acoustic side-channel attack can enable the theft of intellectual property, which is not an AML financial crime per se … but I wouldn’t put it past criminals to find a way. Can you think of how this tech could be misappropriated for financial crimes? On the other hand (or the solutions side), biohacking can be leveraged as a counter-terrorism monitoring tool to detect whether machines are producing biological weapons.
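To give a feel for how such an acoustic side channel works, here is a minimal, hypothetical sketch: extract a frequency “fingerprint” from short audio clips and train a classifier to guess which process step produced each sound. The synthetic tones stand in for real recordings and the signature frequencies are invented; this is not the cited researchers’ code.

```python
import numpy as np
from scipy.signal import spectrogram
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
FS = 8000  # sample rate in Hz

def synth_clip(step):
    """Fake one-second recording: each 'process step' hums at a different pitch."""
    t = np.arange(FS) / FS
    pitch = [300, 600, 900][step]  # Hz -- hypothetical acoustic signatures
    return np.sin(2 * np.pi * pitch * t) + 0.5 * rng.standard_normal(FS)

def fingerprint(clip):
    """Average the spectrogram over time to get one frequency profile per clip."""
    _, _, sxx = spectrogram(clip, fs=FS, nperseg=256)
    return sxx.mean(axis=1)

labels = rng.integers(0, 3, size=300)                  # which step was "running"
X = np.array([fingerprint(synth_clip(s)) for s in labels])
X_tr, X_te, y_tr, y_te = train_test_split(X, labels, random_state=0)

clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_tr, y_tr)
print("held-out accuracy:", clf.score(X_te, y_te))     # near 1.0 on this toy data
```

The same pipeline can serve either side: an eavesdropper stealing a proprietary sequence, or a monitor flagging a machine that starts producing something it shouldn’t.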

After an invigorating day at ACAMS in San Francisco, as I headed to dinner in a Chinatown emptied by the coronavirus gloom, I concluded that financial crimes experts really need to be tech champions. There is no avoiding the momentum of all these new methods and mediums for fraud. Rather, we should push back by incentivizing regulated entities to put their best people behind human-led, machine-powered defensive and monitoring strategies.

Interested in speaking to someone at Socure about KYC, Watchlist or other identity verification needs? Click here!

Posted by Annie C. Bai

Privacy, data security, and fintech lawyer and compliance officer. Annie is a graduate of NYU School of Law and a former law clerk to the Hon. A. W. Thompson in the District of Connecticut. Her experience ranges from the non-profit sector to the Fortune 500. She advises Socure on privacy, cyber, AI innovation, data/model governance, fair banking, and AML/BSA compliance. Her IAPP credentials include CIPP/US, CIPP/C, CIPT, and FIP, along with service on the IAPP Education Advisory Board.