Unraveling Shell Companies with AI

The project that permanently changed my perspective on AI models.

For over a decade, my role within financial crime compliance (FCC) was to design and implement AML detection controls for global financial institutions. These systems were handcrafted through collaborations with experienced risk managers and seasoned AML investigators, developing real-world typologies into automated detection scenarios.

I was proud of that work, having developed a deep understanding of AML risk and, more importantly, how to build systems to identify that risk automatically. Every rule, threshold, and scenario was carefully crafted. Each control reflected hard-earned expertise.

Little did I know that I was about to embark on a project that would strike at the core of these beliefs, making me re-think my entire approach to fighting financial crime. A project that would convert me from an AI doubter to a true believer.

A New Challenge: Understanding Shell Companies

Several years ago, regulatory guidance began to highlight the increasing risks posed by 'shell companies' and the role they play in facilitating financial crime. A shell company is exactly what it sounds like: on the outside it looks like a typical corporate entity, but inside it is hollow, with no real assets, no operations, and no physical presence. Global regulators and law enforcement agencies warned that these opaque corporate structures were being used to conceal ownership, obscure the flow of illicit funds, and distance criminals from the proceeds of their crimes.

At the same time, my organization was seeing a rise in the use of shell companies across our customer base, particularly in parts of Asia. What initially appeared to be complex but legitimate corporate structures quickly revealed a far more troubling reality. Through internal investigations, we identified shell companies being used to hide the proceeds of some of the most serious crimes, including human trafficking and child exploitation.

I was tasked with taking ownership of a significant and urgent challenge: design a mechanism to proactively identify shell companies within our global customer population.

A New Approach: Introducing AI 

As with every initiative I had worked on before, I began by turning to regulatory guidance, industry best practices, and law enforcement publications. Sources such as FATF, the Wolfsberg Group, and FinCEN provided well-established red flags associated with shell companies, and my initial approach was to codify those indicators into automated rules.

This method was familiar and proven. Translating known typologies into machine-readable logic was something I trusted, and something I believed I did well. But then a colleague proposed a different approach.

Rather than relying solely on predefined rules, they suggested building an AI model and allowing it to determine how shell companies could be identified within the data. 

I was skeptical. After a decade of developing AML controls, this felt like an affront to my expertise. The idea that a machine could identify risk more effectively than human judgment rooted in regulatory expertise and investigative experience seemed unlikely. Despite my doubts, we persevered. 

The Process: Training, Teaching, and Trials 

Recent investigations within my organization had uncovered a population of known shell companies. These entities had been identified through extensive, labor-intensive reviews carried out by dozens of analysts. This confirmed population of 'labeled data' became the foundation of our AI training dataset.

We provided the model with labeled examples of what shell company behavior looked like, alongside examples of 'good' customer behavior that should not be classified as shell companies. The idea was simple: train the model to distinguish between the two.
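In code, assembling that kind of labeled dataset is straightforward. The sketch below is purely illustrative: the entity fields, label convention, and function name are assumptions, not the organization's actual schema.

```python
# Minimal sketch: combine confirmed shell companies (label 1) with
# cleared 'good' customers (label 0) into one labeled training set.
# All field names here are hypothetical.

def build_training_set(confirmed_shells, cleared_customers):
    """Merge the two populations, attaching a binary label to each entity."""
    rows = []
    for entity in confirmed_shells:
        rows.append({**entity, "label": 1})   # known shell company
    for entity in cleared_customers:
        rows.append({**entity, "label": 0})   # reviewed and cleared
    return rows

shells = [{"entity_id": "C001", "months_since_incorporation": 2}]
cleared = [{"entity_id": "C002", "months_since_incorporation": 84}]
training = build_training_set(shells, cleared)
```

The key design point is that both classes come from human-reviewed cases, so the model learns the boundary investigators had already drawn, then generalizes beyond it.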

During feature engineering, we included attributes commonly associated with shell companies: newly incorporated entities, accounts with shared addresses or phone numbers, foreign nationals acting as directors, and limited business-related transaction activity.

Alongside these expected indicators, we also introduced a range of other attributes, deliberately giving the model space to identify patterns we had not anticipated.
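A feature-engineering step along these lines might look like the following. This is a hedged sketch, not the production pipeline: every field name, lookup table, and threshold is an assumption chosen to mirror the red flags listed above.

```python
# Hypothetical feature engineering for a single entity. The red flags
# mirrored here: young incorporation age, shared address/phone across
# accounts, foreign directors, and limited transaction activity.
from datetime import date

def engineer_features(entity, address_counts, phone_counts, today=date(2024, 1, 1)):
    """Derive shell-company indicator features from raw entity attributes.

    address_counts / phone_counts map a value to how many accounts use it.
    """
    return {
        "days_since_incorporation": (today - entity["incorporated"]).days,
        "shares_address": address_counts[entity["address"]] > 1,
        "shares_phone": phone_counts[entity["phone"]] > 1,
        "foreign_director": entity["director_nationality"] != entity["country"],
        "txn_count_90d": entity["txn_count_90d"],  # low values suggest no real business
    }

entity = {
    "incorporated": date(2023, 11, 1),
    "address": "12 Harbour Rd",
    "phone": "555-0101",
    "director_nationality": "XX",
    "country": "YY",
    "txn_count_90d": 3,
}
features = engineer_features(entity, {"12 Harbour Rd": 4}, {"555-0101": 2})
```

Passing raw attributes alongside these engineered indicators is what leaves the model room to find signals the analysts never encoded.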

With the data prepared and the features defined, we allowed the model to iterate and observed the outcomes. The results shocked me.

Analyzing the Model's Mistakes

After several rounds of training, the model demonstrated strong performance. It achieved 100% recall against the known shell company population and identified additional entities that had not previously been flagged.
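Recall here is simply the fraction of the known shell population the model flags. A minimal sketch of that check, with illustrative entity IDs:

```python
# Recall: of the entities investigators already confirmed as shells,
# what fraction did the model flag? Entity IDs are illustrative.

def recall(known_shells, flagged):
    """Fraction of the known shell population present in the model's flags."""
    known = set(known_shells)
    return len(known & set(flagged)) / len(known)

# Both known shells flagged, plus one new candidate ("C9") for review:
score = recall({"C1", "C2"}, {"C1", "C2", "C9"})  # 1.0
```

The entities flagged beyond the known population ("C9" above) are the interesting output: each one is either a false positive or a shell company the manual reviews missed.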

Many of the features contributing to these predictions aligned with expectations. However, one factor stood out immediately, and it appeared to be a clear mistake.

One of the strongest contributors to the model’s predictions was relationship manager ID.

This result made little sense. Relationship manager information was not a recognized shell company red flag. It did not feature in regulatory guidance or established typologies, and there was no obvious reason it should influence shell company risk.
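Surfacing an anomaly like this is a standard feature-importance review. The sketch below stands in for whatever importance output the actual model produced; the feature names and scores are invented for illustration.

```python
# Rank features by importance score (a stand-in for the model's real
# feature-importance output). Names and scores are illustrative only.

def top_features(importances, k=3):
    """Return the k feature names with the highest importance scores."""
    return sorted(importances, key=importances.get, reverse=True)[:k]

importances = {
    "relationship_manager_id": 0.42,   # the unexpected signal
    "days_since_incorporation": 0.21,
    "shares_address": 0.11,
    "txn_count_90d": 0.05,
}
top_features(importances)  # "relationship_manager_id" ranks first
```

A review step like this is exactly where a spurious-looking signal gets caught: anything near the top that has no place in established typologies demands investigation before the model is trusted.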

I was convinced the model had latched onto a spurious correlation. But then we dug deeper.

The Revelation: AI Catches An Unexpected Signal

As we investigated the newly identified entities, it became clear that the model had not made a mistake. The accounts it flagged were indeed shell companies, some of which were directly linked to entities we had already identified.

A clear pattern began to emerge. In many cases, the accounts had been opened at the same branch and were associated with the exact same relationship manager.

What the model had uncovered was a systemic weakness rather than a customer-level red flag. Criminal networks had identified specific bank branches with lax KYC practices and repeatedly exploited that vulnerability. Once a weak link was found, it was used again and again.

Further analysis showed this was not an isolated issue. Across multiple jurisdictions, multiple relationship managers were associated with elevated shell company risk. When all accounts linked to these individuals were reviewed, the network of known shell companies expanded significantly.
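Expanding the network from that signal amounts to grouping flagged entities by relationship manager and pulling every account tied to the managers who recur. A sketch of that pivot, with hypothetical IDs:

```python
# Pivot flagged entities by relationship manager to find the repeat
# offenders worth a full book-of-business review. IDs are hypothetical.
from collections import Counter

def risky_managers(flagged_entities, min_hits=2):
    """Relationship managers linked to at least min_hits flagged entities."""
    counts = Counter(e["relationship_manager_id"] for e in flagged_entities)
    return [rm for rm, n in counts.items() if n >= min_hits]

flagged = [
    {"entity_id": "C1", "relationship_manager_id": "RM7"},
    {"entity_id": "C3", "relationship_manager_id": "RM7"},
    {"entity_id": "C8", "relationship_manager_id": "RM2"},
]
risky_managers(flagged)  # ["RM7"]
```

Every account under a recurring manager then becomes a review candidate, which is how a handful of model flags expands into a much larger confirmed network.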

The model had revealed a pattern we had not explicitly taught it to find.

A New Perspective: Understanding The Power of AI

This fundamentally changed how I viewed the role of AI in financial crime detection. In a matter of weeks, we had built a model that outperformed processes which would traditionally take months to develop.

More importantly, it demonstrated that AI can uncover risk patterns that even experienced FCC professionals may not immediately see. The model did not replace expertise; it built upon it, challenging assumptions and exposing blind spots in ways that traditional rules-based approaches cannot.

This experience completely changed my perspective on the power of AI. What I had initially viewed with skepticism became, very quickly, something else entirely.

For the first time, I saw AI not as a challenge to hard-earned expertise, but as a powerful tool that could build upon it. A tool we could use to accelerate our position in the arms race against bad actors, and, hopefully, begin to turn the tide in our favor.

That moment marked a shift for me. From that point on, AI was no longer something to be cautious of or resistant to; it became an essential part of how I believed financial crime could, and should, be fought.

Contributor

Patrick Kirwin

Head of Product Management
