Preventing AI Bias in Financial Crime Detection

The wristwatch has been a staple of human ingenuity for over a century. From early inventions to today’s luxury brands, watch design has evolved impressively - but one thing has remained curiously constant for decades: in nearly every advertisement for a watch, the hands are set to 10:10.

Why? Because 10:10 frames the logo, balances the hands symmetrically, and creates a subtle ‘smile’ on the watch face. It’s an aesthetic choice, not a functional one.

Now, imagine feeding an AI model thousands of watch images all set to 10:10, and then asking it to show you 6:23. Odds are, it’ll still smile back at you with 10:10. The AI has internalised a picture-perfect pattern, not the concept of time, through training that has taught it how a watch should look in ads, not what a watch does in the real world.

Bias in Action

While it’s an amusing quirk for wristwatches, the same phenomenon becomes a serious concern when it surfaces in financial risk assessments.

In July 2023, Michael Barr, the U.S. Federal Reserve’s Vice Chair for Supervision, warned that AI systems in banking might produce unfair outcomes, highlighting the risk of models inheriting societal biases from incomplete or poorly representative training datasets.

However, AI bias had been affecting FinTech institutions long before Barr’s stark warning. In 2019, the Apple Card came under scrutiny for reportedly giving men higher credit limits than women with similar financial profiles.

While the issuing bank, Goldman Sachs, claimed the algorithm didn’t use gender as an input, experts highlighted the risk of proxy bias - variables like spending patterns or location can indirectly encode gendered differences.

In December 2024, it was reported that an AI system used by the UK government to vet universal credit claims incorrectly selected people from certain demographics more than others - based on age, disability, marital status and nationality - when recommending investigations for fraud.

These examples show how quickly bias can move from a harmless curiosity to a compliance and reputational risk. In financial crime prevention, the stakes are far higher - meaning that the quality, diversity, and governance of training data must be as robust as the controls it powers. 

Financial Crime Compliance (FCC) Implications

In FCC, supervised machine learning models are often trained on a limited set of ‘true’ cases - confirmed money laundering alerts, verified sanctions matches, or existing suspicious activity reports (SARs).

The problem? In an industry where less than 2% of alerts lead to a SAR, the vast majority of labelled data reflects false positives. This creates a skewed reality for the model, teaching it to overvalue certain patterns and miss others entirely.

Just like the watch set to 10:10, the model starts to replicate its training bias rather than accurately detect the truth.
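To make that skew concrete, here is a minimal, purely illustrative sketch in Python: a synthetic ‘alert’ dataset with roughly 2% positives, a naive classifier that learns to dismiss almost everything, and a class-weighted variant that recovers much of the missed risk. The features, figures, and 2%/98% split are invented for illustration and are not drawn from any real alert data or from Silent Eight’s models.

```python
# Illustrative only: how a ~2% positive rate skews a classifier.
# The feature values and the 2%/98% split are synthetic, not real alert data.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import recall_score

rng = np.random.default_rng(0)
n = 50_000
X = rng.normal(size=(n, 5))                    # stand-in for alert features
y = (rng.random(n) < 0.02).astype(int)         # ~2% "true" SAR-worthy alerts
X[y == 1] += 0.5                               # give positives a weak signal

X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

naive = LogisticRegression().fit(X_tr, y_tr)
weighted = LogisticRegression(class_weight="balanced").fit(X_tr, y_tr)

# The naive model tends to wave nearly everything through (low recall on the
# rare positive class); re-weighting recovers much of the missed risk.
print("recall, naive:   ", recall_score(y_te, naive.predict(X_te)))
print("recall, weighted:", recall_score(y_te, weighted.predict(X_te)))
```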

The Risks of Biased Models in FCC

When bias creeps into compliance models, it can lead to:

  • Over-flagging low-risk activity - Flooding analysts with false positives and slowing response times.

  • Missing emerging typologies - Overlooking new financial crime patterns such as evolving laundering schemes, innovative sanctions evasion methods, or previously unseen threat vectors that fall outside existing detection models.

  • Uneven treatment of clients - Risk scoring that skews toward certain geographies, industries, or transaction patterns without factual basis.

In high-stakes environments where missed cases can mean regulatory fines, reputational damage, and real-world harm, this is more than a technical flaw. It’s an operational and strategic vulnerability.

Countering Bias: Lessons from the Watch Face

Just as you’d retrain an AI watch model with images showing many different times of day, models used for FCC need data diversity and contextual intelligence to avoid bias. Practical steps include:

  1. Augmenting training data - Use synthetic but realistic examples to balance skewed datasets (a sketch of this step follows the list).

  2. Broadening ‘success’ criteria - Include high-quality investigative outcomes, not just confirmed cases.

  3. Federated or pooled learning - Leverage anonymised data from multiple regions or institutions.

  4. External calibration - Work with partners who can tune models using a broad spectrum of risk scenarios.
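As one hedged illustration of step 1, the sketch below rebalances a skewed dataset by generating synthetic positives with SMOTE from the open-source imbalanced-learn library. SMOTE is only a stand-in for synthetic data generation here; a production FCC pipeline would use domain-aware generators and validate that synthetic cases stay realistic and representative.

```python
# Sketch of step 1 (augmenting training data): oversample the rare positive
# class with synthetic-but-plausible examples. Dataset is illustrative only.
from collections import Counter
from imblearn.over_sampling import SMOTE
from sklearn.datasets import make_classification

# Illustrative imbalanced dataset: ~2% positives, as in a typical alert queue.
X, y = make_classification(
    n_samples=20_000, n_features=10, weights=[0.98, 0.02], random_state=0
)
print("before:", Counter(y))      # heavily dominated by the negative class

X_res, y_res = SMOTE(random_state=0).fit_resample(X, y)
print("after: ", Counter(y_res))  # positives synthetically balanced with negatives
```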

Adjusting the 10:10 Effect

At Silent Eight, we embed explainable, context-aware AI across the entire risk journey, from screening to investigation to resolution, ensuring decisions aren’t just automated, but also grounded in relevant context.

  • Multi-source intelligence - We ingest data from multiple geographies, jurisdictions, and typologies to avoid single-pattern bias.

  • Explainable outputs - Every automated decision is accompanied by a clear, regulator-ready explanation of why it was made.

  • Continuous learning - Models evolve based on new cases, typologies, and investigator feedback - closing blind spots before they become operational risks.

  • Configurable to your risk appetite - We calibrate models to each bank’s specific regulatory environment and client base, avoiding “one-size-fits-all” assumptions.

The result? Faster resolution, reduced false positives, and models that adapt to real-world risk - not just the patterns they’ve seen before.

Patrick Kirwin, Head of Product Management, emphasised the importance of using context-relevant data, stating:

“Having access to high quality training data is crucial for developing successful AI models, and this is especially relevant to financial crime surveillance where access to real AML and sanctions activity can be limited.”

“At Silent Eight, our global customer base and decade of experience in the industry have allowed us access to a wide, diverse and representative population of data to train our models.”

Key Takeaways

Whether it’s a wristwatch or a risk model, bias in training data can distort outcomes and undermine trust. The good news: with the right approach, bias can be identified, mitigated, and even turned into an opportunity for sharper decision-making.

In FCC, that means building systems that not only see the right time, but act on it - protecting both the institution and its clients.

Contributor

Patrick Kirwin

Head of Product Management

Discover how AI is Revolutionising Compliance and Risk Adjudication

Download our White Paper to stay ahead.