Our Explainable AI

Fuse’s AI is explainable at every step of production. AI explainability is increasingly important, with new and upcoming regulation from the UK and EU, and policy guidance from the OECD.

For an AI system to be explainable, a human must be able to intervene in, and comprehend, what is happening at every stage.

Explainability is important for ensuring good, predictable customer outcomes without unwanted bias, and for maintaining the desired level of security. To achieve this, we follow four AI principles at Fuse:

Human intervention

Fuse always keeps a human in the loop. Every automated decision can be examined in depth: users can inspect the exact transactions that led to a specific decision. We draw on selected domain expertise to inform our models’ decision-making.
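As an illustrative sketch only (the class and method names here are hypothetical, not Fuse’s actual API), a decision log of this kind records each automated decision alongside the transactions behind it, and lets a human reviewer be attached when intervention is needed:

```python
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class Decision:
    """A single automated decision with its audit trail."""
    outcome: str
    transaction_ids: list = field(default_factory=list)  # exact transactions behind the decision
    reviewed_by: Optional[str] = None  # human reviewer, if the decision is escalated

class DecisionLog:
    """Hypothetical audit log: every decision can be deep-dived into."""

    def __init__(self):
        self._decisions = {}

    def record(self, decision_id, outcome, transaction_ids):
        self._decisions[decision_id] = Decision(outcome, list(transaction_ids))

    def explain(self, decision_id):
        """Return the outcome and the exact transactions that led to it."""
        d = self._decisions[decision_id]
        return {"outcome": d.outcome, "transactions": d.transaction_ids}

    def escalate(self, decision_id, reviewer):
        """Human in the loop: attach a named reviewer to a decision."""
        self._decisions[decision_id].reviewed_by = reviewer

# Usage: record a decision, then trace it back to its transactions.
log = DecisionLog()
log.record("d1", "approve", ["tx-101", "tx-102"])
log.escalate("d1", "analyst-7")
```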

Comprehensible explanations

We ensure that every decision made by our AI can be explained to the end user, whatever their technical competency. Clients receive plain-English explanations, produced through a traceable and examinable causal model.

Data transparency

We ensure all of our input data is clearly labelled. We have trained our algorithms on over 400 million data points, and our bespoke training data sets were built so that we did not start from a biased baseline. Every decision can be traced back to the data points that fed into it.

Bias limitation

Fuse algorithms operate within a set of core assumptions, defined by us before model creation, which can never be breached. This ensures we limit bias against protected characteristics and deliver good outcomes for end customers.
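One way to picture such never-breached core assumptions is as a hard guard that runs before any model output is acted on. This is a minimal sketch, not Fuse’s implementation; the feature names are hypothetical examples of protected characteristics:

```python
# Core assumption (illustrative): decisions must never rely on protected characteristics.
PROTECTED_FEATURES = {"age", "gender", "ethnicity"}  # example protected characteristics

def check_core_assumptions(model_inputs: dict) -> None:
    """Raise if a decision would draw on any protected characteristic."""
    used = PROTECTED_FEATURES & set(model_inputs)
    if used:
        raise ValueError(f"Core assumption breached: protected features used: {sorted(used)}")

# A decision pipeline would call the check before acting on a prediction:
check_core_assumptions({"income": 42_000, "transaction_count": 17})  # passes silently
```

Because the check raises rather than warns, a breach stops the pipeline outright instead of merely being logged.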
To find out more about our models and algorithms, schedule a meeting today.