AI explainability

We follow four principles to ensure our AI is explainable and compliant with emerging regulation and guidance from the UK, EU, and OECD.

Human intervention

Fuse always keeps a human in the loop. Every automated decision can be examined in depth: users can inspect the exact transactions that led to a specific decision. We also draw on expertise from selected domain specialists to inform our models' decision-making.
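As a rough sketch of what this traceability could look like in practice, the snippet below models a decision that records the transactions behind it and whether a human has reviewed it. The class names and fields are illustrative assumptions, not Fuse's actual schema.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class Transaction:
    """A single input transaction referenced by a decision (illustrative fields)."""
    transaction_id: str
    amount: float
    description: str

@dataclass
class Decision:
    """An automated decision that keeps a link to the exact transactions behind it."""
    decision_id: str
    outcome: str
    transactions: List[Transaction] = field(default_factory=list)
    reviewed_by_human: bool = False

    def audit_trail(self) -> List[str]:
        """Return the transaction IDs a human reviewer would examine."""
        return [t.transaction_id for t in self.transactions]
```

Keeping the transaction references on the decision record itself, rather than in a separate log, is one way to make the "deep dive" a single lookup for the reviewer.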

Comprehensible explanations

We ensure that every decision made by our AI can be explained in terms the end user understands, whatever their technical competency. Clients receive plain-English explanations, generated from a traceable and examinable causal model.
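A minimal sketch of how causal factors might be translated into plain English. The factor names and wording below are hypothetical, included only to illustrate the mapping.

```python
# Hypothetical mapping from causal-model factors to plain-English sentences.
EXPLANATIONS = {
    "late_payments": "Several recent payments were made after their due date.",
    "high_utilisation": "A large share of the available credit is currently in use.",
}

def explain(decision_factors: list[str]) -> str:
    """Render the factors behind a decision as a plain-English summary."""
    return " ".join(
        EXPLANATIONS.get(f, f"The factor '{f}' contributed to this decision.")
        for f in decision_factors
    )

print(explain(["late_payments", "high_utilisation"]))
```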

Data transparency

We ensure all of our input data is clearly labelled, and our bespoke training datasets were built so that our models do not start out biased. Every decision can be traced back to the data points that fed into it.
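One way such labelling could be enforced at ingestion is a simple schema check, sketched below. The required label names are assumptions for illustration, not Fuse's real schema.

```python
# Illustrative label schema; a real pipeline would define its own.
REQUIRED_LABELS = {"source", "collected_at", "category"}

def validate_record(record: dict) -> dict:
    """Reject any input record with missing labels, keeping decision lineage complete."""
    missing = REQUIRED_LABELS - record.keys()
    if missing:
        raise ValueError(f"Record is missing required labels: {sorted(missing)}")
    return record

validate_record({"source": "open_banking", "collected_at": "2024-01-01", "category": "income"})
```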

Bias limitation

Fuse algorithms operate within a set of core constraints, fixed by us before model creation, which can never be breached. This limits bias against protected characteristics and helps guarantee good outcomes for end customers.
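A hedged sketch of what one such hard constraint could look like: a guard that refuses to proceed if any protected characteristic appears among a model's features. The characteristic and feature names here are illustrative, not Fuse's actual constraint set.

```python
# Illustrative hard constraint: protected characteristics may never be model inputs.
PROTECTED_CHARACTERISTICS = {"age", "sex", "ethnicity", "religion", "disability"}

def assert_constraints(feature_names: set[str]) -> None:
    """Fail loudly if any feature would breach a core constraint."""
    breached = feature_names & PROTECTED_CHARACTERISTICS
    if breached:
        raise ValueError(f"Constraint breached by features: {sorted(breached)}")

assert_constraints({"transaction_count", "average_balance"})  # passes silently
```

Raising an error rather than silently dropping the offending feature is deliberate in this sketch: a constraint that "can never be breached" should halt the pipeline, not patch over the breach.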