
Revue TELECOM 192 - AI Solutions in Italian Fintech

By Andrea Caridi (2010), Tim Negris and Werther Cialdai, in revue TELECOM n° 192

The inherent complexities of using AI in the fintech market, and the Explainable AI solution that eases the GDPR headache.

Fintech aims to provide transparency and accessibility in the muddy waters of Italian bureaucracy. The Italian fintech market raised approximately 198.65 million euros in 2018, four times the amount raised in 2017, and there are currently 235 fintech-specific start-ups in the country. In 2018, 25% of Italians used fintech, accessing banking and insurance services for mobile payments, family finances and money transfers.

AI in fintech

Fintech business areas involving process workflows, such as fraud claims, non-performing loans and credit scoring, are among those which benefit most from AI solutions. These processes are executed according to defined business rules, historically created by business process experts using operations research and statistical data analysis, and then used by operational personnel to guide their actions and decisions. The accuracy of these operational decisions depended directly on the depth and currency of the knowledge of the people creating the decision rules. Machine learning can be implemented in these processes to deal with the growing complexity inherent in these rules, increase accuracy and process a greater volume of data.

Traditional machine learning produces predictive models in the form of mathematical functions, which can accurately make predictive decisions on credit, loans, etc., processing customer information with the function to produce a positive or negative outcome. These models are called “black boxes” because the complex mathematics of the model cannot be used to explain why the predictive decision was made. This represents a huge problem in an era in which ethical guidelines on trustworthy AI are being defined by the European Commission¹, and the most important data privacy regulation in 20 years, the General Data Protection Regulation, came into force in May 2018.
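To make the contrast concrete, here is a minimal sketch of what a “black-box” credit decision looks like from the outside: a learned function maps customer features to a score and a yes/no outcome, but the numbers themselves carry no human-readable justification. The feature names, weights and threshold below are invented for illustration, not taken from any real model.

```python
import math

# Hypothetical learned weights from a trained model -- as raw numbers
# they reveal nothing about *why* an application is accepted or rejected.
WEIGHTS = {"income": 0.00012, "debt_ratio": -3.1, "late_payments": -0.9}
BIAS = -0.4

def black_box_decision(customer: dict) -> bool:
    """Logistic scoring function: returns approve/deny with no explanation."""
    z = BIAS + sum(WEIGHTS[k] * customer[k] for k in WEIGHTS)
    probability = 1 / (1 + math.exp(-z))
    return probability >= 0.5  # the threshold, like the weights, is opaque

applicant = {"income": 32000, "debt_ratio": 0.45, "late_payments": 2}
print(black_box_decision(applicant))
```

The customer (or regulator) asking “why?” gets back only a probability, which is precisely the problem the GDPR provisions discussed below create for this kind of model.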

General Data Protection Regulation (GDPR)

The General Data Protection Regulation (GDPR)² is a set of rules, established by the European Parliament, which aim to give EU citizens and residents greater control over how their personal data is handled. Such data includes names, addresses and photos, and more sensitive data such as biometrics.

The fintech market consequently needs to obtain explicit and informed consent to collect and store personal data and pay particular attention to how it obtains and subsequently protects biometrics, which are commonly used as identifiers for financial transactions.

Less well known, but equally important, is that GDPR also restricts the use of personal data with machine learning, especially for automated decisions on loans, credit, policies, and claims, and, in the process, wipes out much of the value of existing analytical data assets. GDPR (Art. 22, Recital 71) declares that everyone has the “right to obtain an explanation of the decision reached and to challenge the decision”. Traditional black-box AI, which provides no transparency on the logic used to reach decisions, consequently cannot operate in compliance with Article 22 of the GDPR.

The text in the figure below shows the start of a typical pages-long black-box AI model for predicting insurance auto claims fraud.

Should a policyholder’s damage claim be rejected, the policyholder is, under Recital 71 of the GDPR, well within their rights to request an explanation for the rejection.

But how is this possible with black-box AI? To put it simply, it isn’t.

Article 9 of the GDPR prohibits the use of particularly sensitive data, such as ethnic origin, religious beliefs and political opinions, in profiling and automated decisions. Again, the unexplainable nature of conventional machine learning predictive models precludes the easy discovery of hidden biases and introduces a challenging regulatory risk for Financial Services companies.

Explainable AI (XAI)

Motivated by the stipulations of the GDPR and the principles they represent, as well as various commercial initiatives in fintech, a new wave of AI techniques is now emerging, broadly called Explainable AI (XAI). XAI introduces new methods and technologies, which aim to produce self-explanatory predictions, rather than opaque mathematical functions.

Explainable AI is just as accurate as black-box algorithms, but in addition to making accurate predictions, it also produces what could be called “cognitive output”, i.e. knowledge of why each prediction is made the way it is.

Rulex XAI Use Cases in Italian fintech

Rulex³, the world leader in Explainable AI, employs clear-box machine learning algorithms that automatically produce predictive models in the form of self-explanatory logical if-then business rules, which have been successfully applied in the Italian fintech market.

The if-then rules in the figure on the right illustrate how the self-explanatory nature of the logical model generated by Rulex reveals the reasons for suspected frauds, directly satisfying the GDPR “right to obtain an explanation” requirements.
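The figure itself is not reproduced here, but the mechanism it illustrates can be sketched in a few lines. The rules, field names and thresholds below are invented for illustration; in a clear-box system of this kind they would be generated automatically by the learning algorithm rather than written by hand.

```python
# Hypothetical if-then rules of the kind a clear-box model produces.
# Each rule pairs a condition with the decision and its explanation.
RULES = [
    (lambda c: c["claim_amount"] > 20000 and c["days_since_policy_start"] < 30,
     "suspected_fraud", "large claim filed less than 30 days after policy start"),
    (lambda c: c["prior_claims"] >= 3 and c["police_report"] is False,
     "suspected_fraud", "three or more prior claims and no police report"),
]

def classify(claim: dict) -> tuple:
    """Return (label, explanation); the explanation is what directly
    satisfies the GDPR 'right to obtain an explanation'."""
    for condition, label, reason in RULES:
        if condition(claim):
            return label, reason
    return "legitimate", "no fraud rule matched"

claim = {"claim_amount": 25000, "days_since_policy_start": 12,
         "prior_claims": 0, "police_report": True}
print(classify(claim))
```

Unlike the opaque scoring function, every decision here comes packaged with the exact rule that triggered it, in language a policyholder or regulator can read.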

This is the Rulex XAI alternative to the black-box AI model depicted on the left, which was the key to success in building a vertical XAI anti-fraud solution together with GFT⁴, an official Rulex partner in banking and insurance. The solution, in production for over a year at one of the main Italian insurance companies, has produced a two-fold benefit: retrieving company know-how as heuristic rules integrated into the machine learning platform, and activating a machine learning process on the company’s overall database. This combined approach has led to a fast activation of the process, with reduced technological risk, and a highly accurate model.

Rulex XAI has also been applied in the field of credit scoring to identify patterns and potential defaulters one year ahead. The XAI model ingests customer profile raw data and existing scoring engine data and can consequently run on top of existing solutions. The model can characterize and predict the risk class to which a customer belongs, and determine the default risk associated with each customer, consequently decreasing the number of customers to monitor, and increasing the percentage of identified defaulters.

Source: Oss. Artificial Intelligence - School of Management, Politecnico di Milano, Oct 2018
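Running “on top of” an existing solution simply means treating the legacy engine’s score as one more input feature for the explainable rules. The sketch below is a hypothetical illustration of that pattern; the legacy formula, thresholds and field names are all invented.

```python
# Hypothetical sketch of stacking an explainable model on an existing
# scoring engine: the legacy score becomes one more input feature.
def legacy_engine_score(profile: dict) -> int:
    """Stand-in for an existing scoring engine (invented logic)."""
    return 700 - 40 * profile["open_defaults"] - 2 * profile["credit_lines"]

def risk_class(profile: dict) -> tuple:
    """Return (risk class, explanation) from if-then rules that combine
    the legacy score with raw profile data."""
    score = legacy_engine_score(profile)
    if score < 550 and profile["open_defaults"] > 0:
        return "high", "legacy score below 550 with at least one open default"
    if score < 650:
        return "medium", "legacy score below 650"
    return "low", "legacy score 650 or above and no triggering condition"

customer = {"open_defaults": 2, "credit_lines": 5}
print(risk_class(customer))
```

Because the rules name the legacy score explicitly, the bank keeps its existing engine in place while gaining an explanation for every risk-class assignment.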

Rulex XAI has also been applied to make the most of existing non-performing loans, for example to distinguish between cases in which it is worth starting legal action to retrieve funds, and cases in which legal action would simply represent a further loss of money. Thanks to XAI, the patterns leading to a profitable recovery action can be clearly distinguished from the others, providing not only tactical but also strategic information to empower the decision process.


Over the last two years the Banking and Insurance market has witnessed the greatest increase in the use of AI.

The greatest number of XAI production projects have been in this sector, confirming the overall trend, in which open banking and compliance have played a crucial role.

The increasing use of AI in fintech has, however, been accompanied by increasing concerns regarding sensitive data, ethics and biased decisions. Explainable AI is a practical example of trustworthy AI, adopting the human-centric approach underlined by the European Commission, showing that AI can be understandable and ethical, and operate in full respect of GDPR regulations, whilst still bringing enormous benefits to business and clients through its technologically innovative solutions.

Rulex is a Boston (MA) based AI pioneer, with Research and Development in Genoa, Italy. Rulex’s mission is to democratize advanced analytics by enabling business and process experts to use AI to optimize and automate rule-based decisions, with no need for math or programming skills, and produce human-understandable explanations for all its predictive decisions. Rulex XAI solutions are globally deployed in a variety of industries, such as manufacturing, banking, and telecom, and its many clients include a Fortune 50 global manufacturing company.

1/ Ethics Guidelines for Trustworthy AI, drafted by the European Commission’s High-Level Expert Group on AI:

2/ For further information on the General Data Protection Regulation:
3/ Rulex Inc website:
4/ GFT website:

Author biographies

Andrea Caridi (2010) is Business Development Manager at Rulex. After obtaining a degree in Telecommunications Engineering at the University of Genoa, and specializing at Télécom ParisTech in Paris, he worked as Business Development Manager for an Italian ICT consulting SME before moving to his current role in 2018.

Tim Negris is SVP of Marketing and Business Development at Rulex. Previously he served as a senior executive at Oracle and IBM, and as a strategic consultant to HP, Dell, and numerous start-ups in advanced data analytics.

Werther Cialdai is Business Development and Channel Manager at Rulex. After obtaining a degree in Economics and a Master’s in Finance, he worked as a Business Consultant and Managing Director in various industries, including fashion, business information and ICT. He has supported the development of many start-up projects and has been working on the fintech market with Rulex since 2014.
