The AI-Bias Problem And How Fintechs Should Be Fighting It: A Deep-Dive With Sam Farao

Just a decade ago, both artificial intelligence and fintech were relatively new terms to the technologically uninitiated. In the years since, both have had a profound effect on how business, financial transactions, and investments are conducted.

AI has had such a revolutionary effect on financial service providers (FSPs) and fintechs that they now rely on it for algorithmic trading, fraud detection, client research, and portfolio optimization. Few industries are as well suited to AI, or stand to benefit from it as much, as financial services.

A recent NVIDIA survey of FSPs found that 83% of respondents agreed that AI is important to their company and to the future of financial services. That finding is hard to argue with, given how much AI has already changed the industry.

As with any developing technology, AI's spread into the financial space has not come without problems. The chief complaint has been the surprising level of bias these algorithms often express. In an age when calls for social justice are louder than ever, the last thing fintechs need is "programmed bias," which, if left unchecked, is even more dangerous and inflexible than human bias.

Artificial intelligence relies on data: consumer data, spending data, and other data that describes people's behavior. The insights drawn from this data, augmented with natural language processing and advances in computer imaging, can be very useful to fintechs in making sound financial decisions about loans, investments, and portfolios.

With the surge of data enabled by big data, APIs, and IoT, it is clear that data is the lifeblood of fintech going forward. It is also why fintechs must become more vigilant about spotting bias, whether it sits in their data, in the way the AI is trained, or in the programmers themselves.

In this article, Sam Farao, a first-generation Iranian immigrant to Norway who has become one of the country's leading entrepreneurs, shares his thoughts on AI bias with a specific focus on the fintech space. Farao is the CEO of Banqr, a global fintech powerhouse with a strong focus on payment processing and revenue-share partnerships.

Farao's entrepreneurial story, and the bias he has faced as an Iranian immigrant in Europe, have made him extremely attentive to all forms of bias in the industry and have helped him develop systems to identify and eradicate it, introducing more fairness into the system.

Farao describes the attitude fintechs should take toward AI bias in a rather powerful way: "We cannot hide behind data to justify a lack of equity in the way our financial technologies perform their functions. We made these technologies, we must direct them, and when they derail due to our lack of care, like a good father, we must redirect them and take responsibility."

AI, Fintech And The Risk Of Bias

Fintechs and FSPs increasingly rely on AI and machine learning to process and understand data and to make decisions about matters such as creditworthiness, fraud detection and prevention, and customer support.

“Data is neutral, but that doesn’t mean it is innocent. Bias is a purely human phenomenon that can be introduced into or deduced by data, consciously or unconsciously, tainting the data with it,” explains Farao.

AI and machine learning solutions react to the data they are presented with in real time, finding patterns and relationships and deciding who is creditworthy and who is not, what is fraud and what is not. However, the risk of bias often lies in the baseline data these systems are given to work with.

For instance, Amazon had to scrap an AI recruiting tool after discovering in 2015 that it was biased against women applying for software developer jobs. Investigations revealed that the bias came from the baseline data fed to the system: resumes submitted to the company over the prior 10 years, which were predominantly from men. That skew reflected men's dominance of the tech industry, but it also trained the machine to penalize resumes that mentioned "women" and, ultimately, to see men as preferable for the role.

According to Farao, baseline data is only one of the ways that bias can creep into AI and fintech activities. 

“I have been a sociocultural outcast all my life, so I know all forms of bias. My goal with Banqr is not just to provide excellent financial and payment-processing services, but to do so in the most equitable way. I have found that bias is not often introduced intentionally. Most of the biases in financial services are either historical or unconscious.”

An illustration of Farao's point comes from a Haas School of Business review of US mortgages, which found that both online and face-to-face lenders charge higher interest rates to African American and Latino borrowers, earning 11 to 17 percent higher profits on those loans. Feeding this kind of historical data into an AI system, unexamined, simply prolongs the bias. And when a programmer builds on that data using the same indices and thresholds that produced these lopsided results, the bias is reinforced.

How Fintechs Can Tackle Bias

Farao's call for more data investigation has been echoed by other leaders in the fintech world as they begin to recognize the potential impact of AI-crystallized bias.

Tackling these issues is so central that Farao partly attributes the massive success of Banqr over the last few years to it. In Farao's words, "It is not always easy to have zero bias, but we should maintain a willingness to quash it wherever it can be shown to exist."

He continues, “We need to look at AI as a tool, not just to solve our problems as financial service providers, but as a tool to solve the larger problems of society. The potential of AI to impact millions of people at once makes it a very sensitive tool. Fintechs and Financial service providers all over the world should make data investigation a priority. We should be intentionally looking for all kinds of bias, racial, gender, and sexual bias in this data.”

Fintech companies should keep the data that establishes bias top of mind while building their systems. For instance, broader data on US FSPs suggests that 20% of the adult population is underserved for credit, and that minorities make up most of that group. The same data suggests that women-owned businesses, though they account for roughly one-third of US businesses, win a disproportionately low share of available credit, attract smaller loans, and face higher penalties for default.

If fintechs know this, they can work actively to keep these broader patterns from being reproduced in their own systems. They can also lean on bias-conscious technologies to help.

Open-source toolkits such as IBM's AI Fairness 360, Aequitas, and Google's What-If Tool are available to help fintech companies measure discrimination in their models, suggest mitigation pathways, and test how their models behave across different scenarios.
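As a minimal sketch of what that measurement can look like, the snippet below uses IBM's open-source aif360 package to compute two common fairness metrics, disparate impact and statistical parity difference, on a small made-up lending table. The column names, group definitions, and figures are purely illustrative and are not a description of Banqr's systems or any real lender's data.

```python
# pip install aif360 pandas
import pandas as pd
from aif360.datasets import BinaryLabelDataset
from aif360.metrics import BinaryLabelDatasetMetric

# Hypothetical historical loan decisions: 1 = approved, 0 = denied.
# "group" is a protected attribute (1 = privileged group, 0 = unprivileged group).
df = pd.DataFrame({
    "approved": [1, 1, 1, 0, 1, 0, 0, 1, 0, 0],
    "group":    [1, 1, 1, 1, 1, 0, 0, 0, 0, 0],
})

# Wrap the raw data so the fairness metrics know which column is the outcome
# label and which column is the protected attribute.
data = BinaryLabelDataset(
    df=df,
    label_names=["approved"],
    protected_attribute_names=["group"],
    favorable_label=1,
    unfavorable_label=0,
)

metric = BinaryLabelDatasetMetric(
    data,
    privileged_groups=[{"group": 1}],
    unprivileged_groups=[{"group": 0}],
)

# Disparate impact: ratio of favorable-outcome rates (unprivileged / privileged).
# Values well below 1.0 flag baseline data that could teach a model to discriminate.
print("Disparate impact:", metric.disparate_impact())

# Statistical parity difference: difference in favorable-outcome rates.
# Values well below 0 point in the same direction.
print("Statistical parity difference:", metric.statistical_parity_difference())
```

Running a check like this on the baseline data, before any model is trained, is one concrete way to catch the kind of historical skew Farao describes rather than discovering it after it has shaped lending decisions.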

Farao remains highly optimistic about the potential of AI in investments, banking, trading, and fraud prevention, provided we can properly introduce equity into the system.

In conclusion, Farao states: “The amount of ease and precision that AI can bring to our services is extremely desirable, but the only desire we should have that exceeds our desire to be fast and correct is our desire to be fair and right.”
