Big Data and Financial Ethics: The Significant Capabilities of Artificial Intelligence Necessitate Human Guidance and Input
Billie Trinder
Abstract: Innovations in artificial intelligence will revolutionise the financial industry and present new risks and ethical concerns. Consequently, financial institutions should self-regulate, for two reasons. Firstly, customers cannot be expected to regulate the use of their own data when doing so would preclude access to a service. Secondly, external regulation lags behind technological advances. The increasing complexity of algorithms, the ethical questions this raises, and algorithms’ potential for bias are explored. Ensuring human participation throughout algorithmic decision-making processes helps to mitigate the associated risks and provides an avenue for the implementation of an ethical framework.
Introduction
Adoption of AI innovations has accelerated enormously in the financial sector. For traditional banks, the stakes are high – as of 2017, more than 80% of executives in the financial services industry believed that their business was at risk from financial technology firms (PricewaterhouseCoopers 2017, p2). But as banks recognise the threat posed by FinTech, it is also evident that big data lies at the heart of this digital revolution, and that financial institutions have unique access to such data. At the 2017 Google Cloud Next Conference held in San Francisco, HSBC’s Chief Information Officer, Darryl West, placed AI at the frontier of finance, stating that “apart from our $2.4 trillion dollars of assets on our balance sheet, we have at the core of the company a massive asset in the form of our data” (FinTech Innovation 2017).
The actual and potential uses of such data are wide-ranging and significant – possibilities have already been demonstrated in key areas including credit risk assessment and fraud detection. Yet these opportunities come with significant risks and unique ethical issues. Given that risk affects the proper operation of financial institutions and markets – something observed all too well in 2008 – it is concerning that the risks associated with big data remain undertheorised (Cockcroft 2018, p327).
Complexity and Transparency
Debate and regulation currently focus on the right to privacy, but the use of big data analytics in the financial industry gives rise to more material risks. The consequences of ‘biased algorithms’ are an especially significant concern when big data analytics are used for credit rating. Non-traditional data can be useful for assessing creditworthiness. This was evident as early as 2002, when J.P. Martin, an executive at Canadian Tire, collated the information his company had collected from credit card transactions over the course of a year. From the data, Martin was able to predict a person’s likelihood of defaulting on payments through analysis of their purchases (Duhigg 2009). Purchases were categorised in terms of “riskiness”: for example, those who bought premium birdseed were in the bottom 1% of risk (and thus among the least likely to default on a payment), while those who bought chrome-skull accessories were in the riskiest 1%.
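The mechanics of this kind of analysis can be sketched in a few lines of code. The example below is a minimal, hypothetical illustration in Python using scikit-learn: it fits a logistic regression to invented purchase-category indicators and prints the learned risk weights. The category names, customer data and coefficients are fabricated for demonstration and are not drawn from Canadian Tire’s actual model.

# Illustrative only: a toy default-risk model in the spirit of Martin's
# analysis, scoring risk from purchase-category indicators.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
categories = ["premium_birdseed", "felt_furniture_pads",
              "chrome_skull_accessories", "late_night_bar_tab"]

# Each row is a customer; column i is 1 if they bought from category i.
X = rng.integers(0, 2, size=(1000, len(categories)))

# Simulate defaults so that the last two categories raise risk and the
# first two lower it, loosely mirroring the correlations described above.
logits = X @ np.array([-1.5, -1.0, 1.5, 1.0]) - 1.0
y = rng.binomial(1, 1.0 / (1.0 + np.exp(-logits)))

model = LogisticRegression().fit(X, y)

# A positive coefficient means buying from that category increases the
# predicted probability of default.
for name, coef in zip(categories, model.coef_[0]):
    print(f"{name:>26}: {coef:+.2f}")

Even at this toy scale, the model’s output is only as meaningful as the behavioural categories and historical data fed into it.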
From the masses of information collected from a person’s internet activity every day, analytics can make remarkably accurate predictions about traits and patterns of behaviour. However, unlike Martin’s correlations between purchasing and risk, these predictions can be drawn from data with no obvious relevance to a person’s financial activity. In fact, predictions can appear to have no logical relationship to the material on which they are based. A study of over nine million Facebook ‘likes’ conducted by the Cambridge Psychometrics Centre (2013) revealed that ‘liking’ curly fries or Morgan Freeman’s voice was a strong indicator of high intelligence, and that people who ‘like’ the page “That Spider is More Scared Than U Are” were likely to be non-smokers.
This ability to find correlations between seemingly unrelated pieces of information becomes a danger when the logic upon which the algorithm works is lost. Machine learning means that some software applications are able to reassess and alter their operations as new data sets are fed in. Additionally, large institutions employ multiple people to develop a single program. As a result of both these factors, algorithms can become so complex that, in retrospect, their workings and logic may be opaque even to their creators. Most obviously, this is a loss of transparency. When an institution relies on systems it cannot itself comprehend, effective auditing becomes a virtual impossibility. Where banks use big data analytics for key tasks such as credit rating, these transparency issues become institutional problems. Such “black box” algorithms become an even greater problem when we mistake their mathematical, apparently independent modus operandi for objectivity and infallibility. While these programs process data at a speed and scale far beyond any human brain, the fact that they are built on data sets collected from human subjects makes their results susceptible to error and bias.
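Auditing need not be abandoned entirely, however. One modest technique, sketched below in a hypothetical Python example, is to fit a small, human-readable ‘surrogate’ model to the opaque model’s own outputs: the surrogate does not expose the black box’s true internal logic, but it gives a human reviewer an approximate, inspectable account of how inputs map to decisions. The models, features and data here are invented for illustration and are not any institution’s actual system.

# Illustrative only: approximating an opaque model with an interpretable
# surrogate so a human auditor has something readable to inspect.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.tree import DecisionTreeClassifier, export_text

rng = np.random.default_rng(1)
X = rng.normal(size=(2000, 5))                        # five anonymised applicant features
y = (X[:, 0] + 0.5 * X[:, 1] ** 2 > 0.5).astype(int)  # the "true" rule, unknown to the auditor

# Stand-in for an opaque production model.
black_box = GradientBoostingClassifier(random_state=0).fit(X, y)

# Fit a shallow, readable tree to the black box's own decisions.
surrogate = DecisionTreeClassifier(max_depth=2, random_state=0)
surrogate.fit(X, black_box.predict(X))

print("fidelity to the black box:", surrogate.score(X, black_box.predict(X)))
print(export_text(surrogate, feature_names=[f"x{i}" for i in range(5)]))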
One often-cited example of biased algorithmic decision-making comes from the criminal justice system. Across the United States, courtrooms are increasingly using algorithms to predict a defendant’s risk of reoffending. The resulting score informs vital decisions such as the setting of bond amounts, and is even used by judges as a reference in criminal sentencing. A 2016 investigation (Angwin et al. 2016) revealed that these scores were only 61% accurate in predicting criminal activity over the subsequent two years, and only 20% accurate in predicting violent crimes. More concerning still, it became evident in retrospect that the algorithms exhibited significant racial bias, falsely flagging black defendants as future reoffenders almost twice as often as white defendants.
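The kind of disparity described above is precisely what a straightforward audit can surface. The sketch below uses entirely synthetic data (not the records analysed by Angwin et al.) to show how a scoring rule can look reasonably accurate overall while producing a false positive rate for one group that is double the rate for another.

# Illustrative only: auditing group-wise error rates on synthetic data.
import numpy as np

rng = np.random.default_rng(2)
n = 10_000
group = rng.choice(["A", "B"], size=n)          # a protected attribute
reoffended = rng.binomial(1, 0.35, size=n)      # ground-truth outcome two years on

# A biased scorer: non-reoffenders in group B are flagged "high risk"
# twice as often as non-reoffenders in group A.
p_flag = np.where(reoffended == 1, 0.65,
                  np.where(group == "B", 0.40, 0.20))
flagged = rng.binomial(1, p_flag)

print("overall accuracy:", round(float(np.mean(flagged == reoffended)), 3))
for g in ("A", "B"):
    negatives = (group == g) & (reoffended == 0)
    fpr = flagged[negatives].mean()             # flagged despite not reoffending
    print(f"false positive rate, group {g}: {fpr:.2f}")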
Examples like these are plentiful, from gender affecting the job advertisements a person is shown, to Amazon’s same-day delivery service being unavailable in predominantly black neighbourhoods. Such bias could just as easily be built, unconsciously, into an algorithm used to determine credit scores. Moreover, because data mining algorithms can process an enormous number of factors in these kinds of “risk assessment” tasks, discrimination can emerge through correlations between a protected class and other attributes: a person’s address, for example, could create an association between credit score and race. This kind of algorithmic “discrimination by proxy” (Datta et al. 2017, p1) can be harder to identify, and thus to correct. Further, as the results of these algorithmic processes become part of a person’s data footprint, a risk of ‘cascading disadvantage’ emerges, as algorithmic decision-making reinforces itself.
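Discrimination by proxy can be demonstrated concretely with synthetic data. In the hypothetical sketch below, a credit-approval model is never given the protected attribute, yet its approval rates still diverge between groups because a postcode variable is correlated with that attribute. Every variable and figure is invented for illustration.

# Illustrative only: "discrimination by proxy" on synthetic data.
# The protected attribute is withheld from the model, but postcode acts
# as a proxy for it, so approval rates still diverge between groups.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(3)
n = 20_000
protected = rng.binomial(1, 0.5, size=n)                        # protected class, never shown to the model
postcode = rng.binomial(1, np.where(protected == 1, 0.8, 0.2))  # proxy correlated with the protected class
income = rng.normal(55 - 5 * protected, 10, size=n)             # historical inequity baked into the data

# Past approvals depended partly on postcode, encoding historical bias.
approved = (income + 10 * (1 - postcode) + rng.normal(0, 5, size=n) > 55).astype(int)

X = np.column_stack([income, postcode])   # note: the protected attribute is excluded
model = LogisticRegression(max_iter=1000).fit(X, approved)
predicted = model.predict(X)

for label, mask in (("protected = 0", protected == 0), ("protected = 1", protected == 1)):
    print(label, "approval rate:", round(float(predicted[mask].mean()), 2))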