We live in the age of data-driven decisions. With easy access to large amounts of data and efficient off-the-shelf algorithms for analysis, automated systems decide on fraud alerts, creditworthiness, and even who is called in for a job interview. Often, however, there is a need (and increasingly a regulatory requirement, alongside the basic human desire to understand) to explain how a decision was made. Due to their lack of transparency, off-the-shelf machine learning methods can be confusing, and sometimes even misleading.
Despite the AI moniker, algorithms do not “know” whether they are biased. A biased AI certainly cannot say why it is biased, much less control for it. The game changer is the Probabilistic Rules Engine (PRE), which makes it possible to monitor for bias, explain how bias crept into a model, and adjust the model to mitigate bias while minimizing the impact on model performance. Stratyfy has developed a family of proprietary algorithms, based on PREs, to do exactly this.
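To see why a rules-based representation makes bias auditable, consider a toy model made of explicit, weighted rules: every decision can be traced to the rules that fired, and group-level outcomes can be checked directly. The sketch below is a minimal illustration of that idea only, not Stratyfy's actual algorithm; the rule format, the scoring scheme, and the demographic-parity check are all assumptions made for the example.

```python
from dataclasses import dataclass
from typing import Callable, Dict, List

@dataclass
class Rule:
    """A human-readable rule with a probability-like weight."""
    description: str
    condition: Callable[[Dict], bool]
    weight: float  # contribution toward approval when the rule fires

def score(applicant: Dict, rules: List[Rule]) -> float:
    """Average the weights of the rules that fire; each firing rule is inspectable."""
    fired = [r for r in rules if r.condition(applicant)]
    if not fired:
        return 0.5  # neutral prior when no rule applies
    return sum(r.weight for r in fired) / len(fired)

def approval_rate_gap(applicants, rules, group_key, threshold=0.5):
    """Simple demographic-parity audit: the spread in approval rates across groups."""
    outcomes = {}
    for a in applicants:
        outcomes.setdefault(a[group_key], []).append(score(a, rules) >= threshold)
    rates = {g: sum(v) / len(v) for g, v in outcomes.items()}
    return max(rates.values()) - min(rates.values())
```

Because each rule carries a description and a weight, an analyst can see exactly which rules drove a decision, and adjust or reweight a rule to shrink the gap the audit reports.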
Companies around the world are facing growing pressure to increase the transparency around the algorithms that make predictions that drive their business decisions. In Washington, recently proposed legislation makes meaningful strides in the regulatory catch-up game around AI and machine learning. This article provides an overview of the good, the bad, and the ugly about AI and how you can get rid of the ugly and reduce the bad.
Machine Learning in Insurance
Insurance companies process only 10-15% of the available data, most of it structured. Analyzing the remaining data, both structured and unstructured, would give any insurer vastly more information, which can impact the premiums they charge, fraud prevention, and general risk management. Successful implementation will be specific to the insurer, leveraging the existing organization while finding places to add value.
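To make the structured-plus-unstructured point concrete, here is a minimal sketch of turning free-text claim notes into flags that sit alongside an insurer's structured fields. The keyword list and flag names are purely illustrative assumptions; a production system would use far more robust text analytics than keyword matching.

```python
import re

# Hypothetical keyword flags an insurer might mine from free-text claim notes;
# the terms and flag names here are illustrative, not actuarial.
RISK_TERMS = {
    "water damage": "flag_water",
    "prior claim": "flag_prior",
    "theft": "flag_theft",
}

def enrich(record: dict, notes: str) -> dict:
    """Augment a structured claim record with features mined from unstructured notes."""
    enriched = dict(record)
    text = notes.lower()
    for term, flag in RISK_TERMS.items():
        enriched[flag] = bool(re.search(re.escape(term), text))
    return enriched
```

Even this crude enrichment shows how data that currently goes unanalyzed can feed directly into pricing, fraud screening, and risk models.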
The Cure for AI Fever
AI is best seen as a process, not a tool. There is no single-solution, magic-bullet AI that will cure all of your organization's problems. Instead, AI is something that fits into your organization's problem-solving process. AI will be critical in some areas, very helpful in others, and perhaps not as helpful elsewhere. As a manager, you are responsible for knowing where AI fits and how best to deploy it in your organization.
Big Data, Smart Credit
Much SME lending is relationship-based, which is subjective and hard to scale. Traditional credit metrics, meanwhile, are static and one-dimensional, which is impractical for SMEs that lack a credit history or fail conventional credit tests. AI bridges the gap by evaluating alternative data sources alongside conventional ones to paint a fuller picture of creditworthiness.
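As a hedged sketch of how alternative and conventional signals can be blended, consider a toy logistic score that combines credit-history length with signals a thin-file SME can still supply, such as utility-payment punctuality and cash-flow health. The feature names and weights below are invented for illustration and are not a real scorecard.

```python
import math

def blended_score(conventional: dict, alternative: dict) -> float:
    """Toy logistic blend of conventional and alternative credit signals.
    All feature names and weights are illustrative assumptions."""
    weights = {
        "credit_history_years": 0.15,   # conventional signal
        "on_time_utility_pct": 2.0,     # alternative: share of on-time utility payments (0-1)
        "monthly_cashflow_ratio": 1.5,  # alternative: cash inflow / outflow ratio
    }
    features = {**conventional, **alternative}
    z = -2.0 + sum(w * features.get(k, 0.0) for k, w in weights.items())
    return 1.0 / (1.0 + math.exp(-z))  # probability-like score in (0, 1)
```

Under this sketch, an SME with no credit history but strong alternative signals scores meaningfully higher than one with no data at all, which is precisely the gap alternative data is meant to close.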
The Roles of Alternative Data and Machine Learning in Fintech Lending
A recent paper by the Federal Reserve Bank of Philadelphia illustrates the value of machine learning in lending. In particular, the paper stresses the role of non-traditional inputs in determining creditworthiness and predicting loan performance, as with Lending Tree. We believe these machine learning methods can be further improved and streamlined, creating additional opportunities for lenders.
Tackling AI’s Unintended Consequences
One has to remember that AI is neither perfect nor fool-proof. It can lead to a loss of skill or thinking, it can institutionalize bias, and it can contribute to a loss of empathy. The solution is human-machine interaction: make sure AI fills the gaps you need without overstepping its bounds. In brief, AI can add tremendous value to an organization, but it should be deployed deliberately, not haphazardly.