
    Thoughts on Bias in Algorithmic Decision Making in Light of AppleCard and Goldman Story

The world is changing, and numerous new cases show that failing to properly address the issue of bias can be dangerous and costly.


    Transparency in Decision-Making: Advantages of a Rule-Based Approach — Part 1/2

We live in the age of data-driven decisions. With easy access to large amounts of data and efficient off-the-shelf algorithms for analysis, automated systems decide on fraud alerts, creditworthiness, and even who gets called in for a job interview. Often, however, there is a need to explain how a decision was made, whether driven by regulatory requirements or by the basic human desire to understand. Because of their lack of transparency, off-the-shelf machine learning methods can be confusing, and sometimes even misleading.


    Transparency in Decision-Making: Advantages of a Rule-Based Approach — Part 2/2
    In this post, we present a different way to think about explainability globally using rule-based decision algorithms, like Stratyfy’s Probabilistic Rule Engine (PRE). 


    Transparency matters when dealing with data (Part 1)

Despite the AI moniker, algorithms do not "know" whether they are biased. A biased AI certainly cannot say why it is biased, much less control for it. The game changer is the Probabilistic Rule Engine (PRE), which makes it possible to monitor for bias, explain how bias crept into a model, and adjust the model to mitigate bias while minimizing the impact on model performance. Stratyfy has even developed a family of proprietary algorithms, based on PREs, to do exactly this.


    Bias in predictive models — part 1/2

The problem of AI fairness and bias is attracting growing attention from both researchers and legislators. There are several possible definitions of what bias is and how to measure it, each with its own merits but also its own limits of applicability.


    AI and the Algorithmic Accountability Act: 3 things you can do right now to avoid costly mistakes

    Companies around the world are facing growing pressure to increase the transparency around the algorithms that make predictions that drive their business decisions. In Washington, recently proposed legislation makes meaningful strides in the regulatory catch-up game around AI and machine learning. This article provides an overview of the good, the bad, and the ugly about AI and how you can get rid of the ugly and reduce the bad.


    Machine Learning in Insurance
Insurance companies process only 10-15% of the available data, most of which is structured. Analyzing the remaining data, both structured and unstructured, would give any insurer far more information, which can impact the premiums they charge, fraud prevention, and general risk management. Successful implementation will be specific to the insurer, leveraging the existing organization while finding places to add value.

    The Cure for AI Fever
AI is best seen as a process, not a tool. There is no single, magic-bullet AI solution that will cure all your organization's problems. Instead, AI is something that fits into your organization's problem-solving process. AI will be critical in some areas, very helpful in others, and perhaps not as helpful elsewhere. As a manager, you are responsible for knowing where AI fits and how best to deploy it in your organization.


    Big Data, Smart Credit

    Much SME lending is relationship-based, which is subjective and unscalable. Traditional credit metrics, however, are static and one-dimensional, which is impractical for SMEs that do not have credit history and/or fail conventional credit tests. AI bridges the gap by evaluating alternative data sources alongside conventional data sources to paint a better picture of creditworthiness. 


    The Roles of Alternative Data and Machine Learning in Fintech Lending
A recent paper by the Federal Reserve Bank of Philadelphia illustrates the value of machine learning in lending. In particular, the paper stresses the role of non-traditional inputs in determining creditworthiness and predicting loan performance, as with Lending Tree. We believe that these machine learning methods can be further improved and streamlined, creating additional opportunities for lenders.

    Tackling AI’s Unintended Consequences

One has to remember that AI is neither perfect nor foolproof. AI can lead to a loss of skill or independent thinking, it can institutionalize bias, and it can contribute to a loss of empathy. The solution is human-machine interaction: making sure AI fills the gaps you need filled without overstepping its bounds. In brief, AI can add tremendous value to an organization, but it should be deployed deliberately, not haphazardly.

    ABOUT

    info@stratyfy.com

    Tel: +1 646-791-6702
