Did you know that if you don’t pass your next AI bias audit it could cost you millions of dollars?

Bias in AI algorithms has prompted several lawsuits and strict regulations.

We can help you uncover bias in your models before it’s too late.

Left unchecked, AI bias can lead to costly penalties and affect your reputation and credibility.

Contact us today for a free consultation.

The regulatory environment is evolving, and several states have either enacted, or are considering, laws to mitigate bias in AI algorithms. These laws demand full transparency.

For example, New York City requires AI tool audits to be conducted and the results to be made publicly available on the employer's website. The employer must also disclose the data the AI tool is collecting, either by publishing it or by responding to inquiries.

Is your organization ready for this type of scrutiny?


AI Regulation is Expanding

Regulation of algorithmic bias has picked up steam in other states as well. Illinois, Colorado, and California all have bills under consideration, some of which expand beyond hiring to cover all types of artificial intelligence algorithms.

With the introduction of these regulatory statutes and the increased use of AI programs across a spectrum of industries, now is the time to identify and review your AI applications.

You need to demonstrate that proper safeguards are in place to mitigate bias. The bottom line: organizations must be ready to explain their algorithms in simple terms.

We Can Help

Destiny has been creating advanced analytic algorithms for over three decades and can provide your organization with complete transparency to document:

  • How the algorithm was designed and its purpose
  • What data was used in the algorithm
  • Why the data was used
  • How the math derives its decisions

New Statutes

  • NYC Local Law 144 – Automated Employment Decision Tool (AEDT) Requirements

This first-of-its-kind law will prohibit employers from using automated employment decision tools for recruiting, hiring, or assessing employees for promotion without the tools first being audited for bias.

  • Colorado’s Algorithm and Predictive Model Governance Regulation

Establishes requirements for Colorado life insurance companies to verify that their use of external consumer data and AI systems does not result in discriminatory insurance practices.

Understanding these biases and how to mitigate them is imperative for any organization implementing AI.


Contact Us

If you’re using AI to scale your business, do you have a plan to address AI bias? An effective strategy includes creating governance and controls, as well as regular monitoring.

Contact us today for a free consultation on how to prepare for a bias audit.

Frequently Asked Questions

What is algorithmic bias?

Algorithmic bias refers to systematic and repeatable errors in a computer system that create unfair outcomes, such as privileging one category over another in ways that diverge from the algorithm's intended function. Typically, it results from erroneous or incomplete data being fed into an algorithm.

Are there different types of algorithm bias?

Yes. AI bias is not just about protected classes. It is also about how a model identifies an object, such as how an autonomous vehicle recognizes a guardrail. The following are three common types of bias an auditor or regulator may look for:

• Latent Bias
An algorithm may incorrectly identify something based on historical data or an existing stereotype. This type of bias is commonly found in the financial, healthcare, and automotive industries.

• Selection Bias
Selection bias occurs when the data set used to train an AI model is not representative, overrepresenting one group and underrepresenting another.

• Emergent Bias
Emergent bias results from using and relying on algorithms across new or unanticipated contexts. In other words, the algorithm was designed for one purpose but is used for another.
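As an illustration of how selection bias can surface, a quick representation check can compare each group's share of a training set against its expected share of the population the model is meant to serve. This is a minimal sketch with made-up group labels, shares, and tolerance, not a description of any particular audit methodology:

```python
from collections import Counter

def representation_gaps(samples, expected_shares, tolerance=0.05):
    """Compare each group's observed share of a dataset with its
    expected population share; flag groups whose deviation exceeds
    the tolerance. Returns {group: observed_minus_expected}."""
    counts = Counter(samples)
    total = sum(counts.values())
    gaps = {}
    for group, expected in expected_shares.items():
        observed = counts.get(group, 0) / total
        if abs(observed - expected) > tolerance:
            gaps[group] = round(observed - expected, 3)
    return gaps

# Hypothetical training data where group B is underrepresented
training_groups = ["A"] * 80 + ["B"] * 20
expected = {"A": 0.6, "B": 0.4}
print(representation_gaps(training_groups, expected))
# → {'A': 0.2, 'B': -0.2}
```

A gap report like this does not prove unfair outcomes on its own, but it is a cheap first screen before deeper model-level testing.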

What is advertising bias?

Advertising bias relies on assumptions instead of fact-based truth to carry out campaigns. Although marketers try to avoid preliminary judgments in their decision-making, this can be difficult because the original assumptions may be wrong or may be altered by machine learning (ML) training in advertising technology. These ML-driven alterations can shift campaign targeting away from the algorithm's original intent, causing certain groups to be advantaged or disadvantaged and targeted over others.

How does Destiny cross-reference its work?

We rely on our proprietary methods and analysis, over three decades of experience, and our partnership with IBM to:

• Test biases in models and datasets.
• Mitigate biases with an extensive library of detection methods such as Learning Fair Representations, Reject Option Classification, and Disparate Impact Remover.
• Integrate the top bias metrics, bias mitigation algorithms, and metric explainers from fairness analysts across industries and disciplines.
• Perform real-time bias checking and mitigation when AI makes its decisions.
• Detect drift in data and anomalies in model behavior.
• Explain transactions and execute what-if analyses.

What industries has Destiny worked with on Artificial Intelligence and Machine Learning Algorithms?

We have developed and thoroughly documented advanced analytic solutions from design to execution in the following industries:

• Financial Services
• Banking
• Insurance
• Healthcare
• Retail
• Hospitality
• Pharmaceutical and Biotech
• Transportation
• Telecommunications
• Manufacturing


CONTACT US NOW and let's work together.