Ethical AI Group Says Bias Bounties Can Reveal Algorithmic Flaws Faster

Bias in AI systems is proving to be a major obstacle to integrating the technology more broadly into society. A new initiative that rewards researchers for finding biases in artificial intelligence systems could help solve the problem.

The effort is modeled on bug bounties, in which software companies pay cybersecurity experts to alert them to any potential security defects in their products. The idea is not new: “bias bounties” were first proposed by researcher and entrepreneur JB Rubinovitz back in 2018, and various organizations have already run such challenges.

However, the new effort seeks to create an ongoing forum for bias bounty contests that is independent of any particular organization. Made up of volunteers from a range of companies including Twitter, the so-called ‘Bias Buccaneers’ plan to hold regular competitions, or ‘mutinies,’ and earlier this month launched the first such challenge.

“Bug bounties are a standard cybersecurity practice that has yet to find a foothold in the algorithmic bias community,” the organizers write on their website. “While initial one-off events showed enthusiasm for bounties, Bias Buccaneers is the first non-profit dedicated to creating ongoing bounties, partnering with technology companies, and paving the way for transparent and reproducible evaluations of AI systems.”

This first competition aims to tackle bias in image detection algorithms, but instead of having people target specific AI systems, the competition asks researchers to create tools that can detect biased data sets. The idea is to build a machine learning model that can accurately label each image in a dataset with skin tone, perceived gender, and age group. The contest runs until November 30 and carries a first prize of $6,000, a second prize of $4,000, and a third prize of $2,000.
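As a rough illustration of what such a labeling model might look like, the sketch below wires a shared image backbone to three classification heads, one per attribute. The backbone choice, class counts, and attribute names are assumptions made here for illustration, not details taken from the Bias Buccaneers challenge specification.

```python
# Illustrative sketch of a multi-attribute image labeler: a shared feature
# extractor with one classification head each for skin tone, perceived
# gender, and age group. Class counts and the ResNet-18 backbone are
# assumptions, not requirements of the actual contest.
import torch
import torch.nn as nn
from torchvision import models

class AttributeLabeler(nn.Module):
    def __init__(self, n_skin_tones=10, n_genders=2, n_age_groups=4):
        super().__init__()
        backbone = models.resnet18(weights=None)  # shared feature extractor
        feat_dim = backbone.fc.in_features
        backbone.fc = nn.Identity()               # drop the original 1000-class head
        self.backbone = backbone
        self.skin_tone = nn.Linear(feat_dim, n_skin_tones)
        self.gender = nn.Linear(feat_dim, n_genders)
        self.age_group = nn.Linear(feat_dim, n_age_groups)

    def forward(self, images):
        feats = self.backbone(images)
        return {
            "skin_tone": self.skin_tone(feats),
            "gender": self.gender(feats),
            "age_group": self.age_group(feats),
        }

# Dummy forward pass on a batch of eight 224x224 RGB images.
model = AttributeLabeler()
logits = model(torch.randn(8, 3, 224, 224))
print({name: out.shape for name, out in logits.items()})
```

In practice each head would be trained with its own classification loss, and the predicted labels would then be attached to every image in the collection being audited.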

The challenge is based on the fact that the source of algorithmic bias is often not the algorithm itself, but the nature of the data it is trained on. Automated tools that can quickly assess how balanced a collection of images is with respect to attributes that are frequent sources of discrimination could help AI researchers steer clear of clearly biased data sources.
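The sketch below shows, in minimal form, what such an automated balance check might look like, assuming per-image labels like those produced by a model of the kind above; the attribute names and the under-representation threshold are illustrative, not taken from any published tool.

```python
# Toy balance check: given per-image attribute labels, report each group's
# share of the dataset and flag groups that fall below a chosen threshold.
from collections import Counter

def balance_report(labels, attribute, min_share=0.05):
    """Return each group's share for `attribute` and flag under-represented ones."""
    counts = Counter(item[attribute] for item in labels)
    total = sum(counts.values())
    return {
        group: {"share": round(n / total, 3), "underrepresented": n / total < min_share}
        for group, n in counts.items()
    }

# Example with a tiny, deliberately skewed set of labels.
labels = [
    {"skin_tone": "dark", "age_group": "adult"},
    {"skin_tone": "light", "age_group": "adult"},
    {"skin_tone": "light", "age_group": "senior"},
    {"skin_tone": "light", "age_group": "adult"},
]
print(balance_report(labels, "skin_tone"))
```

A real tool would operate on far more images and finer-grained groups, but the basic idea of surfacing skewed attribute distributions before training is the same.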

But organizers say this is just the first step in an effort to build a toolbox for assessing bias in datasets, algorithms, and applications, and ultimately to create standards for how to deal with algorithmic bias, fairness, and explainability.

It is not the only such effort. One of the leaders of the new initiative is Twitter’s Rumman Chowdhury, who helped organize the first AI bias bounty last year, targeting an algorithm the platform used to crop images, which users had complained favored white skin and male faces over Black and female ones.

The competition gave hackers access to the company’s model and challenged them to find flaws in it. Entrants found a wide range of problems, including a preference for stereotypically beautiful faces, an aversion to people with white hair (a proxy for age), and a preference for memes with English rather than Arabic script.

Stanford University also recently completed a competition that challenged teams to develop tools designed to help people screen commercially developed or open-source AI systems for discrimination. And current and upcoming EU laws could make it mandatory for companies to regularly audit their data and algorithms.

However, getting AI bug bounties and algorithmic audits to take hold, and making them effective, will be easier said than done. Inevitably, companies that build their businesses around their algorithms will resist any attempt to discredit them.

Drawing on lessons from oversight regimes in other fields, such as finance and environmental and health regulation, researchers recently outlined some of the critical ingredients of effective accountability. One of the most important criteria they identified was the meaningful involvement of independent third parties.

The researchers pointed out that current voluntary AI audits often involve conflicts of interest, such as the target organization paying for the audit, helping to frame its scope, or having the opportunity to review findings before they are made public. This concern is echoed in a recent report from the Algorithmic Justice League, which noted the outsized role of target organizations in current cybersecurity bug bounty programs.

Finding a way to fund and support truly independent AI testers and bug hunters will be a major challenge, particularly as they come up against some of the most well-resourced companies in the world. Fortunately though, there seems to be a growing sense in the industry that addressing this issue will be critical to maintaining user confidence in their services.

Image credit: Jakob Rosen / Unsplash
