
How Companies Can Use CSR to Reduce AI Bias

AI and machine learning have slowly made a place for themselves in every aspect of corporate organisations. One of the main purposes of employing these technologies is to improve the efficiency of processes by reducing human bias.
This was especially true of their use in recruitment. In fact, it was argued that algorithmic recruitment systems remove human bias because their determinations are based on statistical predictions of which candidates are most likely to be a “good fit.”
This was proved inaccurate when, in late 2018, it was reported that Amazon had discontinued its AI-based recruitment system after finding that it was biased against women. The company’s AI system gave low ratings to resumes containing the terms “woman” or “women’s” in applications for technical roles, and went as far as downgrading applicants from two all-women’s colleges.
How did this happen? The answer lies in how the algorithms actually work. In the case of Amazon, the algorithms driving the automated recruitment tool were trained to flag strong candidates by identifying the keywords most often used in the resumes of the company’s top performers.
Now, algorithms cannot be trained to understand social context. In the case of employment, workplace politics often play a role in performance evaluations. For example, some employees may be evaluated as top performers because they are related to a senior executive, have seniority, or are in the same social groups as their managers. However, none of this is captured on the employee evaluation forms that were used to decide which resumes would train the automated recruitment tools. Computer scientists simply pulled the resumes of the employees with the highest performance ratings within each role. But those resumes clearly don’t show the full picture, and they propagate the status quo and all of the inherent biases that come with it.
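To make the mechanism concrete, here is a minimal, hypothetical sketch of a keyword-based screener of the kind described above. Everything in it — the resumes, the labels, and the choice of a bag-of-words logistic regression — is invented for illustration and is not a description of Amazon’s actual system; it only shows how a model trained on biased historical ratings can learn to penalise a term like “women’s.”

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression

# Toy historical data: resumes of past hires, labelled 1 if the employee was
# rated a top performer. Because the historical workforce skewed male, terms
# like "women's" only ever appear on lower-rated resumes. All data is invented.
resumes = [
    "software engineer java distributed systems",
    "software engineer python machine learning",
    "captain men's rugby team software engineer java",
    "software engineer women's chess club captain python",
    "graduate of a women's college software engineer",
]
top_performer = [1, 1, 1, 0, 0]  # labels inherited from biased evaluations

vectorizer = CountVectorizer()
X = vectorizer.fit_transform(resumes)
model = LogisticRegression().fit(X, top_performer)

# Inspect the learned weights: tokens that only appear on poorly rated
# resumes ("women", "chess", "college", ...) receive negative weights, so
# the model reproduces the historical bias rather than measuring merit.
weights = dict(zip(vectorizer.get_feature_names_out(), model.coef_[0]))
print(sorted(weights.items(), key=lambda item: item[1])[:3])
```

With this toy data, the token “women” ends up with one of the most negative weights simply because it only ever appears on resumes that were rated poorly in the past.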
Data scientist Cathy O’Neil has argued that the statistical models produced by algorithmic decision-making systems are not really unbiased; they are simply opinions written into code. She argues that we should not assume training datasets are accurate or impartial, because they are encoded with the biases of their largely white, male producers. Legal scholar Rashida Richardson has called this “dirty data.”
This is dangerous because decisions made using dirty data are fed back into the training datasets and are then used to evaluate new information. This creates a vicious cycle in which decisions based on historical biases continue to be made, as the sketch below illustrates.
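The following sketch, again with entirely invented data and deliberately simplified logic, shows how that feedback loop can behave: the screener’s own rejections are appended to its training history, so terms that only ever appear on rejected resumes stay penalised, and new ones accumulate, with every retraining cycle.

```python
# Hypothetical sketch of the feedback loop described above. This is not any
# particular vendor's system; it only illustrates how biased seed data can
# reinforce itself once decisions are fed back into the training set.
def learn_penalised_terms(history):
    """Terms that never appear in an accepted resume become penalised."""
    accepted_text = " ".join(resume for resume, accepted in history if accepted)
    all_terms = {term for resume, _ in history for term in resume.split()}
    return {term for term in all_terms if term not in accepted_text}

def screen(resume, penalised_terms):
    """Reject any resume containing a penalised term."""
    return not any(term in resume.split() for term in penalised_terms)

# Seed data inherited from biased historical evaluations.
history = [("men's rugby java python engineer", True),
           ("women's chess club python engineer", False)]

for cycle in range(3):
    penalised = learn_penalised_terms(history)
    # The screener's own decisions are added back into its training history.
    for resume in ["women's college java engineer", "python engineer"]:
        history.append((resume, screen(resume, penalised)))
    print(f"cycle {cycle}: penalised terms = {sorted(penalised)}")
```

Running this shows the set of penalised terms never shrinking: the resume mentioning a women’s college is rejected on every cycle, and “college” itself joins the penalised list because it only ever appears on rejected applications.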

How CSR can help reduce bias in training data

Artificial intelligence is a tremendous opportunity, but it also comes with a responsibility to monitor data quality, the processes by which data is collected, and how that data impacts social justice. Because they need many examples in order to be trained, artificial intelligence and machine learning algorithms depend on the quality of the data they are fed. When automated systems “learn” from non-representative or poorly curated data, they arrive at biased results.
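As one concrete illustration of what monitoring data quality can mean in practice, the short sketch below audits a toy training set for how groups are represented and how positive labels are distributed across them. The field names, groups, and numbers are hypothetical; a real audit would use whatever attributes and outcomes apply to the dataset in question.

```python
# Hypothetical data-quality audit: check group representation and label
# rates in a training set before using it to train a screening model.
from collections import Counter

training_examples = [
    {"group": "men", "label": 1}, {"group": "men", "label": 1},
    {"group": "men", "label": 0}, {"group": "men", "label": 1},
    {"group": "women", "label": 0}, {"group": "women", "label": 1},
]

counts = Counter(example["group"] for example in training_examples)
positives = Counter(example["group"] for example in training_examples
                    if example["label"] == 1)

for group, n in counts.items():
    share = n / len(training_examples)
    positive_rate = positives[group] / n
    print(f"{group}: {share:.0%} of training data, "
          f"{positive_rate:.0%} labelled as top performers")
```

Even a check this simple surfaces the two problems described above: one group dominates the training set, and the “top performer” label is unevenly distributed across groups before any model has been trained at all.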
In order to reduce this bias, there are steps companies could take to differentiate themselves in the marketplace by using fair and accurate AI. They could hire critical public interest technologists, teams made up of computer scientists, sociologists, anthropologists, legal scholars, and activists, to develop strategies for producing fairer and more accurate training data. These teams would be charged with conducting research that can help advise CSR groups on how to make strategic investments with groups working to reduce the expression of racism, sexism, ableism, homophobia, and xenophobia in our society. This would reduce the extent to which these biases are encoded into the datasets used in machine learning and would, in turn, produce fairer and more accurate AI systems.
Reducing bias in training data will require a sustained, multi-pronged investment in the creation of a more just society. And the companies that are currently advertising these values need to do more to stand behind them.