


Artificial intelligence (AI) describes machines that can simulate some forms of human intelligence, such as identifying patterns and making predictions and decisions. Today, organizations across many sectors use AI for a variety of purposes, from hiring employees and assessing risk to making investment recommendations and informing criminal sentencing. However, it is well known that social relations and contexts are reflected and reproduced in technology, and AI is no exception: it has the potential to reinforce underlying biases, discrimination, and inequities. Although AI can be used to benefit marginalized groups, a concerted focus on equity in AI by businesses and governments is necessary to mitigate possible harms. Here we provide a resource for scholars and practitioners for viewing AI through the lens of equity, with the objectives of synthesizing existing research and knowledge about the connection between AI and (in)equity and suggesting considerations for public and private sector leaders to be aware of when implementing AI.

The key insight: AI is a double-edged sword, with the potential both to mitigate and to reinforce bias:

  • Because AI uses statistical prediction methods that can be audited, it has the potential to create outcomes that help groups facing marginalization in situations where human decisions may be clouded by cognitive biases.
  • Despite this potential, because inequality and inequity are often reflected in technologies, some AI can reinforce, and has reinforced, the marginalization of certain groups, such as women, gender minorities, and racialized and low-income communities. AI-powered products and services may use biased data sets that reproduce this bias; amplify stereotypes and marginalization, sometimes for profit; and/or widen asymmetries of power.
  • The reinforcement of inequity and inequality has occurred because of embedded bias or significant omissions in datasets; the complexity and trade-offs involved in aligning AI with social values when profits are also at stake; a lack of transparency from those creating and implementing AI; a lack of accountability to the public or other users of AI; and limited participation by marginalized and diverse groups in the technology sector.
  • There are also varied potential impacts of AI and automation on jobs and labour. It is possible that women, racialized, and low-income groups may be more susceptible to job loss or displacement due to automation across an increasing number of blue-, white-, and pink-collar jobs.

These results suggest the following considerations for businesses and governments:

  • Technology companies and governments can focus on initiatives for equitable representation in AI development
  • Creators, researchers, and implementers of AI can prioritize aligning AI with social values such as fairness, despite possible trade-offs for efficiency and profit
  • Governments can create policies for AI that prioritize accountability and transparency, and require organizations to adhere to these principles
  • Governments and companies can work towards economic security for workers doubly impacted by new technologies and a global pandemic by focusing on reskilling and/or upskilling programs
  • Academic researchers can deepen knowledge on AI and inequity, such as by continuing cross-disciplinary work on the social, political, and environmental impacts of AI and by developing alternatives that prioritize the mitigation of harm.


Fuselight Creative

“An Equity Lens on Artificial Intelligence” is co-funded by the Social Sciences and Humanities Research Council and the Government of Canada’s Future Skills Program Grant #872-2020-0011.

Research Overview prepared by

Carmina Ravanera, Sarah Kaplan


September 2021