Nonprofit research group Montreal AI Ethics Institute has outlined some ways that companies can tackle inherent bias in AI models.
In its first-ever State of AI Ethics report, the organization notes that ranking and recommendation algorithms often remove subjects from their cultural and contextual social meanings. According to the researchers, developers are often “trying to reduce gender and race into discrete categories that are one-dimensional, third-party, and algorithmically ascribed.”
Instead, the researchers propose a framework that looks at subjective categories based on a pool of “diverse” individuals, using a “determinantal point process” (DPP). Essentially, this means the model gathers data that people feel represents them, from which AI models can then learn to make predictions.
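To give a rough sense of the underlying math, a DPP scores a subset of items by the determinant of a similarity matrix restricted to that subset, so near-duplicate items yield scores close to zero and diverse subsets score higher. The sketch below is a toy illustration of that general idea with made-up kernel values, not the researchers’ actual framework:

```python
def det(m):
    """Determinant via Laplace expansion (fine for tiny matrices)."""
    n = len(m)
    if n == 1:
        return m[0][0]
    total = 0.0
    for j in range(n):
        minor = [row[:j] + row[j + 1:] for row in m[1:]]
        total += ((-1) ** j) * m[0][j] * det(minor)
    return total

def dpp_score(L, subset):
    """Unnormalized DPP score: det of the kernel restricted to subset."""
    sub = [[L[i][j] for j in subset] for i in subset]
    return det(sub)

# Toy similarity kernel for three items: items 0 and 1 are
# near-duplicates (similarity 0.9), while item 2 is distinct.
L = [
    [1.0, 0.9, 0.1],
    [0.9, 1.0, 0.1],
    [0.1, 0.1, 1.0],
]

print(dpp_score(L, [0, 1]))  # redundant pair, low score
print(dpp_score(L, [0, 2]))  # diverse pair, higher score
```

Because redundant subsets are penalized, sampling from a DPP naturally favours sets of examples that cover different ways people describe themselves, rather than many copies of the majority case.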
The report does acknowledge that DPP would require developers to source ratings from people about what represents them well and encode those ratings in a way that an algorithmic model can learn. Still, it suggests that DPP is a solution worth researching further.
The full 128-page report touches on a variety of subjects, including the role social media plays in misinformation, the implications of automation for labour, and the privacy and effectiveness of COVID-19 apps. The latter topic might be particularly interesting to Canadians, given the federal government’s plans to launch a COVID-19 contact tracing app in the coming weeks.
The full State of AI Ethics report can be found here.