Assessing common mistakes that lead to unintended side effects. Information summarized from Fairness and Abstraction in Sociotechnical Systems, a paper published at the 2019 ACM Conference on Fairness, Accountability, and Transparency.
Failure to model the entire system over which a social criterion, such as fairness, will be enforced.
For this trap, look closely at your outcome variables. Are these variables a proxy for the actual outcome you wish to achieve? What evidence of existing negative bias is there with regard to these variables?
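One place to start is a simple disparity check on the outcome variable itself. The sketch below assumes a hypothetical table with a "group" column and a binary "outcome" column; the column names, data, and the 0.8 rule-of-thumb threshold are illustrative assumptions, not part of the original paper.

```python
import pandas as pd

def selection_rates(df: pd.DataFrame, group_col: str, outcome_col: str) -> pd.Series:
    """Share of positive outcomes within each demographic group."""
    return df.groupby(group_col)[outcome_col].mean()

def disparate_impact_ratio(rates: pd.Series) -> float:
    """Ratio of the lowest to the highest group positive-outcome rate."""
    return rates.min() / rates.max()

# Illustrative data: the "group" and "outcome" column names are assumptions.
df = pd.DataFrame({
    "group":   ["A", "A", "A", "B", "B", "B"],
    "outcome": [1, 1, 0, 1, 0, 0],
})

rates = selection_rates(df, "group", "outcome")
print(rates)                          # per-group positive-outcome rates
print(disparate_impact_ratio(rates))  # ratios well below ~0.8 often warrant a closer look
```

A low ratio does not by itself prove the variable is a biased proxy, but it is a signal to investigate how the outcome was recorded and whom it disadvantages.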
Failure to understand how repurposing algorithmic solutions designed for one social context may be misleading, inaccurate, or otherwise do harm when applied to a different context.
Here, you will want to fully understand both the context this model is being built for and the context in which it will actually be used. There should be clear documentation of this, shared across the entire team. Which stakeholders do you expect to be affected where this technology is used? Are they informed?
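One lightweight way to keep that context visible to the whole team is a shared, structured record of the intended and actual deployment contexts. The sketch below is a minimal Python illustration; the fields and example values are assumptions, loosely in the spirit of model-card style documentation rather than a prescribed format.

```python
from dataclasses import dataclass, field

@dataclass
class ContextRecord:
    built_for: str                      # the social context the model was designed around
    deployed_in: str                    # the context where it will actually be used
    affected_stakeholders: list[str] = field(default_factory=list)
    stakeholders_informed: bool = False

# Illustrative values only.
record = ContextRecord(
    built_for="loan pre-screening for an established retail bank",
    deployed_in="microloan approvals for first-time borrowers",
    affected_stakeholders=["applicants", "loan officers", "community groups"],
    stakeholders_informed=False,
)
print(record)
```

A mismatch between `built_for` and `deployed_in`, or an unchecked `stakeholders_informed` flag, is exactly the kind of gap this trap warns about.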
Failure to account for the full meaning of social concepts such as fairness, which can be procedural, contextual, and contestable, and cannot be resolved through mathematical formalisms.
How does your chosen outcome preserve the existing procedural “catches” that keep the decision-making process fair? What methods of recourse are available to those who are unfairly judged?
Failure to understand how the insertion of technology into an existing social system changes the behaviors and embedded values of the pre-existing system.
How do you think the introduction of this recommendation system will affect your users? What changes in behavior do you intend to bring about? Can you think of any changes in behavior that you don’t intend as a result of your software?
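One way to make unintended behavior changes visible is to track a few behavioral metrics before and after launch. The sketch below uses made-up per-user averages of a hypothetical metric (daily sessions); the data and the choice of metric are assumptions for illustration only.

```python
import statistics

# Illustrative per-user averages of a behavioral metric (e.g. daily sessions),
# measured before and after the recommendation system launch.
sessions_before = [2.1, 1.8, 2.4, 2.0, 1.9]
sessions_after = [3.0, 2.7, 3.2, 2.9, 3.1]

change = statistics.mean(sessions_after) - statistics.mean(sessions_before)
print(f"Average change in sessions/day: {change:+.2f}")
# A large shift in either direction is a prompt to ask whether the change in
# behavior was intended, and whom it affects.
```

Monitoring alone cannot capture shifts in the values embedded in the system, but it flags where to look.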
Failure to recognize the possibility that the best solution to a problem may not involve technology.
Will this technology elevate social values which can be quantified? Will it devalue those which cannot? What values might take a back seat if this technology is implemented?
Thanks to the publishers of the original paper.