
Algorithmic Fairness in Recommendations
Explore the concept of fairness in algorithmic recommendations, focusing on removing biases from models to treat people equally. Learn about examples, challenges, and the importance of fairness in various applications.
Presentation Transcript
Fairness in Recommendations
MARTIN GORA
What is fairness?
- Highly subjective
- Cambridge Dictionary: "the quality of treating people equally or in a way that is right or reasonable"
- Merriam-Webster: "the quality or state of being fair : lack of favoritism toward one side or another"
Algorithmic Fairness
- A subfield of machine learning
- Aims to remove bias from models
- A model is unfair if it systematically discriminates against a group of people
- Sensitive variables: gender, origin, age
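One common way to make "systematically discriminates" concrete is demographic parity: compare the model's positive-prediction rate across the groups defined by a sensitive variable. A minimal sketch, using hypothetical toy data (the names `preds` and `gender` are illustrative, not from the presentation):

```python
def positive_rate(predictions, group, value):
    """Share of positive predictions among members of one group."""
    members = [p for p, g in zip(predictions, group) if g == value]
    return sum(members) / len(members)

def demographic_parity_gap(predictions, group):
    """Absolute difference in positive rates between two groups."""
    values = sorted(set(group))
    rates = [positive_rate(predictions, group, v) for v in values]
    return abs(rates[0] - rates[1])

# Toy example: 1 = recommended/hired, 0 = rejected.
preds  = [1, 1, 1, 0, 1, 0, 0, 0]
gender = ["m", "m", "m", "m", "f", "f", "f", "f"]
gap = demographic_parity_gap(preds, gender)  # 0.75 vs 0.25 -> gap of 0.5
```

A large gap like this would flag the model as treating the two groups differently, though demographic parity is only one of several competing fairness criteria.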
Algorithmic Fairness Examples
- Amazon recruitment system: historical injustice, women under-represented in the dataset, so women were partially excluded from the process
- A system for breast cancer recognition: 95% of mammograms came from white women, so it is less accurate for women of colour
Why is it a difficult problem?
- Explicit differential treatment: based on sensitive variables hard-wired in the data. Remove them or not? Removal costs precision.
- Implicit differential treatment: based on information about sensitive variables inferred from the data (e.g. make-up is bought almost exclusively by women)
- There is no sufficient solution :(
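The implicit case is why simply deleting the sensitive column is not enough: a correlated proxy feature can reconstruct it. A minimal sketch with hypothetical data, where "buys make-up" stands in for any proxy of gender:

```python
records = [
    # (buys_makeup, gender) -- gender is the sensitive variable we "removed"
    (1, "f"), (1, "f"), (1, "f"), (0, "f"),
    (0, "m"), (0, "m"), (0, "m"), (1, "m"),
]

def proxy_accuracy(records):
    """How well the proxy feature alone recovers the sensitive variable."""
    # Rule read off the data: make-up buyer -> "f", otherwise -> "m".
    correct = sum(1 for makeup, g in records
                  if ("f" if makeup else "m") == g)
    return correct / len(records)

acc = proxy_accuracy(records)  # 6/8 = 0.75: the proxy still leaks gender
```

Any model trained on such features can therefore discriminate implicitly even though the sensitive variable itself was dropped.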
Fairness in Recommendation
- Personalization
- Collaborative filtering: can we ignore demography and item attributes?
- Example: a system that recommends job offers to people. Ensure fairness of salary.
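One possible reading of "ensure fairness of salary" is that the recommender should offer similarly paid jobs to different user groups. A sketch with hypothetical data (the groups and salaries are made up for illustration):

```python
recommendations = [
    # (user_group, salary of the recommended job)
    ("a", 55000), ("a", 60000), ("a", 65000),
    ("b", 40000), ("b", 45000), ("b", 50000),
]

def mean_salary(recs, group):
    """Average salary of jobs recommended to one user group."""
    salaries = [s for g, s in recs if g == group]
    return sum(salaries) / len(salaries)

gap = mean_salary(recommendations, "a") - mean_salary(recommendations, "b")
# 60000 - 45000 = 15000: group "a" is systematically shown better-paid jobs
```

A pure collaborative-filtering model can produce such gaps without ever seeing demographic attributes, which is exactly the implicit-treatment problem from the previous slide.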
Multiple-stakeholder recommendations
- Many applications involve transactions between groups
- The main goal is to satisfy the conditions of all stakeholders
- Stakeholders: consumers (end users), providers (supply the products), system (the platform itself)
- Previous example: providers provide jobs -> ensure fairness of distribution
Fairness classes
- C-Fairness: fairness aspects of consumers only (e.g. a bank)
- P-Fairness: fairness aspects of providers only (e.g. Etsy, Kiva.org)
- CP-Fairness: a combination of both
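P-Fairness can be made measurable by looking at how exposure (recommendation counts) is distributed across providers. A sketch using the Gini coefficient, with hypothetical counts; 0 means perfectly equal exposure and values near 1 mean exposure is concentrated on a few providers:

```python
def gini(counts):
    """Gini coefficient of a list of non-negative exposure counts."""
    counts = sorted(counts)
    n = len(counts)
    total = sum(counts)
    # Standard rank-weighted formula over the sorted values.
    weighted = sum((i + 1) * c for i, c in enumerate(counts))
    return (2 * weighted) / (n * total) - (n + 1) / n

equal  = gini([100, 100, 100, 100])  # 0.0 -- every provider shown equally
skewed = gini([370, 10, 10, 10])     # 0.675 -- one provider dominates
```

A P-fair recommender would keep this value low while still serving consumers well, which is where the tension between C- and P-Fairness shows up.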
Non-algorithmic fairness classes
- Based on Universal Design for Learning
- Example: a learning platform that accommodates users with disabilities
- Fairness in: Engagement, Representation, Action & Expression
Sources:
- https://dl.acm.org/doi/abs/10.1145/3450614.3461685
- https://arxiv.org/pdf/1707.00093.pdf
- https://dictionary.cambridge.org/dictionary/english/fairness
- https://www.merriam-webster.com/dictionary/fairness
- https://towardsdatascience.com/programming-fairness-in-algorithms-4943a13dd9f8
- https://www.researchgate.net/publication/312566249_Universal_Design_for_Learning