Protecting Rights in Machine Learning Systems


Explore initiatives like the Toronto Declaration, IEEE standards, and EU Commission proposals aimed at safeguarding human rights, equality, and non-discrimination in the development and use of AI technologies. Learn about the importance of ethical design, accountability, transparency, and unbiased decision-making in ensuring the well-being and autonomy of individuals in a rapidly evolving technological landscape.

  • Machine Learning
  • Human Rights
  • AI Ethics
  • Equality
  • Transparency




Presentation Transcript


  1. Protecting the rights to equality and non-discrimination in machine learning systems. Mike Rebeiro

  2. The Toronto Declaration. The Declaration sets out a charter of safeguards to which governments and organisations can adhere when using or developing AI, in order to protect the rights to equality and non-discrimination in machine learning systems. From policing to welfare systems, online discourse, and healthcare, to name a few examples, systems employing machine learning technologies can vastly and rapidly change or reinforce power structures or inequalities, on an unprecedented scale and with significant harm to human rights.

  3. Institute of Electrical and Electronics Engineers (IEEE). IEEE standards for the ethical design, development and implementation of machine learning technologies:

  • Human rights: ensuring that AI does not infringe on internationally recognised human rights
  • Well-being: prioritising metrics of human well-being in AI's design and use
  • Accountability: ensuring designers and operators of AI systems are responsible and accountable
  • Awareness of misuse: minimising the risks of AI being misused

  4. EU Commission proposals. Requirements for trustworthy AI:

  • Human agency and oversight: AI systems should support human autonomy and decision making, acting as enablers to a democratic society
  • Robustness and safety: a preventative approach to risk, minimising unintentional harm and preventing unacceptable harm
  • Privacy and data governance: AI systems must guarantee privacy and data protection
  • Transparency: data sets and decision-making processes must be documented to allow for the greatest possible transparency
  • Diversity, non-discrimination and fairness: identifiable and discriminatory bias should be removed from data sets and processes wherever possible
  • Societal and environmental well-being: the environment, broader society and other sentient beings should be considered as stakeholders
  • Accountability: mechanisms should be put in place to ensure accountability for AI systems and their outcomes

  5. Unbiased decision making? "Algorithms can be supportive of good decision-making, reduce human error and combat existing systemic biases. But issues can arise if, instead, algorithms begin to reinforce problematic biases, for example because of errors in design or because of biases in the underlying data sets. When these algorithms are then used to support important decisions about people's lives, for example determining whether they are invited to a job interview, they have the potential to cause serious harm." (UK Centre for Data Ethics and Innovation) AI is unique in that it is capable of learning, making independent decisions and deciding the criteria upon which it makes those decisions. AI learns through:

  • Exposure to historic data sets for training
  • Exposure to live data inputs in live operation

  6. AI learns from Data

  7. How does AI become biased?

  • Bias in historic data sets
  • Bias in live interactions
  • Bias in development teams
  • Inherent bias in existing processes
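The first of these sources, bias in historic data sets, can often be detected before any training happens. A minimal sketch in plain Python, using hypothetical records (the `gender` field and the skewed counts are illustrative assumptions, not data from the talk), that audits how each demographic group is represented in a training set:

```python
from collections import Counter

def representation_report(records, group_key):
    """Return the share of each demographic group in a data set."""
    counts = Counter(r[group_key] for r in records)
    total = sum(counts.values())
    return {group: n / total for group, n in counts.items()}

# Hypothetical training records, heavily skewed toward one group.
training_data = [{"gender": "male"}] * 80 + [{"gender": "female"}] * 20

shares = representation_report(training_data, "gender")
# A model trained on this set sees four times as many male examples,
# so its error rates on the under-represented group will likely be worse.
```

An audit like this is only a first step: balanced representation does not guarantee unbiased labels, but a heavily skewed set is an early warning.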

  8. When AI goes wrong: bias in processes. "A beauty contest was judged by AI and the robots didn't like dark skin." "If you have not that many people of color within the dataset, then you might actually have biased results," said Zhavoronkov, who said he was surprised by the winners. "When you're training an algorithm to recognize certain patterns, you might not have enough data, or the data might be biased." (The Guardian, 8.9.2016)

  9. When AI goes wrong: bias in live interactions

  10. Inherent bias in data processes. Facial recognition: in February 2018, MIT found that three of the latest gender recognition AIs (from IBM, Microsoft and the Chinese company Megvii) could correctly identify a person's gender from a photograph with a 99% accuracy rate in the case of Caucasian men. This dropped to 35% for dark-skinned women. Health data: the Babylon chatbot offered very different diagnoses for symptoms and circumstances that were identical save for one thing: the gender of the supposed patient. "For a male 59-year-old smoker presenting sudden-onset chest pain and a feeling of nausea, the program concluded that the cause could be one of several cardiological conditions, including a possible heart attack. For a female with exactly the same circumstances and reported symptoms, the bot presented two possible causes: depression, or a panic attack." (New Scientist, 12.04.2018; PublicTechnology.net, October 2019)
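Disparities like the 99%-versus-35% gap above only surface when performance is measured per subgroup rather than in aggregate. A minimal sketch in plain Python of a per-group accuracy audit; the group labels and the result counts are hypothetical, chosen only to echo the shape of the MIT finding:

```python
def accuracy_by_group(samples):
    """Classification accuracy computed separately for each group.

    samples: iterable of (group, predicted, actual) tuples.
    """
    correct, total = {}, {}
    for group, predicted, actual in samples:
        total[group] = total.get(group, 0) + 1
        correct[group] = correct.get(group, 0) + (predicted == actual)
    return {g: correct[g] / total[g] for g in total}

# Hypothetical results: near-perfect on one group, far worse on another.
results = (
    [("group_a", "f", "f")] * 99 + [("group_a", "m", "f")] * 1
    + [("group_b", "f", "f")] * 35 + [("group_b", "m", "f")] * 65
)
per_group = accuracy_by_group(results)
# per_group shows 0.99 for group_a but only 0.35 for group_b, a gap
# that an aggregate accuracy figure (0.67 overall) would hide.
```

The design point is that aggregate metrics average away exactly the disparity the audit is looking for, so the breakdown must happen before the averaging.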

  11. Bias in development teams. According to the World Economic Forum's Global Gender Gap Report 2018, just 22% of AI and 12% of machine learning professionals worldwide are women. For other marginalized groups, the news is even worse: an April 2019 report from the AI Now Institute found that just 2.5% of Google's workforce is black, while Facebook and Microsoft are each at 4%. There is no public data on transgender people or other gender minorities within the tech industry, according to the report. (Fortune Magazine, 26 October 2019)

  12. Engendering Trust

  • Clear policy framework
  • Black box technologies?
  • Open and consistent decision making
  • Clear legal framework
  • Clear framework of liability
  • Registration?
  • Accountability
  • Ethics by design
  • Can we adopt a universal ethical framework?
  • Minimum ethical standards

  13. How to mitigate bias

  • Source: What are the sources of the historic data sets used to train the AI?
  • Quantity: Are the data sets large enough, and from sufficient sources, to remove bias?
  • Removal of bias: What steps have been taken to ensure that bias is removed from the historic data sets?
  • Learned bias: What steps are taken to prevent learned bias in live operation?
  • Bias attacks: What steps are taken to prevent bias attacks in live operation?
  • Processes: What steps have been taken to remove inherent bias in the processes the AI is replacing?
  • Diversity: How diverse are the data science and AI development teams?
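The "Removal of bias" and "Learned bias" checks above can be backed by simple quantitative tests on a system's outputs. One common measure is the demographic parity gap: the largest difference in favourable-outcome rates between groups. A minimal sketch in plain Python, with hypothetical decisions rather than any specific fairness toolkit's API:

```python
def demographic_parity_gap(outcomes):
    """Largest difference in favourable-outcome rate between any two groups.

    outcomes: iterable of (group, decision) pairs, where decision is
    True for a favourable result (e.g. invited to a job interview).
    """
    positives, totals = {}, {}
    for group, decision in outcomes:
        totals[group] = totals.get(group, 0) + 1
        positives[group] = positives.get(group, 0) + bool(decision)
    rates = {g: positives[g] / totals[g] for g in totals}
    return max(rates.values()) - min(rates.values())

# Hypothetical interview decisions: 60% favourable for one group,
# 30% for another, giving a parity gap of 0.3.
decisions = (
    [("group_a", True)] * 60 + [("group_a", False)] * 40
    + [("group_b", True)] * 30 + [("group_b", False)] * 70
)
gap = demographic_parity_gap(decisions)
```

Run regularly against live decisions, a check like this addresses the "Learned bias" question: a gap that widens over time suggests the system is acquiring bias in operation even if its training data was balanced.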

  14. How to engender trust

  • Decision making: To what extent is the AI a decision maker or a decision enabler?
  • Human involvement: Are humans in the loop?
  • Ethics: Is there an ethics-by-design framework?
  • Policy: Is there a clear policy framework?
  • Understanding: Is decision making clear, transparent, consistent and understandable?
  • Legislation: Is there a clear legislative framework?
  • Accountability: Is it clear who is accountable? Is insurance available?

  15. Q & A

  16. Protecting the rights to equality and non-discrimination in machine learning systems. Mike Rebeiro
