Ethics by Design for AI: Ensuring Ethical Requirements in Technology
Ethics by Design for AI addresses the need to consider ethical implications during the design and deployment of AI, and aims to prevent unforeseen ethical issues by providing structured guidance and tools. As part of the values-by-design approach, it emphasizes incorporating universal ethical values such as respect for human agency, privacy, and fairness, derived from EU principles and the UN Charter on Human Rights.
1 Ethics by Design for AI
Dr. Brandt Dainow
2 Background
3 Context
The problem: AI messes up after it is deployed. The cause: consequences were not anticipated during design.
The solution: impose ethical requirements on AI. Our solution: think about ethical issues during design, i.e. Ethics by Design for AI.
Ethical AI initiatives are global:
- Standards and international bodies: IEEE, ISO, IETF, WEF, UNESCO
- Governments: e.g. Singapore, NYC, California, the White House, Australia, Denmark
- Industry: FATML, XAI, CertNexus, Google, Microsoft, IBM
4 What is Ethics by Design for AI? (D6.3, p. 52)
AIM: prevent ethical issues from arising because they were not anticipated.
HOW: give people structured guidance and tools, because not everyone is a professional ethicist.
NOT utopian: issues will still arise, but some thought is better than no thought.
5 What is Ethics by Design for AI?
Part of the values-by-design family of approaches, alongside Privacy by Design and Security by Design.
Not a methodology but an approach: it can be incorporated into any design methodology.
[Diagram: values by design derives design requirements from values via norms]
6 What is Ethics by Design for AI?
Will be used for ethics review of EU AI funding applications.
Part of the values-by-design approach (e.g. Privacy by Design): norms are derived from values.
Requirements are what a technology does in order to conform to norms: features, ways of being used, etc.
[Diagram: values lead to norms, which lead to things you have to do]
7 What is Ethics by Design?
Values: as universal as possible.
Requisites: characteristics which make an AI ethical.
Guidelines: specific tasks which must be done in order to create the requisites.
8 Values
9 Values
As universal as possible: EU principles, the UN Charter on Human Rights.
Unethical AI = AI that violates these values.
Incorporate detailed ethical values, e.g. Human Agency = human autonomy, dignity, freedom.
Values lead to ethical requisites.
Six categories of values:
1. Respect for Human Agency
2. Privacy and Data Governance
3. Fairness
4. Individual, Social & Environmental Well-being
5. Transparency
6. Accountability and Oversight
10 Respect for Human Agency
Autonomy, dignity, freedom.
AI applications should be designed to give humans as much control of the system as possible.
AI should not try to control people. It should not:
- Make decisions normally made by humans (personally or socially), especially on issues of well-being, individual rights, or economic, social, or political matters.
- Remove basic freedoms.
- Subordinate, coerce, deceive, manipulate, or dehumanize people.
- Stimulate dependency or addiction.
It should be clear to people that they are interacting with an AI, and they should be informed about its abilities and limits.
11 Privacy and Data Governance
AI must respect the right to privacy.
An AI's use of data must be actively governed: supervised, and modified if necessary.
Must follow the EU's Guidance Note on Ethics and Data Protection.
Data usage should be auditable by humans, e.g. via Model Cards, Datasheets for Datasets, XAI (see the sketch below).
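To make "auditable by humans" concrete, here is a minimal sketch of a machine-readable model card, loosely following the fields proposed in the Model Cards literature. The class and field names are illustrative assumptions, not a standard schema.

```python
# Minimal, illustrative sketch of a machine-readable model card.
# Field names are assumptions, loosely based on "Model Cards for
# Model Reporting" (Mitchell et al., 2019), not a standard schema.
from dataclasses import dataclass, field, asdict
import json


@dataclass
class ModelCard:
    model_name: str
    intended_use: str                  # what the model is for
    out_of_scope_uses: list[str]       # uses the developers advise against
    training_data: str                 # description / pointer to a datasheet
    evaluation_data: str
    metrics: dict[str, float]          # headline evaluation results
    ethical_considerations: list[str]  # known risks, bias concerns
    caveats: list[str] = field(default_factory=list)

    def to_json(self) -> str:
        """Serialise so the card can be published alongside the model."""
        return json.dumps(asdict(self), indent=2)


card = ModelCard(
    model_name="loan-risk-v2",
    intended_use="Rank loan applications for human review",
    out_of_scope_uses=["Fully automated loan rejection"],
    training_data="2015-2020 applications; see datasheet DS-014",
    evaluation_data="Held-out 2021 applications",
    metrics={"auc": 0.87, "demographic_parity_gap": 0.04},
    ethical_considerations=["Historic approval data may encode bias"],
)
print(card.to_json())
```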
12 Privacy and Data Governance
If relevant, an application must explain:
- How an individual can withdraw consent.
- How it will ensure lawfulness, fairness, and transparency of data processing.
- The technical and organisational measures that safeguard the rights of data subjects.
- Strong security measures to prevent data leakage, including logging of data access (see the sketch below).
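A minimal sketch of what "logging data access" can look like in code, assuming a simple in-memory store. The wrapper and field names are illustrative; a production system would need tamper-evident, centrally collected logs.

```python
# Sketch: every read of a personal-data record is written to an audit
# log. Names are illustrative assumptions, not a real library API.
import logging
from datetime import datetime, timezone

audit_log = logging.getLogger("data_access_audit")
logging.basicConfig(level=logging.INFO, format="%(message)s")


class AuditedStore:
    """Wraps a data store so every read of a record is logged."""

    def __init__(self, store: dict):
        self._store = store

    def get(self, record_id: str, accessed_by: str, purpose: str):
        audit_log.info(
            "ts=%s record=%s user=%s purpose=%s",
            datetime.now(timezone.utc).isoformat(),
            record_id, accessed_by, purpose,
        )
        return self._store.get(record_id)


store = AuditedStore({"subject-42": {"name": "A. Person"}})
store.get("subject-42", accessed_by="analyst-7", purpose="model debugging")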
13 Fairness
People should be given equal rights and opportunities, and should not be advantaged or disadvantaged undeservedly.
Avoid algorithmic bias in input data, modelling, and algorithm design. Algorithmic bias is a specific concern which needs specific mitigation techniques.
Applications should specify:
- How they will ensure that data about people is representative and reflects their diversity.
- How errors in input data will be avoided.
- How the algorithmic design will be checked to ensure it does not target certain groups of people unfairly (see the sketch below).
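One way to check that a system does not treat groups unfairly is to compare favourable-decision rates across groups. The sketch below computes the demographic parity gap; this is only one of many fairness metrics, and the right choice depends on the application.

```python
# Sketch of one common fairness check: the demographic parity gap,
# i.e. the difference in positive-decision rates between groups.
from collections import defaultdict


def positive_rate_by_group(decisions: list[tuple[str, int]]) -> dict[str, float]:
    """decisions: (group_label, decision) pairs, decision 1 = favourable."""
    totals, positives = defaultdict(int), defaultdict(int)
    for group, decision in decisions:
        totals[group] += 1
        positives[group] += decision
    return {g: positives[g] / totals[g] for g in totals}


rates = positive_rate_by_group(
    [("A", 1), ("A", 1), ("A", 0), ("B", 1), ("B", 0), ("B", 0)]
)
gap = max(rates.values()) - min(rates.values())
print(rates, "parity gap:", round(gap, 2))  # flag if the gap is large
```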
14 Fairness
Universal accessibility: AI systems should be designed so that they are usable by end-users with different abilities. Applications should explain how this will be achieved, e.g. by compliance with relevant accessibility guidelines.
Fair impacts: provide evidence that possible negative social impacts on certain groups have been considered, and state what steps will be taken to ensure the system does not discriminate or cause others to discriminate.
15 Individual, Social & Environmental Well-being
AI systems should not harm individual, social, or environmental well-being, and should consider the welfare of all stakeholders.
Document efforts to consider environmental impact and, if needed, steps to mitigate negative impacts.
Consider and mitigate harm to online communications, e.g. fake news, filter bubbles, echo chambers, political manipulation.
Do not reduce safety in the workplace, e.g. IEEE P1228 (Standard for Software Safety).
16 Transparency
Humans must be able to understand how the AI functions and how its decisions are arrived at.
Transparency enables human agency, data governance, accountability, oversight, and human governance.
It applies to all elements of the AI: the data, the functionality, and the processes by which it is designed, deployed, and operated.
Best practice: XAI (eXplainable AI); see the sketch below.
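As a small taste of XAI, the sketch below uses scikit-learn's permutation feature importance, a model-agnostic way to see which inputs a model relies on. This is a first step toward explainability rather than a complete XAI solution; the dataset and model are placeholders.

```python
# Sketch: permutation feature importance as a simple, model-agnostic
# explainability technique. Dataset and model are placeholders.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

X, y = load_breast_cancer(return_X_y=True, as_frame=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

# Shuffle each feature and measure how much performance drops:
# a large drop means the model relies heavily on that feature.
result = permutation_importance(model, X_test, y_test,
                                n_repeats=10, random_state=0)
ranked = sorted(zip(X.columns, result.importances_mean),
                key=lambda t: t[1], reverse=True)
for name, score in ranked[:5]:
    print(f"{name}: {score:.3f}")
```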
17 Transparency
Enable traceability of the AI system during its entire lifecycle.
It must be clear to end-users that they are interacting with an AI.
Communicate openly about:
- The AI's purpose.
- The AI's capabilities and limitations.
- The benefits and risks of the AI.
- The decisions made by the AI.
- Governance processes (especially important if accountability is required).
Keep records of decisions about ethics made during construction.
18 Accountability & Oversight
Accountability: the people who build or operate an AI are responsible for its actions and effects.
Developers must be able to explain how and why a system acts the way it does, unless compelling reasons are provided to show that oversight is not required.
Applications must explain how undesirable effects will be detected, stopped, and prevented from reoccurring.
A formal ethical risk assessment may be required.
19 Accountability & Oversight
Oversight: humans can understand, supervise, and control the design and operation of the AI.
Document procedures for risk assessment and mitigation: how will people be able to report concerns, and how will those reports be evaluated and actioned?
All AI systems should be auditable by independent third parties, including the development process by which the system was created: not just what was done, but why.
20 Implementing Ethics by Design for AI
21 Building an Ethics by Design for AI regime
1. Determine values (e.g. freedom, transparency).
2. Derive ethical requisites: characteristics a technology (or product) must possess.
   - Proscriptive norms: the technology must not infringe values.
   - Prescriptive norms (?): the technology must support/enhance values.
   - The issue: is something unethical if it does not actively try to improve things?
3. Treat ethical requisites just like other desired characteristics (e.g. reliability, efficiency); they must therefore be considered throughout the design process.
4. Determine ethical guidelines: specific tasks required during development to create ethical products.
22 Define your technology
SIENNA option: Anticipatory Ethics for Emerging Technologies.
[Diagram: a technology (aims and broad features, e.g. concepts, methods) comprises products (devices and processes), each of which has multiple applications (how people use it)]
23 Building an Ethics by Design for AI regime
Values: given (e.g. SIENNA AI) or internally developed (e.g. IEEE P7000).
Requisites: "The technology (or product, or use) must (or must not) ...".
Guidelines: "What tasks must I perform to create such a technology (or product, or usage)?"
These apply at each level: technology, product, application.
24 Development Models
You need a model of the development process, so you can determine what needs to be done and when it needs to be done.
SIENNA provides a Generic Development Model.
25 Generic Development Model
Requisites are mapped onto development as concrete tasks:
1. Specification of objectives: what will it do?
2. Specification of requirements: what do we need to build it?
3. High-level design: high-level architecture.
4. Data collection and preparation.
5. Detailed design and development: actual construction of the system.
6. Testing and evaluation.
26 Requisite Tasks: Format
For each phase, for each value, list specific tasks (see the sketch below):
PHASE XYZ
  REQUISITE 1: Task 1, Task 2
  REQUISITE 2: Task 1, Task 2
PHASE ABC
  REQUISITE 1: Task 1, Task 2
  REQUISITE 2: Task 1, Task 2
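Because the format above is just nested structure, it can be kept machine-readable. Below is a sketch using a plain Python dict; the phase, requisite, and task names are taken from other slides and are purely illustrative.

```python
# Sketch of the phase -> requisite -> tasks structure as a plain
# data literal. Entries are illustrative examples from the slides.
requisite_tasks = {
    "Specification of objectives": {
        "Privacy and Data Governance": [
            "Check objectives against privacy requirements",
            "Assess whether planned data sources are fair and appropriate",
        ],
    },
    "Testing and evaluation": {
        "Transparency": [
            "Test whether users understand they are interacting with an AI",
        ],
    },
}

for phase, requisites in requisite_tasks.items():
    print(phase)
    for requisite, tasks in requisites.items():
        for task in tasks:
            print(f"  [{requisite}] {task}")
```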
27 Requisites and Tasks
28 Sample Requisites
1. Respect for Human Agency (autonomy, dignity, freedom): AI should give humans as much control as possible and should not try to create dependency.
2. Privacy and Data Governance: GDPR; assume all data is biased.
3. Fairness: non-discrimination.
4. Individual, Social & Environmental Well-being: don't promote fake news.
5. Transparency: humans must be able to understand how the AI makes its decisions; best practice is XAI (eXplainable AI).
6. Accountability and Oversight: the people who build/operate the AI are responsible for its actions/effects, and must be able to change the system if problems emerge.
29 E.G.: Design Phase
Privacy & Data Governance
- Check whether the objectives are compatible with the privacy and data governance requirements. Non-adherence to any of these would result in serious non-compliance.
- Assess whether the plans for what data will be used are fair and appropriate. If the proposed data source is unfair or inappropriate, either change the data source or modify the objective so that that data source is not needed.
30 E.G.: Specification of requirements
Build EbD into the project:
- Create an EbD implementation plan which specifies how EbD will be embedded in the development process and who is responsible for actions and monitoring.
- Design an ethical compliance architecture: tools, organisational structures, and procedures.
31 E.G.: High-level Design
Fairness
- Undertake an accessibility assessment of the interface. Ensure that the system meets relevant accessibility standards.
Transparency
- Design mechanisms by which the AI system will record its own decisions in a way humans can review (see the sketch below).
Individual, Social and Environmental Well-being
- Demonstrate how the system will be constructed in an environmentally friendly way.
- Evaluate whether the system could cause physical harm to people, animals, or property, and design features to minimise this.
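A minimal sketch of such a decision-recording mechanism: each automated decision is appended, with its inputs and a human-readable reason, to a log file that reviewers can inspect. The class and field names are illustrative assumptions.

```python
# Sketch: append-only log of AI decisions for later human review.
# Names and fields are illustrative, not a real library API.
import json
from datetime import datetime, timezone


class DecisionLog:
    """Append-only log of AI decisions for later human review."""

    def __init__(self, path: str):
        self.path = path

    def record(self, inputs: dict, decision: str, reason: str) -> None:
        entry = {
            "timestamp": datetime.now(timezone.utc).isoformat(),
            "inputs": inputs,
            "decision": decision,
            "reason": reason,
        }
        with open(self.path, "a", encoding="utf-8") as f:
            f.write(json.dumps(entry) + "\n")  # one JSON object per line


log = DecisionLog("decisions.jsonl")
log.record(
    inputs={"income": 42000, "loan_amount": 10000},
    decision="refer_to_human",
    reason="Applicant near approval threshold; model confidence low",
)
```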
32 E.G.: Data collection and preparation
Fairness
- Make sure data from one demographic group is not used to represent another unless it is justifiably representative (see the sketch below).
Transparency
- Ensure that you can explain how personal data is used, shared, and stored.
Accountability & Oversight
- Make sure you have clearly established what kind of sample you need and what kind of sample you have taken, and that you can explain what it will be used for.
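A minimal sketch of a representativeness check, comparing group proportions in a collected sample against the target population. The group labels and the 5% tolerance are assumed purely for illustration.

```python
# Sketch: compare sample group shares against population shares to
# spot under-represented groups. Threshold and labels are assumptions.

def representativeness_gaps(sample_counts: dict[str, int],
                            population_shares: dict[str, float]) -> dict[str, float]:
    """Return, per group, sample share minus expected population share."""
    total = sum(sample_counts.values())
    return {g: sample_counts.get(g, 0) / total - share
            for g, share in population_shares.items()}


gaps = representativeness_gaps(
    sample_counts={"group_a": 700, "group_b": 250, "group_c": 50},
    population_shares={"group_a": 0.60, "group_b": 0.30, "group_c": 0.10},
)
for group, gap in gaps.items():
    flag = "UNDER-REPRESENTED" if gap < -0.05 else "ok"  # 5% tolerance, assumed
    print(f"{group}: gap={gap:+.2f} {flag}")
```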
33 E.G.: Detailed design and development
Fairness
- Check for algorithmic bias, particularly computational bias, once data starts to be processed (see the sketch below).
Privacy & Data Governance
- Make sure that roles and responsibilities for governance and management of data assets are clear, and that all relevant staff understand them.
Transparency
- Make sure the code is actively explained and documented within the software program (as appropriate to the language(s) and methodology) and in appropriate ancillary documentation.
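One common computational-bias check on model outputs is the disparate impact ratio, sketched below. The 0.8 "four-fifths" threshold comes from US employment guidance and is only one possible yardstick; the group labels and rates are placeholders.

```python
# Sketch: disparate impact ratio, i.e. the selection rate of the
# worst-off group divided by that of the best-off group.

def disparate_impact_ratio(selection_rates: dict[str, float]) -> float:
    """selection_rates: per-group fraction receiving the favourable outcome."""
    return min(selection_rates.values()) / max(selection_rates.values())


rates = {"group_a": 0.50, "group_b": 0.35}  # e.g. measured on a test set
ratio = disparate_impact_ratio(rates)
print(f"disparate impact ratio: {ratio:.2f}")
if ratio < 0.8:  # assumed "four-fifths" threshold; tune to your context
    print("WARNING: possible disparate impact; investigate before release")
```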
34 E.G.: Testing and evaluation
Transparency
- Test whether users understand that they are interacting with an AI.
Accountability & Oversight
- Develop and deliver training to users to help develop accountability practices, including teaching about the legal framework applicable to the system.
- Whenever possible, ensure practical processes exist for third parties (e.g. suppliers, consumers, distributors/vendors) or workers to report potential vulnerabilities, risks, or biases in the system, and ensure mechanisms exist to examine and action such reports.
35 Conclusion
36 Generalisation
Two levels of generalisation:
- The approach itself (values to requisites to tasks).
- The applicability of the Generic Development Model.
The approach can be applied to any technology. Some values will be specific to the technology; some are probably universal, e.g. support for basic human rights.
The generic model works for all software-driven technologies and probably for engineering technologies: it works for those we have examined (e.g. biochemical engineering, mechatronics) and seems to fit emerging technologies well.
To be determined:
- Very mature technologies (e.g. cars): incremental innovation may not suit this model.
- Accidental technologies (i.e. not intentionally designed, used in unexpected ways, dual use).
37 The complete picture
1. Values: adopt existing ones or create your own.
2. Requisites: define an ethical product.
3. Model: select an appropriate development model.
4. Guidelines: place specific tasks into the relevant phases of your development model, specifying who should do what and when they should do it.
E.G.:
- VALUE: Safety
- REQUISITE: The product should be safe to use
- GUIDELINE: Once the initial design is completed, test for compliance with ISO 13849-1 (Safety of Machinery)