
Understanding the Impact of Example-Based Explanations in Machine Learning
A discussion of example-based explanations in machine learning interfaces and their role in making complex models interpretable, how these explanations relate to bounded rationality and accountability in human-machine systems, and what they imply for user trust and system transparency in our own projects.
The Effects of Example-Based Explanations in a Machine Learning Interface
Authors: Carrie J. Cai, Jonas Jongejan, and Jess Holbrook
Discussant: Steve Wang
Relevance of Example-Based Explanations Today
- The increasing complexity of ML algorithms and models makes them harder and harder to explain.
- Example-based explanations leverage human intelligence for machine interpretability.
- They are a promising direction for future research in ML understanding and contribute to explainable AI. A minimal sketch of the technique follows this list.
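To make the idea concrete, here is a minimal sketch of an example-based explanation in the spirit of the paper's comparative explanations: alongside the model's prediction, the user is shown the training examples most similar to their input. The dataset, the model choice, and the explain_with_examples helper are illustrative assumptions for this sketch, not the paper's actual implementation.

```python
# Minimal sketch: explain a prediction by retrieving the k most similar
# training examples (comparative-style explanation). Dataset, model, and
# helper names here are hypothetical placeholders.
from sklearn.datasets import load_digits
from sklearn.linear_model import LogisticRegression
from sklearn.neighbors import NearestNeighbors

X, y = load_digits(return_X_y=True)
model = LogisticRegression(max_iter=2000).fit(X, y)

# Index the training set so similar examples can be looked up later.
index = NearestNeighbors(n_neighbors=3).fit(X)

def explain_with_examples(x):
    """Return the model's prediction plus the nearest training examples."""
    pred = model.predict(x.reshape(1, -1))[0]
    _, idx = index.kneighbors(x.reshape(1, -1))
    # Each (index, label) pair is a training example shown to the user.
    return pred, [(int(i), int(y[i])) for i in idx[0]]

pred, examples = explain_with_examples(X[0])
print(f"Predicted class: {pred}")
print("Similar training examples (index, label):", examples)
```

The design choice here is that the user infers why the model decided as it did by comparing their input against familiar examples, rather than reading a direct rationale from the system.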
Relevance to Simon's View
- Simon's view: bounded rationality holds that humans make decisions with limited information and limited cognitive capacity, and that complex problems are easier to solve when broken down into smaller parts.
- Example-based explanations fit Simon's concept of bounded rationality, since they provide users with manageable, relevant information to aid the decision-making process.
- Fully explaining a system can likewise be treated as a complex problem divided into parts, such as normative and comparative explanations.
Relevance to Lucy's View
- Lucy Suchman's view: accountability is essential in human-machine systems, so these systems should provide clear reasoning behind their actions. She advocates for systems that collaborate with humans, leveraging the strengths of both human intelligence and machine capabilities.
- Example-based explanations are an approach to improving algorithmic transparency: they offer users examples through which to understand machine decisions, which aligns with Suchman's view.
- These explanations also create additional human-computer interaction, in which the system gives the user examples from which to infer how it makes its decisions.
What Can Be Applied to Our Projects?
- Building user trust and understanding of our system: the paper emphasizes the importance of explaining ML algorithms to users, because inexplicable errors can degrade the user experience. We could use example-based explanations in our projects to improve user understanding, or explore other methods of explaining our models to users.
- Balancing cognitive workload: the paper also recognizes that these explanations increase the user's cognitive workload, since users must draw inferences from the examples, and it notes a trade-off between explanatory power and the human effort required. In our projects, we should account for this cognitive burden, as requiring excessive interaction and thinking can harm the user experience.