
Unlocking the Tradeoff: Correctness vs. Completeness in Argumentative AI
An overview of the balance between correctness and completeness in argumentative explainable AI: what makes an explanation good, what makes an explanation trustworthy, and potential problems and solutions in this evolving field.
Presentation Transcript
On the Tradeoff Between Correctness and Completeness in Argumentative Explainable AI
Nico Potyka, Xiang Yin and Francesca Toni
1st International Workshop on Argumentation for eXplainable AI (ArgXAI)
What is a good Explanation?
A plausible explanation is not necessarily a correct explanation.
(Figure: a black-box classifier that outputs "School Bus" for one input and "Ostrich" for another.)
What makes an Explanation Trustworthy?
Faithfulness: the explanation explains what the model actually does (which, unfortunately, is not necessarily what we want it to do).
Instantiation for argumentative explanations: reinforcement [1,2]
- Supporter: should increase the confidence in the class
- Attacker: should decrease the confidence in the class
(Figure: a black-box classifier predicts "School Bus"; the arguments "Tires", "Leg" and "Yellow" act as supporters or attackers of that prediction.)
[1] Amgoud L, Ben-Naim J. Weighted Bipolar Argumentation Graphs: Axioms and Semantics. In: Lang J, editor. International Joint Conference on Artificial Intelligence, IJCAI. ijcai.org; 2018. pp. 5194-5198.
[2] Rago A, Baroni P, Toni F. Explaining Causal Models with Argumentation: the Case of Bi-variate Reinforcement. In: KR; 2022.
Potential Problems
Faithfulness/reinforcement can be seen as a correctness property.
Problem: correctness can be satisfied in trivial ways.
(Figure: two explanations for the prediction "School Bus", built from the arguments "Tires", "Leaves" and "Yellow".)
Correct Explanation BAGs for Boolean Data
(BAG: bipolar argumentation graph)
Setting
Focus on tabular data and, for now, Boolean features.
(Figure: a loan-approval example with the attributes Age, Income and Education and the class Approve, binarized into the Boolean features Young, Middle-Aged, Senior, Inc_low, Inc_med, Inc_high and University degree.)
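To make this encoding concrete, here is a minimal sketch of one such Boolean record (the feature names come from the slide; the concrete values and the dictionary representation are illustrative assumptions):

```python
# One loan application, encoded with the Boolean features named on the slide.
# Each original attribute (Age, Income, Education) becomes a group of indicators.
applicant = {
    "Young": 0, "Middle_Aged": 1, "Senior": 0,   # Age
    "Inc_low": 0, "Inc_med": 1, "Inc_high": 0,   # Income
    "University_degree": 1,                      # Education
}
# A black-box classifier is assumed to map such a Boolean record
# to a confidence score for the class "Approve".
```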
Naive Classification Arguments
Create one argument per feature and per class.
(Figure: one argument each for Young, Middle-Aged, Senior, Inc_low, Inc_med, Inc_high and University degree, plus one for the class Approve.)
Classification BAG
The classification BAG is formed by adding support and attack edges.
(Figure: the feature arguments connected to the class argument Approve by support and attack edges.)
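A minimal sketch of how the arguments from the previous slide and these edges could be assembled, assuming the black box exposes a confidence score per class. The edge rule (sign of the average change in confidence when a feature is switched on, estimated over a sample of inputs) is an illustrative heuristic, not necessarily the construction used in the talk:

```python
from itertools import product

def build_classification_bag(black_box, features, classes, samples):
    """Sketch: one argument per Boolean feature and per class, plus signed edges.

    Assumptions: black_box(x) returns a dict mapping each class to a confidence
    in [0, 1]; samples is a list of Boolean feature dicts. An edge (feature, class)
    is labelled '+' (support) if switching the feature on raises the confidence
    on average, '-' (attack) if it lowers it; no edge if the average effect is 0.
    """
    edges = {}
    for f, c in product(features, classes):
        deltas = [black_box({**x, f: 1})[c] - black_box({**x, f: 0})[c]
                  for x in samples]
        avg = sum(deltas) / len(deltas)
        if avg > 0:
            edges[(f, c)] = "+"   # support edge
        elif avg < 0:
            edges[(f, c)] = "-"   # attack edge
    return {"arguments": features + classes, "edges": edges}
```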
Reinforcement
The classification BAG satisfies reinforcement if
- every supporter increases the confidence in the class, and
- every attacker decreases the confidence in the class.
(Figure: the classification BAG shown next to the black-box classifier it explains.)
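A sketch of an empirical reinforcement check under the same assumptions as above: every support edge must never decrease, and every attack edge must never increase, the black box's confidence when the corresponding feature is switched on. Checking on a finite sample only gives evidence, not a guarantee:

```python
def satisfies_reinforcement(bag, black_box, samples):
    """Return True if no edge of the BAG violates reinforcement on the samples."""
    for (feature, cls), sign in bag["edges"].items():
        for x in samples:
            delta = (black_box({**x, feature: 1})[cls]
                     - black_box({**x, feature: 0})[cls])
            # A supporter that lowers, or an attacker that raises, the confidence
            # violates reinforcement.
            if (sign == "+" and delta < 0) or (sign == "-" and delta > 0):
                return False
    return True
```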
Correctness Alone is not Meaningful
The empty graph is correct/faithful/satisfies reinforcement.
(Figure: the argument nodes Young, Middle-Aged, Senior, Inc_low, Inc_med, Inc_high, University degree and Approve without any edges.)
Even when adding all edges that respect reinforcement, the graph may miss many important relationships.
What does Completeness mean?
Defining completeness is difficult; defining (and eliminating) sources of incompleteness is easier:
- Joint effects of features
- Non-monotonic effects of ordinal features (supporting in some regions, attacking in others)
- Combinations of the two
Source of Incompleteness: Joint Effects
Example: if the income is high, the other features are irrelevant.
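The following toy black box (an assumption for illustration, including the concrete confidence values) exhibits exactly this joint effect:

```python
def approve_confidence(x):
    """Toy classifier: once Inc_high holds, the remaining features are irrelevant."""
    if x["Inc_high"]:
        return 0.95
    # Otherwise approval hinges on education and on not being young.
    return 0.75 if x["University_degree"] and not x["Young"] else 0.15
```

A single unconditional support edge for University degree cannot express that its effect vanishes whenever Inc_high holds.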
Tackling Joint Effects: Joint Relations
(Figure: a joint relation linking "University degree" and "Inc_high" together to "Approve" with a positive ("+") edge.)
Tackling Joint Effects: Joint Arguments
(Figure: a joint argument "not Inc_high and University degree" supporting "Approve".)
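One simple way to represent such a joint argument in code (the literal encoding is an assumption; the example conjunction is taken from the slide):

```python
# A joint argument is a conjunction of literals over the Boolean features,
# given as (feature, required_value) pairs.
joint_argument = (("Inc_high", 0), ("University_degree", 1))  # "not Inc_high and University degree"

def applies(joint_arg, x):
    """True iff every literal of the joint argument holds in the input x."""
    return all(x[f] == v for f, v in joint_arg)

# This joint argument supports "Approve": whenever it applies,
# the classifier's confidence in Approve should be higher.
```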
Potential Limitation
Additional structure may improve overall correctness/completeness, but can result in less comprehensible explanations.
(Figure: a scale between "Human Comprehensibility" and "Formal Guarantees", with "Completeness" and "Correctness" placed along it.)
Source of Incompleteness: Non-Monotonicity
(Figure: a plot of P(Anomaly) against the feature's deviation from the mean.)
Tackling Non-Monotonicity: Binning
Refining arguments (binning) can again help to improve the correctness/completeness tradeoff.
(Figure: the bins BMI < 10, 10 ≤ BMI ≤ 30 and BMI > 30 as separate arguments related to "Weak Immune System".)
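A sketch of the binning step for the BMI example (the bin boundaries 10 and 30 are taken from the slide; the indicator names are assumptions):

```python
def bin_bmi(bmi):
    """Refine the ordinal feature BMI into three Boolean bin indicators."""
    return {
        "BMI_lt_10": bmi < 10,
        "BMI_10_to_30": 10 <= bmi <= 30,
        "BMI_gt_30": bmi > 30,
    }

# Each bin becomes its own argument, so the two extreme bins can support
# "Weak Immune System" while the middle bin attacks it; a single argument
# for BMI could not express this non-monotonic effect.
```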
Conclusions
Focusing on correctness (reinforcement/faithfulness) alone does not seem sufficient for explainable AI.
More structure can help improve the correctness/completeness tradeoff, but too much detail may result in incomprehensibility.
(Figure: the same scale between "Human Comprehensibility" and "Formal Guarantees" as before.)
Some Interesting Questions
- Can we characterize which classifiers can be correctly and completely explained by which argumentative explanation models? Conjecture: the naive explanation BAG can satisfy both correctness and completeness if and only if the classifier is strongly monotonic.
- For which classifiers and argumentative explanation models can we quantify correctness/completeness (efficiently)?
- Which building blocks are most comprehensible to humans, and which are most effective in improving correctness/completeness?