Good Explanation

From GM-RKB

A Good Explanation is an explanation that is both a sufficient explanation and a comprehensible explanation.



References

2023

  • (Wikipedia, 2023) ⇒ https://en.wikipedia.org/wiki/The_Beginning_of_Infinity Retrieved:2023-8-4.
    • The Beginning of Infinity: Explanations that Transform the World is a popular science book by the physicist David Deutsch first published in 2011.
    • Deutsch views the Enlightenment of the 18th century as near the beginning of an infinite sequence of purposeful knowledge creation. Knowledge here consists of information with good explanatory function that has proven resistant to falsification. Any real process is physically possible to perform provided the knowledge to do so has been acquired. The Enlightenment set up the conditions for knowledge creation that disrupted the static societies preceding it. These conditions are the valuing of creativity and of free, open debate, which exposes ideas to criticism and reveals the good explanatory ideas that, being grounded in reality, naturally resist falsification. Deutsch points to earlier moments in history, such as Renaissance Florence and Plato's Academy in Golden Age Athens, where this process almost got underway before succumbing to those static societies' resistance to change.

      The source of intelligence is more complicated than brute computational power, Deutsch conjectures, and he points to the lack of progress in Turing test AI programs in the six decades since the Turing test was first proposed. What matters for knowledge creation, Deutsch says, is creativity. New ideas that provide good explanations for phenomena require outside-the-box thinking, as the unknown is not easily predicted from past experience. ... Deutsch speculates on the development of human culture from a genetic basis through to a memetic emergence. This emergence led to the creation of static societies in which innovation occurs, but usually at a rate too slow for individuals to notice within their lifetimes. Only once knowledge of how to purposefully create new knowledge through good explanations had been acquired did the beginning of infinity take off, during the Enlightenment.

2019

  • (Confalonieri et al., 2019) ⇒ Roberto Confalonieri, Tarek R. Besold, Tillman Weyde, Kathleen Creel, Tania Lombrozo, Shane T. Mueller, and Patrick Shafto. (2019). “What Makes a Good Explanation? Cognitive Dimensions of Explaining Intelligent Machines.” In: CogSci.
    • QUOTE: What Makes a Good Explanation?
    • Starting out from the cognition of explanations, this symposium will foster scientific discourse about what functions an explanation needs to fulfill and the criteria that define its quality. Some of the aspects to be addressed are: objective and subjective value of explanations; dimensions of explanations (complete vs. compact, abstract vs. concrete, reduced vs. simplified, ...); anchoring to known concepts; counterfactual explanations and actionability; personalisation; legal requirements; grounding in personal and social experience and intuition.
    • A panel of recognised scholars and researchers will bring insights and expertise from different points of view, including psychology, cognitive science, computer science, and philosophy, and will foster knowledge exchange and discussion of the multiple facets of explanation:
      • Kathleen Creel will talk about ‘Understanding Machine Science: XAI and Scientific Explanations’, drawing on the literature on scientific explanation in philosophy and cognitive science, and arguing that for scientific researchers, good explanations require more access to the functional structure of the intelligent system than is needed by other human users.
      • Tania Lombrozo will talk about ‘Explanatory Virtue & Vices’, considering the multiple functions and malfunctions of human explanatory cognition and their implications for XAI. In particular, she will suggest that we need to differentiate between different possible goals for explainability, and that doing so highlights why human explanatory cognition should be a crucial constraint on design.
      • Shane Mueller will talk about ‘Ten Fallacies of Explainable Artificial Intelligence’, reviewing assumptions made until now about which properties lead to good explanations, and describing how each constitutes a fallacy that might backfire if used for developing XAI systems. He will then describe a framework, developed for the DARPA XAI Program, for measuring the impact of explanations; it incorporates cognitive science theory on mental models, sensemaking, context, trust, and self-explanation, and can provide a principled approach for developing explainable systems.
      • Patrick Shafto will talk about ‘XAI via Bayesian Teaching’, raising questions about the use of modern machine learning algorithms in societally important processes, and theoretical questions about whether and how the opaqueness of these algorithms can be ameliorated, in the framework of Bayesian teaching.
      • Roberto Confalonieri and Tillman Weyde will talk about ‘An Ontology-based Approach to Explaining Artificial Neural Networks’, addressing the challenges of extracting symbolic representations from neural networks, exploiting domain knowledge, and measuring understandability of decision trees with users both objectively and subjectively.
