Reasoning System


A Reasoning System is an intelligent system with a reasoning ability, i.e., one that applies a reasoning algorithm to solve a reasoning task.

  • Saul Amarel. (?). “On Representations of Problems of Reasoning About Actions.”
    • QUOTE: Many problems of practical importance are problems of reasoning about actions. In these problems, a course of action has to be found that satisfies a number of specified conditions. Everyday examples include planning an airplane trip, organizing a dinner party, etc. ... A problem of reasoning about actions is given in terms of an initial situation, a terminal situation, a set of feasible actions, and a set of constraints... The task of a problem solver is to find the best sequence of permissible actions that can transform the initial situation into the terminal situation.
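The formulation Amarel describes — an initial situation, a terminal situation, feasible actions, and constraints — can be sketched as a search over situations. The following is a minimal, illustrative Python sketch (the toy "fly between cities" domain and all names are hypothetical, not from Amarel's paper); breadth-first search stands in for "find the best sequence" under the assumption that all actions cost the same:

```python
from collections import deque

def plan(initial, goal, actions):
    """Breadth-first search for a shortest sequence of feasible actions
    that transforms the initial situation into the terminal situation.

    actions: list of (name, apply_fn, precondition) triples, where
    precondition(state) encodes the feasibility constraints."""
    frontier = deque([(initial, [])])
    seen = {initial}
    while frontier:
        state, path = frontier.popleft()
        if state == goal:
            return path
        for name, apply_fn, precondition in actions:
            if precondition(state):
                nxt = apply_fn(state)
                if nxt not in seen:
                    seen.add(nxt)
                    frontier.append((nxt, path + [name]))
    return None  # no permissible sequence reaches the goal

# Toy domain: a traveler moving between cities A, B, C.
actions = [
    ("fly_A_B", lambda s: "B", lambda s: s == "A"),
    ("fly_B_C", lambda s: "C", lambda s: s == "B"),
    ("fly_A_C", lambda s: "C", lambda s: s == "A"),
]

plan("A", "C", actions)  # → ["fly_A_C"] (the direct flight is shortest)
```

Richer cost models (ticket price, time) would replace breadth-first search with a best-first search, but the problem statement — states, feasible actions, goal test — stays the same.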


  • (Barr & Feigenbaum, 1981) ⇒ Avron Barr, and Edward A. Feigenbaum. (1981). “The Handbook of Artificial Intelligence, Volume 1.” William Kaufmann.
    • QUOTE: When the system is required to do something that it has not been explicitly told how to do, it must reason - it must figure out what it needs to know from what it already knows. For instance, suppose an information retrieval program 'knows' only that Robins are birds and that All birds have wings. Keep in mind that for a system to know these facts means only that it contains data structures and procedures that would allow it to answer the questions: Are Robins birds? ⇒ Yes. Do all birds have wings? ⇒ Yes. If we then ask it, Do robins have wings? the program must reason to answer the query. In problems of any complexity, the ability to do this becomes increasingly important. The system must be able to deduce and verify a multitude of new facts beyond those it has been told explicitly.
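The deduction in the quote — from "Robins are birds" and "All birds have wings" to "Robins have wings" — can be sketched as a forward-chaining loop over stored facts. This is an illustrative Python sketch, not Barr and Feigenbaum's system; the fact encoding and rule are hypothetical:

```python
def forward_chain(facts, rules):
    """Repeatedly apply rules to the fact base until no new facts
    can be derived (a fixed point), then return all known facts."""
    facts = set(facts)
    while True:
        new = set()
        for rule in rules:
            for derived in rule(facts):
                if derived not in facts:
                    new.add(derived)
        if not new:
            return facts
        facts |= new

# Explicitly stored facts: robins are birds; birds have wings.
facts = {("isa", "robin", "bird"), ("has_wings", "bird")}

def wings_rule(facts):
    # If X is a Y, and Y has wings, then X has wings.
    for fact in facts:
        if len(fact) == 3 and fact[0] == "isa":
            _, x, y = fact
            if ("has_wings", y) in facts:
                yield ("has_wings", x)

derived = forward_chain(facts, [wings_rule])
("has_wings", "robin") in derived  # True — deduced, never stored explicitly
```

The point of the quote holds in the sketch: the system was never told that robins have wings; it answers the query by combining the two facts it does contain.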