Automated Reasoning Task

From GM-RKB

An Automated Reasoning Task is a reasoning task that is also an automated information processing task.



References

2016

  • (Wikipedia, 2016) ⇒ “Algorithmic Inference." https://www.wikiwand.com/en/Algorithmic_inference Retrieved 2016-07-10.
    • QUOTE: Algorithmic inference gathers new developments in the statistical inference methods made feasible by the powerful computing devices widely available to any data analyst. Cornerstones in this field are computational learning theory, granular computing, bioinformatics, and, long ago, structural probability (Fraser, 1966).

      The main focus is on the algorithms which compute statistics rooting the study of a random phenomenon, along with the amount of data they must feed on to produce reliable results. This shifts the interest of mathematicians from the study of the distribution laws to the functional properties of the statistics, and the interest of computer scientists from the algorithms for processing data to the information they process.
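The data-requirement question raised in the quote above — how much data a statistic must feed on to produce reliable results — can be made concrete with a minimal sketch. Assuming i.i.d. observations bounded in [0, 1], Hoeffding's inequality gives a sufficient sample size for estimating a mean to within a tolerance eps with confidence 1 − delta (this is an illustrative bound, not a method from the cited article):

```python
import math

def hoeffding_sample_size(eps: float, delta: float) -> int:
    """Smallest n such that the sample mean of n i.i.d. observations in
    [0, 1] lies within eps of the true mean with probability >= 1 - delta,
    via Hoeffding: P(|mean - mu| >= eps) <= 2 * exp(-2 * n * eps**2)."""
    return math.ceil(math.log(2.0 / delta) / (2.0 * eps ** 2))

# e.g. estimating a proportion to within 0.05 at 95% confidence:
n = hoeffding_sample_size(eps=0.05, delta=0.05)  # -> 738 observations
```

Tightening either the tolerance or the confidence level increases the required sample size, which is the trade-off the quote describes.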

  • (Vardi, 2016a) ⇒ Moshe Vardi. (2016). “The Automated-Reasoning Revolution: From Theory to Practice and Back."
    • QUOTE: For the past 40 years computer scientists generally believed that NP-complete problems are intractable. In particular, Boolean satisfiability (SAT), as a paradigmatic automated-reasoning problem, has been considered to be intractable. Over the past 20 years, however, there has been a quiet, but dramatic, revolution, and very large SAT instances are now being solved routinely as part of software and hardware design. In this talk I will review this amazing development and show how automated reasoning is now an industrial reality. I will then describe how we can leverage SAT solving to accomplish other automated-reasoning tasks. Counting the number of satisfying truth assignments of a given Boolean formula or sampling such assignments uniformly at random are fundamental computational problems in computer science with applications in software testing, software synthesis, machine learning, personalized learning, and more. While the theory of these problems has been thoroughly investigated since the 1980s, approximation algorithms developed by theoreticians do not scale up to industrial-sized instances. Algorithms used by the industry offer better scalability, but give up certain correctness guarantees to achieve scalability. We describe a novel approach, based on universal hashing and Satisfiability Modulo Theory, which scales to formulas with hundreds of thousands of variables without giving up correctness guarantees.
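The two problems Vardi names — model counting (#SAT) and uniform sampling of satisfying assignments — can be stated precisely with a naive enumeration sketch. This brute-force version is exponential in the number of variables; the hashing-plus-SMT approach in the talk exists precisely to avoid it, so the code below only defines the problems, it does not reproduce that method:

```python
import itertools
import random

# A CNF formula is a list of clauses; each clause is a list of nonzero
# ints in DIMACS style: literal v means variable v is true, -v means false.

def satisfying_assignments(cnf, n_vars):
    """Enumerate all satisfying truth assignments by brute force
    (2**n_vars candidates -- illustrative only, not scalable)."""
    for bits in itertools.product([False, True], repeat=n_vars):
        if all(any(bits[abs(lit) - 1] == (lit > 0) for lit in clause)
               for clause in cnf):
            yield bits

def model_count(cnf, n_vars):
    """#SAT: the number of satisfying assignments."""
    return sum(1 for _ in satisfying_assignments(cnf, n_vars))

def uniform_model(cnf, n_vars, rng=random):
    """Uniform sampling: a satisfying assignment chosen uniformly
    at random, or None if the formula is unsatisfiable."""
    models = list(satisfying_assignments(cnf, n_vars))
    return rng.choice(models) if models else None

# (x1 OR x2) AND (NOT x1 OR x2): exactly the 2 assignments with x2 = True
cnf = [[1, 2], [-1, 2]]
count = model_count(cnf, 2)      # -> 2
sample = uniform_model(cnf, 2)   # -> either (False, True) or (True, True)
```

Practical counters and samplers replace the enumeration with a SAT/SMT oracle queried under random XOR (universal-hashing) constraints, which is what lets them scale while retaining guarantees.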

2005

  • (Pedrycz, 2005) ⇒ Pedrycz, W. (2005). “Algorithmic Inference in Machine Learning" — B. Apolloni, D. Mal. IEEE Transactions on Neural Networks, 16(4), 1007. DOI: 10.1109/TNN.2005.853003
    • QUOTE: The next chapter focuses on mechanisms of algorithmic inferences realized in the statistical setting. The reader finds here ideas of sufficient statistics, confidence intervals, adequacy of sample size, point estimators, and issues of entropy. The first two chapters naturally form the introductory part of the book as dealing with the foundations of the subject. The second part is about the applications of machine learning and consists of three comprehensive chapters: on computational learning theory, regression theory, and subsymbolic learning. This combination very much reflects the main directions of the key developments encountered in machine learning. Chapter 3 is devoted to computational learning and includes its core concepts. Learning Boolean functions along with their well known probably approximately correct (PAC) principle occupies a prominent position in the theory and the same becomes true in this chapter. A lot of attention is paid to computing the hypotheses, confidence intervals for learning errors, high level abstraction and learnability. The chapter on regression theory (Chapter 4) is quite extensive and includes aspects of linear and nonlinear regression, PAC learning of regression functions, point estimators and confidence intervals.
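The PAC concepts the review highlights — sample-size adequacy and confidence intervals for learning errors — have a standard closed form for a finite hypothesis class. The bound below is the classic result for a consistent learner (a textbook illustration of the PAC principle the chapter discusses, not code from the reviewed book):

```python
import math

def pac_sample_size(hypothesis_count: int, eps: float, delta: float) -> int:
    """Classic PAC bound for a finite hypothesis class H and a consistent
    learner: m >= (1/eps) * (ln|H| + ln(1/delta)) labeled examples suffice
    to guarantee true error <= eps with probability >= 1 - delta."""
    return math.ceil((math.log(hypothesis_count) + math.log(1.0 / delta)) / eps)

# Boolean conjunctions over 10 attributes: each attribute appears
# positively, negatively, or not at all, so |H| = 3**10.
m = pac_sample_size(3 ** 10, eps=0.1, delta=0.05)  # -> 140 examples
```

The bound grows only logarithmically in |H| but linearly in 1/eps, which is why learnability hinges more on accuracy demands than on hypothesis-space size.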

1992

  • (Lincoln & Mitchell, 1992) ⇒ Lincoln, P., & Mitchell, J. C. (1992, February). “Algorithmic Aspects of Type Inference with Subtypes." In: Proceedings of the 19th ACM SIGPLAN-SIGACT Symposium on Principles of Programming Languages (pp. 293-304). ACM. DOI: 10.1145/143165.143227
    • QUOTE: We study the complexity of type inference for programming languages with subtypes. There are three language variations that affect the problem: (i) basic functions may have polymorphic or more limited types, (ii) the subtype hierarchy may be fixed or vary as a result of subtype declarations within a program, and (iii) the subtype hierarchy may be an arbitrary partial order or may have a more restricted form, such as a tree or lattice. The naive algorithm for inferring a most general polymorphic type, under variable subtype hypotheses, requires deterministic exponential time. If we fix the subtype ordering, this upper bound grows to nondeterministic exponential time. We show that it is NP-hard to decide whether a lambda term has a type with respect to a fixed subtype hierarchy (involving only atomic type names). This lower bound applies to monomorphic or polymorphic languages. We give PSPACE upper bounds for deciding polymorphic typability if the subtype hierarchy has a lattice structure or the subtype hierarchy varies arbitrarily. We also give a polynomial time algorithm for the limited case where there are no function constants and the type hierarchy is either variable or any fixed lattice.
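The “fixed subtype hierarchy involving only atomic type names" setting in the abstract can be pictured with a minimal sketch: given the declared direct-subtype edges, the full subtype relation is their reflexive-transitive closure. This only illustrates the subtype ordering itself — the paper's hardness results concern type inference over such orderings, which this code does not attempt (type names here are hypothetical examples):

```python
def subtype_closure(edges):
    """Given direct-subtype declarations as (sub, super) pairs over atomic
    type names, return the full subtype relation <: as a set of pairs,
    closed under reflexivity and transitivity."""
    types = {name for edge in edges for name in edge}
    closure = {(t, t) for t in types} | set(edges)
    changed = True
    while changed:  # naive fixpoint iteration over the transitivity rule
        changed = False
        for (a, b) in list(closure):
            for (c, d) in list(closure):
                if b == c and (a, d) not in closure:
                    closure.add((a, d))
                    changed = True
    return closure

# Hypothetical fixed hierarchy: int <: num <: any, and str <: any.
hierarchy = subtype_closure([("int", "num"), ("num", "any"), ("str", "any")])
# ("int", "any") holds by transitivity; ("str", "num") does not hold.
```

This hierarchy is a partial order but not a lattice (int and str have no greatest common subtype below any's other children), which is exactly the kind of structural distinction the paper's complexity bounds turn on.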