# Machine Learning Algorithm

(Redirected from machine learning technique)

A Machine Learning Algorithm is an automated learning algorithm (one that can be executed by a machine to improve task performance with experience/data).

**AKA:** Automated Learning Technique.

**Context:**

- It can be implemented by a Machine Learning System (to solve machine learning tasks).
- It can range from being a Supervised ML Algorithm to being an Unsupervised ML Algorithm.
- It can range from being an Eager Learning Algorithm (based on all available examples) to being a Lazy Learning Algorithm (based only on 'relevant' examples).
- It can range from being a Model-based Learning Algorithm to being an Instance-based Learning Algorithm.
- It can range from being a Single Model Algorithm to being an Ensemble Learning Algorithm.
- It can range from being a Feature-based Learning Algorithm to being a Kernel-based Learning Algorithm (that maps the record into a kernel space).
- It can range from being a Tuple-based Learning Algorithm (with independent records) to being a Relational Learning Algorithm (for relational data).
- It can range from being a Traditional ML Algorithm to being a State-of-the-Art ML Algorithm.
- It can range from being a Data-rich Machine Learning Algorithm to being a Knowledge-rich Machine Learning Algorithm.
- It can range from being an Online Learning Algorithm to being a Batch Learning Algorithm.
- It can range from being a Knowledge-based Learning Algorithm (such as an inductive learning algorithm) to being a Statistically-based Learning Algorithm (such as a statistical modeling algorithm).
- It can range from being a Symbolic Learning Algorithm (for symbolic representations) to being a Black-Box Learning Algorithm (where the learned model is not inspectable).
- It can be related to Computational Statistics Algorithm and Mathematical Optimization Algorithm.
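Several of the distinctions above (eager vs. lazy, model-based vs. instance-based) can be illustrated with a minimal pure-Python sketch on hypothetical toy data: a mean-centroid learner stands in for the eager/model-based side, and a 1-nearest-neighbour rule for the lazy/instance-based side.

```python
import math

# Hypothetical toy data: 2-D points with class labels.
train = [((1.0, 1.0), "a"), ((1.5, 1.2), "a"),
         ((5.0, 5.0), "b"), ((5.5, 4.8), "b")]

# Eager / model-based: process all available examples up front,
# compressing them into a model (one centroid per class).
def train_centroids(examples):
    sums, counts = {}, {}
    for (x1, x2), y in examples:
        s1, s2 = sums.get(y, (0.0, 0.0))
        sums[y] = (s1 + x1, s2 + x2)
        counts[y] = counts.get(y, 0) + 1
    return {y: (s1 / counts[y], s2 / counts[y])
            for y, (s1, s2) in sums.items()}

def predict_centroid(model, x):
    # Predict the class whose centroid is nearest to x.
    return min(model, key=lambda y: math.dist(x, model[y]))

# Lazy / instance-based: no training step at all; the stored
# examples are consulted only when a query arrives (1-NN).
def predict_1nn(examples, x):
    return min(examples, key=lambda ex: math.dist(x, ex[0]))[1]

model = train_centroids(train)
print(predict_centroid(model, (1.2, 1.1)))  # -> a
print(predict_1nn(train, (5.2, 5.1)))       # -> b
```

The two learners give the same answers on this easy data; the distinction is *when* the work happens (at training time vs. at query time) and *what* is retained (a compact model vs. the raw examples).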

**Example(s):**

**Counter-Example(s):**

**See:** Pattern Recognition Algorithm, Learning Method.

## References

- http://en.wikipedia.org/wiki/Outline_of_machine_learning#Machine_learning_methods
- http://en.wikipedia.org/wiki/List_of_machine_learning_algorithms
- http://en.wikipedia.org/wiki/Category:Machine_learning_algorithms
- http://cran.r-project.org/web/views/MachineLearning.html

### 2018

- https://rodneybrooks.com/my-dated-predictions/
- QUOTE: … When anyone says Machine Learning these days (and indeed since the introduction of the term in 1959 by Arthur Samuel) they mean using examples in some way to induce a representation of some concept that can later be used to select a label or action, based on an input and that saved learned material. Kalman filtering uses multiple data points from a particular process to get a good estimate of what the data is really saying. It does not save anything for later to be used for a similar problem at some future time. So, no, it is not Machine Learning, ...

### 2009

- (Lafferty & Wasserman, 2009) ⇒ John D. Lafferty, and Larry Wasserman. (2009). “Statistical Machine Learning - Course: 10-702." Spring 2009, Carnegie Mellon University.
  - Statistical Machine Learning is a second graduate level course in machine learning, assuming students have taken Machine Learning (10-701) and Intermediate Statistics (36-705). The term "statistical" in the title reflects the emphasis on statistical analysis and methodology, which is the predominant approach in **modern machine learning**.

- (Freund, 2009) ⇒ Yoav Freund (2009). “Statistical Machine Learning (Boosting)." Course: UC San Diego, CSE254, Winter 2009. http://seed.ucsd.edu/mediawiki/index.php/CSE254

### 2008

- (Sarawagi, 2008) ⇒ Sunita Sarawagi. (2008). “Information Extraction.” In: Foundations and Trends in Databases, 1(3).
  - … We described Conditional Random Fields, a state-of-the-art method for entity recognition that imposes a joint distribution over the sequence of entity labels assigned to a given sequence of tokens. Although the details of training and inference on **statistical models** are somewhat involved for someone outside the field of statistical machine learning, the models are easy to deploy and customize due to their fairly nonrestrictive feature-based framework.


### 2007

- http://www.stat.berkeley.edu/~statlearning/
  - **Statistical machine learning** merges statistics with the computational sciences --- computer science, systems science and optimization. Much of the agenda in statistical machine learning is driven by applied problems in science and technology, where data streams are increasingly large-scale, dynamical and heterogeneous, and where mathematical and algorithmic creativity are required to bring statistical methodology to bear. Fields such as bioinformatics, artificial intelligence, signal processing, communications, networking, information management, finance, game theory and control theory are all being heavily influenced by developments in **statistical machine learning**.
  - The field of **statistical machine learning** also poses some of the most challenging theoretical problems in modern statistics, chief among them being the general problem of understanding the link between inference and computation.

### 2006

- (Mitchell, 2006) ⇒ Tom M. Mitchell (2006). “The Discipline of Machine Learning." Machine Learning Department technical report CMU-ML-06-108, Carnegie Mellon University.
- "Machine Learning research asks “How can we build **computer systems** that automatically improve with experience, and what are the fundamental laws that govern all learning processes?” ...
- "Can machine learning theories and **algorithms** help explain human learning? ...
- "What is the relationship between different **learning algorithms**, and which should be used when? Many different learning algorithms have been proposed and evaluated experimentally in different application domains. One theme of research is to develop a theoretical understanding of the relationships among these algorithms, and of when it is appropriate to use each. For example, two **algorithms** for supervised learning, **Logistic Regression** and the Naive Bayes classifier, behave differently on many data sets, but can be proved to be equivalent when applied to certain types of data sets. ...
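Mitchell's example of an equivalence between Logistic Regression and Naive Bayes can be checked numerically in a special case: for one-dimensional Gaussian class-conditionals with a shared variance, the Naive Bayes posterior is exactly a logistic (sigmoid) function of the input, with weights available in closed form. A minimal sketch, using hypothetical parameter values:

```python
import math

# Hypothetical 1-D Gaussian class-conditionals with shared variance.
mu0, mu1, sigma2 = 0.0, 2.0, 1.5   # class means and common variance
p0, p1 = 0.4, 0.6                  # class priors

def gaussian(x, mu, var):
    return math.exp(-(x - mu) ** 2 / (2 * var)) / math.sqrt(2 * math.pi * var)

def nb_posterior(x):
    # Naive Bayes posterior P(y=1 | x) via Bayes' rule.
    num = p1 * gaussian(x, mu1, sigma2)
    den = num + p0 * gaussian(x, mu0, sigma2)
    return num / den

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

# Logistic-regression form: the log-odds are linear in x, with
# weights derived from the Gaussian parameters in closed form.
w = (mu1 - mu0) / sigma2
b = (mu0 ** 2 - mu1 ** 2) / (2 * sigma2) + math.log(p1 / p0)

# The two parameterizations agree to machine precision.
for x in (-1.0, 0.5, 3.0):
    assert abs(nb_posterior(x) - sigmoid(w * x + b)) < 1e-9
```

Trained discriminatively, logistic regression would fit `w` and `b` directly from data rather than via class-conditional Gaussians, which is why the two learners can still behave differently on data sets that violate the Gaussian assumptions.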


### 1998

- (Dumais et al., 1998) ⇒ Susan Dumais, John Platt, David Heckerman, and Mehran Sahami. (1998). “Inductive Learning Algorithms and Representations for Text Categorization.” In: Proceedings of the Seventh International Conference on Information and Knowledge Management (CIKM 1998). doi:10.1145/288627.288651