# Supervised Binary Classification Task

A supervised binary classification task is a data-driven binary classification task that is also a supervised classification task (i.e., the two-class classifier is induced from labeled training examples).

**AKA:** Supervised Two-Label Classification.

**Context:**
- It can be solved by a Supervised Binary Classification System (that implements a supervised binary classification algorithm).
- It can range from being a Ranking Binary Supervised Classification Task to being a Probabilistic Binary Supervised Classification Task.
- It can range from being a Fully-Supervised Binary Supervised Classification Task to being a Semi-Supervised Binary Supervised Classification Task.
- It can range from being a Univariate Binary Supervised Classification Task to being a Multivariate Binary Supervised Classification Task.
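As a concrete illustration of the task definition above, the following sketch fits a minimal supervised binary classification system (here a hypothetical one-feature decision stump, chosen purely for brevity) on labeled training pairs and then predicts a binary label for unseen inputs. All function names and data are illustrative, not from any cited source.

```python
# Minimal sketch of a supervised binary classification system:
# learn a threshold rule from labeled (input, label) pairs,
# then assign a binary label to new inputs.

def fit_stump(xs, ys):
    """Pick the threshold on x that minimizes training errors."""
    best_t, best_err = None, len(ys) + 1
    for t in xs:
        preds = [1 if x >= t else 0 for x in xs]
        err = sum(p != y for p, y in zip(preds, ys))
        if err < best_err:
            best_t, best_err = t, err
    return best_t

def predict(t, x):
    """Binary decision: class 1 iff x is at or above the threshold."""
    return 1 if x >= t else 0

# Labeled training set: inputs paired with known binary class labels.
train_x = [0.5, 1.0, 1.5, 3.0, 3.5, 4.0]
train_y = [0,   0,   0,   1,   1,   1]

threshold = fit_stump(train_x, train_y)
print(predict(threshold, 3.2), predict(threshold, 1.2))  # → 1 0
```

A ranking variant of the same task would return the raw score `x - threshold` instead of a hard 0/1 label; a probabilistic variant would return a calibrated probability, as in the logistic regression reference below.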

**Example(s):**

**Counter-Example(s):**

**See:** Predictive Modeling Task, Finite-State Sequence Tagging Model.

## References

### 2012

- (Shalizi, 2012) ⇒ Cosma Shalizi. (2012). “Chapter 12 - Logistic Regression.” In: Carnegie Mellon University, 36-402, Undergraduate Advanced Data Analysis.
- QUOTE: So far, we either looked at estimating the conditional expectations of continuous variables (as in regression), or at estimating distributions. There are many situations where however we are interested in input-output relationships, as in regression, but the output variable is discrete rather than continuous. In particular there are many situations where we have binary outcomes (it snows in Pittsburgh on a given day, or it doesn’t; this squirrel carries plague, or it doesn’t; this loan will be paid back, or it won’t; this person will get heart disease in the next five years, or they won’t). In addition to the binary outcome, we have some input variables, which may or may not be continuous. How could we model and analyze such data?
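One standard answer to the question the quote ends with is logistic regression, which models P(Y=1|x) = 1 / (1 + exp(-(b0 + b1·x))). The sketch below fits this model by batch gradient ascent on the log-likelihood; the data and all parameter choices (learning rate, step count) are made up for illustration, not taken from Shalizi's chapter.

```python
import math

def sigmoid(z):
    """Logistic function mapping a real score to a probability in (0, 1)."""
    return 1.0 / (1.0 + math.exp(-z))

def fit_logistic(xs, ys, lr=0.1, steps=2000):
    """Fit P(Y=1|x) = sigmoid(b0 + b1*x) by gradient ascent
    on the Bernoulli log-likelihood."""
    b0, b1 = 0.0, 0.0
    for _ in range(steps):
        g0 = sum(y - sigmoid(b0 + b1 * x) for x, y in zip(xs, ys))
        g1 = sum((y - sigmoid(b0 + b1 * x)) * x for x, y in zip(xs, ys))
        b0 += lr * g0 / len(xs)
        b1 += lr * g1 / len(xs)
    return b0, b1

# Binary outcome (e.g. "it snows on a given day": 1, or not: 0)
# against a single continuous input variable.
xs = [-2.0, -1.5, -1.0, 1.0, 1.5, 2.0]
ys = [0, 0, 0, 1, 1, 1]
b0, b1 = fit_logistic(xs, ys)
p = sigmoid(b0 + b1 * 1.5)  # estimated P(Y=1 | x = 1.5)
```

The fitted model outputs a probability rather than a hard label, so thresholding `p` at 0.5 yields the binary decision, while `p` itself supports the probabilistic variant of the task.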

### 2010

- (Wikipedia, 2010) ⇒ http://en.wikipedia.org/wiki/Binomial_regression
- QUOTE: In Statistics, **binomial regression** is a technique in which the response (often referred to as *Y*) is the result of a series of Bernoulli trials, or a series of one of two possible disjoint outcomes (traditionally denoted "success" or 1, and "failure" or 0). In binomial regression, the probability of a success is related to explanatory variables: the corresponding concept in ordinary regression is to relate the mean value of the unobserved response to explanatory variables. A binomial regression model is a special case of a generalised linear model.
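The relation the quote describes can be written compactly: the success probability of each Bernoulli response is tied to the explanatory variables through a link function (the notation below is a standard GLM sketch, not taken verbatim from the cited page).

```latex
% Binomial regression as a generalised linear model:
% each response Y_i is a Bernoulli trial whose success
% probability p_i is linked to the covariate vector x_i.
\[
  Y_i \sim \mathrm{Bernoulli}(p_i), \qquad
  g(p_i) = x_i^{\top}\beta,
\]
% where g is the link function; the logit link
% g(p) = \log\frac{p}{1-p} recovers logistic regression.
```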

### 2006

- (Caruana & Niculescu-Mizil, 2006) ⇒ Rich Caruana, and Alexandru Niculescu-Mizil. (2006). “An Empirical Comparison of Supervised Learning Algorithms.” In: Proceedings of the 23rd International Conference on Machine Learning. ISBN:1-59593-383-2 doi:10.1145/1143844.1143865
- QUOTE: This paper presents results of a large-scale empirical comparison of ten supervised learning algorithms using eight performance criteria. We evaluate the performance of SVMs, neural nets, logistic regression, naive bayes, memory-based learning, random forests, decision trees, bagged trees, boosted trees, and boosted stumps on eleven binary classification problems using a variety of performance metrics: accuracy, F-score, Lift, ROC Area, average precision, precision/recall break-even point, squared error, and cross-entropy.