# Parameter Fitting Loss Function

A Parameter Fitting Loss Function is a cost function/valuation function that can be minimized in a parameter fitting task.

**Context:**
- It can range from being a Convex Loss Function to being a Non-Convex Loss Function.
- It can be a Learning Cost Function, which can range from being a Classification Loss Function to being a Ranking Loss Function to being a Regression Loss Function.
- It can be selected by a Loss Function Selection Task.
- …

**Example(s):**
- a Squared-Error Loss Function.
- a Hinge Loss Function.
- a Cross-Entropy Loss Function.
- …

**Counter-Example(s):**
- a Test Statistic.
- a Reward Function (in Regret (Decision Theory)).
- a Cost Function or a Profit Function.
- a Utility Function.
- a Fitness Function.

**See:** Expected Risk, Mathematical Optimization, Parameter Estimation, Regularization Term, Decision Theory, Economic Cost.

## References

### 2021

- (Wikipedia, 2021) ⇒ https://en.wikipedia.org/wiki/loss_function Retrieved:2021-3-8.
- In mathematical optimization and decision theory, a **loss function** or **cost function** is a function that maps an event or values of one or more variables onto a real number intuitively representing some "cost" associated with the event. An optimization problem seeks to minimize a loss function. An **objective function** is either a loss function or its negative (in specific domains, variously called a reward function, a profit function, a utility function, a fitness function, etc.), in which case it is to be maximized.

In statistics, typically a loss function is used for parameter estimation, and the event in question is some function of the difference between estimated and true values for an instance of data. The concept, as old as Laplace, was reintroduced in statistics by Abraham Wald in the middle of the 20th century. In the context of economics, for example, this is usually economic cost or regret. In classification, it is the penalty for an incorrect classification of an example. In actuarial science, it is used in an insurance context to model benefits paid over premiums, particularly since the works of Harald Cramér in the 1920s. In optimal control, the loss is the penalty for failing to achieve a desired value. In financial risk management, the function is mapped to a monetary loss.

In classical statistics (both frequentist and Bayesian), a loss function is typically treated as something of a background mathematical convention.
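
The use of a loss function for parameter estimation described above can be sketched as follows. This is a minimal illustrative example (the function name and step sizes are assumptions, not from the source): gradient descent on the mean squared error between data and a single parameter, whose minimizer is the sample mean.

```python
import numpy as np

def fit_parameter(data, lr=0.1, steps=200):
    """Estimate a single parameter theta by minimizing the
    mean squared-error loss: mean((x_i - theta)^2)."""
    theta = 0.0
    data = np.asarray(data, dtype=float)
    for _ in range(steps):
        grad = -2.0 * np.mean(data - theta)  # gradient of the loss w.r.t. theta
        theta -= lr * grad                   # gradient-descent update
    return theta
```

For example, `fit_parameter([1.0, 2.0, 3.0, 4.0])` converges to the sample mean, 2.5, since that value minimizes the squared-error loss.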
