Accuracy Metric


An Accuracy Metric is a measurement metric that quantifies how close a set of measurements is to a specific (true) value.



References

2022

  • (Wikipedia, 2022) ⇒ https://en.wikipedia.org/wiki/Accuracy_and_precision Retrieved:2022-1-22.
    • In a set of measurements, accuracy is closeness of the measurements to a specific value, while precision is the closeness of the measurements to each other.

      Accuracy has two definitions:

      1. More commonly, it is a description of systematic errors, a measure of statistical bias; low accuracy causes a difference between a result and a "true" value. ISO calls this trueness.
      2. Alternatively, ISO defines accuracy as describing a combination of both types of observational error above (random and systematic), so high accuracy requires both high precision and high trueness.
    • Precision is a description of random errors, a measure of statistical variability.

      In simpler terms, given a set of data points from repeated measurements of the same quantity, the set can be said to be accurate if their average is close to the true value of the quantity being measured, while the set can be said to be precise if the values are close to each other. In the first, more common definition of "accuracy" above, the two concepts are independent of each other, so a particular set of data can be said to be either accurate, or precise, or both, or neither.
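The distinction quoted above can be made concrete with a minimal sketch (assuming NumPy is available; the true value and measurement values below are illustrative, not from the source): bias measures accuracy in the "trueness" sense, while the sample spread measures precision.

```python
import numpy as np

# Repeated measurements of the same quantity, with an assumed true value of 10.0.
true_value = 10.0
measurements = np.array([10.1, 9.9, 10.0, 10.2, 9.8])

# Accuracy in the first (ISO "trueness") sense: closeness of the average to the
# true value, i.e. low systematic error / statistical bias.
bias = np.mean(measurements) - true_value

# Precision: closeness of the measurements to each other,
# i.e. low random error / statistical variability.
spread = np.std(measurements, ddof=1)

print(f"bias (small => accurate/true):     {bias:+.3f}")
print(f"sample std dev (small => precise): {spread:.3f}")
```

Under the first definition the two quantities are independent: a set of measurements can have small bias with large spread (accurate but not precise), or small spread with large bias (precise but not accurate).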



2010

  • (Ge et al., 2010) ⇒ Mouzhi Ge, Carla Delgado-Battenfeld, and Dietmar Jannach. (2010). “Beyond Accuracy: Evaluating Recommender Systems by Coverage and Serendipity.” In: Proceedings of the fourth ACM conference on Recommender systems (RecSys-2010).
    • QUOTE: ... Over the last decade, different recommender systems were developed and used in a variety of domains [1]. The primary goal of recommenders is to provide personalized recommendations so as to improve users’ satisfaction. As more and more recommendation techniques are proposed, researchers and practitioners are facing the problem of how to estimate the value of the recommendations. In previous evaluations, most approaches focused only on the accuracy of the generated predictions based, e.g., on the Mean Absolute Error. However, a few recent works argue that accuracy is not the only metric for evaluating recommender systems and that there are other important aspects we need to focus on in future evaluations [4, 8]. The point that the recommender community should move beyond accuracy metrics to evaluate recommenders was for example made in [8]. There, informal arguments were presented supporting that accurate recommendations may sometimes not be the most useful ones to the users, and that evaluation metrics should (1) take into account other factors which impact recommendation quality such as serendipity and (2) be applied to recommendation lists and not on individual items.
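      As a rough illustration of the contrast drawn in this quote, the sketch below computes the Mean Absolute Error (an accuracy-oriented metric) alongside a simple catalog-coverage measure (one common "beyond accuracy" metric). The ratings, item identifiers, and the particular coverage definition are illustrative assumptions, not taken from the paper.

```python
import numpy as np

# Illustrative held-out ratings and a recommender's predictions (hypothetical data).
actual = np.array([4.0, 3.5, 5.0, 2.0, 4.5])
predicted = np.array([3.8, 3.0, 4.6, 2.5, 4.0])

# Mean Absolute Error: the accuracy-oriented metric mentioned in the quote.
mae = np.mean(np.abs(predicted - actual))

# A simple "beyond accuracy" metric: catalog coverage, here read as the fraction
# of all catalog items that appear in at least one user's recommendation list.
catalog = {"i1", "i2", "i3", "i4", "i5", "i6"}
recommendation_lists = [["i1", "i2"], ["i2", "i3"], ["i1", "i4"]]
recommended = {item for rec_list in recommendation_lists for item in rec_list}
coverage = len(recommended) / len(catalog)

print(f"MAE: {mae:.3f}")
print(f"catalog coverage: {coverage:.2%}")
```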