Machine Learning (ML) Model Monitoring Task


A Machine Learning (ML) Model Monitoring Task is a software monitoring task for productionized ML models.



References

2022

  • https://sites.google.com/view/model-monitoring-tutorial
    • Presentation: https://docs.google.com/presentation/d/1iH4i1j9TpZNRlYRbtGUd_qhVGLnpVZnhuYOZFZBa9Sg/
    • QUOTE: Artificial Intelligence (AI) is increasingly playing an integral role in determining our day-to-day experiences. Increasingly, the applications of AI are no longer limited to search and recommendation systems, such as web search and movie and product recommendations, but AI is also being used in decisions and processes that are critical for individuals, businesses, and society. With AI based solutions in high-stakes domains such as hiring, lending, criminal justice, healthcare, and education, the resulting personal and professional implications of AI are far-reaching. Consequently, it becomes critical to ensure that these models are making accurate predictions, are robust to shifts in the data, are not relying on spurious features, and are not unduly discriminating against minority groups. To this end, several approaches spanning various areas such as explainability, fairness, and robustness have been proposed in recent literature, and many papers and tutorials on these topics have been presented in recent computer science conferences. However, there is relatively less attention on the need for monitoring machine learning (ML) models once they are deployed and the associated research challenges.

      In this tutorial, we first motivate the need for ML model monitoring, as part of a broader AI model governance and responsible AI framework, from societal, legal, customer/end-user, and model developer perspectives, and provide a roadmap for thinking about model monitoring in practice. We then present findings and insights on model monitoring desiderata based on interviews with various ML practitioners spanning domains such as financial services, healthcare, hiring, online retail, computational advertising, and conversational assistants. We then describe the technical considerations and challenges associated with realizing the above desiderata in practice. We provide an overview of techniques/tools for model monitoring. Then, we focus on the real-world application of model monitoring methods and tools, present practical challenges/guidelines for using such techniques effectively, and lessons learned from deploying model monitoring tools for several web-scale AI/ML applications. We present case studies across different companies, spanning application domains such as financial services, healthcare, hiring, conversational assistants, online retail, computational advertising, search and recommendation systems, and fraud detection. We hope that our tutorial will inform both researchers and practitioners, stimulate further research on model monitoring, and pave the way for building more reliable ML models and monitoring tools in the future.
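    • The tutorial's emphasis on robustness to shifts in the data is commonly operationalized by comparing live feature distributions against a training-time reference. Below is a minimal illustrative sketch in Python (not from the tutorial itself); the 0.05 significance threshold and the choice of scipy's two-sample Kolmogorov–Smirnov test are assumptions made for illustration:

        import numpy as np
        from scipy.stats import ks_2samp

        def detect_feature_drift(reference: np.ndarray, live: np.ndarray,
                                 p_threshold: float = 0.05) -> bool:
            """Flag drift when the live sample of a numeric feature differs
            significantly from the training-time reference distribution."""
            statistic, p_value = ks_2samp(reference, live)
            return p_value < p_threshold  # reject "same distribution" => drift

        # Hypothetical usage: training-time feature values vs. recent live traffic
        rng = np.random.default_rng(0)
        reference = rng.normal(loc=0.0, scale=1.0, size=5000)
        live = rng.normal(loc=0.4, scale=1.0, size=1000)  # simulated shift
        print(detect_feature_drift(reference, live))      # True: drift detected

      In practice such a check would run per feature on a schedule and feed an alerting pipeline, rather than return a single boolean.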

2018

  • (Vartak & Madden, 2018) ⇒ Manasi Vartak, and Samuel Madden. (2018). “MODELDB: Opportunities and Challenges in Managing Machine Learning Models.” In: IEEE Data Eng. Bull., 41(4).
    • ABSTRACT: Machine learning applications have become ubiquitous in a variety of domains. Powering each of these ML applications are one or more machine learning models that are used to make key decisions or compute key quantities. The life-cycle of an ML model starts with data processing, going on to feature engineering, model experimentation, deployment, and maintenance. We call the process of tracking a model across all phases of its life-cycle as model management. In this paper, we discuss the current need for model management and describe MODELDB, the first open-source model management system developed at MIT. We also discuss the changing landscape and growing challenges and opportunities in managing models.
    • QUOTE: ... To understand the requirements of a model management system, we begin with a brief overview of the lifecycle of a machine learning model. We divide the ML life-cycle into five phases, namely: (1) Data Preparation - Obtaining the training and test data to develop a model; (2) Feature Engineering - Identifying or creating the appropriate descriptors from the input data (i.e., features) to be used by the model; (3) Model Training and Experimentation - Experimenting with different models on the training and test data and choosing the best; (4) Deployment - Deploying the chosen model in a live system; and (5) Maintenance - Monitoring the live model performance, updating the model as needed, and eventually retiring the model. While we describe these phases as occurring in a linear sequence, the empirical nature of building an ML model causes multiple phases to be revisited frequently. We define a model management system as one that follows a model throughout the five phases of its life-cycle and captures relevant metadata at each step to enable model tracking, reproducibility, collaboration, and governance. ...
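    • As a hypothetical illustration of the metadata-capture idea described in the quote above (this is not MODELDB's actual API; all names here are invented for the sketch), a model management system might record one event per life-cycle phase:

        import json, time
        from dataclasses import dataclass, field, asdict

        @dataclass
        class ModelRecord:
            """Hypothetical per-model record capturing metadata at each
            life-cycle phase (data preparation, feature engineering, training,
            deployment, maintenance) to support tracking and reproducibility."""
            model_id: str
            events: list = field(default_factory=list)

            def log(self, phase: str, **metadata) -> None:
                self.events.append({"phase": phase, "ts": time.time(), **metadata})

        # Hypothetical usage across the five phases
        record = ModelRecord(model_id="churn-v3")
        record.log("data_preparation", dataset="users_2018q1.parquet")
        record.log("feature_engineering", features=["tenure", "plan", "usage_30d"])
        record.log("training", algorithm="GBT", test_auc=0.87)
        record.log("deployment", endpoint="/predict/churn")
        print(json.dumps(asdict(record), indent=2))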
    • QUOTE: ... Model Monitoring. While model testing takes place before a model is deployed, model monitoring takes place once the model is deployed in a live system. Model monitoring today is largely limited to monitoring system-level metrics such as the number of prediction requests, latency, and compute usage. However, we are already seeing the need for data-level monitoring of models (e.g., as noted in [25]) to ensure that the offline and live data fed to a model is similar. We expect model management systems to encompass model monitoring modules to ensure continued model health, triggering alerts and actions as appropriate. ...
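    • A minimal Python sketch of such a monitoring module (not part of MODELDB; the metric names, bounds check, and alert hook are assumptions for illustration) combines system-level counters with a simple data-level check that live feature values stay within training-time ranges:

        import time
        from collections import Counter, deque

        class ModelMonitor:
            """Illustrative monitor wrapping a deployed model: tracks
            system-level metrics (request count, latency) and a data-level
            check (share of feature values outside training-time ranges)."""

            def __init__(self, model, feature_bounds, alert_rate=0.1):
                self.model = model
                self.feature_bounds = feature_bounds  # {name: (lo, hi)} from training data
                self.alert_rate = alert_rate
                self.metrics = Counter()
                self.latencies = deque(maxlen=1000)   # rolling latency samples

            def predict(self, features: dict):
                start = time.perf_counter()
                out_of_range = sum(
                    not (lo <= features.get(name, lo) <= hi)
                    for name, (lo, hi) in self.feature_bounds.items()
                )
                self.metrics["requests"] += 1
                self.metrics["out_of_range"] += out_of_range
                result = self.model(features)
                self.latencies.append(time.perf_counter() - start)
                if self.metrics["out_of_range"] / self.metrics["requests"] > self.alert_rate:
                    self.alert("live features drifting outside training-time ranges")
                return result

            def alert(self, message: str) -> None:
                print(f"[ALERT] {message}")  # stand-in for a real alerting hook

        # Hypothetical usage with a dummy model
        monitor = ModelMonitor(model=lambda f: f["x"] > 0.5,
                               feature_bounds={"x": (0.0, 1.0)})
        print(monitor.predict({"x": 0.7}))  # in-range request
        print(monitor.predict({"x": 4.2}))  # out-of-range; triggers an alert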