Patient Engagement Measure


A Patient Engagement Measure is a person measure that quantifies the extent to which a patient performs some measurable medical behavior, such as using a digital health intervention or adhering to a prescribed treatment.



References

2020

  • (Ingersgaard et al., 2020) ⇒ Ingersgaard MV, Helms Andersen T, Norgaard O, Grabowski D, and Olesen K. (2020). “Reasons for Nonadherence to Statins - A Systematic Review of Reviews.” Patient Preference and Adherence, 14: 675-691.

2019

  • (Baumel et al., 2019) ⇒ Baumel A, Muench F, Edan S, and Kane JM. (2019). “Objective User Engagement with Mental Health Apps: Systematic Search and Panel-based Usage Analysis.” Journal of Medical Internet Research, 21(9): e14567.

2018

  • (Short et al., 2018) ⇒ Camille E. Short, Ann DeSmet, Catherine Woods, Susan L. Williams, Carol Maher, Anouk Middelweerd, Andre Matthias Müller et al. (2018). “Measuring Engagement in EHealth and MHealth Behavior Change Interventions: Viewpoint of Methodologies.” Journal of medical Internet research 20, no. 11
    • QUOTE: ... However, to test the hypotheses generated by the conceptual modules, we need to know how to measure engagement in a valid and reliable way. The aim of this viewpoint is to provide an overview of engagement measurement options that can be employed in eHealth and mHealth behavior change intervention evaluations, discuss methodological considerations, and provide direction for future research. To identify measures, we used snowball sampling, starting from systematic reviews of engagement research as well as those utilized in studies known to the authors. A wide range of methods to measure engagement were identified, including qualitative measures, self-report questionnaires, ecological momentary assessments, system usage data, sensor data, social media data, and psychophysiological measures. ...
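The measurement options listed in the quote can be grouped by their data source. The following minimal sketch is not from Short et al. (2018); it simply enumerates the cited modalities and tags two hypothetical measures with them, and every name in it is an illustrative assumption.

```python
from dataclasses import dataclass
from enum import Enum, auto


class EngagementDataSource(Enum):
    """Measurement modalities listed by Short et al. (2018)."""
    QUALITATIVE = auto()                      # e.g., interviews
    SELF_REPORT_QUESTIONNAIRE = auto()
    ECOLOGICAL_MOMENTARY_ASSESSMENT = auto()
    SYSTEM_USAGE_DATA = auto()                # e.g., logins, modules completed
    SENSOR_DATA = auto()
    SOCIAL_MEDIA_DATA = auto()
    PSYCHOPHYSIOLOGICAL = auto()              # e.g., eye tracking, EEG


@dataclass
class PatientEngagementMeasure:
    """A named patient engagement measure tied to one data-source category."""
    name: str
    source: EngagementDataSource


# Hypothetical examples, for illustration only.
measures = [
    PatientEngagementMeasure("weekly login count", EngagementDataSource.SYSTEM_USAGE_DATA),
    PatientEngagementMeasure("momentary interest rating", EngagementDataSource.ECOLOGICAL_MOMENTARY_ASSESSMENT),
]
```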

2017

  • (Perski et al., 2017) ⇒ Olga Perski, Ann Blandford, Robert West, and Susan Michie. (2017). “Conceptualising Engagement with Digital Behaviour Change Interventions: A Systematic Review Using Principles from Critical Interpretive Synthesis.” Translational behavioral medicine 7, no. 2
    • QUOTE: ... “Engagement” with digital behaviour change interventions (DBCIs) is considered important for their effectiveness. Evaluating engagement is therefore a priority; however, a shared understanding of how to usefully conceptualise engagement is lacking. ...

      ... Engagement has traditionally been conceptualised differently across the behavioural science, computer science and HCI literatures, which might be due to the different epistemologies subscribed to, the differing research contexts and the different objectives pursued. In the computer science and HCI literatures, engagement has traditionally been conceptualised as the subjective experience of flow, a mental state characterised by focused attention and enjoyment [18]. This kind of conceptualisation might have emerged as a result of the focus on entertainment and usability of interactive technology. In the behavioural science literature, engagement has typically been conceptualised as “usage” of DBCIs, focusing on the temporal patterns (e.g. frequency, duration) and depth (e.g. use of specific intervention content) of usage [19, 20]. This kind of conceptualisation has emerged due to the observation that while many download and try DBCIs, sustained usage is typically low [21–24]. Henceforth, two working definitions of engagement as used in the computer science and HCI literatures (“engagement as flow”) and the behavioural science literature (“engagement as usage”) are used to scope the space within which this review is conducted. ...

      ... The following two synthetic constructs were developed: “engagement as subjective experience” and “engagement as behaviour”. ...

      ... The majority of articles reviewed from the behavioural science literature conceptualised engagement in behavioural terms, suggesting that it is identical to the usage of a DBCI or its components. Engagement has further been described as the extent of usage over time [19, 52], sometimes referred to as the “dose” obtained by participants or “adherence” to an intervention [25, 53, 54], determined by assessing the following subdimensions: “amount” or “breadth” (i.e. the total length of each intervention contact), “duration” (i.e. the period of time over which participants are exposed to an intervention), “frequency” (i.e. how often contact is made with the intervention over a specified period of time) and “depth” (i.e. variety of content used) [20, 53]. In the computer science and HCI literatures, engagement has been conceptualised as the degree of involvement over a longer period of time [55], sometimes referred to as “stickiness” [56]. A distinction has also been made between “active” and “passive” engagement; while the former involves contributing to the intervention through posting in an online discussion forum, the latter involves reading what others have written without commenting, also known as “lurking” [57]. Engagement has also been conceptualised as a process of linked behaviours, suggesting that users move dynamically between stages of engagement, disengagement and re-engagement [28]. As conceptual overlap was observed between these definitions, the authors propose that engagement involves different levels of usage over time. ...
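To make the usage subdimensions above concrete, here is a minimal sketch, not taken from Perski et al. (2017), that computes amount, duration, frequency, and depth from a hypothetical log of intervention contacts; the record fields and example data are assumptions for illustration.

```python
from dataclasses import dataclass
from datetime import date


@dataclass
class Contact:
    """One intervention contact in a hypothetical usage log."""
    day: date
    minutes: float      # length of this contact
    content_id: str     # which intervention component was used


def usage_subdimensions(log: list[Contact]) -> dict:
    """Compute the usage subdimensions described by Perski et al. (2017).

    amount    - total length of intervention contacts (minutes)
    duration  - period over which the participant was exposed (days)
    frequency - contacts per week over that exposure period
    depth     - variety of content used (distinct components)
    """
    if not log:
        return {"amount": 0.0, "duration_days": 0, "frequency_per_week": 0.0, "depth": 0}
    first, last = min(c.day for c in log), max(c.day for c in log)
    exposure_days = (last - first).days + 1
    weeks = max(exposure_days / 7.0, 1.0)
    return {
        "amount": sum(c.minutes for c in log),
        "duration_days": exposure_days,
        "frequency_per_week": len(log) / weeks,
        "depth": len({c.content_id for c in log}),
    }


# Hypothetical example: three contacts spread over two weeks.
log = [
    Contact(date(2024, 1, 1), 12.0, "module_1"),
    Contact(date(2024, 1, 4), 8.5, "forum"),
    Contact(date(2024, 1, 14), 20.0, "module_2"),
]
print(usage_subdimensions(log))
```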

2014

  • (Bower et al., 2014) ⇒ Bower P, Brueton V, Gamble C, et al. (2014). “Interventions to Improve Recruitment and Retention in Clinical Trials: A Survey and Workshop to Assess Current Practice and Future Priorities.” Trials, 15: 399.

2012

  • (Kelders et al., 2012) ⇒ Saskia M. Kelders, Robin N. Kok, Hans C. Ossebaard, and Julia EWC Van Gemert-Pijnen. (2012). “Persuasive System Design Does Matter: A Systematic Review of Adherence to Web-based Interventions.” Journal of medical Internet research 14, no. 6
    • QUOTE: ... A percentage of adherence was calculated to enable us to compare the different interventions. We did this by calculating the percentage of participants that adhered to the intervention. For example, when the intended use of an intervention was “complete 8 modules” and 60 out of 100 participants completed 8 modules, the adherence was 60%. For each intervention that was included, we calculated one overall adherence percentage. When more studies about the same intervention yielded different adherence percentages, we calculated the overall adherence percentage using a weighted average, based on the number of participants in each study. Furthermore, when the study included a waiting list and the respondents in this waiting list received access to the intervention at a later stage, the adherence was calculated based on usage data for all participants, including the waiting list group. ...
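The adherence calculation in the quote can be sketched in a few lines of code. This is a minimal illustration rather than Kelders et al.'s own procedure; the function names are assumptions, but the arithmetic follows the quoted worked example (60 of 100 participants completing 8 modules gives 60% adherence) and the participant-weighted average across studies of the same intervention.

```python
def adherence_percentage(n_adherent: int, n_participants: int) -> float:
    """Percentage of participants who met the intended use (e.g., completed 8 modules)."""
    return 100.0 * n_adherent / n_participants


def weighted_overall_adherence(studies: list[tuple[int, int]]) -> float:
    """Overall adherence for one intervention across several studies.

    Each tuple is (n_adherent, n_participants). Weighting each study's
    percentage by its participant count is equivalent to pooling the counts,
    so the overall figure is total adherent over total participants.
    """
    total_adherent = sum(a for a, _ in studies)
    total_participants = sum(n for _, n in studies)
    return 100.0 * total_adherent / total_participants


# Worked example from the quote: 60 of 100 participants completed 8 modules.
print(adherence_percentage(60, 100))                      # 60.0
# Hypothetical second study of the same intervention: 30 of 50 adhered.
print(weighted_overall_adherence([(60, 100), (30, 50)]))  # 60.0
```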
