Cimino JJ et al. 2001 "Studying the human-computer-terminology interface."

Reference
Cimino JJ, Patel VL, Kushniruk AW. Studying the human-computer-terminology interface. J Am Med Inform Assoc 2001;8(2):163-173.
Abstract
"OBJECTIVE: To explore the use of an observational, cognitive-based approach for differentiating between successful, suboptimal, and failed entry of coded data by clinicians in actual practice, and to detect whether causes for unsuccessful attempts to capture true intended meaning were due to terminology content, terminology representation, or user interface problems. DESIGN: Observational study with videotaping and subsequent coding of data entry events in an outpatient clinic at New York Presbyterian Hospital. PARTICIPANTS: Eight attending physicians, 18 resident physicians, and 1 nurse practitioner, using the Medical Entities Dictionary (MED) to record patient problems, medications, and adverse reactions in an outpatient medical record system. MEASUREMENTS: Classification of data entry events as successful, suboptimal, or failed, and estimation of cause; recording of system response time and total event time. RESULTS: Two hundred thirty-eight data entry events were analyzed; 71.0 percent were successful, 6.3 percent suboptimal, and 22.7 percent failed; unsuccessful entries were due to problems with content in 13.0 percent of events, representation problems in 10.1 percent of events, and usability problems in 5.9 percent of events. Response time averaged 0.74 sec, and total event time averaged 40.4 sec. Of an additional 209 tasks related to drug dose and frequency terms, 94 percent were successful, 0.5 percent were suboptimal, and 6 percent failed, for an overall success rate of 82 percent. CONCLUSIONS: Data entry by clinicians using the outpatient system and the MED was generally successful and efficient. The cognitive-based observational approach permitted detection of false-positive (suboptimal) and false-negative (failed due to user interface) data entry."
Objective
"To explore the use of an observational, cognitive-based approach for differentiating between successful, suboptimal, and failed entry of coded data by clinicians in actual practice, and to detect whether causes for unsuccessful attempts to capture true intended meaning were due to terminology content, terminology representation, or user interface problems."
Size
Large
Geography
Urban
Other Information
The study took place at the outpatient clinic of New York Presbyterian Hospital.
Type of Health IT
Informational resource
Type of Health IT Functions
The technology is not well described. The article suggests that the user entered one or more words into a free-text box; the database then searched for matching words and produced a list of suggested terms.
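The article does not specify the MED's actual matching algorithm, but the behavior described above (type one or more words, get back every dictionary term containing them) can be sketched minimally as follows. The term list and function names here are hypothetical illustrations, not the real system.

```python
# Illustrative sketch only: mimics the described lookup behavior, in which
# all dictionary terms containing every word of the user's query are
# returned as suggestions. Not the MED's actual algorithm.

TERMS = [  # hypothetical sample of dictionary terms
    "diabetes mellitus type 2",
    "diabetes insipidus",
    "gestational diabetes",
    "hypertension",
]

def suggest(query: str, terms=TERMS):
    """Return terms containing every word of the query (case-insensitive)."""
    words = query.lower().split()
    return [t for t in terms if all(w in t.lower() for w in words)]
```

A query of "diabetes" would return all three diabetes terms, while "type diabetes" would narrow the list to the single matching term.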
Context or other IT in place
"The [electronic medical record (EMR)] was an ambulatory record application that provided a variety of clinical applications, including progress notes, review of reports from ancillary systems and health maintenance reminders."
Workflow-Related Findings
"The user interface was responsible for 3 of the 16 suboptimal results and 12 of the 66 failures, or 15 of 82 (18 percent) of the problems. A spell-checking feature would have mitigated this failure rate, suggesting that the application was quite usable with respect to data entry. Some of the remaining causes of user interface problems were navigation problems.""
"The time required to enter, review and select a term [to enter as a code in the system]...ranged from 2 to 225 sec (average 40.4 sec).... Response time of the system varied from instantaneous to 10 sec.... We found the average response time to be 0.75 sec when results were returned."
"The number of terms returned by the search ranged from none (in 30 cases) to 1,488. When the 30 search failures and 2 cases of extremely high results...were excluded, the average list size was 15.6 terms."
In "the 447 attempts by users to enter coded data[, u]sers chose some terms from the list 381 times (86 percent)," although in 4 percent of cases the result was not optimal.
The analysis "showed that in 49 cases (60 percent of the 82 cases [of searches for medication terms], 11 percent of all cases), the [system] actually did contain the desired term but the user did not find it (sometimes choosing a suboptimal term but generally choosing no term). In general, the reason for failure was that the [system] lacked a synonym or abbreviation... In 14 cases, we attributed the failure to problems with the user interface, including the lack of phonetic spell-checking and failure to hit the 'Enter' key, which was required inconsistently in an early version of the [system]."
"The success rates for different tasks, and the reasons for failure, are comparable for all the tasks except the drug route and frequency data entry. These two tasks differ from the others in that they allow only a very limited terminology and provide optional pull-down lists. Users appeared to be generally familiar with these restricted terminologies, and most problems were due to lack of recognition (for example, 'p.o.' was not recognized as 'po'...)."
The researchers "studied...whether the users could find in the terminology the terms suggested by the scenarios. They successfully demonstrated that their system was easy to use and that their users were comfortable with their terminology."
Study Design
Only postintervention (no control group)
Study Participants
A convenience sample included 27 volunteering clinicians (18 resident physicians, 8 attending physicians, and 1 nurse practitioner). These clinicians and their patients participated in the study. Thirty-two sessions were videotaped.