Will FDA turn to neuroscience-based human factors research? by Heather Thompson
There is a new wind blowing in medical technology — one that many believe is overdue: an overhaul of human factors.
At the recent Patient Safety Initiative, Charles Murphy, Chief Patient Safety Officer at the Inova Heart and Vascular Institute, called out human factors as one of the key areas he believed needed improvement in medical technology development.
“In healthcare…we’d love to see safety built and designed into the system. So, I think about human factors being incorporated, and I think that’s exceedingly important—we don’t have that to the same level as other safety-critical industries,” Murphy said during the Patient Safety Movement event in March.
The need to focus more on human factors is clear. And recently, FDA has expressed interest in updating the methodologies used in healthcare, which Charles L. Mauro said still rely mostly on an old-school research style.
In particular, Mauro, who leads Mauro Usability Science, referred to methods for simulated use testing, as spelled out in FDA’s guidance on Applying Human Factors and Usability Engineering to Medical Devices, section 6.4.3.2 Simulated-Use Testing, which says:
Data can be obtained by observing participants interacting with the device and interviewing them. Automated data capture can also be used if interactions of interest are subtle, complex, or occur rapidly, making them difficult to observe. The participants can be asked questions or encouraged to “think aloud” while they use the device. They should be interviewed after using the device to obtain their perspectives on device use, particularly related to any use problems that occurred, such as obvious use error.
Mauro said there is an opportunity to improve the observational methodologies using sophisticated neuroscience-based research. He believes these tools can reveal far richer information that helps design and development teams build devices that truly meet user needs while providing a greater level of safety and efficacy.
He said the main reason traditional observational research persists is that, until now, there have been no alternatives. “My bet is that the FDA guidance is going to change, and it’s going to change quickly because they’re responsible for the quality of the safety of the patient,” Mauro said.
“It’s really a sea change in terms of what the methodologies can do,” he said.
The future of human factors research
Mauro Usability Science employs tools such as:
- 3D spatial tracking
- Newtonian force measurement
- High-definition electromyography (EMG)
- High-resolution eye tracking
- Micro-facial expression analysis
- Automated task analysis and data capture
- Cognitive workload analysis
- Information foraging theory
Mauro said these new techniques provide multi-dimensional, scientifically valid data on patient interactions across the entire drug delivery experience. Further, they effectively fill in where FDA guidance ends.
For example, detailed human factors specifications do not exist in research data for patient populations that have unique limitations. The effects that BMI, size ranges, dexterity, strength, vision, or hearing might have on a patient’s ability to use a device are not well documented, yet those factors can make usability testing more complicated. Moreover, Mauro said, the lack of objective data on patient limitations does not reduce a corporation’s duty under FDA guidance to create a usable product.
However, he also cautions that using these tools is not something any research group can do successfully. “It is quite complex. A research group can’t go out and just spend $500,000 on a bunch of sensing technology, throw it into a lab and think they’re going to have a system that works.”
The challenges
First, these methods produce vast amounts of data that are useless without advanced data aggregation and data analysis methods, according to Mauro. “If you look at the traditional system where you just had an observer in a room with a respondent, you may have a videotape of the session, but that’s it. The videotape can be viewed over and over again if you want to, but in these more advanced data capture systems, a 60-minute trial with one respondent produces about 30 gigabytes of hard data.”
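To put that figure in perspective, the throughput and study-level storage it implies can be worked out directly. The sketch below uses only the article’s 30-gigabytes-per-hour figure; the 15-respondent study size is a hypothetical assumption for illustration, not a number from Mauro’s lab.

```python
# Implied data rates from the ~30 GB-per-trial figure Mauro cites.
# The 15-respondent study size is an assumed value, not from the article.

TRIAL_GB = 30            # data per 60-minute trial (from the article)
TRIAL_SECONDS = 60 * 60
N_RESPONDENTS = 15       # hypothetical study size

rate_mb_per_s = TRIAL_GB * 1024 / TRIAL_SECONDS
study_tb = TRIAL_GB * N_RESPONDENTS / 1024

print(f"Sustained capture rate: ~{rate_mb_per_s:.1f} MB/s per respondent")
print(f"Raw storage for a {N_RESPONDENTS}-respondent study: ~{study_tb:.2f} TB")
```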
Each 3D tracker, EMG, or micro-facial expression capture produces a huge amount of data, said Mauro. To get a true picture, hardware and software systems need to combine the channels into a unified collection stream that provides hard data on HFE performance. “You have to be able to visualize all those data points as they happened in real time.”
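Mauro did not describe his team’s software, but the core idea of fusing independently clocked channels into one time-aligned record can be sketched. This is a minimal illustration, assuming two channels with different sampling rates; the channel names, rates, and values are invented for the example, not taken from Mauro Usability Science’s pipeline.

```python
# A minimal sketch of fusing independently sampled channels into one
# time-aligned stream. Channel names, rates, and values are illustrative
# assumptions only.
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)

# Assume an eye tracker at 250 Hz and an EMG channel at 1000 Hz,
# each with its own clock.
gaze = pd.DataFrame({
    "t": np.arange(0, 1.0, 1 / 250),
    "gaze_x": rng.normal(size=250),
})
emg = pd.DataFrame({
    "t": np.arange(0, 1.0, 1 / 1000),
    "emg_uv": rng.normal(size=1000),
})

# Align each EMG sample to the nearest gaze sample within 4 ms,
# producing a single record per time point for downstream analysis.
fused = pd.merge_asof(
    emg.sort_values("t"), gaze.sort_values("t"),
    on="t", direction="nearest", tolerance=0.004,
)
print(fused.head())
```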
That unified system took Mauro and his team six years to develop, and he said it is probably the biggest challenge the team has faced. But that doesn’t mean the work is over: there are day-to-day challenges he and his team address to ensure the technology capture works correctly.
“These advanced technologies have to be calibrated before each study and sometimes they have to be calibrated during each study.” Further, he said, many of the suppliers of the technology are small firms. These firms might not have effective customer service, meaning if a portion breaks down, the team on hand can either wait or troubleshoot the systems themselves. The technology is so new, Mauro said his team is often waiting for software updates just to use it.
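Mauro did not detail his calibration procedures, but as a hedged illustration of a pre-study check, the sketch below compares an eye tracker’s measured fixations against known on-screen targets and flags drift above a threshold. The target layout, threshold, and readings are all hypothetical.

```python
# Hypothetical pre-study eye-tracker calibration check: compare measured
# fixation points against known target positions and flag excessive drift.
# Target layout, threshold, and readings are illustrative assumptions.
import math

targets = [(0.1, 0.1), (0.9, 0.1), (0.5, 0.5), (0.1, 0.9), (0.9, 0.9)]
measured = [(0.11, 0.12), (0.88, 0.10), (0.52, 0.49), (0.12, 0.91), (0.93, 0.87)]
MAX_ERROR = 0.03  # assumed acceptance threshold in normalized screen units

errors = [math.dist(t, m) for t, m in zip(targets, measured)]
worst = max(errors)
print(f"Worst-case drift: {worst:.3f}")
if worst > MAX_ERROR:
    print("Recalibrate before running the session.")
```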
And that means the HFE team needs a certain level of expertise to set up the systems and understand the data coming in. Those skills, Mauro said, don’t usually come with a human factors engineering degree. “These are educated people, but they probably have had no experience with 3D tracking or EMGs because the technology comes from other applications.”
The teams must also have a skill set that includes statistical analysis. Traditional human factors studies have minor levels of statistical validity because the methodologies are unstructured and sample sizes are small. “But, with these new methods, you can apply extremely robust, contemporary statistical methods to the data such as conjoint analysis, multi-dimensional scaling and factor analysis.” These methodologies have long been used in the marketing sciences but have rarely been employed in the human factors field.
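To make the kind of analysis Mauro names concrete, here is a minimal sketch applying two of the techniques he mentions, factor analysis and multi-dimensional scaling, to a synthetic table of per-respondent usability metrics. The metric names, sample size, and data are assumptions for illustration, not results from any study in this article.

```python
# Illustrative only: factor analysis and multi-dimensional scaling on a
# synthetic matrix of per-respondent usability metrics. Metric names and
# data are assumptions, not figures from the article.
import numpy as np
from sklearn.decomposition import FactorAnalysis
from sklearn.manifold import MDS
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(42)
# 40 respondents x 6 sensor-derived metrics (e.g., task time, error count,
# peak grip force, fixation count, EMG amplitude, workload score).
X = StandardScaler().fit_transform(rng.normal(size=(40, 6)))

# Reduce the six metrics to two latent usability factors.
fa = FactorAnalysis(n_components=2, random_state=0)
factors = fa.fit_transform(X)

# Embed respondents in 2D so similar interaction profiles cluster together.
coords = MDS(n_components=2, random_state=0).fit_transform(X)

print("Factor loadings:\n", fa.components_.round(2))
print("First respondent's MDS coordinates:", coords[0].round(2))
```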
Mauro also noted that any software built must comply with FDA requirements in order to be used with a medtech product. “That is not so tough if you start with software fresh in the beginning… but if you forget and then come back to it, chances are you’ve got to go back and look at 500,000 lines of JavaScript.” Mauro said his team spends a lot of time verifying that the methodologies really do what they claim, including funding their own pilot testing to make sure the technology is working properly.
As the technology matures, Mauro said he foresees a huge gain in patient outcomes, even if the cost of conducting human factors this way rises. He also said there are less-obvious ways developers can benefit from using the technology:
“When you convert human factors engineering or patient usability data to a format that is basically engineering terms and engineering quantities, the engineering, product, and development teams really adopt the information much more directly than they otherwise might with traditional observation-based subjective studies.”
In addition, the robust level of data provides clear direction for design enhancements that benefit the patient directly. Mauro also predicted that the data could allow companies to write patent and intellectual property claims based on human factors engineering performance. A company could theoretically claim that a device reduces errors and improves efficiency through these human factors engineering innovations. “It produces a whole new area of IP for companies that want to capture and protect their devices.”
Source: https://www.medicaldesignandoutsourcing.com/fda-neuroscience-human-factors-research/