Updated: Mar 30, 2020
The FDA/CDRH/ODE Human Factors Pre-Market Evaluation Team (HFPMET) gave a workshop last month at the HFES Symposium on Human Factors and Ergonomics in Healthcare. I thought it might be helpful to share some new pieces of information I jotted down.
The HFPMET currently consists of five members: Lt. Hanniebey Wiyor, Rita Lin, Capt. Mary Brooks, Dr. Xin Feng, and Dr. Kimberly Kontson (consultant to the team). According to Ms. Lin, the team is growing and hopes to double in the next year. This is good news for manufacturers and should help with turnaround times on human factors reviews.
In 2018, HFPMET performed 1,800 consults on both pre-market and post-market submissions. There were 453 pre-market reviews, an admirable average of roughly 38 per month. The majority of these were 510(k)s or Q-Subs. Unfortunately, only 11% of 510(k) human factors sections were deemed adequate on first review, and only 4% of Q-Subs received approval of their HF validation protocol the first time around. This was surprising to me given that the FDA guidance document has been out for over three years.
The most common deficiencies with human factors submissions include:
Inappropriate or deficient use-related risk analyses;
An incomplete HF/UE process (e.g., no formative studies, no review of known use issues);
HF validation data that were not collected or analyzed appropriately; and
An HF/UE report not submitted in the format shown in Appendix A of the FDA guidance. (“The guidance has eight chapters. This is our thinking; this is what we expect.”)
Dr. Wiyor said that when he’s asked to review a submission, the first thing he does is look at who signed off on the HF/UE report. If a human factors engineer is not listed, he thinks, “Whoa, I’m in for a big problem…there could be a lot of misinformation.” The second thing he does is print the report’s table of contents because that tells him whether the FDA guidance was followed or not.
The FDA has always required that participants in HF validation studies reside in the U.S. or its territories. However, an exception to this is for devices used by military personnel stationed outside the U.S., in which case it’s acceptable to include those users in your study.
Dr. Kontson discussed special considerations for testing over-the-counter (OTC) products, which must be validated with lay users in real or simulated home environments. Importantly, you should include a “self-selection” study to ensure that consumers understand the labeling well enough to know when they should not use the product. Statistically valid sample sizes are required for these studies.
Someone in the audience asked, “If we include five users in a simulated use test and 10 users in an actual use test, can we call that a total of 15?” The FDA team said yes, but make sure the actual use participants also complete the knowledge-based tasks.
Someone else in the audience asked, “If a device is on the market, and the company wants to release a YouTube video to help with training, does the video need to be validated?” The answer: if the video is used to mitigate a post-market issue, then yes; otherwise, no. However, if it is part of a new 510(k) submission, then yes.
I’d like to thank the FDA HFPMET panel for sharing their time and advice with us again this year!