The new article addresses matters related to data collection and describes in detail the different types of information and the ways they can be collected.
The Food and Drug Administration (FDA or the Agency), the US regulatory authority for healthcare products, has published a guidance document dedicated to the application of human factors and usability engineering to medical devices. The scope of the guidance covers, inter alia, matters related to simulated-use human factors validation testing, including aspects related to test participants and their training, the instructions for use, and tasks and use scenarios. The document is intended to provide additional clarifications regarding the existing regulatory requirements, as well as recommendations to be considered when interpreting the respective provisions of the applicable legislation. At the same time, the authority also mentions that the document is non-binding in its legal nature and is not intended to introduce new rules or requirements the parties involved should follow. Moreover, an alternative approach could be applied, provided such an approach is in line with the respective regulatory requirements and has been agreed with the authority in advance.
Data Collection: Key Points
According to the guidance, the test protocol for the human factors validation testing should explicitly indicate the types of data to be collected. The authority mentions that the particular method to be applied to collect the data should be determined depending on the nature of the data and its specific features. For instance, certain types of data could be collected through observation. The Agency further explains that the scope of data to be collected should be reasonable from the device use perspective – for instance, it makes sense to measure the time it takes for a user to perform specific actions using the device only if time is of the essence when the device is used for its intended purpose. Consequently, the scope of evaluation should not include time measurement for tasks that are not time-critical. At the same time, some aspects cannot be assessed through observation alone, so additional questioning of study participants could be needed to collect important information. Thus, the results of the study could be based on both objective data collected through observation and subjective data collected by asking questions about the device after its use. The document further describes each type of data in detail and outlines the most important points associated with each.
As mentioned before, the human factors validation testing should include the collection of data by observing the way the device in question is used by study participants for its intended purpose. During the test, study participants should perform the respective critical tasks, while study sponsors collect information and evaluate performance. Regarding the test protocol, the authority states that it should:
- Describe in advance how test participant use errors and other meaningful use problems will be defined, identified, recorded, and reported; and
- Be designed such that previously unanticipated use errors will be observed, recorded, and included in the follow-up interviews with the participants.
The scope of observational data to be collected in the course of a study usually includes various use problems and use errors, e.g., situations in which study participants cannot complete a task as intended. Attention should also be paid to situations where a use-related issue could potentially have resulted in harm to the patient, but further actions taken by the user prevented such harm. All such cases should be duly recorded for further discussion with study participants and analysis. Apart from this, numerous unsuccessful attempts to complete a task could indicate a potential issue, so such information should be subject to rigorous assessment as well.
Knowledge Task Data
As previously explained by the FDA, some use-related aspects can be assessed through observation. At the same time, in some cases observation does not provide the necessary information, especially about the way the user makes decisions when using the device. For example, it is vitally important for users to understand the contraindications and warnings associated with the device, as well as potential risks and hazards, in order to use the device safely and efficiently, while it is difficult to assess through observation whether the user understands the said aspects and interprets them correctly. According to the guidance, the user interface components involved in knowledge tasks are usually the user manual, quick start guide, labeling on the device itself, and training. The actual knowledge of a user is based on the information the aforementioned elements contain. Thus, to assess the clarity of the information and analyze the way a user interprets it, a study sponsor can conduct interviews during which study participants are asked use-related questions. To ensure the accuracy and reliability of study results, all questions should be worded neutrally and require the study participant to provide an answer.
According to the guidance, the observation of participant performance of the test tasks and the assessment of their understanding of essential information (if applicable) should be followed by a debriefing interview, since interviews enable the test facilitator to collect the users’ perspectives, which can complement task performance observations but cannot be used instead of them. The authority further explains that the information collected by means of these two methods is different, so the two types of information supplement each other – for instance, the responses provided by a study participant during the interview may confirm the information collected through observation. Moreover, in cases when the information derived from these two methods contains contradictions, an additional assessment would be necessary to identify the underlying issues. For instance, an interview conducted after use could reveal potential issues with interpreting safety-related warnings even if there were no use-related issues during the testing itself.
During the interview, a study participant should describe any issues he/she faced when using the device. According to the guidance, the interview should be composed of open-ended and neutrally worded questions that start by considering the device overall and then focus on each critical task or use scenario. Furthermore, the authority also mentions that any use-related errors should be subject to a rigorous investigation following the testing itself. The Agency additionally emphasizes the importance of ensuring that the interviewer accepts all responses provided by study participants without any bias, as otherwise the accuracy and reliability of the study results could be affected.
How Can RegDesk Help?
RegDesk is a next-generation web-based software for medical device and IVD companies. Our cutting-edge platform uses machine learning to provide regulatory intelligence, application preparation, submission, and approvals management globally. Our clients also have access to our network of over 4000 compliance experts worldwide to obtain verification on critical questions. Applications that normally take 6 months to prepare can now be prepared within 6 days using RegDesk Dash™. Global expansion has never been this simple.
Want to know more about our solutions? Speak to a RegDesk Expert today!