Determine applicability: Consider this question when analyzing the various risk factors that can arise from a healthcare AI system, and determine whether the requirement has been satisfied.
• Risk management comprises identifying, analyzing, evaluating, and treating risks. These four activities must be performed continuously and iteratively at each stage of the life cycle to remove and prevent risks and, ultimately, to ensure trustworthiness (a minimal sketch of this cycle follows the list). “ISO 31000:2018 — Risk management — Guidelines” introduces the concepts, definitions, and overall process of risk management.
• However, the methodology for identifying, analyzing, and evaluating risk factors that could undermine the trustworthiness of AI may differ from that used for conventional software and hardware systems. ISO/IEC 24028:2020 and “ISO/IEC 23894:2023 — Guidance on risk management” classify the risk factors to be examined from the perspective of trustworthy AI. ISO 14971, the standard for risk management of medical devices, can also be consulted.
• The healthcare sector, by its nature, demands meticulous risk analysis. Its risks include uncertainty and unexplainability arising from the technological limitations of AI; diagnostic errors such as false positives or false negatives (see the error-rate sketch after this list); and harm to life caused by security issues, including leakage of personal or biometric data by a third party.
• The International Medical Device Regulators Forum (IMDRF), the US Food and Drug Administration (FDA), and the EU each define risk levels for Software as a Medical Device (SaMD). These definitions categorize healthcare AI systems into risk grades, and the process for obtaining authorization or approval varies with the grade (see the categorization sketch after this list) [3].
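As a minimal sketch of the iterative cycle described in the first bullet, the following Python models one pass of identify → analyze → evaluate → treat per life-cycle stage. The class names, the severity-times-probability scoring, and the acceptance threshold are illustrative assumptions, not prescribed by ISO 31000.

```python
from dataclasses import dataclass
from enum import Enum


class Treatment(Enum):
    """Illustrative risk-treatment options (assumed, not from ISO 31000)."""
    ACCEPT = "accept"
    MITIGATE = "mitigate"


@dataclass
class Risk:
    """One identified risk factor; fields are hypothetical."""
    description: str
    severity: int     # 1 (negligible) .. 5 (catastrophic)
    probability: int  # 1 (rare) .. 5 (frequent)

    @property
    def score(self) -> int:
        # A common severity-by-probability risk index.
        return self.severity * self.probability


LIFE_CYCLE_STAGES = ["design", "development", "verification", "deployment", "operation"]
ACCEPTABLE_SCORE = 6  # assumed acceptance threshold for this sketch


def manage_risks(identify, stage: str) -> list[tuple[Risk, Treatment]]:
    """One pass of the four activities for a single life-cycle stage."""
    decisions = []
    for risk in identify(stage):                # 1. identify
        score = risk.score                      # 2. analyze
        acceptable = score <= ACCEPTABLE_SCORE  # 3. evaluate
        # 4. treat: accept tolerable risks, mitigate the rest
        decisions.append((risk, Treatment.ACCEPT if acceptable else Treatment.MITIGATE))
    return decisions


def example_identify(stage: str) -> list[Risk]:
    # Placeholder: in practice this would draw on hazard analysis for the stage.
    if stage == "operation":
        return [Risk("model drift degrades sensitivity", severity=4, probability=3)]
    return []


# The cycle repeats at every stage, not once for the whole system.
for stage in LIFE_CYCLE_STAGES:
    for risk, treatment in manage_risks(example_identify, stage):
        print(f"{stage}: {risk.description} -> {treatment.value}")
```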
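To make the diagnostic-error risk concrete, the sketch below derives false-positive and false-negative rates from confusion-matrix counts. The function name and the example counts are hypothetical.

```python
def diagnostic_error_rates(tp: int, fp: int, tn: int, fn: int) -> dict[str, float]:
    """Standard rates derived from confusion-matrix counts."""
    return {
        "sensitivity": tp / (tp + fn),          # 1 - false-negative rate
        "specificity": tn / (tn + fp),          # 1 - false-positive rate
        "false_negative_rate": fn / (fn + tp),  # missed diagnoses, often the graver harm
        "false_positive_rate": fp / (fp + tn),  # over-diagnosis, unnecessary follow-up
    }


# E.g. a hypothetical screening model evaluated on 1,000 cases:
print(diagnostic_error_rates(tp=180, fp=40, tn=760, fn=20))
# {'sensitivity': 0.9, 'specificity': 0.95,
#  'false_negative_rate': 0.1, 'false_positive_rate': 0.05}
```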
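Finally, a sketch of the risk-categorization logic referenced in the last bullet. The lookup table transcribes the IMDRF SaMD framework, which assigns a category (I, lowest, through IV, highest) from the significance of the information the software provides and the state of the healthcare situation; the string keys and function name here are our own.

```python
# IMDRF SaMD risk categories keyed by
# (healthcare situation, significance of information provided).
SAMD_CATEGORY = {
    ("critical", "treat_or_diagnose"): "IV",
    ("critical", "drive_management"): "III",
    ("critical", "inform_management"): "II",
    ("serious", "treat_or_diagnose"): "III",
    ("serious", "drive_management"): "II",
    ("serious", "inform_management"): "I",
    ("non_serious", "treat_or_diagnose"): "II",
    ("non_serious", "drive_management"): "I",
    ("non_serious", "inform_management"): "I",
}


def samd_category(situation: str, significance: str) -> str:
    """Look up the IMDRF category for one (situation, significance) pair."""
    return SAMD_CATEGORY[(situation, significance)]


# E.g. an AI system that diagnoses a critical condition falls in Category IV,
# which typically faces the most demanding authorization process.
print(samd_category("critical", "treat_or_diagnose"))  # -> "IV"
```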