03-1 Have you designed a test environment in consideration of the AI system’s features?
Determine applicability: Consider this question when the risk analysis of the AI system indicates a high likelihood of accidents or significant impact from malfunction, and determine whether the requirement has been satisfied.
• Healthcare AI systems have a wide range of applications, such as improving diagnosis and clinical management, developing drugs, tracking and responding to diseases, and advancing healthcare systems. However, because their decisions are directly connected to individual lives, ensuring the trustworthiness of the AI’s decision-making is a challenge. Thus, a test and validation plan needs to be developed to ensure the safety and transparency of healthcare AI systems.
• According to UNESCO’s Recommendation on the Ethics of Artificial Intelligence, AI systems identified as posing potential risks to human rights should be broadly tested by stakeholders, including in real-world conditions if needed, as part of UNESCO’s Ethical Impact Assessment, before they are released on the market.
• Although real-world testing is well suited to ensuring accuracy, it may not be feasible for an AI system with complex operating conditions, since testing must be completed within a reasonable timeframe and budget. In addition, real-world testing of AI that physically interacts with patients raises concerns about creating dangerous situations. In such cases, virtual testing can be considered instead.
• Design the test environment after determining which type of environment suits the healthcare AI system’s characteristics. Below are examples of considerations when designing a test environment:
✓ Does the AI system operate in a complex environment involving various stakeholders?
✓ Does the AI system have potential risks to human rights?
✓ Can the test be performed within a reasonable timeframe and budget?
✓ What elements require concrete tests for validation before the clinical trial?
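For illustration, the considerations above could be captured in a simple screening helper that suggests whether real-world or virtual testing is more appropriate for a given system. This is a minimal sketch, not part of the guideline: the profile fields, decision rules, and function names are hypothetical assumptions mapping roughly onto the checklist questions.

```python
from dataclasses import dataclass

@dataclass
class TestEnvironmentProfile:
    """Hypothetical answers to the checklist questions above."""
    complex_environment: bool       # complex operating conditions, many stakeholders
    human_rights_risk: bool         # potential risks to human rights
    feasible_time_and_budget: bool  # real-world testing fits timeframe and budget
    physical_patient_contact: bool  # AI physically interacts with patients

def recommend_test_environment(profile: TestEnvironmentProfile) -> str:
    """Suggest a test environment based on the (assumed) screening criteria."""
    # Real-world testing is preferred for accuracy, but is ruled out when it
    # is infeasible, the operating conditions are too complex, or physical
    # interaction with patients could create dangerous situations.
    if (profile.physical_patient_contact
            or profile.complex_environment
            or not profile.feasible_time_and_budget):
        return "virtual"
    # Otherwise, broad testing in real-world conditions can be considered,
    # which is especially relevant for systems posing human-rights risks.
    return "real-world"

# Example: a diagnosis-support tool with no physical patient contact,
# a simple operating environment, and a feasible test budget.
profile = TestEnvironmentProfile(
    complex_environment=False,
    human_rights_risk=True,
    feasible_time_and_budget=True,
    physical_patient_contact=False,
)
print(recommend_test_environment(profile))  # → real-world
```

In practice the screening questions feed a qualitative judgment rather than a mechanical rule; the sketch only shows how the checklist answers could be recorded and reviewed consistently across systems.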