  • 10-1 Do you provide evidence that enables users to accept the process by which the model's inference results are generated?
    Determine applicability: Consider this question if users require an explanation of the healthcare AI model's inference results or of the reasoning behind them, and determine whether the requirement has been satisfied.

    • Medical staff and patients must be able to understand how the AI model arrived at an inference in order to trust the model's inference results and the AI system's operation. Ideally, users are provided with an explanation and supporting evidence.

    • You may consider implementing explainable AI (XAI), which presents evidence for the model's decisions in a form that humans can understand. Depending on the elements that need explanation and the characteristics of the AI model, you may deploy XAI methods such as class activation mapping (CAM) or meaningful perturbations (MP); a minimal sketch of a CAM-family method is provided after this list.

    • If a model’s inference result can be explained using XAI, it can help medical staff make clinical decisions. Because understanding causal relations is clinically essential, the explainability of a model is particularly important in medicine. However, consult with medical staff when deploying XAI, since explanations that are difficult for them to understand can instead have a negative impact on their decision-making.

    • Since the evidence behind an AI model’s inference results cannot always be explained, you may need an alternative to XAI to ensure the AI system’s transparency. Therefore, evaluate whether XAI technology is applicable; if it is, apply this requirement, and if application is difficult, refer to "10-1b."
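
    As an illustration of the CAM-family methods mentioned above, the following is a minimal Grad-CAM sketch in PyTorch, assuming a ResNet-style image classifier. The model choice (resnet18), the hooked layer (layer4), and the example input shape are illustrative assumptions, not part of this guideline; a production system would use the trained healthcare model and a layer chosen with the model developers.

    ```python
    # Minimal Grad-CAM sketch (a CAM-family XAI method) for a CNN image classifier.
    # Assumption: a torchvision ResNet-style model; "layer4" and the input shape
    # below are placeholders for the actual healthcare model and data.
    import torch
    import torch.nn.functional as F
    from torchvision import models

    model = models.resnet18(weights=None)  # substitute the trained healthcare model
    model.eval()

    activations, gradients = {}, {}

    def save_activation(module, inp, out):
        activations["value"] = out.detach()

    def save_gradient(module, grad_in, grad_out):
        gradients["value"] = grad_out[0].detach()

    # Hook the last convolutional block; the appropriate layer depends on the architecture.
    target_layer = model.layer4
    target_layer.register_forward_hook(save_activation)
    target_layer.register_full_backward_hook(save_gradient)

    def grad_cam(image: torch.Tensor, class_idx: int | None = None) -> torch.Tensor:
        """Return a heatmap (H, W) highlighting image regions that support the prediction."""
        logits = model(image)                      # image: (1, 3, H, W)
        if class_idx is None:
            class_idx = logits.argmax(dim=1).item()
        model.zero_grad()
        logits[0, class_idx].backward()

        acts = activations["value"]                # (1, C, h, w)
        grads = gradients["value"]                 # (1, C, h, w)
        weights = grads.mean(dim=(2, 3), keepdim=True)        # per-channel importance
        cam = F.relu((weights * acts).sum(dim=1, keepdim=True))
        cam = F.interpolate(cam, size=image.shape[2:], mode="bilinear", align_corners=False)
        cam = (cam - cam.min()) / (cam.max() - cam.min() + 1e-8)  # normalize to [0, 1]
        return cam[0, 0]

    # Example: heatmap = grad_cam(torch.randn(1, 3, 224, 224))
    # The heatmap is overlaid on the input image so medical staff can review
    # whether the highlighted regions are clinically plausible.
    ```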