  • 03-2 Have you established a consultative process for designing tests of the AI system?
    • Most AI systems make transparency difficult to ensure because their high complexity lowers reproducibility. This complexity also creates a test oracle problem: since the expected output is hard to determine, it is difficult to decide whether a given test has passed or failed.

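One common workaround for the test oracle problem is to assert a relation that must hold between outputs for related inputs (a metamorphic relation) instead of a single exact expected output. The sketch below is only illustrative: the scoring model, weights, and the monotonicity relation are hypothetical stand-ins for the AI system under test and whatever relation the stakeholders agree on.

```python
def model(features):
    # Hypothetical scoring model standing in for the AI system under test,
    # whose exact output would be hard to predict in advance.
    weights = (0.4, 0.35, 0.25)
    return sum(f * w for f, w in zip(features, weights))

def test_monotonic_in_first_feature():
    # Agreed metamorphic relation (hypothetical): raising an input feature
    # must not lower the score. No exact expected value is required, so the
    # oracle problem is sidestepped.
    base = [0.2, 0.5, 0.7]
    higher = [0.3, 0.5, 0.7]
    assert model(higher) >= model(base)
```

Which relations are worth enforcing is exactly the kind of decision the consultative process described in this section is meant to settle.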
    • If an explanation of the AI system’s inference results is required, the assessment criteria for explainability* may differ for each user who reviews the system output. The assessment criteria for interpretability, the understanding of how the AI operates, likewise depend on the user.
    * ISO/IEC TR 29119-11:2020 defines explainability as the level of understanding of how the AI-based system arrived at a given result, and interpretability as the level of understanding of how the underlying AI technology works.

    • Accordingly, it is advisable to: (a) establish a consultative process and form a consultative group responsible for deciding the AI system’s expected output and for developing assessment criteria for the explainability and interpretability of the system output; and (b) design the tests on the basis of agreement among the group’s members.