• When a problem occurs in the system, verify that exception handling is performed, such as stopping functions, switching screens, recovering the provided service to its initial state, rejecting inputs, or withholding decisions.
• When such exception handling is performed, explain to the AI system user how the system responded and why it could not operate normally.
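The two items above can be sketched as a request handler that rejects invalid inputs, recovers the service to its initial state on failure, and returns an explanation to the user. This is a minimal illustration only; `StubModel`, `handle_request`, and their behavior are hypothetical names invented for this sketch, not part of any referenced standard.

```python
class StubModel:
    """Hypothetical model standing in for the AI component of the service."""

    def __init__(self):
        self.state = "initial"

    def accepts(self, text):
        # Reject inputs the system is not designed to handle.
        return isinstance(text, str) and text.strip() != ""

    def predict(self, text):
        if text == "boom":  # simulate an internal failure for illustration
            raise RuntimeError("internal failure")
        return text.upper()

    def reset(self):
        # Recover the service to its initial state.
        self.state = "initial"


def handle_request(model, user_input):
    """Return (result, explanation); on a problem, recover and explain."""
    if not model.accepts(user_input):
        # Exception handling: reject the input and tell the user why.
        return None, "Input rejected: empty or non-text input is not supported."
    try:
        return model.predict(user_input), None
    except Exception as exc:
        model.reset()  # exception handling: recover to the initial state
        # Explain to the user how the system responded and why.
        return None, f"The service was reset to its initial state after an error: {exc}"
```

A normal request returns a result with no explanation; a rejected or failed request returns no result plus a user-facing explanation, matching the pattern described above.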
• The public institution should determine the standard for human oversight and decision-making intervention in the AI system according to the AI system's risk assessment. Refer to for relevant information.
✔ If the AI's decision carries high uncertainty, or a problem is very likely to occur, withhold the decision or inform the user of the situation.
✔ Encourage human intervention if a problem occurs in the system during automated or autonomous operation.
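The two checkpoints above amount to a confidence-gated decision rule: act automatically only when the model is sufficiently certain, and otherwise escalate to a human. The sketch below illustrates one common way to do this; the `decide` function and the `threshold` value of 0.9 are assumptions for illustration, and in practice the threshold would come from the institution's risk assessment.

```python
def decide(probabilities, threshold=0.9):
    """Gate an automated decision on model confidence.

    probabilities: dict mapping candidate labels to model confidence.
    Returns ("auto", label) when the top confidence meets the threshold,
    or ("human_review", None) to withhold the decision and escalate.
    """
    label, confidence = max(probabilities.items(), key=lambda kv: kv[1])
    if confidence >= threshold:
        return "auto", label
    # High uncertainty: avoid making the decision and hand it to a human.
    return "human_review", None
```

A stricter threshold routes more cases to human review; tuning it is exactly the oversight standard the risk assessment is meant to determine.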
• For systems operated by public institutions, review and apply the standards suggested by the Ministry of the Interior and Safety, such as guidelines for establishing and operating information systems, development security, and privacy protection.