Determine applicability: Consider this question if you intend to explain the results or causes of system operations driven by an AI algorithm or model in order to ensure the trustworthiness of the public service system, and then determine whether the requirement has been satisfied.
• For users to trust the AI model’s inference results and the AI system’s operation, they must be able to understand how the model arrived at an inference. Ideally, provide users with both an explanation and supporting evidence.
• Consider providing explanations of the model’s outputs, since public-service results can determine public benefits, convenience, and beneficiary criteria. For instance, when assessing a user’s eligibility for welfare, consider explaining which of the user’s attributes most strongly affected the result, as sketched below.
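One common way to surface which inputs most influenced a decision is feature attribution. Below is a minimal sketch using permutation importance from scikit-learn on a synthetic, hypothetical eligibility model; the feature names, data, and model are illustrative assumptions, not part of any real public-service system, and other XAI methods (e.g., SHAP values) could fill the same role.

```python
# Minimal sketch: per-feature attribution for a hypothetical welfare-
# eligibility classifier. Feature names and data are illustrative only.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance

feature_names = ["income", "household_size", "age", "employment_months"]
rng = np.random.default_rng(0)
X = rng.normal(size=(500, len(feature_names)))
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)  # synthetic eligibility label

model = RandomForestClassifier(random_state=0).fit(X, y)

# Permutation importance: how much does shuffling each feature degrade
# accuracy? Larger drops indicate features that affected the result more.
result = permutation_importance(model, X, y, n_repeats=10, random_state=0)
for name, score in sorted(zip(feature_names, result.importances_mean),
                          key=lambda t: -t[1]):
    print(f"{name}: {score:.3f}")
```

A ranking like this can back a user-facing explanation such as "income and household size most strongly affected this eligibility decision."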
• Also consider providing an explanation of the model’s inference results when the system does not operate properly or when information is needed for an upgrade. Below are examples of model outputs and their utilization that you can consider for improving the model’s performance internally (a logging sketch follows the examples).
✔ Image and speech processing in an AI interview system: when the processing result’s predictive value has low confidence, record the input data and other information (e.g., the background of the video, the interviewer’s location in the video) for use in performance-improvement analysis.
✔ Image processing in an illegal-dumping detection model: when the detection result (whether dumping is detected or not) has low confidence, record the input data and other information (e.g., building complexity, missed-detection and misdetection times, weather) for use in performance-improvement analysis.
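A minimal sketch of how such low-confidence cases might be captured for later analysis; the threshold value, metadata fields, and log destination are assumptions for illustration, not prescribed by this requirement.

```python
import json
import logging

# Hypothetical confidence threshold below which cases are flagged for review.
LOW_CONFIDENCE_THRESHOLD = 0.6

logging.basicConfig(level=logging.INFO)
logger = logging.getLogger("model_review_queue")

def log_if_low_confidence(prediction: str, confidence: float, metadata: dict) -> None:
    """Record the prediction and its context when confidence is low,
    so the case can feed performance-improvement analysis later."""
    if confidence < LOW_CONFIDENCE_THRESHOLD:
        logger.info(json.dumps({
            "prediction": prediction,
            "confidence": confidence,
            "metadata": metadata,  # e.g., weather, scene complexity, timestamp
        }))

# Example: a low-confidence illegal-dumping detection with its context.
log_if_low_confidence(
    prediction="dumping_detected",
    confidence=0.41,
    metadata={"weather": "rain", "scene_complexity": "high",
              "timestamp": "2024-01-01T03:12:00Z"},
)
```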
• Review the diverse studies, trials, and implementations of explainable AI (XAI) that can be applied to AI systems used in public services.
• You may also review classical decision-tree methods, or causal-learning techniques currently under research, for areas that existing XAI technologies cannot explain; a decision-tree sketch follows below.
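Where post-hoc XAI falls short, an intrinsically interpretable model can serve as the explanation itself. Below is a minimal sketch using a shallow scikit-learn decision tree on a standard sample dataset; the dataset and tree depth are illustrative assumptions.

```python
# Minimal sketch: a shallow decision tree whose learned rules can be
# printed directly, giving a human-readable explanation of predictions.
from sklearn.datasets import load_iris
from sklearn.tree import DecisionTreeClassifier, export_text

data = load_iris()
tree = DecisionTreeClassifier(max_depth=3, random_state=0)
tree.fit(data.data, data.target)

# export_text renders the learned decision rules as plain text, so each
# prediction can be traced through explicit threshold comparisons.
print(export_text(tree, feature_names=list(data.feature_names)))
```

Because the full rule set fits in a short printout, such a model can be audited directly, which is the trade-off that makes shallow trees attractive where opaque models resist explanation.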
• After reviewing the applicability of XAI, refer to "10-1a" if application is feasible, and to "10-1b" if application is challenging.