AI Accountability
[Definition]

AI accountability means that mechanisms are in place to ensure responsibility can be assigned throughout the entire lifecycle of an AI system, including its development, deployment, and use. In other words, there must be mechanisms that secure appropriate accountability if adverse impacts occur.[1] AI accountability is closely related to risk management, which involves identifying and mitigating risks in an appropriate manner.


[Commentary]

AI Accountability and Risk Management
The ability to report on actions or decisions that affect an AI system's outcomes, and the ability to respond to those outcomes, must be ensured. Users need to identify, evaluate, document, and minimize the potential adverse effects of AI systems. Whistleblowers, non-governmental organizations, trade unions, and other groups should be given adequate protection when reporting concerns about AI systems. As these requirements are implemented, tensions may arise between them, which naturally leads to trade-offs that must be resolved in a reasoned and methodical manner at the technical level. This means identifying the relevant interests and values implicated by an AI system and, when conflicts arise, explicitly acknowledging and evaluating the trade-offs in terms of risks to safety and ethical principles, including fundamental rights. Decisions about which trade-offs to accept should be well reasoned and appropriately documented, and a system should be in place to enable accurate and prompt remediation if adverse impacts occur.
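As an illustration only, the Python sketch below shows one way such documentation obligations could be recorded in practice: a simple register of trade-off decisions that captures the interests at stake, the rationale, the accountable owner, and a remediation plan. The schema and field names (TradeOffRecord, remediation_plan, and so on) are hypothetical and are not drawn from the ALTAI checklist itself.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone
from typing import List

@dataclass
class TradeOffRecord:
    """One documented trade-off decision for an AI system (hypothetical schema)."""
    decision: str                  # what was decided
    interests_at_stake: List[str]  # values or rights affected, e.g. ["safety", "privacy"]
    rationale: str                 # why this trade-off was judged acceptable
    accountable_owner: str         # person or role answerable for the decision
    remediation_plan: str          # how adverse impacts will be corrected if they occur
    recorded_at: datetime = field(default_factory=lambda: datetime.now(timezone.utc))

# Example: documenting a trade-off so it can be reported on and remediated later.
register: List[TradeOffRecord] = []
register.append(TradeOffRecord(
    decision="Keep automated screening, add human review for borderline scores",
    interests_at_stake=["non-discrimination", "efficiency"],
    rationale="Human review mitigates bias risk at acceptable operational cost",
    accountable_owner="Head of Model Risk",
    remediation_plan="Re-run affected cases manually and notify applicants within 14 days",
))
```

Keeping such a register is only one possible mechanism; the point of the sketch is that each trade-off decision names an accountable owner and a remediation path that can be audited later.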

EC's AI Accountability
Accountability means being responsible for one's actions, answering for their consequences, and being able to explain the goals, motivations, and reasons behind them. It takes two forms: legal accountability and ethical accountability. For example, an organization that processes individuals' data must have security measures in place to prevent a data breach and must also report when those measures fail (legal accountability), while ethical accountability may lead a company to refrain from investing in facial recognition technology even though the law does not prohibit it.

AI Accountability in ISO/IEC International Standards
The development and application of AI systems involve the use of information and communication technologies in a multi-stakeholder environment, and defining responsibility and accountability among stakeholders is critical to building and maintaining trust in that environment.[2] Because AI systems can exist within both complex international commercial value chains and transnational social frameworks, all stakeholders need a shared understanding of the responsibilities they owe to other stakeholders and of how they will be held accountable for them. A key reason for agreeing on such a framework is that it can define decision-making points throughout the lifecycle of an AI system. Responsibility for decision-making within an organization, and accountability for the consequences of those decisions, are typically addressed in governance frameworks. ISO/IEC 38500 recommends that senior decision-makers in an organization understand and fulfill their legal, regulatory, and ethical obligations in the use of IT, and it defines the tasks of evaluating, directing, and monitoring IT in implementing the principles of responsibility, strategy, acquisition, performance, conformance, and human behavior. Achieving trust in AI-driven autonomous systems also requires addressing responsibility and accountability when those systems fail, so that the relevant stakeholders can be held legally accountable if an autonomous system causes harm. In a 2018 statement, the European Group on Ethics in Science and New Technologies emphasized that AI-based systems cannot be autonomous in the legal sense and that a clear framework of responsibility and liability needs to be established to enable redress for any harm caused by the operation of autonomous systems.
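As a purely illustrative sketch, the mapping below shows one way an organization might record decision-making points across the AI lifecycle together with the accountable role and the evaluate/direct/monitor tasks involved at each point. The stage names, roles, and data layout are assumptions made for the example; ISO/IEC 38500 does not prescribe this structure.

```python
# Hypothetical mapping of AI lifecycle decision points to accountable stakeholders.
LIFECYCLE_ACCOUNTABILITY = {
    "design":      {"accountable": "product owner",      "tasks": ["evaluate", "direct"]},
    "development": {"accountable": "engineering lead",   "tasks": ["direct", "monitor"]},
    "deployment":  {"accountable": "operations manager", "tasks": ["evaluate", "monitor"]},
    "operation":   {"accountable": "service owner",      "tasks": ["monitor"]},
    "retirement":  {"accountable": "governance board",   "tasks": ["evaluate", "direct"]},
}

def who_is_accountable(stage: str) -> str:
    """Return the role answerable for decisions at a given lifecycle stage."""
    entry = LIFECYCLE_ACCOUNTABILITY.get(stage)
    if entry is None:
        raise ValueError(f"No accountability assignment defined for stage: {stage}")
    return entry["accountable"]

print(who_is_accountable("deployment"))  # -> operations manager
```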

World Economic Forum Guidelines on AI Accountability[3]
Organizations responsible for AI services should establish an internal governance body with clear roles and responsibilities for the ethical deployment of AI. Responsibility and oversight for the various steps and activities involved in deploying AI systems should be assigned to appropriate people and departments, and, where necessary, a coordinating body with relevant expertise across the organization should be considered. People and departments with internal AI governance functions should be aware of their roles and responsibilities, receive specialized training, and be given the resources and guidance they need to perform their duties. The organization's operational risk management framework should be used to manage risk, assessing and managing potential adverse impacts on individuals (including who is most vulnerable, how they may be affected, how the magnitude of the impact is assessed, and how feedback is gathered from those affected) as well as the risks of AI proliferation. The appropriate level of human intervention in AI-assisted decision-making should be determined, and the AI model training and selection process should be managed, with communication channels and interactions reviewed for model maintenance, monitoring, documentation, and review of feedback. People who work directly with AI models should be trained to interpret model outputs and decisions and to detect and manage data bias, and ensuring that those who do not work directly with AI systems are at least familiar with the benefits, risks, and limitations of using AI is also part of deploying AI responsibly.
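The following Python sketch illustrates one way the guidance on "the appropriate level of human intervention" might be operationalized: a simple gate that routes decisions to different oversight levels based on assessed impact on the individual and model confidence. The oversight categories, thresholds, and function names are hypothetical and are not prescribed by the framework itself.

```python
from enum import Enum

class Oversight(Enum):
    """Illustrative levels of human involvement in an AI-assisted decision."""
    HUMAN_IN_THE_LOOP = "human approves every decision"
    HUMAN_OVER_THE_LOOP = "human reviews flagged decisions"
    HUMAN_OUT_OF_THE_LOOP = "fully automated, audited afterwards"

def required_oversight(impact_on_individual: str, model_confidence: float) -> Oversight:
    """Pick an oversight level from impact severity and model confidence.

    Thresholds and categories are assumptions made for this example.
    """
    if impact_on_individual == "high":
        return Oversight.HUMAN_IN_THE_LOOP
    if impact_on_individual == "medium" or model_confidence < 0.9:
        return Oversight.HUMAN_OVER_THE_LOOP
    return Oversight.HUMAN_OUT_OF_THE_LOOP

# Example: a decision with moderate impact and high model confidence.
print(required_oversight("medium", 0.95).value)  # -> human reviews flagged decisions
```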
References
[1] High-Level Expert Group on Artificial Intelligence, Assessment List for Trustworthy Artificial Intelligence (ALTAI) for Self Assessment, EC, July 2020, p.21.
[2] ISO/IEC, Information Technology – Artificial Intelligence – Overview of Trustworthiness in Artificial Intelligence, ISO/IEC TR 24028:2020, March 2021, p.13.
[3] World Economic Forum, Model Artificial Intelligence Governance Framework, Second Edition, January 2020, p.22.