Determine applicability: Consider this question if the AI service under development could, as a public service, have an ethical impact on society, or if there is a concern about intellectual property disputes; then determine whether the requirement has been fulfilled.
• Organizations involved with AI need to establish a governance system that ensures the trustworthiness of their AI systems. Its purpose is to prevent ethical, intellectual property, security, and privacy issues arising during the training or inference of the AI system. Policies addressing intellectual property disputes are particularly important because social problems caused by unethical public services have a wide ripple effect, and because public-institution services are mostly carried out by private companies. Hence, there must be internal guidelines and policies on AI governance that keep the organization alert to these risk factors.
• NIST’s AI Risk Management Framework states that transparency and effective implementation of internal policies, processes, procedures, and practices must be assured across the life cycle of the AI system. Accordingly, you must fully understand and document the legal and regulatory requirements that apply to AI, and manage risk-management procedures and their outputs transparently and systematically.
• You can establish internal policies in two ways, depending on how they will be used:
✔ First, establish internal guidelines and policies by adopting and organizing laws, regulations, policies, standards, and guidelines related to AI.
✔ Second, clarify and document the roles and responsibilities of the organization according to the life cycle of the AI system.
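As a minimal illustration of the second approach, the roles and responsibilities per life-cycle stage could be kept as a simple machine-readable record that internal audits can check. This is only a sketch: the stage names loosely echo common AI life-cycle phases, and the role titles are hypothetical examples, not prescribed by NIST or any other framework.

```python
# Hypothetical sketch: document accountable/responsible roles per AI life-cycle stage.
# Stage and role names are illustrative assumptions, not a mandated structure.

LIFECYCLE_ROLES = {
    "plan_and_design":      {"accountable": "AI Governance Board",     "responsible": "Product Owner"},
    "data_collection":      {"accountable": "Data Protection Officer", "responsible": "Data Engineering"},
    "model_training":       {"accountable": "Head of ML",              "responsible": "ML Engineering"},
    "verification":         {"accountable": "QA Lead",                 "responsible": "Model Validation Team"},
    "deployment":           {"accountable": "CTO",                     "responsible": "MLOps"},
    "operation_monitoring": {"accountable": "Service Owner",           "responsible": "MLOps"},
}

def accountable_for(stage: str) -> str:
    """Return the documented accountable role for a life-cycle stage."""
    if stage not in LIFECYCLE_ROLES:
        raise ValueError(f"No role documented for stage: {stage!r}")
    return LIFECYCLE_ROLES[stage]["accountable"]
```

Keeping this record under version control alongside the internal guidelines makes it easy to show, at each stage, who is answerable for the corresponding risk factors.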