On 16 January 2024, Singapore's Infocomm Media Development Authority (IMDA), in collaboration with the AI Verify Foundation, announced a public consultation on the draft Model AI Governance Framework for Generative AI (draft GenAI Governance Framework), which identifies potential areas for future policy intervention relating to generative AI and options for such interventions.
The draft GenAI Governance Framework can be accessed here. Comments on the draft GenAI Governance Framework may be provided to IMDA at info@aiverify.sg.
Below is a brief overview of the key points of the draft GenAI Governance Framework.
Singapore’s efforts on AI governance
The Singapore government has been closely monitoring the AI landscape through the implementation of the following key initiatives:
- National AI Strategy: In 2019, Singapore announced its first National AI Strategy, detailing initiatives aimed at strengthening AI integration to boost the economy. To highlight the practical application of AI, Singapore has launched national projects in areas such as education, healthcare, and safety and security. Additionally, investments were made to strengthen the entire AI ecosystem. The National AI Strategy was last updated in 2023.
- Model AI Governance Framework: The Model AI Governance Framework, first introduced in 2019, provides private sector organizations with detailed, ready-to-implement guidance for addressing key ethical and governance issues when deploying AI solutions. The second edition of the Model AI Governance Framework was published in 2020.[1]
- AI Verify Foundation and AI Verify Testing Tool: In June 2023, IMDA released AI Verify, an open-source AI governance testing framework and software toolkit. IMDA also established the AI Verify Foundation to leverage the collective contributions of the open-source community to further develop AI Verify testing tools for the responsible use of AI.[2]
- Draft Advisory Guidelines on the Use of Personal Data in AI Recommendation and Decision-Making Systems: In July 2023, the Personal Data Protection Commission (PDPC) published draft advisory guidelines addressing how the Personal Data Protection Act 2012 applies to the collection and use of personal data in developing machine learning (ML) models or systems, and to the use of such ML systems to make decisions, recommendations, and predictions.[3]
- Discussion Paper on Generative AI: Implications for Trust and Governance: In June 2023, IMDA, in collaboration with Aicadium, published a discussion paper outlining Singapore's approach to the trustworthy and responsible implementation of generative AI. The paper discusses risk assessment methods and proposes six key dimensions for policymakers to strengthen AI governance, addressing immediate concerns while investing in longer-term outcomes.
- MAS’ FEAT Principles and Veritas Toolkit: In June 2023, the Monetary Authority of Singapore (MAS) introduced an open-source toolkit aimed at promoting the responsible use of AI within the financial industry. The toolkit, known as Veritas Toolkit version 2.0, enables financial institutions to conduct assessments based on the fairness, ethics, accountability, and transparency (FEAT) principles. These principles guide companies in the financial sector in the responsible use of AI and data analytics in their products and services.
Against this backdrop, the draft GenAI Governance Framework has emerged as the latest means to drive AI development in Singapore.
Overview of the draft GenAI Governance Framework
Aligned with Singapore’s National AI Strategy, the draft GenAI Governance Framework aims to propose a systematic and balanced approach to addressing generative AI concerns while continuing to foster innovation.
The draft GenAI Governance Framework emphasizes the importance of global collaboration in policy approaches, highlighting the need for policymakers to collaborate with industry, researchers, and like-minded jurisdictions.
To that end, the draft GenAI Governance Framework identifies nine dimensions for addressing generative AI concerns while continuing to foster innovation. They are summarized in the table below.
| S/N | Dimension | Key recommendations |
| --- | --- | --- |
| 1 | Accountability | The draft GenAI Governance Framework proposes allocating responsibility across the generative AI development chain according to each stakeholder’s level of control. It also proposes strengthening end-user protection by providing for compensation and updating the framework of legal remedies and safeguards, so that end users have an additional measure of protection against potential harm from AI-enabled products and services. |
| 2 | Data | The draft GenAI Governance Framework advises policymakers to clarify how existing personal data laws apply to generative AI and to encourage research into creating safer and more culturally representative models. Policymakers are also asked to facilitate open dialogue between copyright holders and generative AI companies to work towards balanced solutions to copyright issues arising from data used in AI training. |
| 3 | Trusted development and deployment | The draft GenAI Governance Framework proposes that the industry standardize several aspects of generative AI. First, it suggests adopting common best practices in the development of generative AI. Second, it recommends standardizing model disclosures, akin to “food labels”, to enable comparison between different AI models. Third, it proposes standardizing the evaluation of generative AI models and implementing a baseline set of required safety tests. |
| 4 | Incident reporting | AI developers should establish processes to monitor and report incidents arising from the use of their AI systems. At the same time, policymakers will need to determine the severity threshold at which an AI incident must be reported to the government. |
| 5 | Testing and assurance | Policymakers are encouraged to establish common standards for AI testing to ensure quality and consistency across the industry. |
| 6 | Security | The draft GenAI Governance Framework proposes developing new testing tools to mitigate security risks associated with generative AI. One example is the creation of digital forensic tools designed specifically for generative AI to identify and extract potentially malicious code hidden within models. |
| 7 | Content provenance | AI-generated content can amplify misinformation, so policymakers should collaborate with stakeholders across the AI content lifecycle on solutions such as watermarking and cryptographic provenance to reduce the risk of misinformation. |
| 8 | Safety and alignment research and development | Policymakers are urged to accelerate investment in research and development to ensure that AI models align with human intentions and values. Fostering global collaboration among AI safety research and development institutions is also essential to optimize limited resources and keep pace with commercial growth. |
| 9 | AI for public good | The draft GenAI Governance Framework encourages governments to democratize access to AI by educating the public on identifying deepfakes and using chatbots safely. It also emphasizes the role of government in driving innovation in industry, particularly among small and medium-sized enterprises, through measures such as regulatory sandboxes. The framework further recommends stepping up efforts to upskill the workforce and to promote the sustainable development of AI systems. |
Key takeaways
The draft GenAI Governance Framework reflects Singapore’s broader efforts to contribute to AI governance and provides useful insights into policymakers’ concerns regarding the development and deployment of generative AI systems.
While the draft GenAI Governance Framework helps organizations understand the key policy implications for generative AI, it reads more as a discussion paper: it does not prescribe specific guidance or practices for organizations to adopt or implement when deploying generative AI solutions. At this stage, this approach is not unexpected, as the technology is still evolving rapidly and policymakers around the world are still grappling with how to address the risks and concerns associated with generative AI.
We are closely monitoring this area to see how policymakers around the world respond to the upcoming EU AI legislation and whether they adopt a similar approach. We also expect the Singapore government to issue further documents and guidance in the near future.
We would like to thank Judeeta Sibs, Practice Trainee at Ascendant Legal LLC, for her assistance in preparing this update.
[1] Read the overview of the second edition of the Model AI governance framework here: https://www.dataprotectionreport.com/2020/02/singapore-updates-its-model-artificial-intelligence-governance-framework/
[2] Read the AI Verify Foundation overview here: https://www.dataprotectionreport.com/2023/06/singapore-contributes-to-the-development-of-accessible-ai-testing-and-accountability-methodology-with-ai-verify-foundation-and-ai-verify-testing-tool/
[3] Read the summary of the public consultation on this development: Singapore releases draft advisory guidelines on the use of personal data in AI recommendations and decision-making systems | Data Protection Report