On 16 January 2024, Singapore released a consultation document[1] to elicit public and international feedback on its proposed Model AI Governance Framework for Generative AI.
The framework addresses nine “dimensions” relevant to generative AI:
- Accountability
Responsibilities must be allocated across every layer of the AI development stack, including responsibilities owed to end users.
- Data
Data is a core element of AI model development, so data quality and issues such as piracy and privacy are relevant and important.
- Trusted development and deployment
From model development to application deployment, standards must be in place for safe and reliable development and evaluation, together with “food label”-style transparency and disclosure.
- Incident reporting
Establishing practices for notifying regulators of incidents will facilitate their timely remediation.
- Testing and assurance
Third-party testing and assurance can help develop common, consistent standards for AI and ultimately build trust with end users.
- Security
Addressing the new threat vectors posed by generative AI requires adapting existing information-security frameworks and developing new testing tools.
- Content provenance
To counter misinformation and fraud, transparency is needed about where and how content is generated. Technological solutions such as digital watermarking and cryptographic provenance should be considered in appropriate situations.
- Safety and alignment research and development (R&D)
Improving the alignment of models with human intentions and values requires accelerated R&D investment. Singapore hopes to pursue this through global collaboration among AI safety research institutions.
- AI for public good
Democratising access to AI, improving public-sector adoption, upskilling workers, and developing AI systems sustainably will help ensure that AI delivers outcomes that benefit the public.
A previous version of the Model AI Governance Framework was released by Singapore in 2019 and updated in 2020.[2] That framework addressed certain risks associated with AI, such as bias, misuse, and lack of explainability. Generative AI, however, raises additional risks, including hallucinations, copyright infringement, and value misalignment, and the recent surge of interest in generative AI justifies examining these in more detail. These concerns were raised in the Discussion Paper on Generative AI: Implications for Trust and Governance,[3] published in June 2023.
The consultation will end on 15 March 2024.
Disclaimer: While every effort has been made to ensure the accuracy of the information contained in this article, neither its author nor Squire Patton Boggs accepts liability for any errors or omissions. The content of this article is for general information only and is not intended to constitute or be relied upon as legal advice.
[1] Proposed Model AI Governance Framework for Generative AI – Fostering a Trusted Ecosystem, AI Verify Foundation
[2] Model Artificial Intelligence Governance Framework, Second Edition, Personal Data Protection Commission, Singapore
[3] Discussion Paper on Generative AI: Implications for Trust and Governance, Infocomm Media Development Authority