Singapore has released a draft governance framework for generative artificial intelligence (GenAI), saying it will need to address new issues such as incident reporting and content provenance.
The proposed model builds on the country’s existing AI governance framework, which was first released in 2019 and last updated in 2020.
Related article: How generative AI can greatly benefit the service industry
In a joint statement, the AI Verify Foundation and Infocomm Media Development Authority (IMDA) said GenAI has great potential to be transformative “beyond” what traditional AI can achieve, but it also comes with risks.
Singapore government agencies say there is a growing global consensus that consistent principles are needed to create an environment where GenAI can be used safely and confidently.
“The use and impact of AI is not limited to individual countries,” they said. “This proposed framework aims to foster international conversations between policymakers, industry and the research community and enable globally credible development.”
The draft includes suggestions from a discussion paper published by IMDA last June, identifying six risks associated with GenAI, including hallucinations, piracy, and embedded bias, and how to address these. A framework has been identified.
The proposed GenAI governance framework also draws insights from previous efforts, including a catalog on how to assess the safety of GenAI models and tests conducted through an evaluation sandbox.
The draft GenAI governance model includes nine key areas that Singapore believes will play a key role in supporting a trusted AI ecosystem. These revolve around the principles that AI-powered decisions should be explainable, transparent, and fair. According to IMDA and AI Verify, the framework also provides practical suggestions that AI model developers and policy makers can apply as a first step.
Also: We’re not ready for how generative AI will impact elections
One of the nine components examines the origin of content. We need transparency about where and how content is produced so that consumers can decide what to do with their online content. AI-generated content such as deepfakes can exacerbate misinformation because it is so easy to create, Singapore officials said.
He noted that other governments are considering technological solutions such as digital watermarks and cryptographic provenance to address this problem, which are meant to label and provide additional information, and that AI said it is used to flag content that has been created or modified.
According to the draft framework, policies need to be “carefully designed” to ensure that these tools can be used in practice in appropriate situations. For example, in the near future it may not be possible to include these technologies in all content created or edited, and origin information may also be removed. Threat actors can find other ways to circumvent the tool.
The draft framework proposes working with publishers, including social media platforms and news organizations, to support the embedding and display of digital watermarks and other provenance details. These must also be implemented properly and safely to reduce the risk of evasion.
Also: This is why AI-powered misinformation is the biggest global risk
Another important component focuses on security, where GenAI introduces new risks, such as instant attacks that are transmitted through model architectures. This allows threat actors to leak sensitive data and model weights, according to the draft framework.
We recommend that the concept of security by design applied to the system development lifecycle needs refinement. For example, you should consider what challenges the ability to insert natural language as input may pose when implementing appropriate security controls.
The probabilistic nature of GenAI may also pose new challenges to traditional evaluation methods used for system improvement and risk mitigation during the development lifecycle.
This framework requires the development of new security safeguards. This may include input moderation tools to detect unsafe prompts and digital forensics tools for GenAI used to investigate and analyze digital data to reconstruct cybersecurity incidents. there is.
Related article: Singapore focuses on data centers and data models as AI adoption advances
Singapore’s government agency said the draft government framework “needs to strike a careful balance between protecting users and promoting innovation”. “Various international discussions have taken place on relevant and pertinent topics such as accountability, copyright, and misinformation. No single intervention will be a silver bullet.”
Building international consensus is also important as AI governance is still in its infancy, they said, pointing to Singapore’s efforts to work with governments such as the US to align their respective AI governance frameworks. did.
Singapore is accepting feedback on the draft GenAI Governance Framework until March 15.