Overview
These are guidelines for the use of publicly available generative AI. Any proposed use of publicly available generative AI must be approved by your agency CIO. Reach out to your privacy officer/privacy point of contact or security liaison/CISO about proposed uses so they can assess risk and provide guidance on use.
What is Publicly Available Generative AI?
Publicly available generative AI is generative AI that any member of the public can access and use. It is not procured through the state and is not required to comply with state privacy and security requirements.
This does not include state-licensed instances that have gone through the procurement and risk assessment process and maintain privacy, security, and data protection standards that align with state law and policy.
Publicly available generative AI is the focus of this guidance due to its unique risks to third-party or internal agency information, which, when entered into publicly available AI such as ChatGPT, “will become part of the chatbot’s data model and can be shared with others who ask relevant questions, resulting in data leakage."*
*Arvind Raman, ChatGPT at Work: What’s the Cyber Risk for Employers?
Guidance
Ethical and responsible use of publicly available generative AI requires alignment with the state’s policies, mission and goals. The following guidelines are human-centered with a focus on uses of AI that benefit North Carolinians and the public good. This requires assessment of publicly available generative AI tools and use cases to ensure that any tool used by the state is trustworthy AI.
- Never enter personally identifiable or confidential information into publicly available generative AI tools.
- Seek supervisory and security review and written approval before entering any code into or using code generated by a publicly available generative AI tool.
- Review, revise, test and independently fact check any output produced by publicly available generative AI to ensure it meets the agency’s standards for quality work product. AI tools are not always accurate.
- Identify when content was substantially drafted using publicly available generative AI.
- Always check privacy and security settings of the publicly available generative AI tool prior to use.
- For high-risk use cases, disable chat history and opt out of providing conversation history as data for training publicly available generative AI models prior to use.
- Understand that using publicly available generative AI comes with risks, and mitigate those risks whenever possible.
The following are best practices for the use of publicly available generative AI:
- Utilize publicly available generative AI tools that are deemed trustworthy by, and meet the security requirements of, the Enterprise Security and Risk Management Office.
- Follow a process established by your agency to document use of publicly available generative AI.
- Complete state approved generative AI training.
- Regularly review the state’s publicly available generative AI guidance to keep abreast of changes.
Users should:
- Participate in provided training and awareness programs about the selection, implementation and evaluation of publicly available generative AI tools for a better understanding of the implications.
- Understand how the publicly available generative AI tool uses information as well as risks associated with the use of publicly available generative AI.
- Conduct security assessments of all publicly available generative AI tools prior to use to ensure system safety and reliability and to understand how data is used, stored and destroyed. Use the NIST AI Risk Management Framework (AI RMF) to assess and manage the risks to individuals, organizations and society associated with AI.
- Re-assess the tool, or the use of AI within a tool or function, regularly: at least annually and upon any major release or functional change of the AI-enabled product. Any risk assessment documentation provided to the Enterprise Security and Risk Management Office and the Office of Privacy and Data Protection must be updated accordingly.
- Evaluate and approve the legal risks associated with the legal terms and conditions governing the license and/or use of publicly available generative AI tools. These terms and conditions may be enforceable against the employee or the agency.
- Document use of publicly available generative AI at the agency or entity level for accountability and to ensure the use of publicly available generative AI is for the public good and does not involve the use of PII or other sensitive data.
- Document successful use cases of publicly available generative AI to share within your agency as well as with other state agencies that could benefit from its use.
- Understand that each agency is responsible for determining whether use of publicly available generative AI by its employees is acceptable and for setting appropriate employment policies governing such use.
- All uses of publicly available generative AI for state purposes must be conducted using accounts created specifically for state use (using state email addresses).
- The sharing of login credentials is prohibited.
- Entry of any sensitive information – e.g., personally identifiable information (PII), financial records, trade secrets – into publicly available generative AI prompts is prohibited.
- Any content produced using publicly available generative AI should meet the agency’s standards for quality work product.
- Agencies must develop processes for the review of all outputs from the use of publicly available generative AI to assess risk of any violation of applicable state or federal law, including but not limited to copyright or other intellectual property infringement, when using publicly available generative AI.
- Carefully examine all content produced by publicly available generative AI for mistakes, assessing the trustworthiness of links and references to events or facts. Information should not be relied on without proper, independent verification, and publicly available generative AI should not be relied upon to deliver precise responses. Care must be taken to identify bias in content produced by publicly available generative AI, both in the viewpoints expressed and in the data presented, and to ensure that content complies with North Carolina law and does not have a detrimental effect on vulnerable populations.
- Disclose when using publicly available generative AI to create substantive content. Citations must include the name of the publicly available generative AI system and the model type and version employed.
- Citations for guidance, policies and documents released to the public must appear in a footnote, endnote, header or footer.
- Citations for images must be clearly embedded in the images.
- Citations for audio and video content must be embedded in the audio or video or cited prominently together with the audio or video (e.g., image and citation presented together in a text box, audio preceded by or ending with a citation that it was generated using AI).
- Review complete sources of information whenever possible. Reliance on summaries produced by publicly available generative AI may overlook or mischaracterize information from the original text.
- Seek the approval of their agency public information officer/communications director and chief information security officer before using or publishing visual, audio or video content produced by publicly available generative AI.
- Vet content produced by publicly available generative AI to identify potentially biased, offensive or incorrect information prior to publication.
- Examine information produced for publication by publicly available generative AI to ensure compliance with all state and federal laws (e.g., privacy, data protection, copyright, intellectual property) and data protection standards prior to publication.
- Notify supervisors and agency chief information security officer with concerns relating to AI and the use of publicly available generative AI tools.
- Follow existing security policy and incident reporting protocols for suspected security breaches or data protection violations.
Information provided to a publicly available generative AI tool is considered “released to the public” and may be subject to public records requests under the Public Records Act (PRA).
Releasing information that does not have a public information classification may violate privacy or data protection requirements and laws.
While AI offers tremendous potential benefits to state government and society at large, all uses of generative AI come with some risk. Users must follow the AI Framework when evaluating the risks of using publicly available generative AI on state equipment and/or for state business.
As with all content produced by state government, content produced for publication using publicly available generative AI requires thorough review before use. Special care should be taken when the output has the potential to impact North Carolinians’ exercise of rights, opportunities or access to critical resources or services administered by or accessed through the state. This protection applies regardless of the changing role of automated systems in state government.
When using publicly available generative AI, keep in mind that:
- Publicly available generative AI should be evaluated for accuracy.
- Content produced by publicly available generative AI tools may be inaccurate or unverifiable. For example:
- AI-generated hallucinations (made-up content)
- Incorrect context (data pulled from the internet may not be representative of state government – e.g., policies from private industry vs. public, or federal vs. state)
- Citations to non-existent sources
- Publicly available generative AI models and algorithms are often proprietary, and end users may not have insight into how they were created or how they function.
- It may not be possible to determine how the model was trained and evaluated for bias.
- Results may be based on datasets that contain errors and may be historically biased across race, sex, gender identity, ability and many other factors.
- Publicly available generative AI tools may not comply with state and federal laws and requirements designed to ensure the confidentiality of sensitive information and will require a security risk assessment by the agency before information is uploaded.
- Use of publicly available generative AI may require employees to accept legal terms and conditions governing the license and/or use of the tool, which may be enforceable against the employee or the agency.
- Publicly available generative AI may create content that infringes on others’ intellectual property (e.g., patent, copyright, trademark).
- Entering information into a publicly available generative AI tool is equivalent to releasing it publicly.
- Using generative AI to generate software code could expose existing vulnerabilities and create new ones if systems are not kept current with patches and software updates.
- AI systems and related services rely on computing technology and networks that must also be secured against unauthorized access and manipulation in order to ensure the integrity of the systems and data being used to enable AI tools.
- Uploading or sharing any personal, proprietary or restricted data into publicly available generative AI (e.g., proprietary code, personal information, security sensitive information) is strictly prohibited.
This guidance extends to all PII associated with the public, employees or partners, and includes educational, financial and health records, trade secrets and any other sensitive information entrusted to the state.