Overview

These are guidelines for the use of publicly available generative AI. Any proposed use of publicly available generative AI must be approved by your agency CIO. Please reach out to your privacy officer/privacy point of contact or security liaison/CISO about proposed uses to assess risk and provide guidance for use.

What is Publicly Available Generative AI?

Publicly available generative AI is generative AI that is offered directly to the public. Because it is not procured through the state, it is not required to comply with state privacy and security requirements. 

This does not include state-licensed instances that have gone through the procurement and risk assessment process and maintain privacy, security, and data protection standards that align with state law and policy. 

Publicly available generative AI is the focus of this guidance due to its unique risks to third-party or internal agency information, which when entered into publicly available AI such as ChatGPT “will become part of the chatbot’s data model and can be shared with others who ask relevant questions, resulting in data leakage."* 

*Arvind Raman, ChatGPT at Work: What’s the Cyber Risk for Employers?

Guidance

Ethical and responsible use of publicly available generative AI requires alignment with the state’s policies, mission and goals. The following guidelines are human-centered with a focus on uses of AI that benefit North Carolinians and the public good. This requires assessment of publicly available generative AI tools and use cases to ensure that any tool used by the state is trustworthy AI. 

Users must:

  • Never enter personally identifiable or confidential information into publicly available generative AI tools. 
  • Seek supervisory and security review and written approval before entering any code into or using code generated by a publicly available generative AI tool.  
  • Review, revise, test and independently fact check any output produced by publicly available generative AI to ensure it meets the agency’s standards for quality work product. AI tools are not always accurate. 
  • Be transparent and identify when content was drafted using publicly available generative AI.
  • Always check privacy and security settings of the publicly available generative AI tool prior to use.
  • For high-risk use-cases, disable chat history and opt out of providing conversation history as data for training publicly available generative AI models prior to use.
  • Understand the risks of any use of publicly available generative AI and mitigate risks whenever possible.

Users should work with their agencies to:

  • Utilize publicly available generative AI tools that are deemed trustworthy by the Enterprise Security Risk Management Office.
  • Follow a process established by your agency to document use of publicly available generative AI.  

  • Agencies must conduct security assessments of all publicly available generative AI tools prior to use to ensure system safety and reliability and to understand how data is used, stored and destroyed. Use the NIST AI Risk Management Framework (AI RMF) to assess and manage risks to individuals, organizations and society associated with AI.
  • Agencies must re-assess the tool, or the use of AI within a tool or function, regularly: at least annually, or upon any major release or function change of the AI-enabled product. This must include updates to any risk assessment documentation provided to the ESRMO.
  • Agencies must evaluate and approve the legal risks associated with the legal terms and conditions governing the license and/or use of publicly available generative AI tools. These terms and conditions may be enforceable against the employee or the agency. 
  • Use of publicly available generative AI should be documented at the agency or entity level for accountability and to ensure the use of publicly available generative AI is for the public good and does not involve the use of PII or other sensitive data.
  • Successful use cases of publicly available generative AI should be documented and shared within your agency as well as with other state agencies that could benefit from its use. 

  • Each agency is responsible for determining whether use of publicly available generative AI by its employees is acceptable and for setting appropriate employment policies governing the use of publicly available generative AI by its employees.  
  • All uses of publicly available generative AI for state purposes must be conducted using accounts created specifically for state use (using state email addresses).
  • The sharing of login credentials is prohibited. 
  • Entry of any sensitive information (e.g., personally identifiable information (PII), financial records, trade secrets) into publicly available generative AI prompts is prohibited. 
  • Any content produced using publicly available generative AI should meet the agency’s standards for quality work product. 
  • Agencies must develop processes for reviewing all outputs from publicly available generative AI to assess the risk of any violation of applicable state or federal law, including but not limited to copyright or other intellectual property infringement.
  • Content produced by publicly available generative AI should be carefully examined for mistakes, including assessing the trustworthiness of links and references to events or facts. Information should not be relied on without proper, independent verification, and publicly available generative AI should not be relied upon to deliver precise responses. Care must be taken to identify bias in content produced by publicly available generative AI, both in the viewpoints expressed and in the data presented, and to ensure that content complies with North Carolina law and does not have a detrimental effect on vulnerable populations.
  • Disclose the use of publicly available generative AI in creating guidance, policies, and documents released to the public. Include the name of the publicly available generative AI system used, model type and version employed as well as confirmation that it was independently fact-checked.
    • Citations for images and video must be embedded into every frame of the image or video. Publicly available generative AI can be cited as a footnote, endnote, header or footer.  
    • Citations for generated text content must include the following: Name of Publicly Available Generative AI System used and confirmation that the information was independently fact-checked. Example: “This document was drafted with support from ChatGPT. The content was edited and independently fact-checked by agency staff. Sources for facts and figures are provided as they appear.” 
  • Complete sources of information should be reviewed whenever possible. Reliance on summaries produced by publicly available generative AI may overlook or mischaracterize information from the original text.  

  • Users must seek the approval of their agency public information officer/communications director and chief information security officer before using or publishing visual, audio or video content produced by publicly available generative AI.  
  • Users should vet content produced by publicly available generative AI to identify potentially biased, offensive, or incorrect information prior to publication. 
  • Information produced for publication by publicly available generative AI must be examined to ensure compliance with all state and federal laws (e.g., privacy, data protection, copyright, intellectual property) and data protection standards prior to publication.

  • Users should notify their supervisors and agency chief information security officer with concerns relating to AI and the use of publicly available generative AI tools. 
  • Users must follow existing security policy and incident reporting protocols for suspected security breaches or data protection violations. 

While AI offers tremendous potential benefits to state government and society at large, all uses of generative AI come with some risk. Users must follow state IT, privacy and security policies and guidelines when evaluating the risks of using publicly available generative AI on state equipment and/or for state business. 

As with all content produced by state government, content produced for publication using publicly available generative AI requires thorough review before use. Special care should be taken when the output has the potential to impact North Carolinians’ exercise of rights, opportunities, or access to critical resources or services administered by or accessed through the state. This protection applies regardless of the changing role of automated systems in state government. 

When using publicly available generative AI, keep in mind that: 

  • Publicly available generative AI should be evaluated for accuracy. 
  • Content produced by publicly available generative AI tools may be inaccurate or unverifiable. Examples include AI-generated hallucinations (made-up content), incorrect context (data pulled from the internet may not be representative of state government, e.g., policies from private industry vs. the public sector, or federal vs. state) and citations of non-existent sources.
  • Publicly available generative AI models and algorithms are often proprietary, and end users may not have insight into how they were created or how they function.  
  • Users may not be able to determine how the model was trained and evaluated for bias. 
  • Results may be based on datasets that contain errors and may be historically biased across race, sex, gender identity, ability and many other factors. 
  • Publicly available generative AI tools may not comply with state and federal laws and requirements designed to ensure the confidentiality of sensitive information and will require a security risk assessment by the agency before information is uploaded. 
  • Use of publicly available generative AI may require employees to accept legal terms and conditions governing the license and/or use of the tool, which may be enforceable against the employee or the agency. 
  • Publicly available generative AI may create content that infringes on others’ intellectual property (e.g., patent, copyright, trademark).   
  • Entering information into a publicly available generative AI tool is equivalent to releasing it publicly. Releasing information that does not have a public information classification may violate privacy or data protection requirements and laws.  
  • Using generative AI to generate software code could expose existing vulnerabilities and create new ones if systems are not kept current with patches and software updates. 
  • AI systems and related services rely on computing technology and networks that must also be secured against unauthorized access and manipulation in order to ensure the integrity of the systems and data being used to enable AI tools. 
  • Users are strictly prohibited from uploading or sharing any personal, proprietary or restricted data into publicly available generative AI (e.g., proprietary code, personal information, security sensitive information).  

This guidance extends to all PII associated with the public, employees or partners, and includes educational, financial and health records, trade secrets and any other sensitive information entrusted to the state.