
New State Policy Sets Clear, Secure Standards for Employee Use of Generative AI

North Carolina has introduced a statewide policy that guides how executive branch employees may use publicly available generative AI tools such as Microsoft Copilot, ChatGPT, Google Gemini, and Anthropic Claude.

The Use of Publicly Available Generative AI Policy provides a safe, consistent framework that helps employees use AI responsibly while protecting sensitive information and maintaining public trust.

“AI moves at the speed of trust,” said I-Sah Hsieh, NCDIT deputy secretary for AI and policy. “The policy is designed to build exactly that – establishing consistent expectations so agencies and employees can innovate with confidence, not uncertainty.”

How Employees May Use AI

Employees already encounter AI tools in their work, and the policy gives them clear, practical guidance.

Employees can use AI to draft documents, summarize reports, brainstorm ideas, simplify complex language, write or troubleshoot code, and streamline repetitive tasks. 

In all these uses, AI should support employees’ judgment and expertise, not replace it.

Setting AI Guardrails

Employees must also follow basic security requirements:

  • Use AI tools only in a browser or cloud environment on state devices
  • Register with a state email address
  • Avoid downloading AI applications without ESRMO approval
  • Keep login credentials private
  • Avoid any tool listed as high‑risk under the statewide High‑Risk Applications Policy

These guardrails strengthen the state's overall security posture while giving employees the confidence to use generative AI safely, work more efficiently, and deliver better service.

Explore Generative AI Policy & Resources
