State government aims to use artificial intelligence (AI) to improve the lives of North Carolinians and serve the public good. Agencies should safely explore AI whenever it can help them achieve their mission, enhance services and boost efficiency and effectiveness.

The primary goal of using technology, including AI, is always to benefit the people of North Carolina.

To ensure AI is used responsibly, we have established seven guiding principles. These principles provide a framework for ethical behavior, helping us harness AI's benefits while minimizing potential harm.

Agencies must regularly test their AI applications against these principles. If an AI application does not perform as intended or violates these principles, mechanisms must be in place to modify, replace or deactivate it.

The principles and associated practices are as follows:

Human-Centered
Human oversight is required for all development, deployment and use of AI. The state should use AI to benefit North Carolinians and the public good. Human oversight should ensure that the use of AI does not negatively impact North Carolinians’ exercise of rights, opportunities or access to critical resources or services administered by or accessed through the state.

Transparency and Explainability
When the state uses AI, the user agency shall provide notice to those who may be impacted by its use. This notice should identify the use of an automated system, explain why it is used and describe how this use contributes to outcomes that affect individuals. The notice should be accessible and written in plain language, and should include clear descriptions of the data involved, the role automation plays in decision-making and the ability to trace the cause of possible errors.

Security and Resiliency
Systems utilizing AI must undergo pre-deployment testing, risk identification and mitigation, and ongoing monitoring to demonstrate that they are safe and effective, in keeping with the security review standards for all technology implemented within state government. Systems must also be assessed for resilience to attack, adherence to security standards, and alignment with general standards of safety, accuracy, reliability and reproducibility.

Data Privacy and Governance
Any use of AI by the state must maintain the state’s respect for individuals’ privacy and its adoption of the Fair Information Practice Principles throughout the AI lifecycle (development, testing, deployment and decommissioning). Privacy must be embedded into the design and architecture of IT and business practices: preservation of privacy should be the default, and access to data should be appropriately controlled. Individuals developing or deploying AI systems should be conscious of the quality and integrity of the data those systems use.

Diversity, Non-Discrimination and Fairness
AI should be developed in consultation with diverse communities, stakeholders and domain experts to identify the concerns, risks, biases and potential impacts of the system. AI must be developed to be equitable and to control for biases that could lead to discriminatory results. AI systems should be user-centric and accessible to all people.

Auditing and Accountability
Users of AI must be accountable for implementing and enforcing appropriate safeguards for the proper use and functioning of their AI applications, and shall monitor, audit and document compliance with those safeguards. Agencies shall provide appropriate training to all agency personnel responsible for the design, development, acquisition and use of AI.

Workforce Empowerment
Staff are empowered in their roles through training, guidance, collaboration and opportunities that promote innovation aligned with state or agency missions and goals. This can help state government make the best use of AI tools to reduce administrative burdens on staff where feasible and improve overall public service.