State government should use artificial intelligence (AI) to benefit North Carolinians and the public good. Agencies should consider AI where it can help further the agency’s mission, enhance service delivery and improve efficiency and effectiveness. The overarching goal for state government in exploring and using technology, including technology that incorporates AI, should always be to benefit the people of North Carolina.
These seven principles and their associated practices form an ethical blueprint to guide the state in using AI responsibly, harnessing its benefits to serve the public while minimizing potential harm.
Agencies need to ensure that their AI applications are regularly tested against these principles. Mechanisms should be maintained to modify, supersede, disengage, or deactivate existing applications of AI whose performance or outcomes are inconsistent with their intended use or these principles.
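As one illustration of such a mechanism, the following is a minimal sketch, assuming an agency tracks a periodic quality metric for each deployed application and disengages any application that falls below a floor set at approval time. All names and thresholds here (AIApplication, accuracy_floor, the two-review rule) are hypothetical assumptions for illustration, not requirements of this policy.

```python
# Hypothetical sketch: deactivate an AI application whose monitored
# outcomes fall below the floor agreed to at approval time.
from dataclasses import dataclass, field


@dataclass
class AIApplication:
    name: str
    accuracy_floor: float          # minimum acceptable metric, set at approval time
    active: bool = True
    history: list = field(default_factory=list)

    def record_review(self, accuracy: float) -> None:
        """Record a periodic review result and disengage on sustained failure."""
        self.history.append(accuracy)
        # Require two consecutive below-floor reviews before disengaging,
        # so a single noisy measurement does not trigger deactivation.
        if len(self.history) >= 2 and all(
            a < self.accuracy_floor for a in self.history[-2:]
        ):
            self.active = False
            print(f"[ALERT] {self.name} deactivated pending human review.")


if __name__ == "__main__":
    app = AIApplication(name="benefits-triage-assistant", accuracy_floor=0.90)
    for result in (0.94, 0.88, 0.86):   # simulated quarterly reviews
        app.record_review(result)
    print(f"active={app.active}")
```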
The principles and associated practices are:
- Human-centered: Human oversight is required for all development, deployment and use of AI. The state should use AI to benefit North Carolinians and the public good. Human oversight should ensure that the use of AI does not negatively impact North Carolinians’ exercise of rights, opportunities or access to critical resources or services administered by or accessed through the state.
- Transparency and Explainability: When AI is used by the state, the agency using it shall provide notice to those who may be impacted by its use. This notice should identify the use of an automated system, explain why it is used and describe how this use contributes to outcomes that impact individuals. It should be accessible and written in plain language, and should include clear descriptions of the data used, the role automation plays in decision-making and how the cause of possible errors can be traced.
- Security and Resiliency: Systems utilizing AI must undergo pre-deployment testing, risk identification and mitigation, and ongoing monitoring that demonstrates the systems are safe and effective, in keeping with the security-review standards applied to all technology implemented within state government. Systems need to be assessed for resilience to attack, adherence to security standards, and alignment with general safety, accuracy, reliability and reproducibility (a minimal pre-deployment check is sketched after this list).
- Data Privacy and Governance: Any use of AI by the state must maintain the state’s respect for individuals’ privacy and its adoption of the Fair Information Practice Principles throughout the AI lifecycle (development, testing, deployment, decommissioning). This means that privacy is embedded into the design and architecture of IT and business practices. Preservation of privacy should be the default and access to data should be appropriately controlled. Individuals developing or deploying AI systems should be conscious of the quality and integrity of data used by those systems.
- Diversity, Non-discrimination, and Fairness: AI should be developed in consultation with diverse communities, stakeholders and domain experts to identify concerns, risks, biases, and potential impacts of the system. AI needs to be developed to be equitable, controlling for biases that could lead to discriminatory results (see the fairness-check sketch after this list). AI systems should be user-centric and accessible to all people.
- Auditing and Accountability: Users of AI must be accountable for implementing and enforcing appropriate safeguards for the proper use and functioning of their applications of AI, and shall monitor, audit and document compliance with those safeguards (see the audit-trail sketch after this list). Agencies shall provide appropriate training to all agency personnel responsible for the design, development, acquisition and use of AI.
- Workforce Empowerment: Staff are empowered in their roles through training, guidance, collaboration, and opportunities that promote innovation aligned with state or agency missions and goals. This can help state government make the best use of AI tools, reducing administrative burdens on staff where feasible and improving overall public service.
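To make the Security and Resiliency practice concrete, here is a minimal pre-deployment gate, sketched under the assumption that an agency assembles a labeled evaluation set and sets a minimum accuracy threshold before go-live. The function name, the 0.90 threshold and the toy classifier are illustrative assumptions, not prescribed by this policy.

```python
# Hypothetical pre-deployment gate: a model is cleared for deployment only
# if it is reproducible (same input, same output) and meets a minimum
# accuracy on a labeled evaluation set agreed to before go-live.
def predeployment_gate(predict, eval_inputs, eval_labels,
                       min_accuracy=0.90) -> bool:
    """Return True only if the model passes reproducibility and accuracy checks."""
    first = [predict(x) for x in eval_inputs]
    second = [predict(x) for x in eval_inputs]
    if first != second:                       # reproducibility check
        return False
    correct = sum(p == y for p, y in zip(first, eval_labels))
    return correct / len(eval_labels) >= min_accuracy


if __name__ == "__main__":
    # Toy rule-based classifier standing in for a real model.
    classify = lambda text: "eligible" if "veteran" in text else "standard"
    inputs = ["veteran, age 67", "age 30", "veteran, age 45", "age 52"]
    labels = ["eligible", "standard", "eligible", "standard"]
    print("deploy:", predeployment_gate(classify, inputs, labels))
```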
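For the fairness practice, one simple quantitative control is to compare selection (approval) rates across demographic groups. The sketch below assumes outcome records are tagged with a group label for testing purposes; the 0.05 disparity threshold is an illustrative assumption that an agency would set in consultation with stakeholders.

```python
# Hypothetical fairness check: flag an AI application for review when the
# approval-rate gap between any two groups exceeds an agreed threshold.
from collections import defaultdict


def selection_rates(records):
    """records: iterable of (group, approved: bool) -> approval rate per group."""
    totals, approvals = defaultdict(int), defaultdict(int)
    for group, approved in records:
        totals[group] += 1
        approvals[group] += int(approved)
    return {g: approvals[g] / totals[g] for g in totals}


def parity_gap(records) -> float:
    """Largest difference in approval rate between any two groups."""
    rates = selection_rates(records).values()
    return max(rates) - min(rates)


if __name__ == "__main__":
    outcomes = [("A", True), ("A", True), ("A", False),
                ("B", True), ("B", False), ("B", False)]
    gap = parity_gap(outcomes)
    print(f"parity gap = {gap:.2f}",
          "-> review required" if gap > 0.05 else "-> ok")
```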
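For auditing and accountability, one workable mechanism is an append-only, timestamped log of each AI-assisted action that auditors can later review. The record fields and file name below are assumptions for illustration, not a mandated schema.

```python
# Hypothetical audit trail: each AI-assisted action is appended as a
# timestamped JSON record, one per line, for later compliance review.
import json
import time


def log_ai_action(logfile, application, user, action, human_reviewed):
    record = {
        "timestamp": time.strftime("%Y-%m-%dT%H:%M:%SZ", time.gmtime()),
        "application": application,
        "user": user,
        "action": action,
        "human_reviewed": human_reviewed,    # records the human-oversight step
    }
    with open(logfile, "a", encoding="utf-8") as f:
        f.write(json.dumps(record) + "\n")


if __name__ == "__main__":
    log_ai_action("ai_audit.log", "benefits-triage-assistant",
                  "jdoe", "case_prioritized", human_reviewed=True)
```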