AI Resources

AI Framework for Responsible Use 

The N.C. Department of Information Technology has developed the North Carolina State Government Responsible Use of Artificial Intelligence Framework, a living document that provides state agencies with a comprehensive risk management approach to the use of AI.

Aligned with existing privacy laws and IT policies, the AI Framework applies to both existing and new uses of all AI. It consists of principles, practices and guidance that give state agencies a measured, consistent approach to supporting innovation while reducing privacy and data protection risks.

AI Assessment

The Privacy Threshold Analysis (PTA) is used to assess the privacy and security risks of projects or systems that use AI or generative AI. It requires a description of the project or system that identifies the proposed use of AI and how that use aligns with the state's principles of responsible use. Privacy risk assessment is based on the Fair Information Practice Principles (FIPPs), which underpin NCDIT's Principles for Responsible Use of AI, a foundational component of the North Carolina State Government Responsible Use of Artificial Intelligence Framework.

State agencies seeking additional information about the PTA or AI privacy risk assessment can send an email to the Office of Privacy and Data Protection. Those seeking support in maturing generative AI use cases can send an email to the AI Working Group.

Use of Publicly Available Generative AI 

Ethical and responsible use of publicly available generative AI requires alignment with the state’s policies, mission and goals. These guidelines are human-centered with a focus on uses of AI that benefit North Carolinians and the public good. This requires assessment of publicly available generative AI tools and use cases to ensure that any tool used by the state is trustworthy AI. 

Infographic: AI Governance in Practice Report 2024

IAPP and FTI Consulting AI Governance in Practice Report 2024

AI Governance in Practice Report Infographic

Identifying AI Risks & Facilitating Responsible Deployment of Gen AI

Identifying & Mitigating Bias in AI

NIST SP 1270, Towards a Standard for Identifying and Managing Bias in Artificial Intelligence: Identifies three categories of bias in AI (systemic, statistical and human) and describes how and where they contribute to harms. It also describes three broad challenges for mitigating bias (datasets, testing and evaluation, and human factors) and introduces preliminary guidance for addressing them.

Other Resources

AI Ethics & Strategy

AI News

Foundation Models

Foundation model acceptable use/Terms of Service policies:

AI Training

The N.C. Department of Information Technology has compiled various training resources from around the internet for those seeking to understand more about AI. Access the training.