AI Resources
AI Framework for Responsible Use
The N.C. Department of Information Technology has developed the North Carolina State Government Responsible Use of Artificial Intelligence Framework, a living document for state agencies, designed to provide a comprehensive risk management approach to the use of AI.
Aligned with existing privacy laws and IT policies, the AI Framework applies to all existing and new uses of AI. It consists of principles, practices and guidance that give state agencies a measured, consistent approach to supporting innovation while reducing privacy and data protection risks.
- View the AI Framework
- View Principles for Responsible Use of AI
- View AI Terms & Definitions
AI Assessment
The Privacy Threshold Analysis (PTA) is used to assess privacy and security risks. It addresses the use of AI, requiring a description of the project or system that identifies the proposed use of AI and how it aligns with the state's principles of responsible use. The PTA analyzes privacy risks for projects or systems that use AI or generative AI. Privacy risk assessment is based on the Fair Information Practice Principles (FIPPs), which underpin NCDIT's Principles for Responsible Use of AI, a foundational component of the North Carolina State Government Responsible Use of Artificial Intelligence Framework.
State agencies seeking additional information about the PTA or privacy AI risk assessment can send an email to the Office of Privacy and Data Protection. Those seeking support to mature generative AI use cases can also send an email to the AI Working Group.
Use of Publicly Available Generative AI
Ethical and responsible use of publicly available generative AI requires alignment with the state’s policies, mission and goals. These guidelines are human-centered with a focus on uses of AI that benefit North Carolinians and the public good. This requires assessment of publicly available generative AI tools and use cases to ensure that any tool used by the state is trustworthy AI.
Infographic: AI Governance in Practice Report 2024
- IAPP and FTI Consulting AI Governance in Practice Report 2024
Identifying AI Risks & Facilitating Responsible Deployment of Gen AI
- GAO-25-107651, Artificial Intelligence: Generative AI Training, Development, and Deployment Considerations (October 2024): Contains information on common practices commercial developers use to facilitate responsible development and deployment of generative AI technologies.
Identifying & Mitigating Bias in AI
- NIST SP 1270, Towards a Standard for Identifying and Managing Bias in Artificial Intelligence: Identifies three categories of bias in AI — systemic, statistical and human — and describes how and where they contribute to harms; describes three broad challenges for mitigating bias — datasets, testing and evaluation, and human factors — and introduces preliminary guidance for addressing them.
Other Resources
- Government Use of AI - Federal site for use of AI in government, including links to AI policy, use case inventories and more
- Copyright and Artificial Intelligence - Information from the U.S. Copyright Office about how the U.S. is handling copyright and AI.
- U.S. Artificial Intelligence Safety Institute
- AI Alliance
AI Ethics & Strategy
- AI Ethics Guidelines Global Inventory: This inventory, created by Algorithm Watch, a non-governmental, non-profit organization based in Berlin and Zurich, can be searched by sector/actor, type (binding agreement, voluntary commitment, recommendation), region and location. You can find the ethical principles, recommendations, and decisions of private companies, nonprofits, governments and more.
- AI Safety Institute's Strategic Vision (National Institute of Standards and Technology)
- Ethics of Artificial Intelligence (UNESCO)
- Blueprint for an AI Bill of Rights (White House Office of Science and Technology Policy)
AI News
- FTC Announces Crackdown on Deceptive AI Claims and Schemes (Sept. 25, 2024)
- Operation AI Comply: Detecting AI-Infused Frauds and Deceptions (Sept. 25, 2024)
Foundational Models
Foundation model acceptable use/Terms of Service policies:
- Adept AI [Terms of Use]
- Adobe [Gen AI User Guidelines]
- Adobe [Gen AI Product-Specific Concerns]
- AI21 Labs [Responsible Use]
- Aleph Alpha [Terms & Conditions]
- Amazon [AWS Responsible AI Policy]
- Anthropic [Acceptable Use Policy]
- Character AI [Terms of Service]
- Cohere [Usage Guidelines]
- Deepseek [User Agreement]
- ElevenLabs [Terms of Service]
- Google [Generative AI Prohibited Use Policy]
- Inflection AI - Pi [Acceptable Use Policy]
- Meta - Llama 2 [Acceptable Use Policy]
- Midjourney [Terms of Service]
- Mistral AI [Terms of Use]
- OpenAI [Usage Policies]
- Perplexity AI [Terms of Service]
- Reka AI [Terms of Use]
- Runway [Terms of Use]
- Stability AI [Acceptable Use Policy]
- Together AI [Terms of Service]
- Twelve Labs [Terms of Use]
- WRITER [Terms of Use]
AI Training
The N.C. Department of Information Technology has compiled various training resources from around the internet for those seeking to understand more about AI. Access the training.