AI Resources
AI Framework for Responsible Use
The N.C. Department of Information Technology has developed the North Carolina State Government Responsible Use of Artificial Intelligence Framework, a living document for state agencies, designed to provide a comprehensive risk management approach to the use of AI.
Aligned with existing privacy laws and IT policies, the AI Framework applies to both existing and new uses of all AI. It consists of principles, practices and guidance to provide a measured and consistent approach for state agencies to support innovation while reducing privacy and data protection risks.
- View the AI Framework
- View Principles for Responsible Use of AI
- View AI Terms & Definitions
AI Assessment
The Privacy Threshold Analysis (PTA) is used to assess privacy and security risks for projects or systems that use AI or generative AI. It requires a description of the project or system that identifies the proposed use of AI and explains how that use aligns with the state's principles of responsible use. Privacy risk assessment is based on the Fair Information Practice Principles (FIPPs), which underpin NCDIT’s Principles for Responsible Use of AI, a foundational component of the North Carolina State Government Responsible Use of Artificial Intelligence Framework.
State agencies seeking additional information about the PTA or AI privacy risk assessment can send an email to the Office of Privacy and Data Protection. Those seeking support to mature generative AI use cases can also send an email to the AI Working Group.
Use of Publicly Available Generative AI
Ethical and responsible use of publicly available generative AI requires alignment with the state’s policies, mission and goals. These guidelines are human-centered, with a focus on uses of AI that benefit North Carolinians and the public good. Publicly available generative AI tools and use cases must therefore be assessed to ensure that any tool the state uses is trustworthy AI.
Gen AI Risk Management
- NIST AI Risk Management Framework
- NIST AI 600-1 Artificial Intelligence Risk Management Framework: Generative Artificial Intelligence Profile
- NIST AI 100-4 Reducing Risks Posed by Synthetic Content (NIST Trustworthy and Responsible AI)
Identifying AI Risks & Facilitating Responsible Deployment of Gen AI
- GAO-25-107651, Artificial Intelligence: Generative AI Training, Development, and Deployment Considerations (October 2024): Contains information on common practices commercial developers use to facilitate responsible development and deployment of generative AI technologies.
- AI Risk Atlas: A risk atlas from IBM to understand some of the risks of working with generative AI, foundation models, and machine learning models.
Identifying & Mitigating Bias in AI
- NIST SP 1270, Towards a Standard for Identifying and Managing Bias in Artificial Intelligence: Identifies three categories of bias in AI – systemic, statistical and human – and describes how and where they contribute to harms. It also describes three broad challenges for mitigating bias – datasets, testing and evaluation, and human factors – and introduces preliminary guidance for addressing them.
AI Training
The N.C. Department of Information Technology has compiled various training resources from around the internet for those seeking to understand more about AI. Access the training.
Stanford University Machine Learning Lectures
- Probability for Computer Scientists
- Machine Learning Full Course
- Natural Language Processing with Deep Learning
- Machine Learning Explainability
- Reinforcement Learning
- Deep Generative Models
- Building Large Language Models (LLMs)
- Machine Learning with Graphs
- Transformers United
Infographic: AI Governance in Practice Report 2024
- AI Governance in Practice Report 2024 (IAPP and FTI Consulting)
AI Ethics & Strategy
- AI Ethics Guidelines Global Inventory: This inventory, created by Algorithm Watch, a non-governmental, non-profit organization based in Berlin and Zurich, can be searched by sector/actor, type (binding agreement, voluntary commitment, recommendation), region and location. It collects the ethical principles, recommendations, and decisions of private companies, nonprofits, governments and more.
- AI Safety Institute's Strategic Vision (National Institute of Standards and Technology)
- Ethics of Artificial Intelligence (UNESCO)
- Blueprint for an AI Bill of Rights (White House Office of Science and Technology Policy)
AI News
- FTC Announces Crackdown on Deceptive AI Claims and Schemes (Sept. 25, 2024)
- Operation AI Comply: Detecting AI-Infused Frauds and Deceptions (Sept. 25, 2024)
Foundation Models
Acceptable use and terms of service policies for foundation models.
Other Resources
- Government Use of AI - Federal site for use of AI in government, including links to AI policy, use case inventories and more
- Copyright and Artificial Intelligence - Information from the U.S. Copyright Office about how the U.S. is handling copyright and AI.
- U.S. Artificial Intelligence Safety Institute
- AI Alliance