Gen AI Risk Management
- NIST AI Risk Management Framework
- NIST AI 600-1, Artificial Intelligence Risk Management Framework: Generative Artificial Intelligence Profile
- NIST AI 100-4, Reducing Risks Posed by Synthetic Content (NIST Trustworthy and Responsible AI)
- DHS Roles and Responsibilities Framework for Artificial Intelligence in Critical Infrastructure
Identifying AI Risks & Facilitating Responsible Deployment of Gen AI
- GAO-25-107651, Artificial Intelligence: Generative AI Training, Development, and Deployment Considerations (October 2024): Describes common practices commercial developers use to facilitate responsible development and deployment of generative AI technologies.
- AI Risk Atlas: IBM's atlas of the risks of working with generative AI, foundation models, and machine learning models.
- Atlas of AI Risks
Identifying & Mitigating Bias in AI
- NIST SP 1270, Towards a Standard for Identifying and Managing Bias in Artificial Intelligence: Identifies three categories of bias in AI (systemic, statistical, and human) and describes how and where each contributes to harms; outlines three broad challenges for mitigating bias (datasets, testing and evaluation, and human factors) and introduces preliminary guidance for addressing them.
Infographic: AI Governance in Practice Report 2024
- IAPP and FTI Consulting, AI Governance in Practice Report 2024
AI Ethics & Strategy
- AI Ethics Guidelines Global Inventory: Created by AlgorithmWatch, a non-governmental, non-profit organization based in Berlin and Zurich, this inventory can be searched by sector/actor, type (binding agreement, voluntary commitment, recommendation), region, and location. It catalogs the ethical principles, recommendations, and decisions of private companies, nonprofits, governments, and more.
- AI Safety Institute's Strategic Vision (National Institute of Standards and Technology)
- Ethics of Artificial Intelligence (UNESCO)
- Blueprint for an AI Bill of Rights (White House Office of Science and Technology Policy)
Foundation Models
Foundation model acceptable use/terms of service policies:
- Adept AI [Terms of Use]
- Adobe [Gen AI User Guidelines]
- Adobe [Gen AI Product-Specific Concerns]
- AI21 Labs [Responsible Use]
- Aleph Alpha [Terms & Conditions]
- Amazon [AWS Responsible AI Policy]
- Anthropic [Acceptable Use Policy]
- Character AI [Terms of Service]
- Cohere [Usage Guidelines]
- DeepSeek [User Agreement]
- ElevenLabs [Terms of Service]
- Google [Generative AI Prohibited Use Policy]
- Inflection AI - Pi [Acceptable Use Policy]
- Meta - Llama 2 [Acceptable Use Policy]
- Midjourney [Terms of Service]
- Mistral AI [Terms of Use]
- OpenAI [Usage Policies]
- Perplexity AI [Terms of Service]
- Reka AI [Terms of Use]
- Runway [Terms of Use]
- Stability AI [Acceptable Use Policy]
- Together AI [Terms of Service]
- Twelve Labs [Terms of Use]
- WRITER [Terms of Use]
Other Resources
- Government Use of AI - Federal site on the use of AI in government, with links to AI policy, use case inventories, and more
- Copyright and Artificial Intelligence - Information from the U.S. Copyright Office on how copyright law applies to AI.
- U.S. Artificial Intelligence Safety Institute
- AI Alliance