As artificial intelligence continues to shape the ways we work, govern and serve the public, ensuring its responsible use is essential.
In August 2024, North Carolina took a significant step forward by publishing the Responsible Use of Artificial Intelligence Framework, which outlines seven guiding principles for ethical and effective AI deployment.
One of these principles is "Data Privacy and Governance."
This principle emphasizes that any use of AI by the state must respect individuals' privacy and adopt the Fair Information Practice Principles throughout the AI lifecycle, including development, testing, deployment and decommissioning. Privacy must be embedded into the design and architecture of IT systems and business practices so that preserving privacy is the default. Access to data must be appropriately controlled, and those developing or deploying AI systems must remain mindful of the quality and integrity of the data those systems use.
A key aspect of this principle is understanding how state data will be used by AI/GenAI tools.
Agencies must carefully evaluate how data will be collected, processed, stored and shared by these systems. This includes determining whether the AI system aligns with the intended purposes of the data, identifying potential risks of misuse and ensuring compliance with state laws as well as applicable privacy and security policies. By thoroughly analyzing data usage, agencies can mitigate risks, prevent unintended consequences and foster public trust.
As state agencies increasingly adopt AI technologies, whether through internal development or procurement from third-party vendors, safeguarding privacy remains a top priority.
Agencies must ensure that AI systems adhere to the Fair Information Practice Principles and prioritize privacy throughout the AI lifecycle, regardless of whether the systems are developed in-house or acquired from external vendors. This includes assessing vendors' data-handling practices, ensuring transparency in how AI processes personal information and embedding privacy protections from the outset.
To support state agencies in adhering to the Responsible Use of AI Framework's principles, the Office of Privacy and Data Protection (OPDP) has developed an internal AI/GenAI questionnaire. This tool is used during the Privacy Threshold Analysis (PTA) process to assess projects that incorporate AI or generative AI technologies. The questionnaire ensures that agencies identify and address potential privacy risks early in the project lifecycle, helping to align AI initiatives with both legal requirements and ethical standards.
Privacy in AI governance is not just a compliance requirement; it is fundamental to maintaining public trust. By prioritizing privacy, North Carolina’s state agencies demonstrate their commitment to protecting individual rights and ensuring that AI technologies are used in ways that benefit the state and its residents. During Data Privacy Week, let’s recognize the importance of embedding privacy into AI governance and reaffirm our shared responsibility to uphold the seven guiding principles when initiating projects involving AI/GenAI.
Stay tuned for more insights and resources from the OPDP as we continue to champion privacy and data protection across the state. Visit the AI Corner and the OPDP website for additional information.