Madison AI


16 AI Governance Policy Examples

Heyden Enochson
Published Sep 04, 2024

Welcome to our post on examples of AI Policies. In this post, we will give a quick refresher on what a complete AI Governance Policy should include and share 16 real-world examples from other state and local governments.

Let’s jump in.

Refresher: What Should be Included in Your AI Governance Policy?

We covered this in detail in our overview post on building your AI Governance Policy, but here is a quick refresher on what should be included in your government’s AI Policy:

1. Define AI Guiding Principles

You must identify core AI guiding principles that align with your government’s values and ethics. These principles will guide decision-making and the development, deployment, and use of AI technologies.

2. Establish AI Governance Structure

Set up a governance structure tailored to your organization’s size and needs. This includes a governing group to manage AI and its guardrails, addressing data governance and cybersecurity.

3. Form an AI Oversight Committee

Create a committee responsible for overseeing ethical guidelines and compliance for AI initiatives. This group ensures AI projects align with ethical standards and protect organizational assets like data and intellectual property.

4. Identify Risks and Implement Ongoing Monitoring

Conduct a risk assessment to identify potential negative impacts of AI. Collaborate with AI experts, legal advisors, and security teams to develop risk mitigation strategies, supporting your legal framework for risk management.

5. Create an AI Learning Hub

Establish a space for your team to experiment and share AI learnings. This hub promotes transparency, ensures compliance, and prevents duplication of efforts. Include key performance indicators to measure the impact of AI initiatives.

6. Develop a Communication Strategy

Determine the mediums and methods to inform your government team about AI developments and practices. Effective communication is vital for maintaining transparency and ensuring everyone is aligned with your AI strategy.

Examples of Great AI Policies

The City of San Jose, CA

See Their Generative AI Policy

The City of San Jose has outlined its policies and procedures in a document called Generative AI Guidelines. Make sure you check out the full document, but we especially appreciated these call-outs and rules governing how the city uses AI:

  • Information you enter into Generative AI systems could be subject to a Public Records Act (PRA) request, may be viewable and usable by the company, and may be leaked unencrypted in a data breach.
  • Review, revise, and fact check via multiple sources any output from a Generative AI. Users are responsible for any material created with AI support.
  • Cite and record your usage of Generative AI. See how and when to cite in the “Citing Generative AI” section. Record when you use Generative AI through this form.
  • Create an account just for City use to ensure public records are kept separate from personal records.
  • Departments may provide additional rules around Generative AI.
  • Refer to this document quarterly, as guidance will change with the technology, laws, and industry best practices.
  • Users are encouraged to participate in the City’s established workgroups to help advance AI usage best practices in the City and enhance the Guidelines.

The State of Ohio

See Their Generative AI Policy

The State of Ohio recognizes that AI benefits the state’s citizens only when implemented responsibly, without creating unintended consequences.

Their document is thorough, but we especially liked how they outlined different use cases and functional areas where AI might impact their government:

  • AI Solutions Development: This section covers broad concepts that will help a user develop a use case, get it approved by the AI council, and then use newly created content.
  • Workforce Requirements: The training required for using these tools is laid out and clearly articulates the users’ responsibilities for the materials they produce.
  • AI Procurement: Like other software, the State must vet any new programs before they can be used. The process for gaining access to new AI tools is spelled out.
  • Security and Privacy: The State of Ohio has strict privacy laws with policies and standards to support the laws. This section outlines basic steps to protect the privacy of the data.
  • Data Governance: The Chief Data Officer Council will be responsible for all data governance requirements, not just for AI. Guidelines are set forth to help guide the CDO Council on their role.
  • AI Council: A multi-agency AI council will be created to monitor the use of AI and make recommendations as these tools rapidly evolve. New procedures and documentation will come from the council for the State to use.

Taliaferro County Schools, Georgia

With AI permeating education, every educational institution must clearly outline what teachers and students can and cannot do with AI. Taliaferro County Schools in Georgia has outlined two broad areas, one for educators and one for students, highlighting what the district perceives as inappropriate uses of AI. The educators’ policy and AI guiding principles are highlighted below:

  • Violating Privacy and Data Security: AI must not be used to collect, store, or analyze student data without explicit consent and a clear educational purpose. It is inappropriate to use AI tools that infringe on students’ privacy rights or fail to comply with data protection laws (e.g., FERPA, COPPA).
  • Bias and Discrimination: Implementing AI systems that perpetuate biases or discrimination is prohibited. AI tools should be scrutinized for fairness and bias, ensuring they do not disadvantage any student group based on race, gender, socio-economic status, or ability.
  • Replacing Human Interaction: AI should not replace essential human elements of teaching and mentoring. While AI can augment teaching, it must not substitute for the personalized and empathetic interaction between teachers and students.
  • High-Stakes Decision Making: Using AI for high-stakes decisions, such as determining a student’s academic progression, grading, or disciplinary actions, without human oversight is inappropriate. AI should support, not replace, the professional judgment of educators.
  • Unvetted Educational Content: AI-generated educational content must be thoroughly vetted by educational professionals to ensure accuracy, relevance, and appropriateness. Relying solely on AI to generate and deliver instructional material is not acceptable.
  • Unsupervised Use by Students: Allowing unsupervised use of AI tools by students, especially younger children, can lead to misuse, exposure to inappropriate content, or misinterpretation of information. Educators must guide and supervise AI interactions within the educational framework.
  • Implementation and Monitoring: Educators will receive training on the ethical use of AI and its integration into the curriculum. AI tools must undergo a rigorous evaluation process before being approved for classroom use. Regular audits will be conducted to ensure compliance with this policy and to assess the effectiveness and impact of AI in the learning environment.

13 Other AI Governance Policy Examples

We hope the above three examples have helped you understand how other governmental organizations in the US are shaping the use of AI within their systems.
No policy can be adopted verbatim; each must be adapted to ensure your own needs are met. For more examples to help guide you, please see the policies listed below:

