
AI in Museums: Ethics and Safety Come First

  • Writer: Heidi Schlag
  • May 8
  • 7 min read




As museums begin to explore artificial intelligence (AI) to support marketing, visitor engagement, and internal operations, one concern rises to the top across the board:

Can we do this responsibly?


The answer is yes—but only if we lead with ethics, transparency, and care.


AI can be a powerful tool for museums and cultural organizations, especially those with small teams and limited resources. But that power comes with responsibility. From ensuring historical accuracy to protecting data privacy and supporting staff, it’s critical that any use of AI aligns with your institution’s values.


Here’s what that looks like in practice.


1. Accuracy Matters—Especially in Interpretation

Unlike a search engine, generative AI tools (like ChatGPT) don’t just retrieve facts. They generate language — and sometimes, they get it wrong. In museum settings, where nuance, context, and truth are foundational, this is a serious issue.


Best Practices for Ensuring Reliability:


  • Train GPTs only on approved materials. 

    Stick to documents that have been vetted by your curators, educators, or communications team, like interpretive plans, exhibit text, and published research.


  • Instruct the GPT to say “I don’t know.” 

    Add clear behavior prompts like “If you cannot find an answer in the provided documents, do not guess. Instead, respond: ‘I'm not sure about that. Please check with a staff member or consult our official materials.’” (A short code sketch at the end of this list shows one way to wire this up.)


  • Use AI tools that provide source references.

    Paid versions of ChatGPT (such as ChatGPT Plus) and some enterprise tools now support retrieval-augmented generation (RAG), which can include citations and links to the original source documents used to generate an answer. These references help staff and users double-check facts.


  • Test for “hallucinations.”

    AI hallucinations happen when the model confidently generates false information. To prevent this:

    • Limit your GPT to “closed” knowledge, only pulling from your uploaded content.

    • Regularly test it with off-topic or complex questions to see how it responds.

    • Have staff audit outputs before they’re published or embedded in visitor-facing systems.


  • Establish a review process.

    Never rely on AI-generated copy alone for interpretive or public materials. Always include human review for accuracy, tone, and cultural sensitivity.


When used within clear boundaries, AI can actually strengthen your interpretive work by helping you organize content, simplify language, or find connections across themes. But it should never be trusted blindly. Accuracy isn’t just a technical issue — it’s a trust issue.


2. Transparency Builds Trust

In the museum world, credibility is everything. When your visitors or staff are interacting with content—especially content generated by AI—they deserve to know where it came from and how it was created.


Transparency isn’t just a nice-to-have. It’s an ethical obligation, and one that directly impacts public trust, internal culture, and your institution’s integrity.


Whether you’re using AI for internal planning, visitor engagement, or public communication, clear and honest disclosure is key.


Best Practices for Transparent AI Use:


  • Label AI-generated content clearly.

    If you publish or display content created with AI, let people know. Add a small note or symbol with language like “This content was generated with the assistance of AI and reviewed by museum staff.” This is especially important for exhibit text, visitor guides, or digital interactives.


  • Distinguish between human and AI responses in chat tools.

    If you're using a GPT-based assistant online or in a kiosk, include an upfront disclosure such as: “I’m a virtual assistant trained on official museum content. I can help answer general questions, but I’m not a human staff member.” Offering a clear handoff option (e.g., "Would you like to speak with a staff member?") is also good practice. (A rough sketch at the end of this list shows both in action.)


  • Use system prompts that reflect your transparency values.

    When building a custom GPT, include instructions like “Inform the user that you are an AI trained on internal museum documents. Do not claim human expertise. Always offer to refer complex questions to staff.”


  • In internal communications, explain what AI is doing and why.

    Staff buy-in improves when people understand why you’re using AI, what it’s being used for, and what safeguards are in place. Use all-staff meetings or internal FAQs to clarify how it fits into your workflows.


  • Keep a record of where AI is used in your organization.

    Document which projects, departments, or tasks are supported by AI tools. This helps ensure accountability and allows you to audit for unintended issues or mission drift.


  • Let staff and stakeholders ask questions.

    Whether it’s a board member, educator, or visitor services team member, people should feel empowered to ask, “Was this created by AI?” and receive a clear answer. Openness invites confidence and reduces suspicion.


  • Disclose limitations.

    Make it known what the AI can and cannot do. For example: “This assistant is trained on our museum’s official documents, but may not be able to answer complex historical or interpretive questions.”


Why This Matters

In a cultural environment shaped by authenticity, ethics, and public service, any perception that AI is being used to mislead, cut corners, or hide behind automation can damage trust. But when you lead with transparency, you show that your institution is using new tools thoughtfully, with respect for your audience and your mission.


Transparency builds buy-in, both inside and outside your walls.


3. Privacy and Data Security

Museums handle sensitive data — donor records, internal reports, visitor surveys. AI systems must be used in ways that protect this information and comply with best practices in data privacy.


Best Practices for Security:


  • Never upload personal or sensitive data to public AI tools.

    Avoid inputting names, email addresses, or donor histories into ChatGPT or similar platforms.


  • Use anonymized examples during testing.

    Remove identifiable information from any content you upload to train or prompt a GPT. (A simple redaction sketch follows this list.)


  • Choose secure platforms.

    Use enterprise-level tools or private GPT environments if you’re dealing with sensitive materials.


  • Define internal guidelines.

    Establish a clear AI usage policy to help staff understand what’s appropriate and what’s not when working with confidential data.


Responsible AI use begins with responsible data handling.


4. Supporting Your Workforce

One of the most common concerns about AI is whether it will replace jobs. In the museum world, where many roles are already underpaid, undervalued, or reliant on volunteer labor, this fear is valid.


But AI doesn’t have to replace people. When used responsibly, AI can actually improve the quality of museum work by reducing burnout, expanding capacity, and helping your team focus on the meaningful, mission-driven tasks that matter most.


This includes not only staff but also volunteers, who are often the backbone of small museums and heritage organizations. With volunteer recruitment becoming harder in many places, AI can help lighten the load without reducing the human touch.


Best Practices for Responsible Workforce Support:


  • Automate tasks, not relationships.

    Use AI to take care of repetitive or low-stakes work — like formatting emails, summarizing reports, or answering basic questions — not interpretation, storytelling, or human connection.


  • Reinvest saved time into higher-value work.

    Freeing someone from routine tasks should be an opportunity to upskill, increase engagement, or deepen program development, not to cut hours or staffing.


  • Elevate job roles where possible.

    If AI helps reduce workload, consider how that role could evolve, especially in visitor services, where many jobs are entry-level and underpaid. This shift could be a stepping stone to more skilled, better-compensated positions.


  • Train and empower your team.

    Provide basic training on how to use AI effectively and ethically. Give staff and volunteers tools to understand the technology, use it with intention, and identify when it’s not appropriate.


  • Be transparent and collaborative.

    Let your team know where and why AI is being used. Invite them into the conversation early so they feel empowered — not displaced — by these tools.


Museums thrive because of their people. AI should make their jobs better, not obsolete.


5. Align AI Use With Your Mission

The most ethical AI use is the one that helps you deliver your mission more effectively. That means asking hard questions before implementing any new tool or workflow.


Best Practices for Mission Alignment:


  • Start with your values.

    Don’t ask “What can this tool do?” Ask, “How could this help us better serve our audience?”


  • Build cross-departmental buy-in.

    Include interpretation, visitor services, marketing, and leadership in conversations about AI.


  • Pilot before scaling.

    Test new tools in one department before rolling them out across your organization.


  • Be willing to say no.

    If a tool doesn’t fit your mission or your community’s needs, it’s okay to walk away.


AI should be in service of your values—not the other way around.


Let’s Use AI to Amplify, Not Undermine, Our Purpose

AI is not a shortcut. It’s not a replacement for people. And it’s not a magic solution.


But when used thoughtfully, AI can help museums tell stories better, serve communities more consistently, and support staff in meaningful ways. The key is to approach this technology with clear boundaries, strong ethics, and a commitment to your mission.


Interested in Learning More?

This fall, I will be offering a training series on AI for Museums and Heritage Organizations, designed for non-technical professionals who want to explore how AI can be used in thoughtful, ethical, and effective ways.


In these sessions, we will discuss:

  • What AI can and cannot do for heritage organizations

  • Examples of real-world use in museums and nonprofits

  • Hands-on demos of tools you can start using right away

  • How to create internal guidelines for safe and responsible AI use


If you are curious but cautious, you are not alone. AI does not have to be overwhelming or risky. With the right approach, it can help you amplify your impact, save time, and communicate more effectively.


Get on the waiting list. Be the first to learn when the AI in Museums web training is available!




Practicing What I Preach: This blog was written with the assistance of a custom GPT I’ve trained specifically on museum management, heritage tourism, and marketing communications strategy. I use this tool to support idea development and content drafting, but the final draft, including all insights and recommendations, reflects my professional experience working with museums and heritage organizations.
