Rebecca Bultsma lives in the messy space between AI hype and AI reality, helping education leaders cut through the noise to find what actually works for students and educators. With seven years of experience working directly with school boards and district administration, over 100 AI in Education keynotes and workshops, and two years of focused AI ethics research on K-12 implementation, she’s clear-eyed about AI’s potential—and appropriately worried about its risks.

As an AI ethics researcher at the University of Edinburgh and Chief Innovation Officer at Amplify & Elevate Innovation, Bultsma has developed districtwide AI policies and trained thousands of educators internationally. She specializes in responsible AI adoption that puts students first without compromising educational integrity.


Last month, a district leader told me about a teacher accused of inappropriate behavior after a video leaked online. The teacher claimed it was a deepfake. The district could neither confirm nor disprove the claim. They had a misconduct policy, but no playbook for whatever this was (or wasn’t).

The same week, in another state, a mother told me her daughter had been accused of using AI on an essay the mother had watched her write, simply because a detector had flagged it. There was no proof to offer, no appeal, and the family did not want to go to war with the school. Her daughter took the zero.

A world with AI everywhere means a world where it’s harder to trust what we see, read or hear from students and staff. Dr. Nadia Naffi, an associate professor of educational technology at Quebec's Université Laval, calls this uncertainty a “crisis of knowing.” I’d liken it to the ground shifting beneath our feet. The results aren’t catastrophic yet, but the shift creates just enough instability to make people question their own sense of what’s real and what’s not.

This credibility problem is the part of AI that schools are least prepared for. Not figuring out the tools, not the workflows, not even the cheating—but the slow, quiet erosion of trust.

The first move toward managing AI use and addressing the credibility problem is not banning AI or ignoring it. It isn’t rushing to draft an AI policy, either. It’s talking honestly about how we’re using these tools, what we trust and what we don’t. Only once leaders, teachers and students come to a mutual understanding about how they should use AI can comprehensive guidelines be written. We come to that mutual understanding by having three crucial conversations—and the first one starts with us.

The “Us” Conversation: How Adults Set the AI Norms

Here's the math that doesn't add up. According to Microsoft’s 2025 AI in Education Report, 87% of educators have used AI—but only 44% say they know a lot about it. People are using tools they don't fully understand, wondering if they're doing it right. 

But here's the big question: doing what right? More often than not, there is no map—a problem acknowledged by leaders and teachers alike. In Microsoft’s report, both groups cite “lack of clear policies” as a top concern. What’s more, half of educators say they have received zero training, so they don't have the vocabulary for the conversation even if they wanted to start it. And while 82% of leaders believe AI is already integrated into curricula, only 54% of educators agree—which means the gap isn't even visible from the top.

It's the worst kind of pressure: being judged against a standard that doesn't exist yet. No one knows what “right” looks like. No one knows what “wrong” looks like, either. And when no one's talking about it, you assume you're the only one who doesn't have it figured out—so you keep quiet, too. That silence reinforces everyone else's belief that they're alone in not understanding. 

And because of that silence, AI use becomes something people figure out in isolation rather than define together. A principal uses ChatGPT to clean up a parent letter, but doesn't mention it in a staff meeting. A superintendent rewrites a board update with AI but doesn't cite it on the agenda. It's easier to avoid the conversation than navigate it without a map. When no one knows what the rules are, no one has been trained, and leadership thinks it's handled, silence becomes the path of least resistance.

This is why leaders need internal clarity long before they try to build public clarity. The goal isn't to create another policy that lives in a binder, but a shared understanding that people actually use. And you don't need a committee or a consultant to begin. 

Start by finding out who's already using AI. You might discover that your early adopters aren't who you’d expect. Not the tech coordinator, not the innovation committee, but your executive assistant drafting emails or your communications officer cleaning up newsletters—people doing high-volume, language-heavy work who found AI because they needed it, not because they're interested in technology.

Ask around: “Who's tried AI for work?” Make it safe to admit. Then pull those people into a room—not to audit them, but to learn from them. They know where it saves time and where it creates new problems. From these conversations emerge the norms that actually stick—simple ones, like "AI supports our work in these ways, but not here" or "Humans always own the final message." 

But (and here’s the key) norms only work if people see them in action. Leaders have to get visible. Use AI to draft that board report, then tell the board you did so. Mention it in staff meetings when you've used it for research. Name when you've chosen not to use it and why. The goal isn't perfection; it's transparency about the choices you're making.

This is the “Us” conversation—adults getting clear with each other about what responsible AI use looks like in practice, then modeling it openly. Leaders don't have to be perfect. They have to be proactive, visible and intentional. This is crucial, because students learn how to use AI by watching adults, not by reading a list of rules. Whatever adults normalize becomes the map students use to navigate their own decisions—which brings us to what students are already doing while adults figure this out.

The “Them” Conversation: When Students Improvise

Students aren't sitting around waiting for AI guidelines. They're already experimenting with chatbots, image generators, writing tools and humanizer apps. They're figuring out what feels useful, what feels right and what feels wrong. What they're not getting is consistent, grounded guidance from the adults around them. 

More than half of U.S. students say they've never received any guidance on AI use. So they're left to figure it out themselves—experimenting with tools for brainstorming, feedback and study help while simultaneously worrying that doing so might get them flagged or penalized. This is where credibility drifts at the classroom level. Microsoft's data shows that students' top AI concern is being accused of plagiarism or cheating. Students are afraid of the system misjudging them because they’re unsure what constitutes misusing AI. 

This brings us back to that crisis of knowing. Students need to understand what responsible AI use looks like, how to create and defend their own work, and when AI is helping or harming their thinking. That's epistemic agency. It's not technical. It's relational and reflective. And it depends entirely on what adults model, because students take their cues from what we do long before they trust what we say.

This means having honest conversations about what's acceptable and what's not. Setting boundaries is especially important when students use AI to create fake explicit images of classmates, to impersonate teachers, to harass and humiliate. These aren't merely bad judgment calls. They're violations that cause real damage to real people, so it’s inadequate when schools call them “cyberbullying,” “inappropriate use,” or worse, “pranks.” That language tells students this is just a policy violation, not a serious harm. If we want students to understand where the lines are, we have to be willing to name what crosses them. 

Harmful deepfakes notwithstanding, most student AI use is legitimate. But helping students develop sound judgment means being clear about both responsible use and unacceptable harm. That clarity doesn't emerge from policies. It emerges from conversations where adults are direct about what matters and why.

The problem is most schools aren't having those conversations. They're still using rubrics that haven't changed since before AI existed, still relying on essay formats that no longer work, still treating AI as something to detect rather than something to discuss. That approach doesn’t teach students the standards—it teaches them that school is stuck.

Most students don't need adults to police their AI use. They need adults in conversation with them, helping them build judgment. That's the work—not detecting, not banning, not pretending AI doesn't exist, but building shared understanding about what responsible use looks like and what harm looks like. When that conversation happens, students learn to navigate a credibility-shifting world with confidence and critical thinking skills.

The “Oh No” Conversation: When the Evidence Falls Apart

The strangest part of living in the age of AI is how often something looks real before you discover it is not. A Facebook video of bunnies on a trampoline racks up a million views before someone notices the shadows are wrong. A teen texts a hyperrealistic AI image of a “home invader” to their parents, who panic and call 911. A top-10 country hit climbs the charts before listeners realize it was generated entirely by AI and it’s, uncomfortably, kind of…good? The line between entertainment, misinformation and outright fabrication isn’t just thin anymore. It’s basically translucent.

Schools feel this shift more acutely because the stakes are higher. Deepfakes are appearing in schools with alarming frequency: Maybe it’s a video of a teacher doing something they insist they never did, a voicemail that sounds exactly like you, or an explicit image of a student built from a yearbook photo. Suddenly, the first question isn't “What happened?” It's “Did this happen at all?” The “Oh No” conversation is what happens when you have to lead before you know what's true.

You can't build judgment during a crisis. You build it before, in the clarity work adults do together, in the agency you cultivate with students. When a deepfake lands in your inbox at 6 a.m. and Facebook groups are ablaze, you don't have time to figure out your values. You need to already know.

Here's the hardest truth: There's no reliable way to prove anything quickly anymore. Detection tools lag behind creation tools. Dr. Naffi warns we're approaching a “synthetic reality threshold,” a point where humans can no longer distinguish authenticity from fabrication without technological assistance. 

And here's the uncomfortable irony: Education spent two decades obsessing over data, building systems to measure and verify everything. We reach for a tool to solve every problem. But deepfakes break the whole logic. We're in the era of what researchers call the “liar's dividend.” Authentic evidence of actual wrongdoing can be dismissed with three words: “That's a deepfake.” And you can't prove it isn't. The uncertainty about what's real and what's fake can lead to false accusations—or excuse bad actors.

But even when we can verify, here's what matters more than detecting a deepfake: who it affects. A student is devastated by an explicit image circulating with their face on it. Whether it's real or fabricated is beside the point; the harm is real either way. Trust is determined by how you respond, whether you lead with care, and how you center their experience.

This is where we reclaim the human judgment we've outsourced to technology. The capacity to sit with ambiguity. To act from values instead of data. To prioritize care over certainty.

The Work Ahead

The “Us,” “Them” and “Oh No” conversations build on each other. Adults get clarity. Students build judgment alongside them. Leaders build trust and draw on what's already been established when things get murky.

None of this happens in a policy document. It happens in the rooms where people admit what they don't know; where students understand and help establish the norms; where adults model the messy work of navigating uncertainty instead of pretending they've figured it out.

The reality is we spent two decades building tech systems that promised certainty. Then, students got their hands on tools that obliterated it. Now we're learning that the hardest work was never about the technology. It was always about whether we could sit in a room together and figure out what the technology means, why it matters and how we move forward.

Start these conversations today. Because the ground is shifting whether you're ready or not, and trust and judgment are the only things that hold when everything else gives way.