Last May, the Chicago Sun-Times stirred up controversy when it published, of all things, a summer reading list. At first glance, it was a tantalizing group of titles—like Tidewater Dreams, “a multigenerational saga set in a coastal town where magical realism meets environmental activism,” penned by acclaimed Chilean-American author Isabel Allende. Sharp-eyed readers, however, noticed one small problem: Tidewater Dreams was a fake. Of the 15 books on the list, only five actually existed.
It soon came to light that freelance writer Marco Buscaglia had generated the list with the help of an AI tool, which had hallucinated—or “made up”—the majority of the books, along with fairly detailed plot descriptions. Without checking his work, Buscaglia submitted the piece to third-party content provider King Features, which in turn distributed the list to the Sun-Times and other papers.
Some would say this story confirms that AI shouldn’t be used in communication—but we think the reality is a bit more nuanced than that. In a lengthy apology piece, Chicago Public Media CEO Melissa Bell admitted that humans, not technology, were really to blame for the error. “Did AI play a part in our national embarrassment? Of course. But AI didn’t submit the stories, or send them out to partners, or put them in print. People did,” she wrote. “At every step in the process, people made choices to allow this to happen.”
There’s no doubt that using AI comes with risks—but most of them have less to do with the technology itself than with human error. The same is true of most tools. Consider, for example, a table saw: extremely dangerous in careless hands, but powerfully useful if you know what you’re doing. The trick is to treat AI the same way you would a table saw—to approach it with a healthy appreciation for its risks and take the appropriate precautions while using it. So without further ado, here’s a short primer on how not to use AI.
Don’t take yourself out of the equation.
At first glance, AI seems like a magic wand; you put in a prompt and it suddenly supplies exactly what you need. But in reality, it’s rare that a large language model (LLM) or other tool will turn out a perfect product on the first try—which is why you should carefully review anything before making it public. It’s a point emphasized by Michele Ewing, APR, a professor of public relations at Kent State University and a member of the Public Relations Society of America’s Board of Ethics and Professional Standards. “AI shouldn’t be replacing humans in the workplace,” says Ewing. “You need human intervention.”
In his book Co-Intelligence: Living and Working with AI, author and researcher Ethan Mollick repeatedly emphasizes the importance of “being the human in the loop”—“incorporating human judgment and expertise” into the use of AI. “By actively participating in the AI process, you maintain control over the technology and its implications, ensuring that AI-driven solutions align with human values, ethical standards, and social norms,” he writes. “It also makes you responsible for the output of the AI, which can help prevent harm.”
Put simply, you are no less responsible for a document you created with AI than one you wrote from scratch—so you can’t afford to give up control of the process. AI can be a great starting point for dozens of tasks, but human judgment should always play a central role. Here are a few common problems to be aware of as you review your AI creations.
Errors
First and foremost, you should always fact-check any output you get from an LLM. This is especially necessary given that LLMs have an unfortunate tendency to hallucinate—a fact the Chicago Sun-Times learned the hard way.
As Mollick explains in Co-Intelligence, AI tools are often operating with several “goals” at once, one of which is to make you happy. “That goal often is more important than another goal—‘be accurate,’” he writes. “If you are insistent enough in asking for an answer about something it doesn’t know, it will make up something, because ‘make you happy’ beats ‘be accurate.’”
You can also build a few safeguards into your prompts to head off these errors before they happen; we’ve pulled them together into a sample prompt after this list. For instance:
Request sources. Anytime an AI appears to be stating a fact, “you have to ask: Where’s it pulling this information from? Can I verify it with a source?” says Ewing. Asking the AI to provide those sources up front will make your job easier. Just be sure to check that the sources it cites are reputable and that it’s interpreting them accurately.
Provide context. For some tasks, you may be able to feed the AI reputable sources and specifically instruct it to use only those sources. For example, Google Gemini allows users to upload “knowledge”—documents, PDFs, pasted text—for the model to use exclusively.
Ask the AI to admit when it’s uncertain. In your initial prompt, ask the AI to admit if it’s unsure about a response—or, after it answers, ask it to flag parts of its answer that could be incorrect or that need verification.
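Put together, a prompt that uses all three safeguards might read something like this (the topic here is just a hypothetical illustration): “Using only the attached attendance report and the linked state guidance document, draft a 200-word update for families. Cite the specific source for each statistic you include. If my request asks for something these sources don’t cover, say so instead of guessing. After your draft, flag any statements I should verify before publishing.”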
And of course, even with these safeguards in place, always take the time to fact-check your work. In this regard, AI is less like a table saw than a smart but occasionally overconfident intern—helpful, but always in need of supervision.
Algorithmic bias
“AI is trained on human data, and humans are biased,” says Ewing. That means AI outputs often carry harmful biases as well, whether you intend them to or not.
As a former school technology leader—and current ed tech consultant—Carl Hooker has been keeping a close eye on the rise of AI in schools. He’s even written a book on the subject. Much of his work centers on incorporating tech like AI into teaching and learning—which means he knows how to help people of all ages recognize algorithmic bias in AI outputs. “When you ask an AI image generator to create an image of a nurse, before you generate the image, talk through it,” he recommends. “What images do we think will be represented here? What biases might be present?” As you might imagine, without more specific instructions, the image generated will likely be of a woman.
“A fun thing to do with students is to show an AI-generated image and ask them to identify the biases present,” Hooker says. “If you do that enough times, they start seeing with a critical lens.” And the exercise isn’t just for kids; you can run it on yourself or your central office peers as well.
“You can't ever prevent it from being biased, but you have to recognize the bias,” says Hooker. And once you notice bias, don’t stop there—challenge it directly by asking the AI to show diverse perspectives or to avoid stereotypes.
Lack of specificity
Because they’re trained on huge amounts of text—and because they want to meet your needs, no matter who you are—LLMs often default to the most generic outputs possible. But your school community is anything but generic; it’s as unique as the people who make it up.
Hooker illustrates this difference with a simple activity: asking both a group of humans and an LLM to list things one might find at a barbecue. “The humans come up with about 20 or 30 items, maybe, but the AI can do 100 in a minute,” he says. “However, when you look at what it gives you, they're very basic things: beans, meat, a grill.” Actual people, on the other hand, include more specific, colorful details. “When I did this in south Texas, people were listing things like mariachis and tamales—things with cultural significance to them,” he says. “They’ll mention the sweat, the crying kids—all the things that make a barbecue human.”
So before you use anything generated by AI in your school communication—whether it’s a newsletter, a Facebook post or something else—make sure it feels like it truly belongs to your district. If your messaging feels like it could be coming from any school district in the country, it won’t help build your distinct brand—or your relationships with your unique community.
Don’t enter student data into any AI tool.
In his many conversations with school leaders about AI, Hooker is seeing a few common pitfalls—especially when it comes to data privacy. “Putting in actual student data is one of the big no-nos that I talk to principals and school leaders about,” Hooker says. “You’d be surprised by how many people are copying and pasting PII—personally identifiable information—into ChatGPT or whatever tool.”
But “protecting student data isn’t just an ethical concern—it’s a legal requirement,” Ewing tells SchoolCEO. As you already know, the Family Educational Rights and Privacy Act (FERPA) prohibits schools from disclosing student records or PII to a third party without prior written consent. Inputting any student PII into an LLM—whether a name, a photo or even a physical description detailed enough to identify a student—therefore constitutes a FERPA violation.
You may be wondering what makes an LLM like ChatGPT less safe for student data than Google Drive. Even if the LLM appears to be an app on your computer or phone, it doesn’t run locally; everything you type or upload is transmitted over the internet to the provider’s servers, which process the information and return a response. Those inputs may also be stored, reviewed or used to train future models.
This means that as soon as you input student data into an LLM, you lose control of that data. You no longer know who can access it, where or how it might be stored, or how it might be used in the future. If the system is ever hacked, that data—and therefore your students—will be vulnerable. Even paid, “enterprise” versions of these tools should be approached with caution. Unless your district has a written data privacy agreement in place that explicitly complies with FERPA, you should still avoid entering any student PII.
And it’s not just FERPA you have to think about. Because most public school records are subject to state freedom of information and open records laws, anything you create, store or share through AI tools could be subject to public records requests. That’s another reason to keep sensitive or identifiable information out of any AI system not explicitly approved by your district.
As Hooker points out, most of the administrators who make this mistake are just trying to help their students, whether by developing better schedules or perfecting IEPs. You can still use AI for these tasks, as long as you’re extremely careful about it. “Use it to come up with IEP ideas for a student that struggles with a certain learning difficulty. Just don’t put in the student's information,” he advises. “Student data is one of the biggest things school leaders need to hold dear. We cannot compromise that trust.”
Don’t keep it a secret.
If you’re still early in your AI journey, you may be afraid to let others know you’re using AI to help with tasks—lest they believe you’ve been “cheating.” But experts like Ewing say that, as with many aspects of school leadership, transparency is best. If you used AI for a given communication, “let your audience know how you used it, why you used it and what you did to verify the output,” says Ewing. “That’s going to build trust and accountability.”
By being transparent about AI, you also model responsible AI use for others, whether students or fellow staff members. “Employees and students are using AI, whether they’re going to admit it or not,” Ewing says. “So make sure you invite people to share when they’re using it, that you’re discussing which uses of AI are effective and which ones are ineffective or unethical.”
AI will keep changing, and so will its risks—but the real safeguard is you: the human asking good questions, checking the facts and keeping ethics at the center. Use AI as a tool, not a crutch, and it can help you do what great educators have always done—think critically, act responsibly and put students first.
