Blog Post
March 9, 2026
By András Baneth
AI is becoming part of public affairs work, but hallucinations remain a serious risk. We help teams reduce these errors by improving how they prompt, guide, and verify AI outputs.

As AI becomes part of everyday work in public affairs and policy communications, teams are discovering both its potential and its limits. AI can draft briefings, summarise documents, and analyse policy developments faster than ever.
Yet one major risk stands out: hallucinations, cases where models produce text that sounds credible but isn’t accurate or real. This happens because AI systems are designed to generate fluent, satisfying answers, often filling gaps when prompts lack clarity or context.
In public affairs, that’s more than a technical flaw. It’s a reputational risk. With complex legislative processes, tight deadlines, and political sensitivity, even one fabricated citation or misinterpreted regulation can erode trust.
But hallucinations usually result from how AI is used, not from the AI itself. Unclear prompts, missing details, or unverified outputs all increase errors. When teams learn to guide models precisely and check results systematically, AI becomes far more dependable: a powerful aid that enhances, rather than replaces, professional judgment.
Most hallucinations start with prompts that are broad or undefined, like “What’s the EU position on X?” The model then fills in gaps to keep the answer flowing.
Instead, be specific about the task and include negative instructions (clear constraints). Tell the model what to avoid, what level of certainty you need, and what to do if it’s unsure.
For example:
“Summarise the final political agreement on the AI Act. If you are missing information, say so. Do not guess dates, quotes, or institutional positions. Do not invent sources.”
This doesn’t make AI “smarter.” It makes your request clearer and reduces the model’s temptation to improvise.
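If your team calls a model through an API rather than a chat window, the same constraints can travel with every request. Here is a minimal sketch in Python, assuming the OpenAI client library; the model name and wording are illustrative, not a recommendation:

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Put the standing constraints in the system message so every request carries them.
SYSTEM_RULES = (
    "You support an EU public affairs team. "
    "If you are missing information, say so explicitly. "
    "Do not guess dates, quotes, or institutional positions. "
    "Do not invent sources."
)

response = client.chat.completions.create(
    model="gpt-4o",  # illustrative; use whichever model your team has approved
    temperature=0,   # a low temperature discourages creative gap-filling
    messages=[
        {"role": "system", "content": SYSTEM_RULES},
        {"role": "user", "content": "Summarise the final political agreement on the AI Act."},
    ],
)
print(response.choices[0].message.content)
```

The point is not the specific library: it’s that negative instructions become a standing rule, rather than something each colleague has to remember to type.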
If you want accuracy, don’t ask the model to recall “the latest” from general training data. Give it the material you want it to work with.
In practice, that means uploading or pasting the relevant texts: the draft regulation, the compromise text, your internal brief, a stakeholder note, a speech, or the latest agreed language. Then ask the AI to summarise, extract, compare, or reframe based only on those documents.
A simple instruction helps:
“Use only the uploaded documents. If something isn’t in them, tell me what’s missing.”
This shifts the model from “generate an answer” to “work like an assistant reading your file.”
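For API users, that shift looks like this: load the text, wrap it in the prompt, and tell the model to stay inside it. A sketch under the same assumptions as above; the file name and task are placeholders:

```python
from pathlib import Path
from openai import OpenAI

client = OpenAI()

# Load the material you want the model to work from (placeholder file name).
source_text = Path("compromise_text.txt").read_text(encoding="utf-8")

prompt = (
    "Use only the document below. If something I ask about is not in it, "
    "tell me what is missing instead of answering from memory.\n\n"
    "--- DOCUMENT ---\n" + source_text + "\n--- END DOCUMENT ---\n\n"
    "Task: summarise the key obligations this text creates for providers."
)

response = client.chat.completions.create(
    model="gpt-4o",  # illustrative
    temperature=0,
    messages=[{"role": "user", "content": prompt}],
)
print(response.choices[0].message.content)
```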
In public affairs and policy comms, the danger isn’t that AI gives you a weak argument. The danger is that it gives you a confident factual claim with no grounding.
Make it standard practice to ask for evidence. For example: “For each claim in your answer, tell me which document, article, or section it comes from.”
If the AI can’t point to where something comes from, treat it as unverified, even if it sounds plausible.
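One way to make this routine in a pipeline is to ask for claims and their sources together, then treat anything without a source as unverified by default. A sketch, continuing the grounded-document approach from the previous example; the JSON shape and question are assumptions for illustration:

```python
import json
from openai import OpenAI

client = OpenAI()

source_text = "..."  # paste or load the grounding document, as in the sketch above

prompt = (
    "Using only the document below, answer in JSON with a 'claims' list, where "
    'each item is {"claim": "...", "source": "the section it comes from, or null"}.'
    "\n\n--- DOCUMENT ---\n" + source_text + "\n--- END DOCUMENT ---\n\n"
    "Question: what does the text change on enforcement?"
)

response = client.chat.completions.create(
    model="gpt-4o",  # illustrative
    temperature=0,
    response_format={"type": "json_object"},  # request machine-readable output
    messages=[{"role": "user", "content": prompt}],
)

# Anything the model cannot source is treated as unverified by default.
for item in json.loads(response.choices[0].message.content)["claims"]:
    status = "OK" if item.get("source") else "UNVERIFIED"
    print(f"[{status}] {item['claim']}")
```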
A surprisingly effective way to reduce hallucinations is to ask the model to critique itself. Don’t accept the first answer as “good enough.” Treat it as a draft that needs pressure-testing.
Useful follow-up prompts include: “What in this answer might be wrong or unverifiable?”, “Which claims are you least certain about?”, and “What would a policy expert challenge in this summary?”
This forces the model to slow down, reassess, and highlight risk areas, which is exactly what public affairs teams need under time pressure.
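This pressure test can also run as a routine second pass: send the draft back and ask the model to attack it. A sketch reusing the client from the earlier examples; the critique wording is an assumption, not a fixed recipe:

```python
from openai import OpenAI

client = OpenAI()

def critique(draft: str) -> str:
    """Ask the model to pressure-test a draft and flag risky claims."""
    response = client.chat.completions.create(
        model="gpt-4o",  # illustrative
        temperature=0,
        messages=[{
            "role": "user",
            "content": (
                "Review the draft below. List every claim that might be wrong, "
                "unverifiable, or out of date, and explain why. Do not rewrite it.\n\n"
                + draft
            ),
        }],
    )
    return response.choices[0].message.content

first_draft = "..."  # the output of an earlier drafting call
print(critique(first_draft))
```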
No matter how well you prompt, AI output is not a substitute for verification. For anything that could affect credibility (a date, a legal reference, an institutional position, a quote, a procedural step, or “the latest update”), build in a quick web check.
In practice: treat AI as the tool that helps you draft and structure, and treat a web search (or official sources) as the tool that confirms reality.
A simple rule works well:
If it’s factual and consequential, it gets verified, even if the AI “sounds sure.”
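Even a crude filter helps enforce that rule. The sketch below scans a draft for the kinds of details named above and routes them to a human; the patterns are illustrative and deliberately over-inclusive:

```python
import re

# Illustrative, deliberately over-inclusive patterns for consequential facts.
PATTERNS = {
    "date": r"\b\d{1,2} (January|February|March|April|May|June|July|"
            r"August|September|October|November|December) \d{4}\b",
    "legal reference": r"\bArticle \d+[a-z]?\b",
    "direct quote": r"“[^”]{10,}”",  # longer passages in curly quotes
}

def flag_for_verification(draft: str) -> list[tuple[str, str]]:
    """Return (category, snippet) pairs a human must check before anything ships."""
    hits = []
    for label, pattern in PATTERNS.items():
        for match in re.finditer(pattern, draft):
            hits.append((label, match.group(0)))
    return hits

# Toy example: both the date and the article number get flagged.
draft = "The committee endorsed the text on 12 May 2025, citing Article 6."
for label, snippet in flag_for_verification(draft):
    print(f"VERIFY ({label}): {snippet}")
```

A filter like this doesn’t verify anything itself; it simply guarantees that consequential details reach a person with a source open in front of them.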
So, can AI be fully trusted in public affairs? No. It’s a tool, not a substitute for expertise or judgment. While AI can accelerate research, drafting, and analysis, it lacks the political awareness, contextual understanding, and nuance needed to navigate complex policy debates. The reliability of its output depends entirely on the quality of the context provided and the ability of experts to interpret and refine it.
It’s ultimately the responsibility of public affairs professionals to apply their expertise: to question results, verify sources, and ensure every insight aligns with institutional realities and stakeholder expectations. Critical thinking, professional discernment, and sometimes good taste are what turn AI-generated content into credible, strategic communication. In a field built on trust and precision, experts remain the real filter for quality.
From training over 2,500 public affairs professionals on how to use AI safely, we’ve learned that accountability still lies with people. AI can enhance good judgment, but it can’t replace it.
If your public affairs team in the EU is dipping its toes into AI but wants to do it smartly and safely, we offer in-person and online AI workshops to help you use AI confidently while managing the risks.