
With a Detour Through Van Buren County’s AI Policy
Artificial Intelligence is no longer the domain of futuristic novels or that one IT consultant who always insists it will “revolutionize municipal workflows.” In Van Buren County, it’s already quietly reshaping how we draft reports, analyze data, and manage public communications. And, naturally, the initial impulse is to be transparent about it. After all, what could go wrong with a little openness?
Well, potentially, trust.
Welcome to the Trust Penalty
A recent study, The Transparency Dilemma: How AI Disclosure Erodes Trust, drops a small pebble into the serene pond of local government transparency and watches the ripples. Across 13 experiments, researchers found that simply disclosing the use of AI for a task caused people to trust the outcome, and the person responsible, less. This so-called “trust penalty” is especially inconvenient in public service, where credibility is currency and suspicion spreads faster than budget overruns.
Imagine that a Van Buren County department head uses AI to draft a clean, informative bulletin about a new recycling program. They decide to note at the bottom, with earnest pride, "This message was created with the assistance of AI."
According to the research, that one line might quietly unravel the very public trust the rest of the message was trying to build.
Legitimacy, or Why People Still Prefer Humans Making Decisions (Even Imperfect Ones)
The problem, researchers argue, lies in legitimacy. The public expects that government work, particularly the kind that involves decisions, communications, or services, is conducted by trained professionals motivated by civic duty, not algorithms optimizing for character count.
In Van Buren County, this concern is not abstract. Our AI Usage Policy, adopted in March 2025, makes it clear: humans remain accountable. AI may assist, but oversight is not optional. Section 3.1 of the policy notes:
“Users are ultimately responsible for the content and outcomes produced by AI systems. Human oversight is essential to ensure AI decisions are justifiable, align with the county’s ethical standards, and meet public expectations.”
The takeaway? Even our policy reflects what the research confirms: people are more comfortable when a person is ultimately at the wheel, even if the steering is power-assisted.
Other Study Findings That Complicate Things Further
Several other uncomfortable truths emerge from the study that are especially relevant in Van Buren County, where transparency isn’t just encouraged, it’s a cultural expectation.
- Being Outed Is Worse Than Self-Reporting: If the public discovers you used AI without being told, the damage is worse than if you had simply admitted it. So while disclosure can hurt trust, nondisclosure followed by exposure? Catastrophic. This is why Van Buren's policy doesn't just allow for AI use, it requires oversight, logging, and incident reporting (see Section 3.4). Quiet AI isn't rogue AI. It's just AI on probation.
- "Human Oversight" Doesn't Help Much: Ironically, saying that "a human reviewed the AI output" doesn't improve trust much. In fact, according to the study, it may worsen it by creating ambiguity about who did the work and who is responsible. It's the bureaucratic equivalent of "The intern handled it."
- AI + Human = Less Trusted Than AI Alone: The study's most confounding insight is that people trust an autonomous AI more than a human who used AI. Apparently, if you're going to let the machines help, just don't admit it unless you want to trigger a minor existential crisis.
Van Buren County’s Approach: A Slightly Smarter Bureaucracy
Thankfully, Van Buren County’s AI Usage Policy shows a level of foresight rare in documents that begin with “WHEREAS.” It acknowledges both the promise and peril of AI.
Key features of the policy that address the transparency trap include:
- Clear Ethical Standards (Section 3.1): including the requirement that solely AI-generated outputs must be “clearly labeled and explainable” in public-facing contexts.
- Oversight Structures: The AI Steering Committee and AI Task Force ensure decisions aren’t made in a vacuum (or, worse, a vendor’s demo video).
- Mandatory Training (Section 3.4): All staff must complete “AI Basics 101” before using AI, ensuring users know both the tools and the ethical minefield they may be walking into.
- Prohibited Uses: including the processing of sensitive data and any use of AI that could violate anti-discrimination policies, because we've all seen what happens when algorithms learn the wrong lessons from our history.
This is not policy for its own sake. It's a practical framework to help our county employees navigate a world where AI is part tool, part mystery, and part public relations hazard.
So, What Should We Do?
The study may throw cold water on the warm bath of AI enthusiasm, but it also points the way forward, particularly for counties like Van Buren, where both innovation and accountability are expected.
Here’s a refined roadmap:
- Acknowledge the Dilemma: Don't pretend the choice between transparency and trust is easy. It's not. But being aware of the tradeoffs is better than stumbling into them.
- Normalize, Don't Just Disclose: Help the public understand why AI is being used, and when it isn't. Make it part of a broader conversation about modernization, not a footnote buried beneath the contact number.
- Let Outcomes Speak: Van Buren's policy emphasizes measuring efficiency, savings, and satisfaction. Let those results guide the narrative. If the problem is fixed faster or public forms are easier to understand, that's the headline.
- Build Trust in the System, Not the Tool: Ultimately, it's not about the AI, it's about the process around it. When people trust the system (and the people maintaining it), they're less likely to panic when the system includes some automation.
The age of AI is here, and in Van Buren County, it’s being ushered in with policies, training, oversight, and, yes, a touch of wary optimism. The transparency trap is real, but it’s not inescapable. With clear policy, thoughtful communication, and a bit of old-fashioned human judgment, we can use these tools without losing the public’s trust, or our own footing in the process.
After all, in local government, the goal isn’t to replace people with AI. It’s to make sure the people still look good when the AI gets involved.
