AI, Confidence, and the Rotary Four-Way Test
A detailed reflection on the AI Insights & Beyond session at the Downtown Wichita Rotary Club, covering the Wichi-Toad case study, how large language models actually work, why AI behaves as a confidence engine, and how Rotary's Four-Way Test provides an actionable ethical framework for responsible AI leadership.
Last week, I had the opportunity to speak with my fellow Rotarians at the Downtown Wichita Rotary Club about artificial intelligence — not as a technical deep dive, but as a practical conversation about how AI is already affecting our work, our communities, and our responsibilities as leaders.

The goal of the session — AI Insights & Beyond: Wichita Edition — was straightforward: demystify AI, ground it in real-world examples, and evaluate it through Rotary's ethical lens. The Downtown Wichita Rotary Club shared a recap of the program on LinkedIn.
What became clear very quickly is this: AI is no longer a future concern. It is a present leadership responsibility.
A Local Case Study: The Wichi-Toad Moment
We began with a story close to home — the Riverfest "Wichi-Toad" controversy. As many in Wichita will remember, the issue was never really about a toad. It was about authorship, authenticity, and trust when AI-assisted tools enter creative work.
The Riverfest artwork, created by local designer David Allen with AI assistance, triggered a broader community debate that's worth examining:
- Is AI just another tool in the creative process?
- Does AI involvement diminish originality?
- Should disclosure be required when AI contributes to a work?
- How should contests and competitions adapt?
What struck me was how Wichita became a microcosm of a global conversation on artificial intelligence. That framing resonated strongly in the room, because the same questions are now surfacing in marketing teams, software development shops, healthcare workflows, education, and community organizations.
AI does not eliminate responsibility. It concentrates it.
AI Basics — What Leaders Actually Need to Know
A key objective of the talk was cutting through both hype and fear by grounding AI in practical reality.
AI Is About Prediction, Not Understanding
Modern large language models (LLMs) are trained on massive datasets and use deep learning to predict the likely next words in a sequence. They don't "know" facts the way humans do. Instead, they analyze patterns, calculate probabilities, and generate statistically likely responses.
This distinction is foundational. When leaders treat AI as if it understands, they over-trust it. When they recognize it as probabilistic prediction, they govern it appropriately.
Tokenization: The Hidden Mechanics
One portion of the presentation that sparked particular interest was the explanation of tokenization — how AI breaks text into smaller units before processing. A simple example helps illustrate the idea:
"I love pizza!" → ["I", "love", "pizza", "!"]
AI is not reading language holistically — it is processing structured fragments. This matters operationally because token volume drives cost, token structure affects accuracy, and token limits constrain context windows. For business leaders, token awareness is quickly becoming part of AI cost governance.
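The word-and-punctuation split above can be sketched in a few lines of Python. This is a deliberate simplification for illustration: production LLM tokenizers typically use sub-word schemes such as byte-pair encoding, so real token counts will differ.

```python
import re

def toy_tokenize(text: str) -> list[str]:
    """Split text into word and punctuation tokens.

    A toy stand-in for a real tokenizer: actual LLM tokenizers
    (e.g. byte-pair encoding) often break words into sub-word pieces.
    """
    return re.findall(r"\w+|[^\w\s]", text)

tokens = toy_tokenize("I love pizza!")
print(tokens)       # ['I', 'love', 'pizza', '!']
print(len(tokens))  # 4 -- token counts like this drive usage-based cost
```

Even this toy version shows why token awareness matters operationally: the count, not the character length, is what usage-based pricing and context-window limits meter.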
The Family Feud Analogy
One of the most effective moments in the discussion was the Family Feud analogy. Like contestants guessing the most popular survey answer, LLMs select the next token with the highest probability given the context. The model predicts the next token, appends it, and repeats the process.
The key insight here: AI doesn't understand meaning — it predicts patterns.
That single observation explains three major AI behaviors that organizations encounter regularly: hallucinations, bias propagation, and overconfidence.
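The predict-append-repeat loop from the Family Feud analogy can be sketched with a hand-made probability table standing in for a trained model. The table and its probabilities are invented for illustration; a real model computes these probabilities over tens of thousands of tokens.

```python
# Toy next-token predictor: a hand-made probability table stands in
# for a trained model's learned distribution.
NEXT_TOKEN_PROBS = {
    ("I",): {"love": 0.6, "hate": 0.2, "am": 0.2},
    ("I", "love"): {"pizza": 0.5, "you": 0.3, "AI": 0.2},
    ("I", "love", "pizza"): {"!": 0.7, ".": 0.3},
}

def generate(prompt: list[str], max_tokens: int = 3) -> list[str]:
    tokens = list(prompt)
    for _ in range(max_tokens):
        probs = NEXT_TOKEN_PROBS.get(tuple(tokens))
        if probs is None:  # no pattern seen for this context
            break
        # Greedy decoding: take the single most probable next token,
        # like a contestant naming the top survey answer.
        tokens.append(max(probs, key=probs.get))
    return tokens

print(generate(["I"]))  # ['I', 'love', 'pizza', '!']
```

Notice that nothing in the loop checks whether the output is true: the model selects whatever is statistically most popular given the context, which is exactly why hallucinations, bias propagation, and overconfidence emerge.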
Why AI Behaves Like a "Confidence Engine"
Another concept worth exploring is how modern LLM training rewards helpful-sounding responses. Because models are reinforced when their answers match human expectations, they can exhibit sycophantic tendencies — sounding agreeable even when uncertainty exists.
This leads to a critical leadership takeaway: AI is optimized for fluency and helpfulness, not guaranteed truth. The result is a pattern organizations are encountering more frequently — plausible-sounding inaccuracy.
One line from the presentation that resonated strongly captures this well: In Star Trek, the computer would say, "That is incorrect." Modern LLMs are trained to give answers you like — not necessarily answers that are right.
That observation landed because it captures the current maturity gap. We are still early in the human-AI interface era — similar to learning touchscreens when smartphones first arrived. The tools are powerful, but the interaction model is still evolving.
Prompt Engineering and Context Engineering
The presentation also emphasized that effective AI use is not accidental — it is engineered. Two disciplines matter here.
Prompt Engineering
Prompts act as the instruction layer — effectively the "programming language" for guiding LLM behavior. Clear, precise instructions dramatically improve outcomes. Vague prompts produce vague results, while well-structured prompts produce outputs that are genuinely useful.
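The vague-versus-structured contrast can be made concrete with a side-by-side example. The wording and fields here are illustrative choices, not a specific product's required format.

```python
# A vague prompt leaves the model guessing at role, audience, length,
# and format -- so the output will guess too.
vague_prompt = "Write something about our fundraiser."

# A structured prompt acts as the instruction layer: role, task,
# audience, constraints, and output format are all stated explicitly.
structured_prompt = """\
Role: You are a communications assistant for a Rotary club.
Task: Draft a 100-word announcement for our annual fundraiser.
Audience: Club members and local business sponsors.
Constraints:
- Mention the date, venue, and how to register.
- Warm, community-focused tone; no jargon.
Output: One paragraph of plain text.
"""
```

Each added line removes a dimension the model would otherwise have to guess, which is where most of the quality difference comes from.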
Context Engineering
Context is the environment the model sees before responding — background knowledge, rules, and boundaries that shape behavior. Well-designed context produces more consistent outputs, better alignment with intended goals, and reduced hallucination risk.
This is where experienced practitioners create real leverage. The difference between a generic AI response and a truly valuable one often comes down to how thoughtfully the context was constructed.
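One way to picture context engineering is as deliberate assembly: stable instructions, rules, and background knowledge are layered ahead of the user's question before anything reaches the model. The function and field names below are hypothetical, and the model call itself is omitted; this is a sketch of the assembly step only.

```python
def build_context(system_role: str, rules: list[str],
                  background: str, question: str) -> str:
    """Assemble the environment the model sees before responding.

    Illustrative only: real systems often add retrieved documents,
    conversation history, and tool definitions in the same spirit.
    """
    rule_block = "\n".join(f"- {r}" for r in rules)
    return (
        f"{system_role}\n\n"
        f"Rules:\n{rule_block}\n\n"
        f"Background:\n{background}\n\n"
        f"Question: {question}"
    )

context = build_context(
    system_role="You answer questions about club programs.",
    rules=["Cite the source document for every claim.",
           "Say 'I don't know' when the background is silent."],
    background="The AI Insights session covered tokenization and ethics.",
    question="What topics did the session cover?",
)
print(context)
```

Grounding rules like "say 'I don't know'" are one practical lever for the reduced hallucination risk mentioned above: the model is told in advance what to do when its context runs out.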
Viewing AI Through the Rotary Four-Way Test
The closing portion of the talk intentionally returned to Rotary principles. The core idea: AI cannot pass the Four-Way Test on its own — but we can. That framing transforms AI from a technology conversation into a leadership conversation.
Is it the truth?
- Was the output verified against reliable sources?
- Are the underlying data sources understood?
- Is AI involvement disclosed transparently?
Is it fair to all concerned?
- Could bias be present in the training data or outputs?
- Is human oversight maintained throughout the process?
- Are outcomes equitable across the people affected?
Will it build goodwill and better friendships?
- Does this use of AI increase or erode trust?
- Are we augmenting relationships responsibly?
- Are we communicating transparently about AI's role?
Will it be beneficial to all concerned?
- Does this create durable value for the community?
- Are we introducing hidden risk that others will bear?
- Would we defend this use publicly and confidently?
Few modern governance frameworks are this immediately actionable. The Four-Way Test was written long before AI existed, yet it maps precisely to the ethical questions AI raises today.
Real-World AI in Action
To ground the discussion in tangible outcomes, the presentation highlighted several real projects already built using AI, including Zoom and Teams background generation, greeting automation at ClubBirthday.org, digital presence work for Wichita Sewer and Drain, wine dinner trivia experiences, PromptSpark tooling, and the TeachSpark and ArtSpark educational initiatives.
The message was clear: AI value is not theoretical — it is already being realized in practical, local ways.
The Leadership Moment We're In
The most important takeaway from the Rotary discussion is this: AI is not our replacement. It is our responsibility.
When guided by human judgment and grounded in enduring principles like the Four-Way Test, AI becomes what it should be — a multiplier of human capability, not a substitute for human accountability.
AI is not something happening to us. It is something we are actively shaping through the guardrails we set, the transparency we require, the judgment we apply, and the values we refuse to compromise. Service organizations like Rotary are uniquely positioned to help communities navigate this transition thoughtfully.
Because in the end, AI doesn't know truth, fairness, goodwill, or benefit. We do. And that is exactly why leadership still matters most.
