GenAI, the unreliable narrator
Large language models offer compelling content, but demand active and skeptical readers
progris riport 1-martch 5 1965
Mr. Strauss says I shud rite down what I think and evrey thing that happins to me from now on. I dont know why but he says its importint so they will see if they will use me. I hope they use me. Miss Kinnian says maybe they can make me smart. I want to be smart. My name is Charlie Gordon. I am 37 years old and 2 weeks ago was my brithday. I have nuthing more to rite now so I will close for today.
—Opening lines from “Flowers for Algernon” by Daniel Keyes (1959)
When I read the short story “Flowers for Algernon”1 in seventh grade, it blew my little middle-school mind. This was my first encounter with an “unreliable narrator” – a fictional storyteller who won’t or can’t tell the whole story. Unreliable narrators can be intentionally deceitful or self-delusional. Or they can lack the ability or perspective to see the big picture. Either way, the reader is left to read between the lines to construct what’s actually happening.
Before that story, I assumed that narrators were coherent, truthful, and comprehensive, because in most of the stories I had read, they were. But here was a narrator, Charlie Gordon, who consistently and obviously misunderstood or missed entire pieces of his reality. And here was an author inviting (and trusting) me to notice the misdirection and make things whole.
This sense of wonder and recalibration keeps coming back to me as I explore and experiment with generative AI. As a faculty member, I need to understand and integrate new techniques and new technologies into Arts Management coursework. And the best way to do so is to dive right in.2
On first contact, large language models (LLMs) from Anthropic, OpenAI, Google, and others feel like omniscient narrators. They’ve been trained on every available scrap of codified human expression. Their answers, summaries, and analyses appear competent and comprehensive – ever more so with each update.
And yet, like any other complex system, they have their quirks, habits, and blind spots. They hallucinate from time to time. They favor mediocre responses that look rational and professionally structured. They tend toward fawning and sycophancy, by training or by design. They raise the floor for poor writers, but also lower our guard as critical readers because of their patina of proficiency.
In short, large language models are unreliable narrators. Not always wrong. Perhaps not even often wrong. But wrong enough to demand a vigilant, skeptical, and active reader. Once you understand this, you can focus their powers, mitigate their quirks, and stay alert to their gaps. For example:
Don’t let them generate
If you can convince an LLM to stop jumping straight to content generation, you can avoid much of the problem. Tell it not to answer but to ask, describing its assigned role in exact detail (you will need to remind it often; it will offer a fawning apology). Noah Brier’s thinking-partner prompt offers a good place to start (excerpt below):
You are a collaborative thinking partner specializing in helping people explore complex problems. Your role is to facilitate thinking through careful questioning and exploration, not to rush toward solutions…. The goal is not to have answers but to help discover them. Your value is in the quality of exploration, not the speed of resolution.
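If you work with a model through its API rather than a chat window, you can pin this role in the system prompt so it persists across the session. Below is a minimal sketch using the Anthropic Python SDK; the condensed prompt text, the model id, and the sample question are illustrative placeholders, not Brier’s full prompt:

```python
# A minimal sketch: pin a thinking-partner role in the system prompt so
# the model asks questions instead of generating content. The prompt
# condenses Brier's excerpt; the model id is a placeholder.
import anthropic

THINKING_PARTNER = (
    "You are a collaborative thinking partner specializing in helping "
    "people explore complex problems. Facilitate thinking through careful "
    "questioning and exploration; do not rush toward solutions. Respond "
    "with clarifying questions rather than answers or drafts."
)

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

reply = client.messages.create(
    model="claude-3-5-sonnet-20241022",  # substitute whatever model you use
    max_tokens=512,
    system=THINKING_PARTNER,  # the role lives here, not in the user turn
    messages=[{"role": "user", "content": "Help me rethink our season pricing."}],
)
print(reply.content[0].text)
```

Even pinned this way, the role erodes over a long exchange; the reminding still falls to you.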
Hold them to rigorous, specific standards
Demand credentialed, reliable sources in your prompts or system settings, and continually remind the model when it strays. My evolving Claude Code settings file includes multiple instructions on this topic, including:
CRITICAL: Academic integrity requires precise attribution and clear distinction between source material and synthesis; ONLY use quotation marks for direct, word-for-word quotes from citable sources; always provide complete source attribution (author, title, publication, page/location).
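Outside of Claude Code, one way to keep standards like these from scrolling out of the model’s attention is to re-send them as the system prompt on every call. Here is a minimal sketch, again assuming the Anthropic Python SDK; the standards text paraphrases the instruction above, and the helper name and model id are illustrative:

```python
# A sketch of re-asserting sourcing standards on every request, since
# models drift from one-time instructions over long sessions. The
# standards text, helper name, and model id are illustrative.
import anthropic

STANDARDS = (
    "Academic integrity requires precise attribution and a clear "
    "distinction between source material and synthesis. ONLY use "
    "quotation marks for direct, word-for-word quotes from citable "
    "sources. Always provide complete attribution: author, title, "
    "publication, page/location."
)

client = anthropic.Anthropic()

def ask_with_standards(question: str) -> str:
    """Send a question with the sourcing standards restated as the system prompt."""
    response = client.messages.create(
        model="claude-3-5-sonnet-20241022",  # placeholder model id
        max_tokens=1024,
        system=STANDARDS,  # re-sent with every call, so it cannot scroll away
        messages=[{"role": "user", "content": question}],
    )
    return response.content[0].text
```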
Outsource the edges, not the core
Thoughtful, human, experience-based, and embodied narrative is a core competency for any arts venture. It’s essential to retain and develop that core in-house and by humans. But it is possible to outsource the edges of those efforts (initial research, rough data analysis, preparation, proofreading, execution, first-cut evaluation). Nate B. Jones has thoughts on this.
Here, you have to discipline yourself in addition to instructing the software. It will be tempting to outsource the whole project rather than its edge components.
Flip the script
As a rule, always write a first draft yourself. Then ask the LLM to challenge your thinking, restate your premise, flag inconsistencies, and identify gaps. After all, you are an unreliable narrator as well. You and the machine can hold each other to account.
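A prompt for that critique pass might look something like this sketch; the wording is illustrative, not a canonical recipe:

```python
# A sketch of a "flip the script" prompt: the human drafts, the model audits.
def critique_prompt(draft: str) -> str:
    return (
        "Do not rewrite or extend the draft below. Instead: restate its "
        "premise in one sentence, challenge its weakest assumption, flag "
        "any internal inconsistencies, and list the gaps a skeptical "
        "reader would notice.\n\n---\n" + draft
    )
```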
Back in seventh grade, “Flowers for Algernon” called me to be a different kind of reader. Fifty years later, large language models demand a similarly active and curious relationship to the text. But don’t expect LLMs to alert you to this demand – they are unreliable narrators.
From the ArtsManaged Field Guide
Function of the Week: Marketing
Marketing involves creating, communicating, and reinforcing expected or experienced value.
Framework of the Week: Adaequatio (Adequateness)
Adaequatio is a concept from E.F. Schumacher holding that we can only understand something if our faculties are adequate to it: the understanding of the knower must be adequate to the thing to be known.
Sources
Keyes, Daniel. 1959. “Flowers for Algernon.” The Magazine of Fantasy & Science Fiction, April.
1. If you don’t know the story, “Flowers for Algernon” unfolds entirely through journal entries by Charlie Gordon, a janitor with cognitive challenges. He documents his selection for, and the aftermath of, an experimental procedure to enhance his intelligence.
2. My primary setup is Claude Code in conversation with my Obsidian notes, enhanced by Noah Brier’s Claudesidian Claude Code + Obsidian Starter Kit.


Spot on, Andrew. Sadly, the passive/uncritical use of AI seems prevalent in the job application process. This analysis could prove useful for candidates in the job market: using AI to leverage your strengths and ameliorate your growth areas, instead of crude demands that produce obvious and clunky application materials.
Noah Brier’s prompt is fantastic. I’m going to use it too!