Modest doubt is called the beacon of the wise, the tent that searches to the bottom of the worst. – Troilus and Cressida
Hello, fellow lifelong learners,
Another week, another digital sunrise, another flock of articles about artificial intelligence. They flutter onto our screens, each with an urgent claim about progress and inevitability. The chirping never seems to stop, right?
Here in our corner of the internet, we prefer to peek under the hood. We want to know what actually rattles around inside the shiny promises. Before we dive in, let me be clear: I champion good tools. Well‑designed software can free time, sharpen craft, and sometimes delight us. Yet software, however clever, is still software—not sorcery.
Today’s letter is an invitation to hold both ideas at once: use the tools, and stay alert to the sleight of hand that often accompanies them.
A flood of chirps about progress
The dominant storyline in tech right now is that so‑called artificial general intelligence (AGI) is just around the corner. Marketing departments frame it as the next epoch in human history, a miraculous upgrade that could eclipse our own messy talents: nothing less than a holy grail, a genie‑in‑a‑box promised to answer any query or perform any task. In this version of the future, workers become collateral damage on the road to progress, expected to trade their reliable pickup trucks for self‑driving models that might not work well in fog, rain, or hail.
“AGI has become the argument to end all other arguments, a technological milestone that is both so abstract and absolute that it gains default priority over other means, and indeed, all other ends.” — AI Now Institute, “The AGI Mythology: The Argument to End All Arguments,” in Artificial Power: 2025 Landscape Report (AI Now Institute, June 3, 2025).
Meet the AI Now Institute
This week I sat with a suite of reports from the AI Now Institute, a research center that studies power and accountability in AI systems. Their latest landscape review cuts through the hype and follows the money, the infrastructure, and the lobbying that prop up the bigger‑is‑better paradigm. They argue that AGI has slipped its scientific leash and now functions mainly as a marketing anchor: a claim so large it ends every argument. This caught my attention immediately.
Skepticism from the inside
A personal note. In the 1980s I sold Lisp Machines for Symbolics, a spin‑off from MIT’s AI Lab. Each AI workstation cost about eighty thousand dollars, and my job was to persuade companies that these machines would conjure new value. Sometimes they did. Often they demanded heroic effort and heroic budgets. I learned then, as now, that glossy demos rarely survive first contact with real‑world complexity. Healthy skepticism is not cynicism; it is self‑defense.
Truthiness and the marketing of inevitability
AI narratives lean heavily on what comedian Stephen Colbert once called "truthiness": statements that feel true, wrapped in academic‑looking white papers that skip peer review. Think of performance claims issued by the same firm that sells the product: the chef assuring you that his soup is delicious. Maybe it is. Maybe it needs salt. Without an independent tasting, we have no idea.
The gravity of concentrated power
Building today’s frontier models requires sky‑high capital, specialized chips, and vast energy. A handful of companies own the cloud platforms, the data pipelines, and the narrative machinery. When resources tilt so steeply, public subsidies, community water supplies, and even federal policy can end up serving private infrastructure. That is not an abstract risk; it is a present one.
“The AI industry’s growth model, fueled by the assertion that infinitely increasing scale leads to superior products, has spawned AI firms that are positioned to be too big to fail.” — AI Now Institute, “Too Big to Fail: Infrastructure and Capital Push,” in Artificial Power: 2025 Landscape Report (AI Now Institute, June 3, 2025).
Pattern machines, not thinking minds
Large language models excel at predicting the next likely word. They do not understand, reason, or learn in a human sense. Errors and "hallucinations" are baked into the design. Biases in the training data flow straight into the outputs, often landing hardest on people who already carry the most risk: low‑income families, Black and brown communities, immigrants, and gig workers.
“AI is already intermediating critical social infrastructures, materially reshaping our institutions in ways that ratchet up inequality and concentrate power in the hands of the already powerful… It is consistently deployed in ways that make everyday people’s lives, material conditions, and access to opportunities worse.” — AI Now Institute, “Consulting the Record: AI Consistently Fails the Public,” in Artificial Power: 2025 Landscape Report (AI Now Institute, June 3, 2025).
Grounding ourselves as lifelong learners
For those of us who treat learning as a lifelong craft, the Institute’s work offers needed ballast. It reminds us to ask:
Who benefits if I accept this story at face value?
Where are the documented failures, and what do they teach?
Which voices or forms of expertise are missing from the conversation?
How will proposed efficiencies reshape the social and environmental ground on which they stand?
A call to agency
We cannot outsource agency to algorithms or to the shareholders who deploy them. Staying human and remaining curious, critical, and creative means keeping our own hands on the wheel. By reading beyond the press release and questioning the inevitability narrative, we claim a say in what gets built, who pays, and who is protected.
Sit with these ideas. Let them rub against your own experience. Share them with a friend. Progress is not pre‑ordained, and technology is never neutral. Together we can insist that any future worth having is one rooted in autonomy, dignity, and justice.
Until next week,
Tom
Want to dig deeper? Take a moment to read “Artificial Power: 2025 Landscape Report”—it’s a vital contribution to the conversation. You will feel a veil being lifted and be better equipped to question those pushing software tools that might not be ready for prime time.
“Artificial Power, our 2025 Landscape Report, puts forward an actionable strategy for the public to reclaim agency over the future of AI. In the aftermath of the ‘AI boom,’ the report examines how the push to integrate AI products everywhere grants AI companies – and the tech oligarchs that run them – power that goes far beyond their deep pockets. We need to reckon with the ways in which today’s AI isn’t just being used by us, it’s being used on us. The report moves from diagnosis to action: offering concrete strategies for community organizers, policymakers, and the public to change this trajectory.”
Thank you for your contribution of time, money, and attention as a reader. I’ve received many lovely notes and thoughtful insights. A special thank you to those who have so generously contributed financially to AI for Lifelong Learners. What you do makes a difference and keeps me inspired.