Learning to prompt with a counterargument can make you smarter
Give it a go and see for yourself
“We must be able to distinguish those conditions under which the conclusion may legitimately be drawn from those in which it may not. Hence the need for rebuttals.” — Stephen Toulmin
It's been all over the news for the last couple of weeks, did you miss it? Here's a cluster of eager titles begging for your attention and appealing to your worry:
Is AI Making Us Dumber? MIT Study Raises Concerns
How AI Is Eroding Human Intelligence, According to MIT
MIT Warns: AI Could Be Making Us Less Smart
The Downside of AI: MIT Suggests It’s Making Us Dumber
Are We Getting Dumber Thanks to AI? MIT Weighs In
MIT Researchers: AI’s Convenience Could Be Making Us Dumber
The Unintended Consequences of AI: MIT Says It’s Making Us Dumber
Too much?
Yeah, I think so.
Friends and subscribers forwarded me unsettling coverage of a new MIT-led AI study that has spawned headlines warning that chatbots might dull our minds. Adding weight to the worry, researchers at the Wharton School tracked more than 4,500 participants and compared two research habits: relying on ChatGPT-style language models versus using Google Search.
Those who leaned on chatbots generally walked away with a thinner grasp of their topics than participants who stuck to traditional search. Small wonder people fear we are being seduced by software we scarcely understand; real risks lurk in the gap between what these models can do and what we actually know about them.
What do I think?
It seemed like we all mostly agreed that AI gives us opportunities to learn more. Or that’s what we hoped, someday.
With all due respect to the MIT and Wharton research efforts, I wonder what we would learn if we took the counterargument. I’m reminded of a 2024 TED Next talk titled “The Tipping Point I Got Wrong,” in which Malcolm Gladwell delivers a mea culpa, warning journalists against seizing on tidy but untested causal stories. A difficult lesson he learned, publicly. I’m not saying these studies are wrong; I’m raising a question worth exploring. What is the other side of the story?
Our perspectives can change over time. Facts can change. Prompting AI can open us to different perspectives. It’s possible, right? I realize it takes time to wrap our heads around and through a topic in a broader context. Time is friction: it slows you down, and social media systems anticipate your behavior. We know what we know, and often we’re not keen on challenging it. But, you know, we click on titles because we got scared or wooed by the intensity.
The news cycle thrives on extremes. We’ve all come to accept this influencer-infused content, often with some disdain. Social media algorithms reward the loudest, most provocative takes, doling out quick shots of dopamine with every click. In that rush, we, at times, click on a hot title only to be upset or enraged by the content.
It’s like a trickster sounding the alarm mid-show and, moments later, grinning, “Gotcha.”
Learning from the counterargument
I keep circling back to the same question: What lies behind the inflammatory headlines and opinion-laden prose, and what viewpoints sit just outside my personal narrow spotlight?
The news exists to inform and draw us in. When certain topics pop up, I feel the pull…triggered. I can ignore it, but I don’t want to all the time. I’m drawn in. I need to read that article to see if it matches the title. Was I just tricked into clicking?
This quote is the guiding light for all people marketing products. On the web, pitching for a clickthrough is a title’s sole reason for existing:
"Your headline has only one job — to stop your prospect and compel them to read the next sentence." — Eugene Schwartz
But do you want to be compelled, or curious? Maybe both. Using a counterargument prompt unleashes your mind to think more broadly, more boldly. But it requires effort, effort that can benefit you.
Yeah, but what about AI?
When I see headlines about AI these days, I can sense an overly embellished title. I notice a little twinge inside. I’ve been there. I worked in AI marketing and sales for a decade.
So what are the counterarguments to “AI makes you dumb”?
We could find out that offloading our minds to do something else has value, but discovering that takes an extra step. So the ‘assumption’ is that we’re lazy, and bingo, that makes us dumb over time. Or said another way, cognitive offloading is outsourcing our brains.
For the older folks on the list, remember when mom and dad kept hammering at you that watching TV would make you stupid? I recall mom and dad saying, “Don’t waste your time, Tommy.”
Truth: I’ve been critical of people watching YouTube videos and TikTok videos.
We could think of cognitive offloading as setting the treadmill to handle the steady miles, freeing your imagination to sprint in uncharted directions. Some people do this while doodling.
Those uncharted directions our minds take can mean exploring alternative perspectives. Exploring other perspectives is a powerfully interesting interaction when using AI.
Try this next time a headline pulls you in
Take 2 minutes with your AI model of choice (OpenAI, Gemini, Claude, or Grok). These days I’m using ChatGPT’s o3 model for more in-depth answers. For now, do whatever is easiest to try it out.
1. Copy the ‘hot title’ along with the article that grabbed you
2. Paste or upload it into your chatbox, and
3. Give it this prompt:
"Study the article. Draft the sharpest counterargument to its thesis."
See what you discover! Different AI models will give you different perspectives.
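If you would rather script the same exercise, here is a minimal sketch using the OpenAI Python SDK. The model name, the system message, and the article.txt file path are placeholders I chose for illustration, not anything prescribed in this post; swap in whatever model and setup you actually use.

```python
# Minimal sketch: send an article plus the counterargument prompt to a chat model.
# Assumes the OpenAI Python SDK (openai>=1.0) and an OPENAI_API_KEY in the environment.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

COUNTER_PROMPT = "Study the article. Draft the sharpest counterargument to its thesis."

def sharpest_counterargument(article_text: str, model: str = "gpt-4o") -> str:
    """Return the model's counterargument to the pasted article text."""
    response = client.chat.completions.create(
        model=model,  # placeholder model name; use whichever model you prefer
        messages=[
            {"role": "system", "content": "You are a careful, fair-minded analyst."},
            {"role": "user", "content": f"{COUNTER_PROMPT}\n\nARTICLE:\n{article_text}"},
        ],
    )
    return response.choices[0].message.content

if __name__ == "__main__":
    with open("article.txt", encoding="utf-8") as f:  # hypothetical saved article
        print(sharpest_counterargument(f.read()))
```

Running the prompt in two or three different models, as suggested above, is as simple as changing the model argument and comparing the replies.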
I did this for the MIT article and was reminded of all the time-saving devices we use every day that offload our minds and bodies from work we actually enjoy. I was reminded to ask again: what do I do with my time after that? It helps to stop, think, and let in the other perspectives. Yes, that takes extra time, but isn’t that the goal: time to deepen yourself and your thoughts?
More ways to explore counterarguments
Try one of these more detailed prompt examples after pasting or uploading a ‘hot title’ and article into your chatbot of choice.
For considering audience impact:
"Summarize how a reader with opposite values might react and note which points would push them away."
"Identify passages that could alienate undecided readers and explain why they might backfire."
For examining evidence:
"List the main evidence, then show an alternative response that weakens each one."
"Spot the central assumption and test it with a scenario where that premise fails."
For broadening perspective:
"Present a reasoned case from the most credible opposing viewpoint." (my favorite)
"Rewrite the conclusion as if you fully disagreed, using the strongest logic you can muster."
To summarize
We all reach for shortcuts. And often, there’s good reason.
We let turn-by-turn GPS stand in for our sense of direction, calculators finish the math before our brains can round the numbers, and speed-dial remember the phone numbers we once kept in our heads. Power screwdrivers tame IKEA furniture, escalators flatten distances, and pre-written greeting cards carry emotions we might have voiced ourselves. Personally, I love my automatic transmission car with AC and a radio that never runs out of music. No thinking required.
Each tool shaves off a little friction, a welcome relief when we’re tired, hurried, or navigating real constraints. But the trade-off is subtle. As these tools take on the cognitive load, our inner maps fade, number sense dulls, and the small muscles of memory, estimation, and improvisation weaken from disuse.
The pattern is nearly universal: anything that promises “effortless” risks making the underlying skill optional. The goal isn’t to reject convenience but to stay alert to when ease becomes erosion. That awareness lets us choose when to re-engage: to walk to the terminal instead of riding, tally the tip in our heads, or write our own words instead of selecting a prefab phrase. These moments reawaken circuits we otherwise surrender to automation.
As with ChatGPT or any AI tool, the tool itself is neutral. What matters is how we wield it. Because in the end, the shape of our thinking is also shaped by what we choose not to outsource.
Keep learning. Share what you learn.
tp