
It was one of those perfect Thursday evenings in late May when Texans congratulate themselves for choosing to live in the greatest country in the U.S. A purple sky hung over the baseball diamonds at Oak Grove, a slight breeze drifting off the lake as the kids fielded grounders.

I glanced at my watch. Thirty minutes of practice left—which meant thirty-five to eighty more minutes of potential small talk with parents I had no interest in speaking to.

Then I saw Hudson’s dad—a Tier I parent I had no interest in speaking to—make eye contact with me through the backstop net. He levered himself out of his camping chair and high-stepped over his wagon. (Do you really need a wagon to transport one chair 100 feet? No. You do not.)

He approached. “Hey, question for you, Pal.” (He calls everyone Pal.) “You using ChatGPT at work?”

“Yeah. You?”

“Absolutely. And at home too, with Hudson. Told him he doesn’t have to worry about writing ever again,” he said, pointing to his 13-year-old son, who was booting another slow roller at shortstop. “Writing is now, officially, a defunct skill.”

I fidgeted, scratching an itch-free section of my neck.

He continued. “I told him all this AI stuff is just like when calculators came out. Forget spending all that time learning math when the machines can do it faster, right?” He looked at me, palms upturned. “And you’re Mr. Writer and all, arncha?” he said. “You concerned AI will make you obsolete?”

I prayed for lightning. Not to strike Hudson’s dad directly, just near enough to trigger the siren. I waited for help from the heavens. No such luck.

I contorted my face into something approximating a smile. “I understand what you’re saying,” I said through gritted teeth, “but teaching kids how to think is important. AI won’t do the thinking for you.”

Hudson’s dad ratcheted up a super-sized, open-mouthed grin. His tongue was yellow. “Not yet anyway,” he said. “Just wait.”
Last month, Apple published a research paper titled “The Illusion of Thinking: Understanding the Strengths and Limitations of Reasoning Models via the Lens of Problem Complexity.” The first eleven pages feature 41 graphs, four diagrams, and one information table. Nineteen pages of references and appendices follow, including another 32 graphs. I understood only a fraction of it, but four seemingly important things stood out:

ONE
ChatGPT is what’s called a Large Language Model (LLM), and its magic comes from incredibly fast pattern recognition. Now there’s a new, more advanced thing called a Large Reasoning Model (LRM) that can detail exactly how it “thinks” through a problem. (If all this tech jabber makes your eyes glaze over, well, me too. Moving forward, we’re going to clump all this large-language lingo into something we’ll simply call Larry, a humanoid mashup of Google-Wikipedia on steroids.)

TWO
Larry struggles with problem-solving when complexities increase.
THREE
Larry has a bad habit of “overthinking.” He often finds the right answer early but keeps digging into incorrect alternatives.
FOUR
Over time with practice and coaching, Larry will get faster and smarter.
Sounds like Larry and I face similar challenges.
I struggle with complex problems.
I tend to overthink well after I already know what to do.
I can get faster and smarter with practice and coaching.
We all can.
How?
By using AI.
By partnering with Larry.
After using AI for the past 20 months, I find this equation indisputable:
Just me < (Me + ChatGPT)
Me, working alone, is less effective than me working with Larry.
Larry may not be able to think, but I can. Meanwhile, comparing Larry’s pattern recognition to mine is like comparing Shohei Ohtani’s baseball talent to that of a North Atlantic codfish.
One example of our partnership is a GPT (Generative Pre-trained Transformer) that does only one thing. I copy and paste something I’ve written—often a sentence that doesn’t sound quite right, maybe an initial humorous attempt comparing Ohtani and codfish—and it answers this single question for me:
Can you improve this for clarity?
Clear thinking resulting in clear communication.
That’s the goal.
(Any humor is a bonus.)
So how, exactly, did I create this custom, single-use GPT that I leave open all day and use to improve my writing?
I asked Larry to do it. Including the custom icon Larry created, the process took 4 minutes and 22 seconds.
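(For the curious, a single-purpose editor like this can also live outside the ChatGPT app. Here’s a minimal sketch using the OpenAI Python package, assuming an OPENAI_API_KEY in your environment; the system prompt and model name are my illustrative guesses, not the exact configuration described above.)

```python
# A minimal sketch of a single-purpose "clarity editor."
# Assumptions: the `openai` package is installed and OPENAI_API_KEY is set.
# The system prompt and model name below are illustrative, not the
# author's actual custom-GPT configuration.

SYSTEM_PROMPT = (
    "You are an editor with exactly one job. For any text the user pastes, "
    "answer a single question: Can you improve this for clarity? "
    "Reply only with the improved text."
)

def build_messages(draft: str) -> list[dict]:
    """Pair the fixed one-question instruction with the pasted draft."""
    return [
        {"role": "system", "content": SYSTEM_PROMPT},
        {"role": "user", "content": f"Can you improve this for clarity?\n\n{draft}"},
    ]

def improve_for_clarity(draft: str) -> str:
    """Send the draft to the model and return the clarified version."""
    # Imported here so build_messages stays usable without the package.
    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment
    response = client.chat.completions.create(
        model="gpt-4o",  # assumed model; any chat model would do
        messages=build_messages(draft),
    )
    return response.choices[0].message.content
```

Because the instruction never changes, you paste a sentence in and get a clearer sentence back, nothing more.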
For me, writing is proof that I’m thinking. (Or at least trying.)
Sometimes it reveals major flaws and giant holes of logic.
Other times I reread it and think, “Yes. That seems to be true of how things work.”
I’m not taking the advice of Hudson’s dad.
I’m not going to “just wait.”
I’ll keep thinking.
I’ll keep writing.
And I’ll keep asking Larry to help me say things more clearly, not because I can’t think, but because I do. And I’ll take all the help I can get.
As practice (mercifully) came to a close, I offered my own bit of advice:
“Probably still a good idea for Hudson to learn his multiplication tables.”

Copy a single line from an important email today and paste it into ChatGPT with this prompt: “Can you improve this for clarity?”
Thanks for reading.
I’ll be back next Thursday.

P.S. If you enjoyed this newsletter and haven’t read my latest book, The Air Raid Sales Offense, it’s worth checking out. It’s built specifically for LBM sales pros and packed with stories, proven strategies, and practical tools to help you score new sales—faster.
Subscribe here to get the next edition of The Craft of LBM Sales straight to your inbox—weekly stories and practical advice to master the craft of selling.
Copyright ©2025
Bradley Hartmann & Co.
All rights reserved.

Contact Bradley Hartmann:
bradley@bradleyhartmannandco.com
