The Death of Marketing.
The Rise of the LLM.
And a peek inside the Black Box.
While hard to date precisely,
at some point during the winter of 2025,
the world of marketing and brand strategy
entered a period best described as early
DYSTOPIAN DUMPSTER FIRE.
Almost overnight,
decades of expertise and experience appeared to lose their luster,
eclipsed by a shiny new reality.
The emotional intelligence to
get inside the heads of an entire target audience
in order to tease out a previously missed insight,
the intellectual rigor to hone a strategy down
to the single thing that makes an offering truly unique,
even the artistry to craft videos that draw people in,
emotionally connect, and send them off motivated to take action:
each seemed suspect. A relic of a past that was suddenly no longer relevant.
And that was just the beginning.
Since then, it seems every month brings with it yet another “epiphany”:
Claude 3.0,
Midjourney 7,
Gemini 2.5.
Each arrives on the runway
with its own retinue of breakthroughs and benchmarks,
metaphors and magnifications:
“Exponential intelligence!”,
“True sentience!”,
“The end of expertise!”
So many claims. So much certainty.
Am I the only one noticing that something’s missing?
Welcome to the rise of the LLMs (Large Language Models).
They’re here. They’re amazing.
And they’ll undoubtedly have a huge impact on the way we’ll live, learn, and go about
both adding value and being compensated for it.
But wait.
Before you run off in a panic or get lost in all the breathlessness,
let’s stop, refill our beverage of choice,
sit back, and sneak a peek inside these much-acclaimed black boxes.
After all, if you're anything like me,
the sooner you get your head around what these machines do
and how they do it,
the sooner you’ll start to understand
what they’re good at,
what they’re not so good at,
and why you (as long as you are a real human being)
have every reason to relax
and, yes, “stop worrying and start to love AI.”
Right up front, let’s clear up a few misconceptions:
LLMs don’t think.
They don’t imagine.
And they definitely don’t understand —
at least not like we do.
They do, however, do one thing exceptionally well.
They predict.
Or, more specifically, they predict what the most likely next word
(or, even more accurately, next token)
in a sentence should be according to past experience.
Say you ask, “How does the poetry of Emily Dickinson transcend everyday life?”
The model reformats that as:
“The poetry of Emily Dickinson transcends everyday life by…”
And then?
It guesses.
Next token.
Next token.
That’s it. No comprehension. No common sense.
Just pattern prediction —
on a scale so massive it can feel like magic.
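(For the curious, here’s the whole idea in miniature, as a toy Python sketch. The prompt and the probabilities below are invented purely for illustration; a real LLM scores tens of thousands of possible tokens with a learned model rather than a lookup table, but the final step, pick the most likely next token, is the same.)

```python
# A toy illustration of next-token prediction (not a real LLM).
# The "model" here is just a table of made-up probabilities for
# what might follow one specific prompt.

toy_probabilities = {
    "The poetry of Emily Dickinson transcends everyday life by": {
        "finding": 0.31,    # invented numbers, for illustration only
        "turning": 0.24,
        "noticing": 0.18,
        "banana": 0.0001,   # grammatically possible, statistically absurd
    }
}

def predict_next_token(prompt: str) -> str:
    """Return the single most likely next token for the prompt."""
    candidates = toy_probabilities[prompt]
    return max(candidates, key=candidates.get)

prompt = "The poetry of Emily Dickinson transcends everyday life by"
print(predict_next_token(prompt))  # -> "finding"
```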
What changed, and what made this kind of prediction possible,
was a shift in how we build software.
Rather than instructing a computer to follow hard-coded, rule-based logic —
“If badge is valid > open gate” —
researchers developed an entirely different kind of model.
One that works like the most flexible computer we know:
the human brain.
The result is an architecture called a neural network.
And here’s how it works:
The network is made up of layer upon layer of interconnected artificial neurons.
Each one does a simple job.
It receives an input: a number.
It performs a tiny calculation on it
(say, add 0.2, divide by 0.8, apply a nonlinear function),
and then passes the result on to the next layer.
A neuron there does its own tiny calculation,
and so on. Layer after layer.
Tiny calculation after tiny calculation.
Each one changing the number being passed along by a tiny, tiny bit.
Until the final neuron receives the final number,
looks it up on a giant token output table,
and returns “green.” Or “old.” Or “rich.”
Or whatever word (or token) that number maps to.
And that is its guess.
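(Here’s that pipeline as a toy Python sketch. The layers, weights, and “output table” below are made up for illustration, and a real model passes long lists of numbers through billions of weights rather than one number through four layers, but the shape of the process is the same.)

```python
import math

# A drastically simplified "neural network": each layer nudges the number
# it receives with a tiny calculation, then hands the result to the next
# layer. All weights here are invented for illustration.

layers = [
    lambda x: x * 0.8 + 0.2,    # tiny calculation #1
    lambda x: math.tanh(x),     # a nonlinear function
    lambda x: x * 1.3 - 0.05,   # tiny calculation #2
    lambda x: math.tanh(x),
]

token_table = {0: "green", 1: "old", 2: "rich"}  # toy "output table"

def forward(x: float) -> str:
    for layer in layers:
        x = layer(x)  # each neuron changes the number a tiny bit
    index = round(abs(x) * 10) % len(token_table)  # map final number to a token
    return token_table[index]

print(forward(0.5))  # a number goes in, a word comes out
```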
It sounds simple, almost mechanical:
a number goes in, a word comes out.
A word that is uncannily often right!
Which brings up a pretty big question:
if no one’s programming the rules,
how does each neuron know what tiny calculation to do?
LLMs aren’t programmed in the traditional sense.
They’re not given a list of instructions or rules to follow.
No. Models are trained not to understand, mind you.
But to guess.
Really well.
Here’s how:
The system training a model is given a truth,
say “The sun rises in the east.”
It turns that truth into a training scenario by dropping the last word:
“The sun rises in the ______.”
The model then makes a guess at what the next word should be.
Let’s say it guesses, “sky.”
Which is close but not true.
So, through a learning process called backpropagation,
the trainer sends a feedback signal backward from that last neuron in the model,
and, as a result, each neuron involved
in producing that wrong answer
makes a tiny adjustment to the tiny calculation it performed.
One might change a multiplier from 0.8 to 0.9.
Another might shift how it weighs a certain input.
Each subtly updates what it does in a process that’s technically referred to as
adjusting the model’s weights.
The model then tries again.
And this time the result is ... “west.”
Still untrue, but warmer.
So more adjustments.
And more tries ...
Some colder.
But most increasingly warmer and warmer until …
Ding! Ding! Ding!
it lands on ... the true answer “east.”
This process is then done over and over,
tens of thousands of times a second,
for trillions of training scenario truths
(think every sentence on the internet … minus one word).
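(For the mechanically inclined, here’s the spirit of that loop in a few lines of Python. It trains a single made-up “neuron” with one weight on one invented example; real backpropagation nudges billions of weights across trillions of examples, but the rhythm, guess, measure the error, adjust, is identical.)

```python
# A toy version of the training loop described above. One "neuron" with one
# weight learns to map an input to a target by repeatedly guessing, measuring
# how wrong it was, and nudging its weight in the direction that shrinks the
# error. All numbers are invented for illustration.

weight = 0.1            # the neuron's starting "tiny calculation": output = weight * input
learning_rate = 0.01

x, target = 3.0, 6.0    # a single made-up training truth: 3 should map to 6

for step in range(200):
    guess = weight * x                   # forward pass: make a guess
    error = guess - target               # how far off was it?
    gradient = 2 * error * x             # which direction shrinks the error?
    weight -= learning_rate * gradient   # nudge the weight a tiny bit

print(round(weight, 3))  # ~2.0, so the neuron now guesses 3 * 2.0 = 6.0
```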
And the result?
The result is a trained model:
a model whose weights have been tweaked into a kind of probability map,
a deeply layered,
highly tuned statistical engine
that is a genius at doing one thing ...
guessing.
All of this brings us to a key question.
And an answer that is at the core of my argument about why
it’s time you really should stop worrying about AI...
If it’s just guessing, why does it seem so… smart?
It seems so smart
because we humans have learned to equate fluency with intelligence.
From a very young age we are taught to
judge people based
on how clearly they speak,
how smoothly they write,
and how confidently they deliver.
So when a machine shows up that can do all three
without pausing,
second-guessing, or hedging,
we assume it must be brilliant.
This is known as the ELIZA effect,
named after one of the earliest chatbots.
Even back then, people were surprised by how quickly
they began to project personality and emotion onto a machine.
And intelligence is no different.
Now multiply that by 10,000 and you have an LLM.
But make no mistake:
the model doesn’t “know” what it’s saying.
No internal world.
It has no concept of truth.
No memory of its past conversations.
No pain and therefore no visceral measure of the price of error.
It’s a paint-by-numbers prophet with an internet-sized palette.
A very clever mirror reflecting the average of everything it has seen.
And despite the fact that it has seen most of everything on the internet,
that amount of information pales against the petabytes of data
you had absorbed by the time
you were two.
Of course, no one should think that even the illusion of intelligence
isn’t powerful. Or valuable.
When language can be manufactured as consistently,
customizably,
and cheaply as LLMs can,
there are definitely places where that output is more than enough for the job at hand.
That is what makes LLMs simultaneously liberating and threatening,
especially for those of us whose work has always relied
on the persuasive power of language.
When machines do most of the talking,
where does that leave the thinker?
In fact, it leaves you exactly where you want to be:
no longer spending the bulk of your time
on the kind of work Truman Capote had in mind when he said, “That’s not writing. It’s typing.”
But instead focusing on the essence of your best work:
the empathy for your fellow human that you’ve developed over a lifetime of “training.”
Your lived experience.
Your triumphs. And catastrophes.
Each contributed to the human being you are
and many contributed to the best of your best ideas.
What’s more, this thing only you can bring,
this innately human superpower,
will not, I would hazard,
prove as easily mimicked as smooth talking.
In fact, I’d go so far as to say that
we’ll come to find there’s just as broad an uncanny valley
in the expression of authentic human motivations
as we’ve observed in the expression of realistic-looking human beings.
So take heart.
There is a tomorrow for people whose work centers on human motivation.
The tools are just evolving.
And the key is, as this series proclaims,
to stop worrying. And start moving forward.
Speaking of which, in the next article,
we’ll zoom in on LLMs in marketing specifically,
an industry built on insight, authenticity, and storytelling,
and examine how LLMs haven’t just entered the conversation.
They’re becoming an essential tool
for anyone who wants to lead it.
This series wouldn’t exist without the insight, patience, and moral support of two people:
My beautiful wife, Cecile Engrand — the best event marketing CD I know — who was showing the world what was possible with AI long before the rest of us caught on, and whose strategic sensibility still grounds everything I do. And my lifelong friend, Thomas Bolton — Princeton-trained, fractional CPO, and AI whisperer — who’s been my teacher, tech advisor, and intellectual sparring partner from day one. And the only person I know who’s building his own AlphaGo model … for fun. Without their very human connection (and the help of my favorite LLM, ChatGPT), none of this would’ve come together.
All film references in this article are used under U.S. Fair Use Guidelines for the purpose of commentary, critique, and cultural analysis. All rights remain with the original copyright holders. If you’re a rights holder and wish to request attribution or removal, please contact me at LiamSherborn@gmail.com.