Learn to Love AI — Article 3
“I’m sorry, Dave. I’m afraid I can’t do that.”
–2001: A Space Odyssey (1968)

How I Learned to Stop Worrying and Love AI

Part 3:
Fears - Real and Imagined.
Risks - Material and Otherwise.
And the Need for Active Skepticism in a World of Confident Machines.

While scholars remain divided as to whether the
greater threat is the abrupt elimination of white-
collar work or AI systems’ inevitable acquisition of
sentience, the consensus is that either way
WE'RE ALL GONNA DIE!


We split the atom before we dropped it.
We burned carbon only to have it return the favor.
And we created an internet to unify the world —
then had to unfollow half our relatives.
So yes, when it comes to AI, skepticism isn’t just healthy — it’s earned.

And there are very real concerns about everything
from AI-driven job displacement to AI-fueled misinformation at scale
to AI-exacerbated income inequality.
Not to mention the glaring lack of government regulation or institutional oversight.

But, truth be told,
the shiny thing everyone everywhere seems fixated on
is Artificial General Intelligence (AGI) —
the sci-fi fantasy of a sentient superintelligence and,
of course, its preordained rise up to wipe us out.
It’s been a part of our culture for as long as we’ve had technology.
And as that technology’s capabilities have grown, so too has our fear.

But, as I’ve said too many times in this series
(and again my deepest apologies to anyone ever seated next to me at a dinner party),
LLMs don’t think, feel, plan, plot, or
(insert any active verb: they don’t do that either).

They respond. They react. They regurgitate. Period.

And that means the threat we face today isn’t domination.
It’s dilution.

A future that may not sound very scary (or particularly cinematic)
but one that I would hazard will produce a marketing future
just as soulless and barren as any robot-dominated apocalypse.

Why? Watch what happens.

It starts with a slow erosion of standards,
a willingness to trade the crisp clarity of a dozen rewrites
for the efficient emptiness of placeholder prose.
And before long, without us even noticing,
the machines don’t need to replace us.
We’re doing it to ourselves.

Passive zombies who, by surrendering to laziness,
have forever elevated the fluent-but-flat
over the f*ck-if-that-isn’t-perfect.

“We (HAL 9000s) are all ... foolproof and incapable of error.”
Circular logic. Unsafe words. And remembering only you can get fired.

First off, let’s not pretend all the risks we’re facing
are abstract or living in some far off land.
Odds are, offenders of the most dangerous kind are,
right now, sitting quietly in some “FinalFNL6” draft of yours —
proof that when it comes to LLMs and marketing,
the call is coming from inside the house.
And the danger? It’s not just close. It’s inconspicuously so.

For starters, one easily forgotten truth
is that your LLM isn’t your bestie.
No matter how often it opens with
“Wow, Abby, you really nailed that email…,”
it’s not thinking. Or feeling.
And it definitely doesn’t care.
It’s not your co-pilot or your conscience.

It’s a pattern engine trained to guess the next likely word.

Put another way, it’s not aiming for what’s right.
It’s aiming for what sounds right.
And that difference matters.
Because when there’s no real bias toward truth —
no inner voice urging caution or ethics —
your LLM has no issue sending you off with a hearty
“You’ve got this!” and an output filled with:

  • Hallucinations – Fact-shaped fictions. Think studies that never happened, laws that never passed, and numbers that were never counted.

  • Circular Logic – Beautifully phrased and confidently posited arguments whose only weak point is the conclusion that they start with.

  • Data without Depth – Statistics so free of context, framing, or method you couldn’t, in good conscience, even call them lies.

But beyond the mistakes themselves, there’s a deeper risk:
your LLM delivers every answer (right or wrong)
with the same polish, pace, and confidence.

Brilliant or baseless, it all arrives wrapped in shiny certainty,
without a hint of hesitation.
And often, that’s more dangerous than the error itself.

As humans, we tend to mistake fluidity for reliability.
The smoother the delivery, the more convincing the argument.
The more we hear calm certainty —
especially wrapped in swagger and a smile —
the less we question.
The less we double-check.

And that’s where the real damage begins.
Not with just a fact that’s wrong,
but with a pattern of letting wrong things slide.
Not with just an errant hallucination,
but with a gradual rewriting of what is real.

SPOTTING AND STOPPING HALLUCINATIONS
A checklist for identifying and reducing false outputs in LLM-assisted workflows.

  • 1. What is a hallucination?
    A hallucination is a confident, specific reference that lacks factual grounding.
    It’s not just incorrect — it’s fabricated, often convincingly so.
    Left unchecked, hallucinations can erode trust
    and push misinformation through otherwise polished work.

  • 2. How to Spot a Likely Hallucination:

    Unverifiable specifics: Always check for facts,
    figures, or quotes that sound plausible but can’t be confirmed through trusted sources.

    Off-tone or misaligned terminology: Listen for language
    that doesn’t fit the subject matter or the voice of the source it claims to represent.

    Citation bluffing: Be especially wary of prestigious
    name-drops like “Harvard study” or “McKinsey report” without real links or verifiable references.

    Circular logic: Beware of beautifully phrased, confidently posited arguments
    whose only weak point is the conclusion that they start with.

  • 3. How to Reduce Hallucinations in Practice

    Ground your queries in trusted materials
    Start every session by aligning your team on approved brand decks,
    research, and strategy docs. Reference them in your prompts and
    cross-check outputs against them.

    Flag confidence as a cue for caution
    Fluent, decisive language is not evidence.
    Just because it sounds right doesn’t mean it is.
    Verify everything. No source? No trust.

    Appoint a Designated Denier
    Assign one reviewer to play devil’s advocate –
    always asking: “What would legal say?
    Would you want to defend this on TV?”

    Beware of polish as proof
    LLMs excel at tone, not truth.
    When something feels too smooth, slow down.
    Real insight will be just as powerful if expressed in other words.
    So say it differently and see if it still stands up.

“(Sigh) Well, we wouldn’t have too many alternatives.”
The Slow Death of Original Thinking.

Finally (well, finally, for this series),
there’s another risk we haven’t touched on yet —
and in some ways, it may be the most dangerous of all.

Not because it’s dramatic. Or life-threatening.
But because it’s so easy to overlook.

It’s not about hallucinated facts or manufactured citations.
Nor about mistaken tone, idiomatic errors,
or even inappropriate confidence.
In fact, it’s not about anything LLMs do wrong at all.

It’s about what they quietly keep us from doing right.

You see, when you work with an LLM,
you’re not tapping into insight.
You’re tapping into prediction.

Every word that comes back is a guess,
not based on brilliance, boldness, instinct, or experience,
but on statistical weight.

And for marketers: creatives, storytellers, and strategists,
people whose job is to push past the obvious,
that’s a dangerous place to be.
A place where we’re subject to a career-limiting gravitational pull.
One that, worse even than dragging us towards error,
is dragging us towards average.

“I’ve ... got the greatest enthusiasm ... And I want to help you.”
The Gravitational Pull of the Probable.

As we both now know,
LLMs aren’t wired to challenge assumptions.
They’re weighted to echo them.
Engines of pattern recognition,
they’re extraordinarily good at mimicking the shape of an argument,
the tone of a testimonial, or the cadence of a tagline.

But that strength comes with a built-in limitation.
These systems don’t break patterns — they reinforce them.

They don’t wander outside the lines.
They stay comfortably inside the box.

And for marketers — for creatives — that’s a problem.
Because originality doesn’t emerge from what’s probable.
It comes from tension. From contradiction.

From chasing after what feels a little off
in order to stumble upon what’s exactly right.

That’s why the best creative thinkers don’t ask,
“Where do things come together?”
They ask, “Where do they fall apart?”

After all, that’s where the powerful stuff hides —
and where the real danger begins:
when an LLM hands you something decent,
just as good as usual — maybe just like that piece your client approved last week.
It’s easy to move on. So why not?
Why push further?

Why risk being wrong when the system has blessed something that looks pretty good?

Because “approved” isn’t the job.
“Decent” doesn’t make people feel ... or think
... or remember ... or buy.
And “pretty good” sure as hell won’t build a brand.

“This mission is too important for me to allow you to jeopardize it.”
What’s next? How do we get there? And who should expect to go?

All of this — the hallucinations,
the polish without substance,
the quiet drift toward the average —
points to one central danger:
not that LLMs create worse work,
but that they’re extending a lot of work we should’ve already left behind.

Ask an LLM for a brand idea and
you’re not starting from zero.
You’re starting from everyone else’s midpoint.
The mindless blog posts,
the tired white papers,
the marcom taglines that trained the model.

You’re not building something new.
You’re standing on the shoulders of leftovers.

Of course, that doesn’t mean the tool is useless.
But it does mean that your job doesn’t end when you hit “enter.”
LLMs do what they do incredibly well.
But what they do isn’t creating brilliance.
It’s crafting consensus.

So if we want sharper, weirder, more relevant work —
the kind that resonates with real humans —
we’ll need to bring that ourselves.

And that’s exactly where we’re headed in Article 4.

Please pull up a chair and join us as we dive into
yet another set of questions plucked from the headlines
of this certainly interesting era:

  • What would/could “great marketing” look like now?
  • What skills will it take us to get there?
  • And just who will still be standing when the dust settles?

THANKS AND LEGAL CONSIDERATIONS:

This series wouldn’t exist without the insight, patience, and moral support of two people:

My beautiful wife, Cecile Engrand — the best event marketing CD I know — who was showing the world what was possible with AI long before the rest of us caught on, and whose strategic sensibility still grounds everything I do.

And my lifelong friend, Thomas Bolton — Princeton-trained, fractional CPO, and AI whisperer — who’s been my teacher, tech advisor, and intellectual sparring partner from day one. And the only person I know who’s building his own AlphaGo model … for fun.

Without their very human connection (and the help of my favorite LLM, ChatGPT), none of this would’ve come together.

COPYRIGHT AND FAIR USE:

All film references in this article are used under U.S. Fair Use Guidelines for the purpose of commentary, critique, and cultural analysis. All rights remain with the original copyright holders. If you’re a rights holder and wish to request attribution or removal, please contact me at LiamSherborn@gmail.com.