Hedging client recommendations used to be playing it safe. Now it's where AI can't lose.
- Matthew Ireland
- 6 days ago
- 2 min read
The standard advice for account management teams trying to stay relevant against AI is to apply their "unique expertise". But what does that actually look like when one of your account managers is on a Tuesday afternoon status call? How does a client FEEL the difference between a recommendation from your team and one they could have AI-prompted themselves?
AI simulates intelligence, but can't take a position based on experience. It has no skin in any game. Your team does, and that's the differentiator I believe is worth leaning into.
LLMs are built to lay out options. Ask one for a recommendation and you'll get three: neatly weighted, all defensible, with pros and cons. When your account managers do the same thing, they're competing with AI on the very ground where AI cannot lose.
Is your team still hedging their bets?
In my experience, when account teams hedge recommendations, it's because committing to a point of view feels exposing. Giving options is comfortable – the client picks, nobody's wrong, everyone moves on. One recommendation, owned and defended, makes your account manager accountable if it doesn't land. Accountability is uncomfortable, and AI has now made the comfortable, frictionless multi-option answer easy to generate ad infinitum. Hedging recommendations isn't always a skill gap. It's an anxiety-management strategy that's become a real threat to your team's relevance.
The alternative is a deliberately opinionated, owned position – one that references what your team has seen work for similar clients, what went sideways in this kind of project last time and why, and the leading signals of success and failure they've learned to spot before anyone else does.
It's not about being right. There's always more than one way to skin the proverbial cat, and another agency could get your client there by different means. That's fine.
It's about your team taking one position – their position – that they know THEY can make work for this client, in this situation, referencing their experience that nobody else in the room (or any LLM) has. This is what applying "unique expertise" looks like in practice.
It means getting familiar with the discomfort of committing – precisely the opposite of the non-committal safety net LLMs provide.
AI can only average what already exists. A human who owns a recommendation and stands behind it builds the deeper relational trust that no model can manufacture.