AI · 3 February 2025 · 6 min read

When Not to Use AI in Marketing

AI is a tool, not a strategy. The specific tasks where AI makes things worse, from our actual list built over two years of running it in production.

By Jay

After two years of building AI workflows into our own operation and deploying them for clients, we have a clear sense of where AI earns its place and where it actively makes things worse. This post covers the second category. Not because AI is bad, but because knowing the limits of a tool is what makes you competent with it.

Crisis Communications

When a client is in a reputation crisis, they need a human voice. A real one. Not because AI cannot write a competent apology or a stakeholder update. It can. The problem is that crisis communications are read in an environment of heightened scrutiny, and any language that feels manufactured or generic will be called out immediately.

A crisis statement written by AI has a specific texture. It acknowledges concerns, it commits to review, it expresses regret in proportionate and measured terms. That is exactly what makes it feel hollow when a brand's customers are angry. People in a state of genuine concern can detect corporate-speak with precision. AI produces polished corporate-speak by default.

Crisis communications also require real-time judgement calls that AI cannot make: which specific stakeholders need to be addressed first, what information can be released and what cannot yet, where the legal exposure is, what the founder actually thinks and how to translate that into usable language. These are not drafting problems. They are strategic communication problems.

Use AI to help draft the communication once you know what you want to say. Do not use it to figure out what you should say.

Highly Personal Client Relationships

Some client relationships are relationships in the full sense of the word. The client knows your team. They have worked with you through difficult projects. They trust the organisation because they trust specific people in it. When something goes wrong, or when a significant decision needs to be made, they need to hear from a person.

Automating the communication layer in these relationships is a mistake that clients notice even when they do not say so directly. An email that is clearly templated or AI-assisted in a high-trust relationship reads as a signal that the relationship does not warrant a real response. That is a credibility problem.

We automate volume: inquiry responses, campaign status emails, reporting summaries. We do not automate relationship communication. The dividing line is whether the client would notice, and care, that a human had not written it.

Creative Work Requiring Cultural Insight

Advertising that needs to land within a specific cultural context (a subculture, a regional community, a particular moment in time) requires genuine cultural knowledge. AI has pattern-matched cultural references from its training data. That is not the same as understanding them.

We have seen AI-generated creative that uses slang incorrectly, references trends that have already faded, or gets the emotional tone of a cultural moment slightly wrong in a way that is obviously off to anyone inside the culture and invisible to anyone outside it. The problem is that the people approving AI creative are often outside the culture the campaign is targeting.

Humour is particularly hard. Timing, reference, and tone in comedy are deeply cultural. AI can write jokes. It cannot reliably write jokes that land with a specific audience in a specific context without extensive human editing by someone who actually belongs to that audience.

Use AI for concept exploration and draft generation in creative work. Keep a human from the target culture in the review chain.

When the Client Needs to Feel Heard

This is distinct from personal relationships. It applies to any situation where the function of the communication is acknowledgement, not information transfer. Customer complaints. Feedback responses. Situations where someone has had a bad experience and needs to feel that a real person received what they said and understood why it mattered.

AI responses to complaints tend to follow a pattern: acknowledge, apologise, explain process, offer resolution. This is structurally correct but emotionally thin. Real acknowledgement includes some evidence that the specific details of the complaint were understood. "I can see that this happened on a Friday evening when you had guests coming" is qualitatively different from "I understand your experience was not up to our usual standards." One is read. One is generated.

Businesses that route all complaint responses through AI templates often see declining review scores over time even when the operational problems that caused the complaints are resolved. The feeling of not being heard is its own complaint.

Our Actual List From Two Years of Building

Beyond the categories above, here are the specific tasks where we have pulled AI back out of workflows after testing it.

Long-form strategic documents. Strategy presentations, market analysis documents, and annual planning materials that go to senior stakeholders. The documents come out readable but lack the conceptual sharpness that a strategic thinker produces. The ideas are safe. Nobody acts on safe ideas.

Influencer outreach. AI-drafted outreach to creators reads like AI-drafted outreach. The personalisation is technically present but emotionally absent. Response rates dropped for a client when we tested AI-drafted versus human-drafted outreach at scale.

Brand naming. AI naming exercises produce names that are available, pronounceable, and adequately differentiated. They rarely produce names that are genuinely great. The gap between a good name and a great one is hard to articulate but obvious in hindsight, and AI almost always lands on the merely good side of the line.

Copy for grief, health, or financial stress contexts. Any category where the reader is vulnerable requires a different kind of care than most marketing copy. The clinical evenness of AI output is wrong for these contexts in a way that is hard to correct with editing.

The Pattern That Causes Most Mistakes

The marketing tasks where AI causes the most damage are not the ones where it produces obviously bad output. They are the tasks where it produces output that is good enough to publish without careful review, and then gets published without careful review.

AI writing sounds authoritative. It is grammatically correct, structurally coherent, and appropriately confident. These qualities make it easy to approve quickly. The problem is that "sounds right" is not the same as "is right" for the communication context at hand.

A crisis response that sounds professionally calm when it should sound personally contrite is a mistake. A heartfelt congratulations message that uses the right words in the right order but feels somehow flat is a mistake. A brand awareness campaign that says exactly what every competitor says is not technically wrong, but it is useless.

Speed is the enemy here. The brands that use AI well in marketing are the ones that slow down at the review step rather than treating AI output as ready to use. The brands that use AI badly are the ones that let speed pressure eliminate the human judgement step entirely.

The Framework We Use

One question determines whether AI belongs in a task: does this task require the reader to feel that a real human made considered choices?

If the answer is no, AI can do it. If the answer is yes, AI supports a human, it does not replace one.

That boundary shifts as models improve. But it has not shifted as fast as the enthusiasm for deploying AI everywhere would suggest. The gap between what AI can produce technically and what a particular communication context actually requires is where the serious mistakes happen.

If you want to build AI into your marketing operation in a way that is genuinely useful rather than superficially impressive, see what we offer or talk to us directly.

AI · marketing · limitations · strategy