The Problem With AI-Generated Ad Copy (And How We Fix It)
Generic AI copy sounds confident and says nothing. That is a precise description, not a dismissal.
Ask any major AI model to write a Facebook ad for a restaurant and you will get some variation of "indulge in an unforgettable dining experience." Ask it for a skincare brand and you will get "transform your skin with our carefully crafted formula." Ask it for a gym and it will offer to help you "crush your goals and reach your potential." These lines are not wrong in any factual sense. They are also identical to what every competitor in the category would receive from the same prompt.
The problem is not that AI writes badly. The problem is that generic input produces generic output, and most people using AI for ad copy are giving it generic input.
Why Generic AI Copy Fails
There are three specific failure modes in AI-generated ad copy that come from the same root cause: the model has no actual knowledge of the brand, the audience, or the offer.
The first failure is no brand specificity. AI copy sounds like it was written for the category, not for the brand. When BLK MRKT Coffee briefed us, we did not prompt Claude with "write ad copy for a coffee brand." We gave it the name, the aesthetic, the product range, the price point, the target customer demographic, existing customer reviews, the founder's voice, and the specific product we were advertising. The output was different in every way that matters.
The second failure is no audience insight. Who is seeing this ad? What do they already believe? What objection are they going to have immediately? What is the specific thing that would make this offer relevant to them right now? If the AI does not know any of this, it writes for a hypothetical average customer who does not exist. Average customers do not convert.
The third failure is no offer clarity. "Best pizza in Adelaide" is not an offer. "Try our wood-fired margherita, $18 with a glass of wine every Tuesday" is an offer. AI will write around an offer it has not been given. It will generate language that implies a value proposition without stating one. Copy without a clear offer does not convert.
The Pipeline We Built
The fix is not to avoid AI. The fix is to stop giving AI a prompt and start giving it a system.
Our pipeline has four stages before Claude writes a single word of ad copy.
Stage 1: brand voice extraction. We build a brand voice document from existing material: website copy, the client's own descriptions of their business, reviews they have highlighted, language they use naturally in emails and messages. This document is specific. It includes words the brand uses, words it avoids, the emotional register it operates in, and examples of copy that has worked before.
Stage 2: audience profiling. For each audience segment we are targeting, we define what they already believe about the category, what problem they are trying to solve, what they are likely to object to, and what language they use to describe the problem in their own words. This comes from customer research, from review mining, from comment analysis on existing ads.
Stage 3: offer documentation. We write out the specific offer in precise terms: what the product is, what it costs, what makes it different from the alternative, what the commitment or risk level is for the customer, and what the single most compelling thing about it is right now.
Stage 4: Claude writes from a populated system prompt. Not a generic request. A system prompt that includes the brand voice document, the audience profile for this specific ad set, the offer in precise terms, the format requirements, the character limits, and examples of copy that reflects the standard we are aiming for.
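For readers who want to see the shape of it, here is a minimal sketch of how the four stages can feed one populated system prompt. The field names and structure are illustrative only, not a fixed schema we ship; the point is that every input the model needs exists as a document before any copy is requested.

```python
from dataclasses import dataclass

@dataclass
class BrandVoice:
    """Stage 1 output: the brand voice document."""
    name: str
    preferred_words: list[str]
    avoided_words: list[str]
    register: str                # e.g. "warm, direct, unhurried"
    sample_copy: list[str]       # examples that reflect the standard

@dataclass
class AudienceProfile:
    """Stage 2 output: one profile per ad set."""
    description: str
    existing_beliefs: str
    likely_objection: str
    their_own_words: str         # how they describe the problem

@dataclass
class Offer:
    """Stage 3 output: the offer in precise terms."""
    product: str
    price: str
    differentiator: str
    hook: str                    # the single most compelling thing right now

def build_system_prompt(voice: BrandVoice, audience: AudienceProfile,
                        offer: Offer, char_limit: int) -> str:
    """Stage 4 input: everything the model needs, in one place."""
    return "\n".join([
        f"You are writing ad copy for {voice.name}.",
        f"Tone: {voice.register}.",
        f"Use words like: {', '.join(voice.preferred_words)}.",
        f"Never use: {', '.join(voice.avoided_words)}.",
        f"Audience: {audience.description}",
        f"They already believe: {audience.existing_beliefs}",
        f"Their likely objection: {audience.likely_objection}",
        f"They describe the problem as: {audience.their_own_words}",
        f"Offer: {offer.product}, {offer.price}. {offer.differentiator}",
        f"Lead with: {offer.hook}",
        f"Keep the primary text under {char_limit} characters.",
        "Reference copy that matches the standard:",
        *voice.sample_copy,
    ])
```

A generic prompt is one sentence. A populated one is every line of this, filled in from research. That difference is the whole pipeline.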
The output is specific, on-brand, and grounded in a real offer. It is also editable. We review every line and change what does not fit. But the starting point is already 80% of the way to something that can be tested.
A Before and After
The same client. Two different prompts.
Generic prompt: "Write a Facebook ad for An Nam Quan, a Vietnamese restaurant in Adelaide."
Output: "Experience the rich and vibrant flavours of Vietnam right here in Adelaide. An Nam Quan brings authentic Vietnamese cuisine to your table, crafted with care and tradition. Book your table today."
That copy could belong to any Vietnamese restaurant in any city. It says nothing specific. There is no offer. There is no reason to act now.
Populated system prompt: the same request, but with the brand voice notes, the current promotion, the audience (Adelaide locals aged 28-45 who follow food and dining accounts, likely have visited once and not returned), and the specific ask.
Output: "You meant to come back. The pho you had that first time kept you thinking about it. We are here, same kitchen, same broth that has been simmering for over twelve hours. Come in this week, bring someone, and try the combination set. Two mains, two entrées, $68 for two. Tuesday through Thursday."
That is not perfect copy. It needs editing. But it is specific, it addresses a real audience behaviour (lapsed customers who have not returned), it has a clear offer, and it is tonally consistent with the brand. The difference is entirely in the input.
Where Human Review Matters Most
Even with a well-structured pipeline, three things require human judgement before AI copy goes live.
The first is offer accuracy. If the price changed, if the promotion ended, if there is a specific detail the AI got wrong, that needs to be caught before it goes to the client or goes live. Claude writes from what it is given. It cannot know what changed yesterday.
The second is platform suitability. Meta has ad policies. Google has specific character limits and quality standards. Copy that works as a concept may need to be adjusted for the platform it is running on. That adjustment requires knowing the platform.
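Part of that platform check is mechanical, and the mechanical part can run before human review. A minimal sketch: the limits below are the commonly published ones (Google responsive search ads allow 30-character headlines and 90-character descriptions; Meta truncates feed primary text at roughly 125 characters), but confirm current platform specs before relying on them.

```python
# Pre-flight length check before copy goes to human review.
# Limits are the commonly published figures, not guaranteed current specs.
PLATFORM_LIMITS = {
    "google_headline": 30,
    "google_description": 90,
    "meta_primary_text": 125,  # truncation point in feed, not a hard rejection
}

def check_length(copy: str, slot: str) -> tuple[bool, int]:
    """Return (fits, overflow). overflow is 0 when the copy fits the slot."""
    limit = PLATFORM_LIMITS[slot]
    overflow = max(0, len(copy) - limit)
    return overflow == 0, overflow
```

A check like this catches the obvious failures. It does not replace the human judgement about whether the copy still works once it has been cut to fit.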
The third is cultural and contextual fit. For hospitality clients in particular, the tone of an ad has to fit what is happening in the world right now. A breezy, celebratory tone is wrong if something serious just happened. AI does not have today's context unless you give it today's context.
What This Means for Your Business
If you have tried AI for ad copy and found it produced generic output, the problem was almost certainly the input, not the model.
Build the inputs first: brand voice, audience profile, specific offer. Then ask Claude to write from them. The output will be unrecognisable compared to what you get from a generic prompt.
If you want copy that actually converts and a team that builds the pipeline to produce it consistently, that is what our Meta Ads work looks like. The tools are the same. The process behind them is not.