Short answer: no. Longer answer: it depends on what you mean by "rewrite" - and the distinction matters more than you think.

Every week I see LinkedIn posts from people sharing their "AI-optimised" CVs. They've pasted their old CV into ChatGPT, asked it to "make this more impactful," and out comes something that reads like a brochure for a management consultancy. Lots of "spearheaded" and "leveraged" and "drove transformational change across cross-functional teams."

It sounds impressive. It also sounds like everyone else who did the same thing. And that's the first problem.

The flattening effect

When you ask an LLM to rewrite your CV, it does something subtle and damaging: it normalises you. It takes your actual experience - the messy, specific, hard-won stuff - and smooths it into generic corporate language.

A CTO who rebuilt a failing platform on a shoestring budget while managing a team exodus becomes someone who "led strategic technology transformation and optimised resource allocation." Which one would you want to interview?

The specificity is what makes you interesting. The AI strips it out because it doesn't know what mattered. It just knows what sounds "professional."

The invention problem

This is the one that genuinely worries me. AI rewrites don't just rephrase - they infer. They fill gaps. If your CV says you "managed a team," the AI might upgrade that to "built and scaled a high-performing engineering team of 15." If you mentioned a project in passing, it might add outcomes you never claimed.

You probably won't notice, because it reads well. But a good recruiter will probe those claims in the first call. And when you can't back up what's on your own CV, the trust is gone. Not just for that role - for that agency, for that recruiter, possibly for that client relationship.

Your CV is a promise. If AI is writing the promises, you'd better be sure you can keep them.

Recruiters can tell

I had a conversation with a recruitment director last month who said something that stuck with me: "I can now tell within two sentences whether a CV was written by ChatGPT. The structure, the verbs, the way it avoids saying anything concrete - it's a pattern."

This isn't snobbery. It's pattern recognition. Recruiters read hundreds of CVs a week. When they start seeing the same cadence, the same filler phrases, the same hollow confidence - they tune out. Your AI-polished CV doesn't stand out. It blends in with every other AI-polished CV in the stack.

Worse, it raises a question: if the candidate can't articulate their own experience, what does that say about their communication skills?

So what should AI actually do?

This is the bit that matters to me, because it's the reason CV Screened exists.

AI is excellent at comparison. It can read a job spec, read your CV, and tell you:

  • Which requirements you've evidenced and which ones are missing.
  • Which keywords the recruiter will be scanning for - and whether you've used them.
  • Where your CV is front-loading the wrong things for this specific role.
  • What a recruiter would flag as a gap versus a hard blocker.

That's analysis. That's useful. It tells you what to change - but it doesn't change it for you. You still write the words. You still decide what's true. You still own the CV.

The difference is that you're making changes based on specific, role-level insight instead of guessing.
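
To make "analysis, not rewriting" concrete, here's a deliberately naive sketch of the idea in Python. It is not how CV Screened actually works - the requirement list, the function name and the substring matching are all invented for illustration, and real matching has to cope with synonyms, phrasing and context - but it shows the shape of the output: a list of gaps for you to close in your own words, not replacement prose.

```python
# Toy sketch: which job-spec requirements does a CV already evidence?
# Everything here (names, requirement list, matching logic) is illustrative.

REQUIREMENTS = [  # hand-picked from a hypothetical job spec
    "Kubernetes",
    "Terraform",
    "incident management",
    "team leadership",
]

def coverage_report(cv_text, requirements):
    """Split requirements into (evidenced, missing) using a crude
    case-insensitive substring check against the CV text."""
    cv_lower = cv_text.lower()
    evidenced = [r for r in requirements if r.lower() in cv_lower]
    missing = [r for r in requirements if r.lower() not in cv_lower]
    return evidenced, missing

if __name__ == "__main__":
    cv = ("Ran incident management for a payments platform; "
          "introduced Terraform across three teams.")
    evidenced, missing = coverage_report(cv, REQUIREMENTS)
    print("Evidenced:", evidenced)  # ['Terraform', 'incident management']
    print("Missing:  ", missing)    # ['Kubernetes', 'team leadership']
```

Note that the output is a to-do list, not new sentences: the tool flags that "Kubernetes" and "team leadership" aren't evidenced, and it's still your job to decide whether that's true and how to say it.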

Our position

AI should point out the gaps - not put words in your mouth.

That's why CV Screened doesn't generate rewritten bullet points or "optimised" summaries. We show you what's weak, what's missing, and what to move - then you fix it in your own voice. Because your voice is the thing that gets you hired. Not ours. Not ChatGPT's.

If the CV doesn't sound like you, the interview won't go well. And if it does sound like you - but it's aimed at the right things - you'll be surprised how far that goes.