
The Fine Line Between AI-Powered Marketing and Manipulation

By Angelika Attwood, Dje'ka Creative Director

“The consumer isn’t a moron; she is your wife.”

That’s what David Ogilvy told marketers back in the 1960s. And I want to believe he meant it. Respect her intelligence. Respect her attention.


Today, I’d add one more: respect her autonomy. But we’re losing the plot.

In this AI-obsessed era, where neuromarketing labs look more like NASA than Madison Avenue, advertising has taken a turn. Not just toward efficiency, but toward something far more dangerous—manipulation at scale.



When Subliminal Went Too Far

We’ve danced this line before. Back in the 1950s, James Vicary claimed to boost popcorn sales by flashing “Eat Popcorn” subliminally during films. The result? Outrage. Regulations. And a reminder: consumers don’t like being tricked, even when they don’t consciously know they’re being tricked.


Subliminal Advertising Scandal 1957

Today’s tactics are subtler but far more sophisticated. AI now reads your micro-expressions, maps your eye movements, and tailors content to tickle your amygdala into compliance. It’s not subliminal anymore; it’s subconscious warfare. Good advertising persuades. Great advertising respects. But manipulative advertising deceives, and that’s where AI threatens to tip the scales.

This is a professional red flag. When machines are taught to mimic empathy and simulate trust, we start selling to consumers' subconscious without their consent.

By 2022, over 58% of marketers were using AI to tailor messaging based on behavioural predictions (Salesforce, State of Marketing Report). But most consumers were unaware of just how much of their data, and autonomy, was fueling that personalisation.

Sound unsettling? It should.


From Art to Algorithm: When Advertising Forgot Its Soul


Advertising, at its best, is an act of seduction. At its worst, it’s an act of control. Back in the 1960s, we wrote long copy that made people feel smart. In the '80s, we made them laugh. In the '90s, we made them feel cool. And in every decade, we tried, at least in theory, to respect the intelligence of our audience.



But today, we’re not just writing copy. We’re injecting it with neuroscience. We’re running EEGs on focus groups. We’re using AI to optimise every beat of a TikTok to match human brain rhythms. And we’re dressing it all up as “data-driven creativity.”

Let’s call it what it is: manipulation in a lab coat. 

“If it doesn’t sell, it isn’t creative.” —Ogilvy 

Advertising used to be about creativity, cultural relevance, and emotional resonance. It was the paintbrush of capitalism. Today, it’s becoming the scalpel of persuasion.

Marketing teams now use AI not just to write, but to simulate emotion, map biometric data, and auto-generate 1,000 versions of a headline based on your past click patterns.

This isn’t marketing. It’s manipulation disguised as optimisation. And it raises uncomfortable questions. 


Is It Still Creative if a Bot Does It?

Imagine standing in front of a perfect 1:1 replica of the Mona Lisa: same brush strokes, same cracks in the canvas. But instead of da Vinci, it was painted by a robotic arm trained on 10 million paintings.

Would you still pay $800 million for it?

Of course not. Because we don’t just want beauty, we want authorship. We want the risk and the messiness that comes with real human hands. We want to know someone felt something while making it.


And that’s true for ads, too.


That’s why people still remember Nike’s “If you let me play” or Dove’s “Real Beauty”. They were crafted. 

They weren’t built from A/B tests. They spoke to our souls, not just our synapses.


Nike's "If you let me play" & Dove's "Real Beauty"

AI can write quite well, actually. But only humans can create art.

And that’s what marketing should still aim to be. A form of public art that earns your attention, not hijacks it.


We’ve Been Here Before: A Brief History of Advertising to the Subconscious

This isn't the first time marketers have flirted with the subconscious. Let’s rewind:


  • 1957: The Popcorn Scandal 


James Vicary claimed he could increase sales by flashing “Drink Coca-Cola” and “Eat Popcorn” for 1/3000th of a second during films. The public backlash was swift, and even though he later admitted the experiment was fake, the fear of subliminal advertising stuck. It was creepy. It felt like mind control. And it was eventually regulated.

“Eat Popcorn” / “Drink Coca-Cola” subliminal advertising

  • 1984: Apple’s “1984” Ad 


While not manipulative in the nefarious sense, this ad did tap deep psychological fears: Big Brother, authoritarianism, conformity. It made you feel, not just think. But crucially, it didn't exploit subconscious science. It used symbolism and emotion, not brain scans.

Apple Macintosh personal computer commercial directed by Ridley Scott.

  • 2000s: Neuromarketing Enters the Chat 


By the mid-2000s, fMRI machines were showing marketers which parts of the brain lit up when consumers saw certain images. “Aha!” said the strategists. “If the nucleus accumbens lights up, they’re probably gonna buy it!” 

That was the start of brain-based persuasion—and the end of some very good instincts.


The Ethical Crisis of Influence at Scale


AI enables manipulation at industrial scale.

Let’s talk about deepfake endorsements. AI now creates videos of photorealistic personas, complete with “emotion”, promoting products. The average consumer, especially a child or adolescent, can’t tell the difference. It’s not influence anymore. It’s impersonation.

A 2023 study from the University of Amsterdam found that over 70% of teens exposed to influencer-style content couldn't discern AI-generated videos from real ones, but their purchase intent was equally high (Kruikemeier et al., 2023).


This is where things get dangerous. 

Children: The New Frontier in Manipulated Markets

Children are now the primary targets of AI-optimised content. With video loops designed to hijack their attention spans (think TikTok’s 3-second cuts), and music frequencies tuned to stir unease, the line between engagement and exploitation is vanishing.


The American Psychological Association warns that “AI-driven content personalisation can create compulsive behaviour patterns in children” (APA, 2023).


Meanwhile, the UK’s Advertising Standards Authority (ASA) is calling for urgent regulation of influencer marketing aimed at minors, especially when avatars or AI-generated personalities are involved.


And while traditional ads for children have long been regulated, the rise of “kidfluencers” and synthetic endorsements has created an ethical vacuum.


We are using AI to match human brain rhythms

The Threat to Marketers: Lawsuits and Distrust

For brands, the legal risks are no less real. As AI-generated ads get more sophisticated, the line between influence and manipulation blurs. Misleading product claims once buried in small print are now encoded in tone, emotion, and subconscious suggestion.


Expect lawsuits. Expect regulations.


Expect brands to be dragged for using AI avatars that “pretend” to be real, or videos that use biometric data without consent. In short, expect a reckoning. 


There’s a reason the FTC in the U.S. and the European Commission are both drafting new AI advertising guidelines. Brands using AI-generated actors or voice clones without clear disclosure may face:


  • Lawsuits for false endorsement

  • Class actions for manipulative practices

  • Fines for violating biometric data regulations


In the short term, AI can supercharge conversions. In the long term, it can destroy trust.


The Consumer Doesn’t Know What Hit Them

Imagine you’re watching a “spontaneous” product review from someone who looks real, sounds real, and even emotes perfectly, but they’re 100% synthetic. The ad isn’t disclosed. The brand? Hidden behind an affiliate link.


Where does consent live in that transaction?


Neuromarketing experiments from Stanford (2022) show that “emotionally persuasive content” generated by AI increased purchase intent by 31%, even when participants said they didn’t consciously recall being influenced.


So, we’re not just hacking attention. We’re hacking intuition. That’s not just unethical, it’s unwise.


Gen Z Craves Real, and They Can Smell Fake a Mile Away

Let’s not forget that this is a generation that grew up with face filters and fake news. They don’t want perfection. They want authenticity. A 2023 Edelman Trust Barometer special report revealed that 63% of Gen Z trust creators who “show vulnerability” over those with polished perfection.


In contrast, over 70% said they distrusted AI-generated personalities, regardless of the content’s accuracy or polish.


They’re tired of deepfake influencers with flawless skin and no soul. They want the outtakes, the behind-the-scenes, the vulnerability.


If your ad looks too polished, too robotic, too perfect? They scroll. 


The Most Powerful Ads Still Make You Think

Remember Apple’s “Think Different”? Volkswagen’s “Lemon”? The Economist’s dry, British wit in print?

Apple’s “Think Different” Campaign

These ads didn’t need eye-tracking tech to hold attention. They were bold, deeply human, playful, and risky.

Today’s hyper-optimised, brain-scan-informed, AI-predictive content may win short-term attention. But it loses cultural relevance, because it never takes a risk. It never speaks truth; it only optimises for safety.

But safety doesn’t go viral.


So, Where Do We Go from Here?

Marketing was never meant to be just a sales-boosting tool. It was meant to be a stage and a story. We can’t, and shouldn’t, go backwards. AI is here, and it can help.

But we also need to:


  • Disclose when AI is behind the copy

  • Respect the psychological boundaries of consumers

  • Train marketers in ethics, not just tools

  • Bring humans back into the creative process

  • Create space for imperfection, risk, and artistry


Consumers are not algorithms to hack. They’re people. With stories. With scars. With intelligence.

So let’s stop selling to their brains. Let’s start speaking to their hearts. Because manipulation is scalable, but trust is not. And the only ads worth remembering are the ones that respected us enough to tell the truth.


I grew up believing that marketing was a way to connect, not control. That great copy could uplift as much as it could sell. That respect for your audience wasn’t optional—it was the job.

New marketers learn A/B testing before they learn consent. They’re handed dashboards before they’re handed duty.


Worse still, influencers, those with the largest reach and youngest audiences, are paid to promote products without any understanding of advertising laws, transparency guidelines, or their responsibility to vulnerable viewers.


This is not just a problem of virality. It’s a problem of integrity. And it strikes at the very heart of our profession.


Because if marketing is to remain a respected craft, rather than a manipulative science experiment, we must return to teaching the very basics!

Angelika Attwood, Marketing Ethicist. Creative Realist. And lover of great ads with even greater intent.

References:

Salesforce (2022). State of Marketing Report (8th Edition). Retrieved from: https://www.salesforce.com/resources

Kruikemeier, S., de Vreese, C., & van der Linden, A. (2023). Teens and Trust: The Persuasive Power of AI-Generated Influencers. University of Amsterdam, Department of Communication Science.

American Psychological Association (APA) (2023). Psychological Risks of AI-Personalized Advertising on Children. https://www.apa.org/news/press/releases/2023/child-behavior-patterns

Edelman (2023). Trust Barometer: Special Report on Gen Z and the Future of Influence. https://www.edelman.com/trust

Stanford University (2022). The Subconscious Effect of Emotionally Adaptive Advertising Algorithms. Center for Advanced Behavioral Research in Marketing.

Neuromarketing Science & Business Association (NMSBA). Code of Ethics for Neuromarketing Research. https://www.nmsba.com/ethics

UK Advertising Standards Authority (ASA) (2023). Children and Advertising Online: New Frontiers and Concerns. https://www.asa.org.uk

Federal Trade Commission (FTC) (2023). AI Disclosures and Advertising Law: New Guidelines for Synthetic Media. https://www.ftc.gov/news-events

European Commission (2024). AI Act: Proposal for Regulation on Artificial Intelligence and Ethical Use in Media & Advertising. https://digital-strategy.ec.europa.eu

Ofcom (2023). Children and Parents: Media Use and Attitudes Report. https://www.ofcom.org.uk

Common Sense Media (2023). Teens and Social Media: Impact of AI Algorithms on Buying Behavior and Mental Health. https://www.commonsensemedia.org

