Charles Dangibeaud

Is generative AI eroding media's ethical core?

SearchGPT has arrived, and the media industry's ethical reckoning is upon us. Initiative's Charles Dangibeaud demands immediate action: a robust ethical framework for AI-powered media buying is no longer a luxury; it's a necessity.


The GenAI revolution is truly here, and there is no doubt it will change our working and personal lives.

But the transformation has not all been positive, and a new GenAI-powered world brings serious ethical considerations with it. At a basic level, simply powering this world may have dire consequences for our environment, with AI estimated to need more than 0.5% of the world’s electricity by 2027. For context, that is roughly equivalent to Argentina’s annual power consumption. But there are also less structural, though no less significant, ethical dilemmas.

It’s imperative that the media industry reflects now on how it will maintain ethical industry standards amid the rise of GenAI. Because quite frankly, we’re already knee-deep in it. According to IPG’s Magna, a staggering 85% of digital spend in Australia is now traded programmatically, up from a mere 17% in 2015. That’s a lot of ads being bought and sold on the basis of algorithms.

But don’t let the shininess of GenAI fool you: there are traps in this game. Let’s go over them.

1. GenAI could wreck our human intuitions

The media industry has come a long way in recent decades in recognising its all-too-human biases and how algorithms can disadvantage groups and individuals. But with the onset of GenAI, if we are not careful, this good work is in danger of being overrun, because GenAI tools, while incredibly powerful and efficient, are still quite blunt. For example, if an AI is fed large sets of data showing that white men are more likely to hold positions of power, the tool will, without question, equate senior leaders with white men, and may consequently skew content and job postings towards white men over any other demographic, hindering equity and diversity in the workforce.

In the context of media buying, GenAI's inability to reason like humans may limit ad effectiveness. It might, for example, over-target certain demographics based on past engagement, creating a self-reinforcing cycle. In contrast, human media buyers can draw on nuanced thinking and evidence-based marketing principles to identify the optimal audiences for brand growth, rather than relying on historical data alone. Bringing a human into the loop and combining people’s real-world knowledge and insights with GenAI's capabilities enables more effective media strategies that reach high-value audiences.
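To make that self-reinforcing cycle concrete, here is a minimal, hypothetical sketch: two audience segments respond to ads at exactly the same true rate, but an optimiser that always backs whichever segment currently looks best on observed engagement keeps feeding it impressions, so early luck hardens into a lasting skew. The segment names, rates and the greedy ‘back the current winner’ rule are my own illustrative assumptions, not any particular platform’s logic.

```python
# Minimal, hypothetical sketch of an engagement-only feedback loop in targeting.
# Both segments genuinely engage at the same 5% rate; any skew comes from the loop.
import random

random.seed(7)

TRUE_RATE = 0.05
impressions = {"segment_a": 0, "segment_b": 0}
clicks = {"segment_a": 0, "segment_b": 0}

def observed_ctr(segment: str) -> float:
    return clicks[segment] / impressions[segment] if impressions[segment] else 0.0

def serve(segment: str) -> None:
    impressions[segment] += 1
    clicks[segment] += random.random() < TRUE_RATE  # bool adds as 0 or 1

# A small initial test on each segment, then 10,000 purely engagement-driven decisions.
for segment in impressions:
    for _ in range(20):
        serve(segment)

for _ in range(10_000):
    serve(max(impressions, key=observed_ctr))

# The final split is often heavily skewed toward one segment, even though the
# segments are identical: the skew reflects the loop, not the audience.
print(impressions)
```

A human in the loop would notice that both segments are equally valuable and force some exploration; the loop on its own has no reason to.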

2. Personalised or creepy? Keeping GenAI privacy in check

AI is hungry, and in its quest to be fed it is constantly consuming new data. Currently there is no generally applicable law in Australia regulating the use of AI, although a voluntary set of AI Ethics Principles has been in place since 2019. AI will likely become an added consideration in the Australian Privacy Act, but it’s not there yet. In the meantime, AI is up close and personal with all our details, from our shopping and our searches to our real-time locations. To the industry outsider, it’s only natural to ask, “is your phone listening to you?” Well, you can’t be sure, but try saying ‘Oreos’ a few times into a phone and see if the all-American favourite cookie comes up in the next round of ads…

For media buying, this means ensuring that GenAI supports personalisation without getting downright creepy for consumers. This is a fast-moving game, and it’s only through working with consumers and users that we can move at the right pace. That means getting closer to the people we advertise to through focus groups and co-design with people from diverse walks of life, and listening, so that as marketers we strike the delicate balance between a free internet and an advertising experience that feels genuine and reasonable. After all, the goal is to be precise without making people feel watched: no one wants to feel like a stalker has gone through their technological drawers.

3. Honesty is the best policy: Towards greater GenAI transparency

You would be surprised at how often GenAI is referred to as a “black box” by the very people who are building and researching these tools. As ethical and smart media professionals, we need to be able to explain to customers and clients why an ad appeared in a particular place to a particular audience. And if it can’t be explained, honesty needs to prevail.

Transparency is in fact important in all uses of AI. Media professionals can – and likely should – be using AI to make their lives easier, but they need to be honest about using its ‘powers’. And yes, in case you are wondering, I did use AI to help me research and ideate this article. Does that make this content less valuable, less authentic?

As marketers, we need to demand more transparency from our AI tools. We need to understand how they work and be able to explain it to all our stakeholders, both internally and externally. Imagine trying to explain to a client why their ad was served to a particular audience, or why a certain creative performed better than another. If the answer is simply "the AI did it", that's not good enough. We need to be able to peek under the hood, to understand and explain the logic behind the algorithms.

So, where to from here? I believe we need a clear ethical framework for AI in marketing, built on principles of fairness, transparency, privacy, accountability, and human oversight. We need to regularly audit our AI systems for bias, champion consumer privacy, and always maintain human control.

Some tangible steps the industry can take include:

  • Explainable AI: Strive to use AI systems that can clearly articulate their decision-making processes and avoid 'black box' solutions.
  • Regular audits: Proactively and regularly assess AI tools for bias, fairness, and unintended consequences, and take swift corrective action when issues are identified (a simple illustrative check is sketched after this list).
  • Privacy by design: Privacy considerations should be baked into AI strategies from the start, not tacked on as an afterthought.
  • Human-in-the-loop: AI should augment and assist human decision-making, not replace it entirely. Human oversight and accountability are crucial.
  • Transparency & disclosure: If AI is used in marketing efforts, be upfront about it. Transparency breeds trust.
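To make the ‘regular audits’ point concrete, here is a minimal, hypothetical sketch of one such check: comparing each demographic group’s share of delivered impressions against its share of the addressable audience. The column names, groups, data and the 20% disparity threshold are all illustrative assumptions, and a real audit programme would go well beyond a single parity check.

```python
# Hypothetical sketch of a simple fairness check over an ad-delivery log.
# Column names, groups and the 20% disparity threshold are illustrative assumptions.
import pandas as pd

def audit_delivery(log: pd.DataFrame, group_col: str = "demographic",
                   max_disparity: float = 0.20) -> pd.DataFrame:
    """Flag groups whose share of impressions drifts far from their share of the audience."""
    summary = log.groupby(group_col).agg(
        audience=("user_id", "nunique"),
        impressions=("impression_id", "count"),
        clicks=("clicked", "sum"),
    )
    summary["audience_share"] = summary["audience"] / summary["audience"].sum()
    summary["impression_share"] = summary["impressions"] / summary["impressions"].sum()
    summary["ctr"] = summary["clicks"] / summary["impressions"]
    summary["flagged"] = (
        (summary["impression_share"] - summary["audience_share"]).abs()
        > max_disparity * summary["audience_share"]
    )
    return summary

# Made-up example: delivery is skewed toward group A relative to audience share,
# so both rows are flagged (A over-served, B under-served).
log = pd.DataFrame({
    "user_id":       [1, 1, 2, 3, 3, 3, 4, 5],
    "impression_id": range(8),
    "demographic":   ["A", "A", "A", "A", "A", "A", "B", "B"],
    "clicked":       [1, 0, 0, 1, 0, 0, 1, 0],
})
print(audit_delivery(log))
```

Run regularly and tracked over time, even a simple check like this gives teams something concrete to act on and to show stakeholders.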

The GenAI game isn’t a sprint, it’s a marathon, and if marketers are going to navigate their way through it, it’s best to start avoiding the pitfalls now. But hey, the good news? We can ask ChatGPT to help us get started.


Charles Dangibeaud is the head of strategy & WA Govt at Initiative.
 

Source:
Campaign Asia
