Regulation, a future tax on past bad behaviour


Something to think about | Art Midjourney | See footnotes for prompt

The pretzel-twisting to comply with GDPR; the moveable feast of the forthcoming Australian legislation on privacy and data; the scramble to compensate for third-party cookie deprecation: all this is in direct response to shabby dealings on the part of our industry over the past 20-odd years.


The fact that consumer trust in adland hovers consistently below that of used car salesmen… coincidence? And as for our obdurate refusal to learn from history overall, well, that’s not within the scope of this column.


However, when we’re considering the implications of AI for marketing and media, it would be remiss not to connect the dots and anticipate possible futures.


I am no AI expert, yet having spent the last few months immersed in this world, I can see some huge considerations for marketers. Forging ahead without reflecting on how to approach these issues could hamper the benefits and innovation available to us.


We are already behind the 8-ball in thinking about how to approach the inevitability of the AI transformation in terms of data and privacy risks, and the overall impact on the customer. There are far more issues at stake in terms of media ethics, accountability, copyright and ownership, but for the sake of constraining this piece, I want to focus on the legislative implications.


GDPR - Europe’s General Data Protection Regulation - didn’t descend out of a clear blue sky: this top-down imposition of regulatory oversight was a clapback at a consistent lack of respect in the way consumer data, and arguably consumers themselves, were treated by advertisers in the digital environment over a couple of decades.


The EU bloc has now proposed a raft of regulations seeking to govern the way that AI is used, particularly in a commercial context, drawing on GDPR as a model.


The CEO of OpenAI, Sam Altman, appeared before a US Senate inquiry into AI yesterday, recommending government oversight and the use of “nutrition labels” for AI. Google, Microsoft, Uncle Tom Cobley and all have announced their renewed focus on responsible AI. Where the EU and US go, Australia often follows.


There is an acknowledgement of the need to constrain and safeguard the way that AI is let loose upon a largely unsuspecting global population.


We find ourselves in a moment which offers us, as media and marketing professionals, a choice. We can plunge headlong into adopting AI in ways that are under-considered and rapacious, grabbing the short-term benefits while they exist - or we can seek to forge a path that is a little more winding and requires some collective thinking and reflection, but might prove far more beneficial and sustainable.


If AIs are better and faster at all manner of tasks, might this not be the moment to ramp up the powerfully human skills of empathy and ethical inquiry?


What if we came together as an industry to create a code of conduct, albeit as a work in progress, in anticipation of the emerging legislative oversight?


Some key areas to consider:


Personalisation:

Analysing consumer behaviour and generating personalised messaging, campaign assets and so on could offer incredible gains in campaign effectiveness. However, under the terms of GDPR, or the proposed AI legislation, using artificial intelligence to generate personalised communications would require explicit consent from customers.


Interpretability:

The GDPR legislation includes a "right to explanation", under which individuals can ask for an explanation of decisions made by algorithms. Currently, the interpretability of generative AI (the ability to answer “how did you get to that answer?”) is very low, if not non-existent. This could affect how companies use predictive analytics, or large language models like ChatGPT.


Profiling:

The way that we profile people is already verging on dodgy. Chatbots and virtual assistants are going to become increasingly common as consumers see benefits in using AI to do anything from shopping to therapy.


However, they collect and process a vast amount of personal data. Under GDPR, companies need to be transparent about how they use this data and must provide clear mechanisms for individuals to withdraw their consent. Bias and discrimination are among the major flaws of most LLMs trained on data from the internet at large - bias in, bias out.


It is possible to play, to experiment and adopt artificial intelligence with some guardrails around being transparent, responsible, and ethical.


GDPR shows us what lies ahead if we don’t self-regulate. Adland tends to shy away from the idea it has any responsibility to the end user (who - let’s not forget - ultimately pays all of our salaries), or stepping into any role in educating the wider community, but as communications professionals, if not us, then who?


And if not now, then when?


humAIn | human creativity x AI

12 July 2023

NSW Teachers Federation Auditorium, Surry Hills
