Whatever your view, and however you may be using AI platforms such as ChatGPT, it is worth considering the legal framework we typically operate within.
Like every agency, we have an enormous file of NDAs designed to protect the confidential information our clients supply to us. Those NDAs place an obligation on us not to disclose confidential information to third parties without prior written approval. They typically also detail the legal remedies available to the client, and the financial penalties that apply if confidential information is disclosed.
So what has that got to do with AI platforms? Well, when your account manager merrily types the details of your client’s latest announcement into ChatGPT and asks it to produce a draft press release, it’s very likely they have just committed a direct breach of an NDA. OpenAI, the developer of ChatGPT, is a third party, after all, and even though it is possible to opt out of allowing ChatGPT to use your content to ‘…help develop and improve our Services’, it’s still a breach.
I’m not an AI sceptic; I believe it can be an incredibly useful tool. But we should all be aware of the legal implications and have policies in place that define when and how it can be deployed for client work.
The way I see it, you have three options:
- Impose a blanket ban on any use of the platform that would involve a client’s confidential information.
- Seek specific approval from clients that would allow its use for some or all content development.
- Ignore the legal implications and hope you never get caught out.
I’ve seen very little evidence of options one or two being implemented, but quite a lot of option three. You have been warned.
Ian Hood is chief executive and co-founder of Babel PR