
Leveraging its strengths whilst avoiding its pitfalls

 

The need to go beyond general

When you think about human/computer collaborations, what springs to mind? A positive – if cheesy – Michael and K.I.T.T. in Knight Rider? A more ominous Bowman and HAL from 2001: A Space Odyssey? Or a full-on cyborg integration à la The Terminator, Blade Runner or the Borg, to name but a few?

The truth is, in general we feel pretty nervous about the idea of truly collaborating with our computers, which is why most science fiction tends to focus on the more chilling aspects of the idea. But with the rise of Generative AI – and specifically, ChatGPT – it can feel increasingly hard for us to see computers as mere ‘tools’ that we use and control, when they seem to be pulling so much of the weight, and making so many of the decisions.

In this month’s blog, we want to explore why a collaborative mindset will be key in leveraging the best of this new technology within tech-based markets. Do engineers, programmers and communicators really have anything to fear?

 

The limits of language in the hands of ChatGPT

Regardless of the sensationalist reporting in the mainstream media, Generative AI remains just a tool – even if a remarkably sophisticated one that has the potential to fundamentally reshape our existing economy. But there are significant limits in terms of what it can achieve. One only needs to chase down the stories of ChatGPT’s spectacular screw-ups to know that the intelligence it demonstrates might be clever, but it isn’t that smart. It has a tendency to churn out convincing-sounding rhetoric that is actually rubbish. ChatGPT has a habit of making up facts, and is notoriously bad at maths – even simple counting.

 

Caption: This seemingly innocent disclaimer on ChatGPT’s front page hides a much deeper, darker problem

 

 

The truth is that to get ChatGPT to work well, you already need to have quite a good idea of what you’re doing. Writing on Medium, Agata Cupriak observed that this is particularly pertinent in the field of technology:

For complex tasks, it turns out soon that you have to teach it a lot before you get something valuable. Most answers are very simple. Too simple. Worse yet, sometimes they are false. I started typing extensive instructions (so-called prompts), giving outlines, main points, translating, pasting model texts, and even ending commands with “please”… and it turned out that I need to know exactly what I want, and first feed the algorithm with the right input (and it is voracious) to get a satisfactory output. When writing expert texts on tech, I honestly couldn’t go beyond general, generic answers… I really tried everything and eventually went to Google.

 

Collaborative coding

In the field of design and programming, jokes abound on the internet about the fact that the client never knows what they want – or at least, what they communicate bears no relation to their actual ideas and expectations. If a company took their client’s expressed desires and fed them verbatim into an AI-driven design system, you can guarantee that nine times out of ten, there would be a very angry client at the end of it.

‘Interpreting’ client needs remains a very human undertaking – based on experience, empathy, context and the ability to read between the lines. And it’s not just a case of ‘translating’ what a client says into what they mean. There’s then the additional stage of translating it into a task that the AI can understand, with key points and defined parameters. As a result, prompt engineering has become big business – and ironically, at its core it’s not so different from coding itself: feeding very specific language into a machine in order to get a desired output.
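The analogy with coding can be made concrete. The sketch below is purely illustrative – the field names and template are our own invention, not any real prompt-engineering API – but it shows the underlying idea: a client’s vague brief has to be translated into specific, structured parameters before the machine ever sees it.

```python
# Hypothetical sketch: prompt engineering treated like coding --
# a structured 'program' of constraints assembled into model input.

def build_prompt(role, task, audience, constraints, examples):
    """Assemble a precise, parameterised prompt. The loose client brief
    must already have been 'translated' into these specific fields."""
    parts = [
        f"You are {role}.",
        f"Task: {task}",
        f"Audience: {audience}",
        "Constraints:",
        *[f"- {c}" for c in constraints],
        "Follow the style of these examples:",
        *examples,
    ]
    return "\n".join(parts)

prompt = build_prompt(
    role="a senior technical copywriter",
    task="Write a 120-word product announcement for an industrial 5G router.",
    audience="network engineers at broadcast companies",
    constraints=["no superlatives", "lead with the latency figures"],
    examples=["Example: 'The Acme R200 brings deterministic latency to...'"],
)
print(prompt)
```

Just as with code, a small change to any of these ‘parameters’ produces a very different output – which is exactly why the human translation step matters.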

It therefore seems pretty clear that in fields such as software engineering, programming and indeed design in general, there will remain an ongoing need for human ‘mediation’ – translating the often broad, nebulous and vague ideas of clients into the very specific, structured inputs needed by AIs.

 

Writers at risk

Of course, the other field that receives attention is that of writing. And no wonder: ChatGPT is fundamentally a language processing tool that ‘simply’ (though very cleverly) predicts the next word in a sentence based on statistical probability. That prediction is so accurate that what it produces often seems to pass for actual thought; an embodiment of intention, understanding and purpose. But it isn’t.
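The ‘next word by probability’ mechanic described above can be sketched with a toy model. This deliberately minimal bigram predictor is nothing like ChatGPT’s actual architecture, but it illustrates the same principle: the output is driven purely by statistical frequency, with no understanding involved.

```python
from collections import Counter, defaultdict

# Toy corpus -- the 'training data' for our miniature language model.
corpus = "the cat sat on the mat the cat ate the fish".split()

# Count which word follows which.
following = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    following[current][nxt] += 1

def predict_next(word):
    """Return the statistically most likely next word -- pure frequency,
    no notion of truth, intent or meaning."""
    return following[word].most_common(1)[0][0]

print(predict_next("the"))  # "cat" -- the most frequent follower of "the"
```

Scale the corpus up to a sizeable chunk of the internet and the predictions become eerily fluent – but the mechanism never stops being ‘most probable next word’.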

As a result, ChatGPT tells us some interesting things about language: its function, and the way it interacts with more fundamental concepts of self and consciousness. It suggests that, for all our supposed uniqueness as individuals, language patterns our thoughts and behaviours so much that it can be used to produce a compelling and convincing facsimile of humanity.

But it really is a facsimile. Because even though there are vast patterned areas of commonality between people, what marks us out as truly human is that tiny 1% of unpredictability, creativity and serendipity that allows us to innovate, create and produce things that are truly novel.

But this concept of uniqueness brings about a problem for AI: how can a probability model based on huge amounts of data write about something that’s never been written about?

On a practical level, if AI works on prediction, then it needs data from which to extrapolate its predictions. So if you’ve just created a brand new product, entirely unseen on the market, with all data under embargo: what is going to feed ChatGPT? There is nothing for it to get its teeth into. Unfortunately though, precisely because it doesn’t understand what’s right and wrong, true or false, real or fictitious – merely what is statistically probable – it will plough on with a word salad comprised of what seems ‘most likely’, even when that’s wildly inaccurate.

Best case, it’s very obviously wrong and can be discarded. But in the worst case, these outputs might be accepted as valid because, to the casual eye, they seem like they might be. For businesses, that’s a huge risk, because it speaks to your fundamental credibility. Delivering false information breaks trust, loses the respect of your audience, and communicates that you can’t be bothered to invest in doing things right.

 

The unforgivable sin: mediocrity

Perhaps an even worse sin than being wrong, though, is being bland. Imagine you’ve spent five years developing a truly remarkable, unique product that will genuinely revolutionise the market: Product X. Here’s what ChatGPT delivers when you ask it to write a PR for Product X.


The wildly generic template it spits out is followed by the disclaimer ‘Note: This is a fictional PR created based on the information provided. Please make sure to customize it to match your specific product, company, and industry details’. So there’s no getting around the fact that you’ll need human input at some point. Worse, because it simply replicates the most common product points and expresses them through a string of bland clichés, you’re still paying a human to do the actual hard work whilst eliminating the true value they can bring to it. Instead, you’ve ended up defaulting to an output that’s so unremarkable, it’s actually counterproductive.

 

Because as we said above, even though much of what we do as humans is so similar and repetitive that a machine can anticipate it most of the time, it’s that 1% of unpredictability, creativity and serendipity that we look for in each other. It’s that 1% that creates connection, trust and emotion. It’s that 1% that marks our humanity. And the business that underestimates the value of that human connection – that 1% – or worse, thinks it can be ‘outsourced’ to AI, stands to learn some very expensive lessons.

 

The conclusion: collaboration

None of this should be taken as us saying that Generative AI has no value. Anybody holding that position is desperate, delusional, or deliberately naïve. Instead, what we want to stress is that, as usual, rhetoric around the issue has been incredibly binary (“it’s us or the machines!”), when in reality it’s far more nuanced.

At Xpresso Communications, we have always prided ourselves on holding a collaborative mindset: not only with our clients, but with our competitors, our contributors, and the industry as a whole. And a growth mindset dictates that it’s time to extend that collaboration to computers too. Which is not to say that you’ll ever see us using ChatGPT to inspire our content. Instead, we recognise that it’s a tool our clients may want to embrace, and we’ll do our best to guide them in its implementation – leveraging its strengths whilst avoiding its pitfalls. And all the while promoting real and meaningful connection over everything else.