
Is ChatGPT coming for your job?

I need your clothes, your boots and your employee ID. OpenAI’s Large Language Model (LLM) chatbot exploded onto the scene last November. In less than a week ChatGPT had over 1 million users, and the Internet was awash with example prompts and output from the chatbot. Some of the responses were so good that it didn’t take long for a wave of people to start claiming ChatGPT would make programmers, writers, support staff, and a whole host of other roles obsolete. A few months in, the doomsayers are still predicting widespread replacement of humans.

So how accurate are the claims that this particular AI offering will take our jobs?

Should we all head for the Job Centre?

So, will ChatGPT replace us all? There are certainly plenty of people speculating that it will be the death knell for careers ranging from software development to copywriting; some even say it’s the beginning of the end for Google Search. Ignacio de Gregorio explored exactly that question in his Medium article, Can ChatGPT kill Google?.

François Chollet, creator of the Keras deep learning library and a researcher at Google, called out the “AI effect”: people prematurely jumping on the bandwagon of LLMs replacing us all, much as we were all supposed to be riding around in self-driving cars by now.

This certainly seems like a fair comment. We’re often quick to jump on new tech, as amazing as it may be, and assume it’s going to have an immediate, widespread impact. But new tech often takes time to reach mainstream adoption. Apple certainly didn’t invent the smartphone and weren’t first to market by a long shot, but they timed their entry well and built a buzz around the iPhone that prompted a major shift in public perception. ChatGPT is certainly causing a huge buzz, but the fact that we haven’t yet seen commercialised entries from the major players (Google, Amazon, Meta) suggests there’s plenty more to come in this space.

Language Models vs General Purpose AI

Large Language Models (LLMs) are specifically trained and optimised for language-based use cases, such as recognising and interpreting text. Because they can interpret meaning from an input (or prompt), they often produce output that, on the surface at least, appears highly plausible and accurate for the subject of the prompt. Dig a little deeper, however, and there are plenty of examples where they fail on the detail. Consider one widely shared Twitter example, in which ChatGPT is repeatedly asked to complete a task and repeatedly fails, despite “knowing” the correct answer.

General Purpose AI is what we picture when we think of a super-intelligent artificial life form in science fiction: a system able to learn, adapt and grow, using the sum of its experiences to handle any situation, much like humans do. This is a much harder problem to solve than language. In any given situation there may be millions of variables, which our brains filter through our memories, our experience, even our current emotional state; all of this is weighed in milliseconds and results in a decision. That process is almost impossible to even conceive of, never mind codify and build into an artificial brain.

The way this seems to be approached currently (to my very basic understanding) is to build specialised models for different scenarios, then build other systems to combine their outputs and filter them through rules, much as one would go about building a multi-model recommendation system.
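
To make that pattern concrete, here is a minimal sketch. Everything in it is a hypothetical stand-in: the “models” are trivial functions rather than real ML systems, and the rules are invented for illustration.

```python
# Entirely hypothetical sketch of "specialised models plus rules":
# each "model" is a trivial stand-in for a real specialised ML model.

def sentiment_model(text: str) -> dict:
    # Pretend specialised model: classifies tone.
    return {"label": "positive" if "great" in text.lower() else "neutral",
            "confidence": 0.8}

def spam_model(text: str) -> dict:
    # Pretend specialised model: detects spam.
    return {"label": "spam" if "free money" in text.lower() else "ok",
            "confidence": 0.9}

# Rules inspect the combined outputs and decide what to do with them.
RULES = [
    lambda outputs: "reject" if outputs["spam"]["label"] == "spam" else None,
    lambda outputs: "flag_for_review"
        if outputs["sentiment"]["confidence"] < 0.5 else None,
]

def combine(text: str) -> str:
    # Run every specialised model, then filter the results through the rules.
    outputs = {"sentiment": sentiment_model(text), "spam": spam_model(text)}
    for rule in RULES:
        verdict = rule(outputs)
        if verdict:
            return verdict
    return "accept"

print(combine("Great product, works a treat!"))  # accept
print(combine("Claim your FREE MONEY now"))      # reject
```

The shape is the point here: several narrow models each doing one job well, with a deliberately dumb layer of rules on top deciding how their outputs combine, rather than one model attempting general intelligence.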

This doesn’t mean ChatGPT is bad. There are plenty of cases where LLMs are hugely useful; they just aren’t anywhere near the level of general intelligence that (most) humans can offer.

Use cases for ChatGPT now

Indeed, Microsoft are betting big on the utility of AI and LLMs: they have been major investors in OpenAI for a number of years, and recently launched a GPT-3-integrated version of their own search engine, Bing.

ChatGPT works well as a tool to optimise workflows and increase productivity. Areas such as SEO, where there can be a lot of text-based grunt work optimising descriptions and meta content, can be standardised and streamlined. The human part becomes one of review and correction rather than creation.
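
As a rough sketch of what that grunt work might look like automated, here is one way to batch-draft meta descriptions using OpenAI’s Python library and their GPT-3 completion endpoint as they existed at the time of writing; the product names and prompt wording are my own invention.

```python
# Hedged sketch: batch-generate draft meta descriptions with GPT-3,
# leaving a human to review and correct. Assumes the openai package
# is installed and you have an API key.
import openai

openai.api_key = "sk-..."  # your OpenAI API key

def draft_meta_description(product: str) -> str:
    # text-davinci-003 was OpenAI's flagship GPT-3 model at the time of writing.
    response = openai.Completion.create(
        model="text-davinci-003",
        prompt=(f"Write an SEO meta description of under 155 characters "
                f"for a product page selling a {product}."),
        max_tokens=60,
        temperature=0.7,
    )
    return response.choices[0].text.strip()

# The human part: these drafts go to an editor for review,
# they are not published straight to production.
for product in ["walnut standing desk", "ergonomic mesh office chair"]:
    print(f"{product}:\n  DRAFT: {draft_meta_description(product)}\n")
```

Note where the human sits in this loop: the model produces dozens of drafts in seconds, and the person’s job shifts to reviewing and correcting them.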

We’ve already seen this in action with GitHub Copilot. Code has stricter syntax and far less variety than spoken or written language, which arguably makes it easier to codify. Copilot has been well received (despite some quality concerns), and yet there are still plenty of software engineering jobs out there. Drafting documents, creating product descriptions, and generating realistic filler text for websites are just a few simple examples of generative LLM applications that can save professionals time.

Simon Wardley, creator of Wardley Maps, has a strong track record of analysing developments in technology and their adoption.

He was adamant that cloud computing would become the norm, then serverless, and he predicts the next major leap is conversational programming: the art of speaking to machines in natural language to create software. Far from removing the human from the equation, this democratises the ability to program.

That may itself feel like a threat to some in the industry, but much as the rise of WYSIWYG editors and no-code app builders hasn’t made every web designer or software engineer redundant, conversational programming won’t either. Subject-matter expertise, experience, and knowing your business will still be hugely important, much as they are now. Knowing how to code isn’t valuable in itself; what’s valuable is knowing how to use that skill to solve the right problems, something people tend to forget when there’s something new and shiny on the scene.
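
As a purely illustrative sketch, conversational programming today might look like a simple loop: describe what you need in plain English, let the model draft it, and review the result yourself. This reuses the same hypothetical setup as the earlier example (the openai package and an API key); it is not Wardley’s definition, just one possible shape of the workflow.

```python
# Illustrative only: a human describes the code they want, the model
# drafts it, and the human reviews before anything is used.
import openai

openai.api_key = "sk-..."  # your OpenAI API key

def conversational_programming():
    while True:
        request = input("Describe the code you need (or 'quit'): ")
        if request.strip().lower() == "quit":
            break
        response = openai.Completion.create(
            model="text-davinci-003",
            prompt=f"Write a Python function that does the following:\n{request}\n",
            max_tokens=200,
            temperature=0.2,
        )
        # The model drafts; the human still reviews before anything ships.
        print("\n--- draft (review before use) ---")
        print(response.choices[0].text.strip())
        print()

if __name__ == "__main__":
    conversational_programming()
```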

The more we reduce friction from the input experience, the more people can focus on identifying and solving the important problems and create value.

The future

So when will we see a viable General Purpose AI that can challenge the human ability to adapt and apply expertise across new and different disciplines?

I think we’re decades away. Beyond the technical challenges of creating the necessary neural capabilities, there is a whole host of secondary challenges to solve before we’ll see widespread adoption. The ethical concerns around AI bias are well documented, not to mention questions around transparency, data protection, security and commercial sensitivity. While these shouldn’t hold innovation back entirely, they are real blockers to adoption in a lot of organisations. Even now, when cloud has become widely accepted, there are still organisations succeeding on what we’d call legacy tech due to compliance, commercial or infosec concerns.

It’s an exciting time, certainly. OpenAI have taken a huge step towards bringing real-world examples of AI into the mass consciousness, and the proactive and open-minded among us have been handed a super-charged tool for the utility belt. I don’t think it’s a differentiator yet, but it can certainly optimise a number of use cases today.

Are you using ChatGPT or any of OpenAI’s other models for any use cases today? If so, what? What are your impressions of it and what impact has it had?
