No.
But let’s start at the beginning: in November 2022, artificial intelligence research lab OpenAI released ChatGPT. Short for Chat Generative Pre-trained Transformer, ChatGPT is a chatbot that uses a large language model (LLM) and years of training by AI researchers to recreate human conversation. Users quickly realized it could do a lot more, and have put it through its paces writing emails, short stories, and even recipes.
In the months since its release, ChatGPT has been heralded as everything from the second coming of Google to the first step on our road to Skynet. But as the economy has soured and layoffs have spread across industries, especially in tech, concern has grown that AI may be used to dramatically cut skilled workforces.
Depending on who you ask, ChatGPT is poised to send copywriters, ad buyers, and software developers back to the breadline. Which brings us back to a question being asked of many software developers: is ChatGPT ready to take your job?
Well, here’s the long answer: I don’t think it is, and there are a few reasons why.
The confident liar
Almost as soon as ChatGPT was released to the public, coders and developers began to see how effective it could be at writing and revising code. In a matter of weeks, some were even declaring we’d reached “the end of programming.” In fact, the folks at OpenAI are so confident in ChatGPT’s coding skills that a coding question is one of the suggested conversation starters. I gave it a whirl, and I have to admit the result looks compelling:
Now I’m going to be perfectly honest here: when it comes to writing JavaScript, my knowledge falls somewhere between slim and none. I couldn’t tell you whether the response it gave me was correct, and if it wasn’t, I wouldn’t know where to begin checking for errors and correcting its mistakes. What I would need is a working knowledge of JavaScript, its functions, and how they interact with each other. In short, I’d have to be at least a semi-competent developer.
In fact, a research study conducted to see whether ChatGPT could be used by rogue actors and nation-states to create malware came to the conclusion that “in order to maximize its use, ChatGPT does require at least a basic-to-intermediate level of understanding in the fundamentals of cybersecurity and computer science. ChatGPT is not immediately usable out of ‘the box,’ without prior knowledge.”
This problem hasn’t been lost on people already trying to use ChatGPT to do their coding work. Stack Overflow became so inundated with ChatGPT-generated code needing fact-checking that the platform banned ChatGPT-generated answers. But this points to a larger problem: LLM-driven tools like ChatGPT aren’t designed to be coding copilots, and they’re not designed to fact-check; they’re designed to provide an answer. Whether that answer is correct is another matter. To quote The Verge’s James Vincent:
“But the software also fails in a manner similar to other AI chatbots, with the bot often confidently presenting false or invented information as fact. As some AI researchers explain it, this is because such chatbots are essentially “stochastic parrots” — that is, their knowledge is derived only from statistical regularities in their training data, rather than any human-like understanding of the world as a complex and abstract system.”
In essence, ChatGPT tries to give you the answer you want in a fashion that feels helpful. It’s a confident liar. I’ve had ChatGPT confidently tell me that actors appeared in movies made before they were born; others have watched it repeatedly fail at simple arithmetic while reassuring them it was correct; one person even found it incapable of writing a sentence that finished with the letter ‘s.’ As a developer, ChatGPT is little more than a collated stack of comp sci textbooks, incapable of recognizing when its interpretations of those books are wrong.
The intellectual property problem
Of course, improvement is the nature of machine learning, and ChatGPT’s ability to generate code will no doubt get better with time. Unfortunately, this only addresses part of the problem. The introduction of AI tools built on LLMs raises a whole host of legal questions. Microsoft and OpenAI are already facing legal challenges over GitHub Copilot, a tool designed to speed up coding with typing suggestions.
Like ChatGPT, Copilot derives its code from publicly available open source code. There’s only one problem: code snippets are intellectual property that can be protected by copyright or patents. Most open source software is released under a license that requires any code based on it to be properly attributed. Without that attribution, any reused snippets are, in legal terms, “stolen.”
But at the time of writing, these tools are incapable of providing attribution, as they cannot cite which parts of their training data they are drawing on to generate responses. Though Microsoft has announced a feature that will help govern what code Copilot can and cannot pull when generating suggestions, no such feature has been announced for ChatGPT. This means that using ChatGPT in its present form could open up AI-generated software to legal liabilities not unlike the copyright dispute in which Oracle sought billions of dollars from Google for its use of Java code in building Android.
This, among other reasons, is why many major software companies are warning their engineers against using ChatGPT in any fashion while developing code.
Not ready for primetime
LLM-based tools like ChatGPT still have a long way to go. While it may be one of the most impressive AI tools presently available to the public, ChatGPT is still in its infancy and is, in the words of OpenAI CEO Sam Altman, a “horrible product.” The newest version of Microsoft’s Bing search engine, the first true consumer-facing product built on ChatGPT’s architecture, has been available for only a matter of days and has already developed a reputation for being an unhinged, “emotionally manipulative liar.”
There will invariably be a day when LLM-based AI tools like ChatGPT can confidently, competently, and reliably write and fault-check code, but that day may still be years away. Even then, not unlike driver-assist technologies and spell checkers, ChatGPT will be a tool. Like any tool, it relies on a skilled user to deliver the best results. There will no doubt be budget-conscious business owners looking to cut costs by replacing developers with ChatGPT, but for the foreseeable future, ChatGPT is no more capable of replacing a software developer than Tesla’s Full Self-Driving is capable of replacing a driver.
If you're a developer who wants to add functionality to your app, click below to get started with Weavy, no credit card required: