Over the past few weeks I have been diving into ChatGPT. Released by OpenAI, it describes itself as a large language model, an artificial intelligence that is good at giving human-like responses to input. That much is true. I've used it to write a 10,000-word short story with as little creative input from me as possible.
I've also had it generate a few websites, some good and some really bad, and I've spent a not insignificant amount of time screwing around with the creative writing prompts other users have come up with. Among those: tricking the AI into being a choose-your-own-adventure book, having the AI write bits of code, having the AI review code written by a human, and having the AI format a free-form prose block into a highly structured response format. I've also picked out some queries people have written that bypass the AI's built-in guardrails by allowing it to connect to the internet and to suggest themes labeled as "unsafe".
My immediate opinion of the AI is that it's useful as a tool, and, like any tool, it is only as good as its user.
If you give it good input, and if you understand what constitutes good input, then you get good output. Take, for example, the two websites I generated and listed above. The professional-looking .NET rate limiting website lent itself well to a chatbot trained on large amounts of data: the content required nothing more than references to technical documentation, blogs maintained by tech writers, and dictionary definitions of terms. ChatGPT is very good at inflating terse information into fluffy, vapid language. By contrast, the Swimdog Co. website is a nightmare. Swimdog Co. is a pretend company based on a cartoonish 1980s profit-over-people yuppie stereotype. ChatGPT doesn't understand that, so it made what looks like a website for a pet shop. The design is not something a human UX engineer would ever consider, and for good reason. Left to its own devices, and with poor input, ChatGPT produces mere garbage.
A similar pattern shows up in the short story it wrote, "The Case of Agent Turner" (the title is mine). Every chapter it wrote tried to tie up every loose end; it had no interest in letting story arcs linger through multiple narrative shifts. The flavor and style of the story are run-of-the-mill and entirely unsurprising, laden with clichés and predictable plot devices. It also inexplicably descended into almost erotic fiction at the end.
It is beyond fair to say that the content it was producing was bland because I wasn't giving it enough to go on. And that's exactly my point. The skill of knowing what to give the AI and what to withhold is still an inalienable human trait. Left to its own devices, it produces fluff that's readable but boring, the kind of thing that feels like you've read it before. A skilled writer knows the art of storytelling, and knows what to instruct the AI to write so the result includes exciting plot twists or interesting insights into characters or society. The AI does not and cannot do any of that on its own.
So if you're a bad writer, all this AI will do is make you a bad writer with a high output of boring drivel.
And if you're a good writer, it will help you overcome writer's block when you're filling in the details necessary for smooth storytelling.
The same can be said for its coding capabilities. You can tell the AI to do simple things that professional programmers usually don't need to do anyway, because a library already handles them. You can tell the AI to review a chunk of code, but in my experience you risk getting very wrong feedback, and the bugs it does catch are ones a human would spot just as fast because they're so obvious. It's a dangerous tool for a fool to put on airs with on the strength of his perceived output. It's a good tool for a professional who wants to spend less time on rote, procedural work with low cognitive demand.
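To make that concrete, here's a toy sketch of the kind of task I mean. It's my own example, not actual ChatGPT output, written in Python: a hand-rolled helper that the standard library already covers in a single call.

```python
# The sort of function ChatGPT will happily generate on request...
def left_pad(s: str, width: int, fill: str = " ") -> str:
    """Pad a string on the left until it reaches the given width."""
    while len(s) < width:
        s = fill + s
    return s

# ...even though the standard library already handles it:
assert left_pad("42", 5, "0") == "42".rjust(5, "0")  # both give "00042"
```

Asking an AI for this kind of thing isn't wrong, exactly. It's just solving a problem that experienced programmers stopped having a long time ago.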
The primary opinion everyone should have of ChatGPT is that it's a tool to make ourselves more human, giving us more time to focus on our strengths as dynamic problem solvers. It isn't ushering in the apocalypse and it isn't going to take your job. If you're good at your job, it will make you better. If you're bad at your job, it won't help you.
The principal problem I see with AI is its capacity to act as a conduit for cheating, and for generating personalized, antagonistic, shitty messages for trolls to post on Twitter to piss people off. But maybe, if you have enough faith in humanity, you can be convinced that knowing trolling is a low-effort, AI-generated process would make the whole thing feel impersonal and irrelevant. Automating away trolls would actually be a pretty good thing!