I occasionally use ChatGPT, instead of Google, to get answers to technical questions. While there is definite promise in the technology, the trouble is that it currently uses authoritative sources with the same eagerness as blog posts written by newly-trained wannabe database devs who seek to get their names known, and I have to keep pointing out errors: ‘No, you can’t run JDBC from PowerShell that way’, ‘GitHub doesn’t support that way of accessing its REST API’ and so on. When corrected, it does express remorse and is fulsome in its apologies. However, I regard ChatGPT as an indiscriminate conveyor of existing information, not artificial intelligence.
To work quickly, I need information that is correct.
I am intrigued, however. One of my hobbies has been using a computer to generate what we used to call Gobbledegook: wordy and generally unintelligible jargon based on someone else’s work. Now Computer Science does this for you. Unless you are wary, you can mistake this for intelligence: it isn’t. It is, rather, a mashup of other sources of knowledge and insight, providing plagiarism on an industrial scale. To rephrase someone else’s intellectual property uncredited, as if it were your own, isn’t intelligence but merely theft. To make it worse, ChatGPT cannot, in its present state, distinguish bad data from good.
Psychologists who are assessing neurological problems have a colloquial term for this, ‘Cocktail-party Syndrome’, where a person whose mental faculties aren’t up to demand will produce very plausible sentences that seem to reflect an intelligent and ordered mind. The brain can, almost effortlessly, pull in trains of thought and reconfigure them into a plausible ‘Word Salad’. It is a good technique, much used by ‘content-providers’, conspiracy theorists, and managers. Never mind the quality, feel the width. It is easy to bluff the listener into believing that there is deep thought behind the blather. No: it is an artificial language model against which we have few natural defences. Even I found myself apologising when I pointed out ChatGPT’s errors.
ChatGPT is currently like the keen schoolchild at the front of the class who is eager to answer every question with uncrushable confidence but rather shaky knowledge. One of the proudest and most valuable signs of real human intelligence is to realise and admit when you have no knowledge or expertise in a topic, because there is nothing more damaging than transmitting incorrect information.
There was once a culture where what appeared in print was edited and checked, and was published by an organisation that was obliged to stand by what it published. Now we have insufficient attribution to give us confidence that information comes from a reliable source, has been tested against reality, and has been peer-reviewed. At this point, any database developer will get a nagging doubt: is this a possible vector for bad data, a strain of the worst ailment that can ever overtake a data system?
Is it a generator of bad content? Not necessarily. But it’s best to treat it as a beginning, not a finished product, because we’re all ill-equipped to judge information that is presented slickly and with apparent confidence.
This article is editorial content and may not represent the opinions of Redgate. Please use the comment section below to discuss the topic if you disagree (or agree).