Artificial intelligence as a term conjures up all kinds of science fiction images for most people. Colossus, Terminator, HAL 9000, the Matrix: most of the stories are tales of man’s hubris as a ‘creator’ turning against him.
These are all interesting cautionary tales. And advances in the field of artificial intelligence over the last few years have been amazing. But are we really on the verge of ‘the singularity’? Will our creations become smarter than us and then turn against us? Are these the end times, just before our robot overlords arise?
No, I honestly don’t think so. Not today, probably not for a long time, and quite possibly never. The biggest risks in my opinion come from people in authority misunderstanding the current limits of AI and applying it in dangerous ways.
This post is my attempt to summarize what I know and my own beliefs about artificial intelligence based on the current ‘state of the art’ in 2024.
Who am I to judge?
Everyone has an opinion, so why should mine carry any weight? I’m a lifetime technologist who has spent forty years working in the computer industry. Thirty of those years were with IBM, where I developed and designed application code in a number of different languages.
I’ve seen numerous ‘next big thing’ technology changes come and go over the years. Network computing, the Internet, e-business, on-demand computing, cloud computing, virtual reality, metaverses, artificial intelligence… they have all left their mark. But they were also all hyped in ways that completely exceeded their actual capability. Marketing and sales always seem to leap ridiculously ahead of reality. I am, on the other hand, a ground-level technologist steeped in the actual reality; that’s the perspective I’m coming from.
I would suggest that my opinion about AI has some industry experience-based weight. But I’m not a professional AI expert or educator: a person with a PhD in artificial intelligence who has worked on actual implementation would find fault in many of my simplifications.
Current state of AI
Artificial Intelligence is a comically ill-defined term, much like all of the other ‘trendy’ technologies I mention above. Many businesses are applying AI to things where it really doesn’t belong because it is ‘trending’, and that muddies the already unclear meaning.
Broadly speaking, AI is any computing technology that seeks to emulate some function of a thinking creature. By that broad definition, basically every piece of software is “AI”, and that is the source of a lot of badly considered marketing.
So what is ‘real’ artificial intelligence? More modern definitions usually involve something that humans can do that computers are traditionally not very good at: ‘understanding’ natural language (the way we speak) or visual input (images) are examples of modern AI.
The state of the art for AI systems today starts with vast collections of billions of examples of whatever it is you want the AI system to ‘understand’. These collections are then processed to generate complex statistical probability models.
Simplistically speaking, these models link patterns or combinations of data based on ‘likelihood’ (probability). So ‘car’ is more likely to be related to ‘wheel’ than ‘cheese’. I have a basic understanding of how this works for word-based models; image-based models are harder for me to grasp, but I presume they relate to groupings of pixels and colours as well as human-provided tagging (words) of the images in the original data collection.
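To make that ‘likelihood’ idea a bit more concrete, here is a toy sketch in Python. This is purely my own illustration, with a five-sentence made-up corpus, and it is nothing like how a production model actually works. But it shows the basic currency of these systems: counting how often things appear together.

```python
from collections import Counter
from itertools import combinations

# A tiny made-up 'corpus'. Real models train on billions of documents.
corpus = [
    "the car has a wheel",
    "a car needs a spare wheel",
    "the wheel fell off the car",
    "cheese tastes great on bread",
    "I bought cheese and bread",
]

# Count how often each word appears, and how often pairs of words
# appear together in the same sentence.
pair_counts = Counter()
word_counts = Counter()
for sentence in corpus:
    words = set(sentence.lower().split())
    word_counts.update(words)
    for a, b in combinations(sorted(words), 2):
        pair_counts[(a, b)] += 1

def cooccurrence(w1, w2):
    """Estimate how likely w2 is to appear given that w1 appears."""
    pair = tuple(sorted((w1, w2)))
    return pair_counts[pair] / word_counts[w1]

print(cooccurrence("car", "wheel"))   # high: they co-occur often
print(cooccurrence("car", "cheese"))  # zero: never seen together
```

Real models replace these raw counts with billions of learned parameters, but the underlying idea is still statistical association rather than meaning.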
The AI models being used today differ from the past not so much in basic approach but in size. Old language models might have had a few tens of thousands of documents whereas modern ones have billions of source documents. The models they produce are both vast and insanely complex.
But the really important takeaway here is that the current state of the art has absolutely no concept of what anything it produces ‘means’: there is no ‘intelligence’ there, just probability. I don’t want to diminish the work being done: ‘just probability’ is a huge understatement of the incredible capabilities being created. The models are so complex that they border on the inconceivable, and can produce quite shocking results.
The latest ChatGPT+ large language models can answer very ‘complex’ natural language questions surprisingly well. It is really incredible, and I’m amazed at what has been accomplished.
The risks today
Marketing of AI has reached fever pitch in the past few years. The more obvious and ‘minor’ risks relate to simple misuse and over-use of the term. This happens, for example, when a company slaps the label ‘AI’ on a collection of simple scripts they called “automation” a few months ago. It has nothing to do with actual AI except in the broadest of definitions.
The more subtle risks come from intentional misuse or misunderstanding of the state-of-the-art AI systems today. Here are the ones that I think are most concerning:
- Too much authority: Granting the output of an AI model the ability to perform a task based on its analysis, as opposed to just providing a suggestion to a human, is fraught with danger. A model might propose that glue and turpentine would greatly improve the taste of your pizza. Allowing such a model to make pizzas would obviously be a bad idea.
- Unauthorized use of data: current AI is a voracious consumer of data. A human child can figure out what a dog is after seeing five or six of them: AI models require millions of images to come close to the same level of accuracy. This means that all of today’s AI models digest images and documents from billions of sources, most of which never gave permission for their data to be used in this way. The argument is that the data is on the internet so it is ‘public’ by default. That argument does not always hold up. Copyrighted data makes its way in, as does personal information. Imagine asking an AI image generator to produce a picture of a dumb person and seeing your own face in the result.
- Expectation of accuracy / ‘truth’: many people seem to think that AI tools like ChatGPT provide factual results. This is absolutely not true. Current large language model AI strings together words based on probability: it has no concept of ‘truth’ or correctness. It will happily invent completely new (and wrong) answers to comply with the prompt it is given. Ask for a legal briefing with case citations and ChatGPT will happily invent cases that never actually happened. This is sometimes called ‘hallucination’: it is quite simply ‘being totally wrong’.
- Replacement of people: a lot of the business case for AI being pushed by sales organizations focuses on the idea that it can replace humans. Legal researchers, accounting experts, programmers: who needs ’em? AI will let you trim that wasteful headcount! But how can this really be true if the AI system has no conscious awareness of the meaning of anything it produces? A human must always be ‘in the loop’ until this changes.
The future
My expectation is that AI based on the current state of the art will gradually become a useful tool amongst knowledge workers. It can and will provide ‘inspirational’ guidance: given a prompting question it will produce an outline, rough out a draft, prepare some sample images. All of these things can be made easier with the current AI technology.
AI based on currently available techniques and their logical progression will be ‘close enough’ to fool most humans most of the time into thinking they are interacting with something intelligent, which is good enough to solve a lot of problems. It will also cause a lot of problems; see the previous section.
But I am very skeptical about the current technology path for AI leading to any kind of artificial general intelligence. Nothing in a large language model establishes real understanding. And the way these models are built is nothing like the capability of a human mind.
No human mind requires the billions of examples of a thing or concept that current AI techniques need to start recognizing that thing. To me, at least, this means that current AI techniques are still in the realm of pure ‘brute force’.
Modern AI also has no self-driven interest in learning and forming new understanding. There is no curiosity there, no drive to learn, no hunger to create or improve itself. It has no mechanism for establishing a concept of itself or any ability to become aware that other intelligences also exist. It can’t understand the value of life, or even that life exists. And I don’t think building larger models is going to overcome these huge deficiencies.
Will we ever create a technology capable of being called a true artificial general intelligence? Almost certainly, yes. When an AI starts to teach itself new things and begins to create novel output without relying on billions upon billions of pieces of ‘prior art’, that will be something to watch. That could very well be the start of the Singularity. But I don’t see any real evidence that the best of today’s AI is even close to being on the right path for this yet. I don’t think we’ll see anything like this in the next decade or two. Maybe, like fusion power, artificial general intelligence will always be ‘coming in 50 years’ until suddenly it is here.
In conclusion, I’m not afraid of the current AI technology becoming a super-intelligent Skynet. I’m afraid of it being a super-dumb falsehood generator capable of performing some useful tasks, but unwisely entrusted by the greedy or ignorant to give answers on things it doesn’t actually understand.
Tell me more of this “glue and turpentine” pizza. I think it may have potential.
It is a taste sensation! And it is great for weight loss: seals off the intake and flushes out the output, all in one meal.
AI has become a real concern among artists and I don’t blame them. I feel it’s shoved in my face too much, like ‘oh, here is an AI to help you write’. No thanks.
The AIs that come to my mind are Cortana and EDI and hopefully we don’t get a Reaper invasion from this.
I am also concerned about how AI can disrupt creatives, Emily. I have some thoughts about that for another post I’m pondering.
As for myself, I’ve started using AI a bit to do things like review my posts after I’ve written them and provide suggestions. I sometimes make small tweaks as a result: about 30% of the time I find the AI’s ‘feedback’ to be useful, and WordPress Jetpack gives a few queries each month for ‘free’. Mostly it is a technical curiosity to me: I’m not yet at the stage that I’d pay for it, though.
As for creating content from the ground up, I’m really not comfortable with that. I’ll experiment with it because it is fascinating to see what something like ChatGPT can produce. But the big corporations have invested such huge sums into AI that it is imperative for them that they shove it down our throats. They have to find a way for AI to make money.
And that drive could definitely cause some serious harm in the near future. Creative work is already often tenuously capable of providing a living, and seeing AI take over some of that role would be a sad thing indeed.
Oh, so the AI can help with things like checking your spelling and grammar? I guess that’s not a bad thing, as Grammarly can’t do much if you use the free version. I tried using an AI once to make a new site banner but wasn’t satisfied with the result.
Like you, Emily, I haven’t really liked using AI for creation. I’ve tried it, and had AI create a cartoonish image for my other blog (Geek on a Harley), but I felt a bit ‘dirty’ afterwards. I can’t afford a professional artist to make good images, but it feels a bit like stealing food from someone’s mouth to use AI to create something ‘barely good enough’.
AI, or at least the service provided as part of WordPress Jetpack, isn’t so much about grammar and spelling as it is about ‘tone’ and readability analysis.
For example, it might scan my article and say something like “Your overall tone is very negative. Some readers might find this disheartening”, or more practically: “You have a lot of long sentences with big words: try to simplify your writing to make it easier to read”.
Jetpack’s AI also gives me suggestions about adding more links, defining acronyms, breaking the post down into smaller sections… sometimes the recommendations are quite helpful. It can also recommend different titles based on the content.
The AI as used in Jetpack is a bit more like a ‘friendly’ editor than a content creator. I am still getting used to using the capability, but it feels like a ‘good’ use of AI to me. Enough so that I keep running out of the free usage tokens and feeling like I should pay for the upgraded service 😉
It could be a lot worse, like taking someone else’s art without permission and not giving credit.
But yeah, I’ve noticed the whole tone piece. Often it tells me I’m using words that show a lack of confidence, like “may” and “might”. I don’t see how using those may be concerning.
That’s an excellent overview. Some of us in this part of the blogosphere have been playing around with “AI” for a couple of years now and the initial exhilaration and excitement has largely dissipated. The work that’s been done to render the models capable of providing useful and accurate data has so far only managed to flatten out everything that was strange and charming about them, while still mostly failing to prevent those “hallucinations” that make the output so dangerous.
The thing that most surprised me is that the term “AI” is now so widely and uncritically accepted, both in the media and by users. It’s become generic and meaningless and, as you suggest, has replaced any number of prior terms that were much more accurate. There is no prospect of that changing, so we’re stuck with it now, but it makes any discussion on the topic fraught with misunderstanding. Certainly there’s no current expectation of anything even remotely similar to the kind of machine consciousness we used to mean when we used the term. Whether that’s a good thing or not is another question.
Thanks, Bhagpuss!
The speed with which AI as a term has spread is amazing but, to me, unsurprising. There are a lot of factors pushing it that are similar to past ‘trendy’ technology terms like ‘cloud’. The key trigger seems, in my experience, to be the point that the stock market starts rewarding that term whenever it appears in company plans.
Big corporations quickly spin up their marketing machine to make sure the term appears more frequently in their business unit materials; they add a line item in their quarterly reports; and they reward sales teams and divisions for growth related to that terminology. Entire divisions that, for example, did software maintenance suddenly become ‘AI software maintenance’; the database product changes from ‘SomeDatabase’ to ‘SomeDatabase AI’. Nothing really changes under the hood, but everyone starts looking for things that are ‘close enough’ that they can be relabelled in order to chase the rewards.
It is a funny system that, if I hadn’t spent over 30 years watching it, I wouldn’t believe.
What a great post! I have a post in me about AI as well, if I get around to it I’ll definitely link back to this. 😀
I would love to see your thoughts on AI as well, Jaedia! A link back would be cool too: it has been a few years since I had one of those 😉
It is a topic that will, like it or not, have a lot of disparate impacts over the coming months and years. Talking and thinking about it seems like a good idea.