I had an interesting exchange with a colleague recently. It went something like this:
COLLEAGUE: I used this AI tool to generate this web app! It had some bugs, but after asking it 5 or 6 times to fix them, it finally figured it out!
ME: That's pretty neat! By the way, what's the code quality like?
COLLEAGUE: Oh, it's a total spaghetti mess. If an engineer looked at it, he'd puke. The code is totally unmaintainable, except by maybe the tool itself.
A lot of things come to mind after a dialogue like this:
Vendor Lock-In.
Hidden “Features” (read: back doors, bugs, impossible-to-diagnose errors, etc).
The Next Big CIO/CTO Craze.
Even in this modern day and age, people are not immune to the snake-oil salesman, with his magical blend of rare herbs from the Far East, guaranteed to cure any ailment or malady….
To be sure, this colleague looked at it as a rapid prototyping tool. OK, I could see that… but it very often comes to pass that the prototype becomes the product. It’s one reason I don’t much like rapid prototyping in software development. You might rapidly prototype a physical object, which you must then refine and manufacture (each unit a more perfect specimen), but software is not constrained by such limits. You can take that prototype and go straight to production with it.
Honestly, it’s not the AI so much that scares me; it’s the people who are so willing to adopt it. Too much, too fast, too unproven, too new, too buggy. It was said that we should be trying to use AI as much as possible to “improve our productivity.” Maybe I should have an AI go to my meetings for me, as those are my biggest productivity killers.
But let’s take a step back twenty or so years. I was into neural networks - great for pattern recognition. Genetic algorithms - a way to “evolve” a solution (not necessarily one that’s alive or anything; it’s just a process). Artificial life - set up a “world” and see what evolves, given the environment and the constraints.
The world was waiting for a technological breakthrough.
And then, it happened. But not quite as people think.
AI today is still not much beyond pattern recognition. It’s just that we’ve developed ever more interesting feedback loops to get more interesting results.
But what does all this really mean, and why am I so utterly terrified by it? I admit, I hate it. I see it and I am actually scared. I wonder what kind of world my children will inherit. I wonder what we will see in a year from now, let alone five. Me, a guy for whom technology was a set of incantations and a digital magic wand; the spells I’d cast would manifest as data structures and algorithms; the incantations I’d utter would result in compilation and execution; the tests would run; the work, when done, would be flawless.
OK, I might be exaggerating a bit…
Still, what does the future hold?
People talk about us creating a new form of life. Or they talk about AI taking our jobs. That soon people will just sit around waiting for their next UBI check to show up. Or that AI will kill us all.
What makes me afraid are the people creating it, and the people who so desperately want to use it.
What makes me afraid are the AI-generated videos and audio, the quality of which is getting better daily - at what point does it become impossible to separate this digital fantasy world from our human reality? What will we do, or become, when the quality is too good?
You could frame someone for murder, with fake digital surveillance footage.
You could discredit a genuinely good person, and remove them from nearly any position of power.
You could incite mob violence.
You could incite a population to war.
And you might be saying to yourself, well, that’s a lot for AI to do. But it’s not - and it’s not AI so much as it is the people who are willing to believe it, believe in it, and use it ahead of all other things. It’s their perception of reality that matters. If you can control their perception, you control them. Even if the truth eventually comes out, the damage will already have been done. Like a dirty bomb: just wait a thousand years, and you can repopulate…
But wait, there’s more!
You could censor en masse, using an army of AI agents instead of an army of human surveillance operatives. You could spy on pretty much everyone. You could manipulate their lives; feed one group of people one narrative, and another group another.
With the current AI tools, it’s all already possible; at this point it’s just a matter of refinement.
I’m not one to complain, generally. I don’t think it’s a very useful activity. It solves nothing to complain. You might feel better for a while, but it’s short-lived. Problem-solving is a much better activity. If you’re going to complain about something, at least bring one or two good ideas on how to solve it with you. Maybe three. The more, the merrier.
But for the life of me, I can’t think what to do about the coming AI tsunami. How do you evade, or fight against, or even survive, that which people want to cram down your throat? They’re steering directly into the maelstrom, and they want it. They need it. It’s perceived as a competitive advantage. It’s an improvement in “productivity.” It’s the next big thing. And yet there are so many details missing here. Regardless, their want of it drives it forward. It will get out of control. It might already be.
I guess I could see a few ways in which this might all go:
AI will achieve a level of cognition and ability that will set the world on fire - hopefully just digitally, but nonetheless it will be the beginning of much turmoil and anguish.
AI will continue to be nothing more than a convenient search and build tool; companies will embrace it heavily, nuke their talent base, and then after 5-10 years realize that their infrastructure is entirely unmaintainable. Entire rewrites, by humans, will be required.
AI will be used to generate so much imagery and content, that people will cease to be able to tell fact from fiction (given that a large percentage already have this problem, it’s only going to get much, much worse). A new regime will be possible, and our values, our freedoms, will wither.
I could postulate that people will realize that AI isn’t all that it’s cracked up to be, and abandon it; but at this point that seems so far-fetched as to be laughable.
In any case, what will there be left to do?
AI today; robots tomorrow. If, in ten years, we went from the world we’re in today, to one in which humanoid robots make up a significant percentage of the workforce, what would you think of that? What would you expect to be doing? Who would bother learning, or developing skills?
I maybe wouldn’t be so worried if not for the zeal with which the people around me have adopted these tools. It’s unlike anything I’ve ever seen. I think they’d just as readily adopt robot workers. We’ve learned nothing from The Great Outsourcing.
For my part, I continue to develop my skills, teach my children, build resiliency and self-reliance. If nothing else, not having to depend so much on external providers means more freedom. And to me, freedom is everything. Maybe I’ll become a farmer. Maybe I’ll find a way to make AI work for me, too.
The beast is uncaged. There is no putting it back. We are in truly uncharted waters. It seems we have no choice but to strap in, hold on, and try to survive.