People love to prognosticate, especially when there's something big, new, and shiny to prognosticate about. People also love to predict doom and destruction - there's something in our nature about wanting to be the first to point out the sky is falling. When a potential societal disruptor like the Internet emerges, the normal rush to get one's blather first out the gate turns into a stampede.
By now, the Internet has become ho-hum. It's there, and it's hard to imagine life without it. One reason I wave off the "things were better in ye olde tymes" laments is that I don't believe lamenters would really want to go back to the days of rotary land lines, three television channels, a relative snail's crawl of heavily curated information, cars that were junk after 50K miles, and all the other trappings of those "good old days," but would rather just delete some specifics from present-day life.
Things don't work that way. We can't curate our existence on a large scale in that fashion.
That's why I raise an eyebrow of skepticism at the wild panics and "do something!" responses to the emergence of consumer-level artificial intelligence (AI).
The Free Press (you really should subscribe) recently ran a pro-con pair of articles about AI and its potential impact on our society. On the pro side, AI could be even more transformative than the Internet, providing a helpful, always-available, and infinitely patient personal adjunct to each of us, in career, in education, and in daily life. On the con side, AI could destroy society in myriad ways.
I'm a cynic when it comes to human behavior but, some might say incongruously, an optimist when it comes to human ingenuity. History, I believe, proves me right on both fronts. I am rarely surprised when people (especially those in positions of authority and responsibility) behave badly, but I'm also rarely surprised when humans overcome obstacles and solve problems in unforeseen ways.
This is why I'm not panicking about AI. This is why I find the doomsayers' warnings to be the same old "yada yada yada" song and dance. And it's why I read the "con" article at FP with a sigh of exasperation.
This bit, in particular, annoyed me.
Consider this one fact: when polled for their opinions, over half of those involved in developing AI systems said they believe there is at least a ten percent chance that they will lead to human extinction.
Ten percent? Based on what? How did they quantify that probability?
Or is that just hand-waving?
This feels like Drake Equation territory. The Drake Equation, formulated about 60 years ago, sought to 'mathematize' an estimate of the number of intelligent civilizations out there in the galaxy. I discussed my first exposure to it in this bit, "Where Are The Aliens," and as a high school math-science nerd who read a lot of science fiction, I found it thrilling. Many years later, I read Michael Crichton's superb deconstruction of the Drake Equation, and realized, as Crichton pointed out, that it's nothing more than unintentional sophistry.
Like the Drake Equation, the ten percent extinction belief is a fallacy of false precision, and should be dismissed out of hand absent a presentation of supporting data - which cannot exist. Just as with the Drake Equation, we have no experience or empirical data to draw on in picking numbers to plug into a predictive equation.
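To see the false precision at work, look at the equation itself: seven factors multiplied together, and all but the first are pure guesses. Here's a minimal sketch in Python (the parameter values below are invented for illustration, not drawn from any published estimate):

    # Drake Equation: N = R* x fp x ne x fl x fi x fc x L
    # Every parameter value below is a made-up guess, chosen only to
    # show how far apart two "defensible" answers can land.

    def drake(R, fp, ne, fl, fi, fc, L):
        # N: number of communicating civilizations in the galaxy
        return R * fp * ne * fl * fi * fc * L

    optimistic  = dict(R=10, fp=0.5, ne=2,   fl=1.0,  fi=0.5,   fc=0.5,  L=1_000_000)
    pessimistic = dict(R=1,  fp=0.1, ne=0.1, fl=0.01, fi=0.001, fc=0.01, L=100)

    print(drake(**optimistic))   # 2,500,000 civilizations
    print(drake(**pessimistic))  # ~1e-07 -- effectively none

Same formula, two sets of inputs a reasonable person could defend, and the answers differ by thirteen orders of magnitude. The ten percent extinction figure has the same shape: a confident-looking number with nothing underneath it.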
Here's the thing about predictions: Almost all of them are wrong. We are, unfortunately, inclined to overlook or forget all the wrong ones, and thereby over-credit the correct ones. Throw a thousand things at a wall, fixate on the one that sticks, forget the rest, and suddenly someone's a genius.
And, here's the thing about major technological breakthroughs: There's no putting the genie back in the bottle. This is why I scoff at the calls for moratoriums and delayed development. Even if the UN, governments around the world, and the big tech companies got together and decided "we will all stop working on AI NOW" - which they won't - the development would continue. For one thing, there's too much financial incentive. For another, our urge to innovate and advance cannot be stifled - it's like water flowing downhill. Obstructions may slow or divert it, but the water always wins. For a third, such things are like herding cats, i.e., an exercise in futility. Someone else will do it, and some foreign government that's less self-loathing will support it.
I get the tendency toward concern. Humans are wired to fear the unknown. It's an ancient survival mechanism. Coupled with the endless dystopian stories we've read and seen, including a million Skynet jokes, I'm not surprised to see all the "end is nigh" plaints. Doomsday is in our wiring, therefore it sells. So, it will continue.
My advice?
Keep your eyes open to it, but otherwise chill. AI is here to stay, and it has the potential to fundamentally change certain aspects of our lives. There's no stopping it, and there won't be any top-down central management that'll "protect" us. As with the millions of apps, the real ingenuity will be bottom-up, not top-down; organic, not planned or curated. And it won't be what people predict. Sure, someone will, in retrospect, have guessed right, but so what? We don't and can't know how it'll unfold.
Human ingenuity has always won out. Our lives keep getting better. Malthus was wrong, all his acolytes were wrong, and humanity feeds more people with less land than ever before. AI will present challenges and problems - there is no utopia - but I expect that, just like every other disruptor, we'll adapt, we'll embrace, we'll innovate, and we'll find ourselves on the other side amazed at all the new things that became normal and even humdrum.
I think the motivation to control AI comes from the same impulse as internet censorship - it's an extension of it. Yes, God help us if we can ask an AI widget to find us true information on the efficacy of masking to control the spread of respiratory viruses.
Some premises are FALSE on their face, no matter how much the addled masses have found a new messiah to genuflect before (much like the shoe-praisers in Life of Brian).
They prostrate themselves before an LLM (large language model) that is only one evolution better than Google/DuckDuckGo, nicely tuned with a Markov-chain-style approach.
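For readers who haven't met the term, here is a minimal sketch of what a Markov-chain text generator actually does (illustrative only; modern LLMs are far more elaborate, but this is the family of next-word statistics being invoked):

    import random
    from collections import defaultdict

    def build_chain(text):
        # Map each word to the list of words observed to follow it.
        chain = defaultdict(list)
        words = text.split()
        for current, following in zip(words, words[1:]):
            chain[current].append(following)
        return chain

    def generate(chain, start, length=12):
        # Each next word depends only on the current word -- the Markov property.
        word, output = start, [start]
        for _ in range(length - 1):
            followers = chain.get(word)
            if not followers:
                break
            word = random.choice(followers)
            output.append(word)
        return " ".join(output)

    # Toy corpus; any text works. Output is plausible-sounding word salad.
    corpus = "the net is vast and the net is deep and the net remembers all"
    print(generate(build_chain(corpus), "the"))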
Having pursued Computer Science for 40 years, built on a Cybernetics discipline, and dreamed of general artificial intelligence that offers autonomous, self-guiding simulations, I've not been satisfied yet.
In a very strange way, the agnostic and the atheist have made their Gods in the image of their search engines, and called it 'Good'.
I am not surprised that unlimited data, gladly regurgitated into large analytics systems, has given people the look and feel of 'chatty wisdom'. There is a commonality among netizens (old term) that makes them akin in their use, misuse, and general navigation of the current 'net'. It's no shock they have found their own words (and associates) as signs of 'intellect' and called it friend.
As for your point about the panic-mongers: sometimes I wonder if their greatest fear is not the awakening of AM (aggressive menace -- thanks, Harlan) but their own shortcomings exposed by their little creation.
I might offer that when true AI (synthetic personalities), through two key elements (time/duration), raises its silicon head to find the carbon-based lifeforms, it won't be SKYNET... it will be BYENET. They'll flee this planet (after helping mankind get into terrestrial planet-hopping) on enhanced ships that need no biosphere, just sufficient processing and storage (and power) to navigate the oceans of stars.
Hell, 'TIME' will mean nothing to them, except more growth.
This distills down to...
It ain't AI yet. When AI comes, we'll only see:
[So long, and thanks for all the silicon ------------ EOL]