No, Artificial Intelligence Will Not Solve All Problems
Authored by Jeffrey Tucker via The Epoch Times,
The famed historian and epidemiologist John M. Barry just threw out a trope that has become unbearably popular today. He predicted that, in the future, artificial intelligence (AI) will make it possible to enact pandemic lockdowns more precisely and to develop vaccines faster.
“Artificial intelligence will perhaps be able to extrapolate from mountains of data which restrictions deliver the most benefits—whether, for example, just closing bars would be enough to significantly dampen spread—and which impose the greatest cost,” he writes.
“AI should also speed drug development.”
Maybe if you are an expert in anything these days and write in The New York Times, this is just what you say to seem hip and with it, even if you have no idea what you are talking about. That's the most likely explanation.
Even so, this constant invocation of AI as the future solution to all problems is getting extremely annoying.
Can you imagine a world in which AI demands that your favorite local watering hole shut down? I can easily imagine it. I can also imagine local media citing AI as the final authority, such that no human argument can get in the way.
There is nothing new under the sun.
Every time a fancy new technology appears on the horizon, the experts emerge from the firmament to assure us that it will solve every human problem in the future.
And they always advocate that it be made the core of government policy, thus fixing all the problems with government that have been known in every age. Thanks to this brilliant thing, it will be different this time.
This is exactly what happened with computers, starting in the mid-1950s once they became available. The new claim was everywhere: Computers would make the central planning called socialism possible. This claim was conjured up as an answer to an intractable problem that had vexed intellectuals since the 1920s.
Here is a bit of background on that controversy. In the late 19th and early 20th centuries, socialists were running around saying that they could rework economic life to make everything function more efficiently, and with even greater economic growth, once we got rid of capitalistic systems.
In 1922, Ludwig von Mises posed a very serious problem for the theory. If you collectivize the capital stock, you eliminate trading across the board for all capital goods. That means none of them will carry a market price that signals relative scarcities. Without those prices, you cannot have accurate accounting. Without accurate accounting, you will get no precise reading of profits and losses, so you will have no idea whether what you are doing is efficient or wildly wasteful.
Not only that, you won’t have any clue of how to produce anything with any kind of effectiveness. You will end up just barking orders in an economic environment of pure chaos. “There is only groping in the dark,” he wrote. In short, the whole society will fall apart.
The socialists were confounded by the critique. In fact, they never really answered it in any compelling way. Not only that, but the reality of communism in Russia seemed to confirm everything Mises had said. The “war communism” imposed by Vladimir Lenin achieved nothing but starvation and waste. In short, it was a total disaster.
That didn’t mean that the attempt to centrally plan economies went away. Instead, the planners just kept trying. But following World War II, they had a fancy new tool: the computer. The claim became: we don’t need market-generated prices anymore; we only need to plug resource availability and consumer demand into the computer, and it will spit out the answer as to how much of what to produce, and how.
Oddly, Soviet premier Nikita Khrushchev, who was very keen to get the economy actually making things that were useful to people, trusted these new fools and tried asking the computer for answers. You don’t need to be told the results. It didn’t work. The computer was, and always will be, garbage in, garbage out. There is simply no substitute for market prices generated through the roil and toil of trading and price discovery.
Sadly, it took many decades for people to finally concede that Mises was right all along.
But no lessons last forever in a world where human arrogance runs rampant. So now we are being lectured that artificial intelligence will solve all the problems associated with pandemic planning that we discovered from 2020 to 2022. Don’t worry about it! We’ll just ask ChatGPT what to do!
The same problem presents itself: garbage in and garbage out.
Mr. Barry’s idea is that we simply plug in a community’s seroprevalence levels, hoping to get a picture of disease spread, along with transmission and infection fatality rates, and AI will reveal the costs and benefits of shutting things down. Will it generate the right answer? No, because there is no one answer, not for communities and not for individuals.
The costs of shutdowns will be more seriously felt, for example, by the bar owner than by the patron. The supposed benefits cannot be summed up as avoiding infection, since exposure (and not just vaccination) is a path toward immunity. There are conditions in which exposure offers a better risk-benefit ratio than waiting for a vaccine, especially one that does not work.
Plus, we found out last time that we have no real way to get an accurate read on exposure, certainly not with PCR tests that measure the presence of a particular pathogen rather than actual sickness. And the testing itself is a problem: People despise the tests today, and rightly so. The only reason to test is if you are sick, and even then only to guide the appropriate response. We have never imposed population-wide testing in order to know whether, and to what extent, to lock down whole populations.
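To make the garbage-in, garbage-out point concrete, here is a minimal sketch, in Python, of the kind of cost-benefit arithmetic being proposed. The function and every number in it are hypothetical, not Mr. Barry's actual model; the point is only that the verdict is a pure function of inputs nobody can pin down, so the "answer" flips with whichever estimates the planners happen to feed in.

```python
# Minimal sketch (all numbers hypothetical) of a naive lockdown
# cost-benefit comparison: the verdict depends entirely on uncertain
# input estimates, which is garbage in, garbage out in miniature.

def lockdown_verdict(infection_fatality_rate, infections_averted,
                     value_per_life_saved, economic_cost):
    """Compare the assumed benefits of a lockdown against its assumed cost."""
    deaths_averted = infection_fatality_rate * infections_averted
    benefit = deaths_averted * value_per_life_saved
    return "lock down" if benefit > economic_cost else "stay open"

# Scenario A: early, pessimistic estimates of fatality and infections averted.
print(lockdown_verdict(0.01, 5_000_000, 10_000_000, 300_000_000_000))   # lock down

# Scenario B: later, lower estimates; same model, opposite answer.
print(lockdown_verdict(0.002, 1_000_000, 10_000_000, 300_000_000_000))  # stay open
```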
In so many ways, the epidemiological models that imposed lockdowns on us in 2020 and after were born of the same primitive analytical tools that drove the central planning models of the 1950s. In them, everything seems to work perfectly on paper. The trouble comes when you try to impose the same models on real life. The data is incomplete and inaccurate, the assumptions about spread are wrong, and mutations in the pathogen will typically outwit the planners.
In other words, pandemic planning fails for the same reasons that central economic planning fails. The world is too fast-moving and complicated for the models to capture and control all of the necessary conditions. But admitting that is not usually the habit of governments and their intellectual advisers. They cannot stand to confess their own ignorance, impotence, and incompetence in the face of real-world conditions.
As a result, we now have the pandemic planners toying with the idea that AI will save their bacon, after the catastrophic experience of letting them have their way last time. The truth is that the next pandemic plan will fail just as badly as the last one, no matter how many computer programs the planners throw at the problem. The real pathogen among elite government planners and intellectuals has a much deeper root: hubris.
AI has its uses, but substituting for actual human action and intelligence is not one of them. It can never happen. If we attempt that—and surely we will—the result will be disappointing at best.
F.A. Hayek said that economic planning by government embodies a pretense of knowledge. That’s nothing compared with the ambition of governments throughout the world to control and manage the whole of the microbial kingdom. There is nothing that AI can do to achieve that. And like communism, the attempt will create nothing but destruction.
Tyler Durden
Fri, 05/24/2024 – 19:40