Concerns about AI interfering with the 2024 elections are well-founded, yet not unprecedented in recent history. In 1975, the Asilomar Conference on Recombinant DNA foreshadowed today’s AI concerns.
Asilomar set the precedent for how to respond to advances in scientific knowledge. According to its organizers, biochemist Paul Berg and molecular biologist Maxine Singer, the proper response to new scientific knowledge was to develop guidelines for regulating its use.
They were as wrong as those calling for AI regulation today. The solution is not to be found in regulation, but in debunking the premise itself: processing immense sets of data at the expense of sustainability. Brute-force computation sold as intelligence is a fraud!
The challenges of AI and the 2024 elections are ethical. Not regulatory. We have made a Faustian bargain with AI, and its impact will irreversibly affect the future of humankind if we do not challenge the science behind it.
We can’t put the genie back in the bottle. Hence the need to understand how to mitigate the dangers that AI’s deterministic foundation poses to society and to our democratic system.
It is not the past, represented by the data AI processes, that should determine our actions, but rather the possible future, shaped by the choices we responsibly make.
AI doesn’t care who wins the presidential election. It solves a mathematical problem. Kenneth Arrow won a Nobel Prize in economics for work that included his impossibility theorem, which shows that no voting system can flawlessly translate individual preferences into a collective choice, and that elections are therefore open to manipulation.
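To make that point concrete, here is a minimal sketch of my own, not part of the column’s argument: three voters whose preferences form a cycle, so that whoever sets the order of the pairwise votes effectively picks the winner. The candidate names and ballots are hypothetical.

```python
# Classic Condorcet paradox: three ballots ranking candidates best-to-worst.
ballots = [
    ["A", "B", "C"],
    ["B", "C", "A"],
    ["C", "A", "B"],
]

def pairwise_winner(x, y, ballots):
    """Return whichever of x and y a majority of ballots ranks higher."""
    x_votes = sum(1 for b in ballots if b.index(x) < b.index(y))
    return x if x_votes > len(ballots) / 2 else y

def run_agenda(agenda, ballots):
    """Hold sequential pairwise votes in the given order; the winner of each round advances."""
    champion = agenda[0]
    for challenger in agenda[1:]:
        champion = pairwise_winner(champion, challenger, ballots)
    return champion

# The same electorate, three different agendas, three different winners.
for agenda in [["A", "B", "C"], ["B", "C", "A"], ["C", "A", "B"]]:
    print(f"agenda {agenda} -> winner {run_agenda(agenda, ballots)}")
```

Run it and the identical three ballots produce a different winner for each agenda: C, then A, then B. The voters never change; only the person controlling the procedure does.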
Some machine-learning applications, crunching immense quantities of data, aim to engineer the behavior of the 10-12% of the voting population that never previously appeared on the radar of election campaigns. That is a political gold mine waiting to be exploited.
Is it ethical to engage in behavioral engineering at all? Whether or not it is a legal tool is beside the point.
Our political system, already subject to cannibalism, is undermined by replacing human judgment with machine inferences. We need to understand that mechanistic technology has no anticipatory dimension. A hammer has no ethics: it does not distinguish between a nail and someone’s head.
The automated hammer is, in effect, a gun. It is automated know-how to the exclusion of know-why. The automated abacus we call a computer is exceptionally good at processing data, but it has zero know-why. It has no sense of right and wrong, and no conscience.
Indeed, the Turing machine, on which everything computational is based, knows only the limits of physics: the volume of data, the speed of processing and the cost in energy. The human aspect, the meaning of the data, is entirely absent.
Progress at any cost, built on data processing, spells destruction unless we rein it in. The Asilomar Conference is a case in point. Participants, aware of how dangerous gene manipulation could be, were looking for guardrails.
Gain-of-function research, of recent COVID memory, might ring a bell as we talk about AI today. Also, remember the genome hysteria: every disease would be cured! The reality instead was more disease, artificially produced. Today the promise is that AI will make medicine better.
AI is already making medicine more expensive, but not necessarily more effective. AI regulation is of the same nature as what Asilomar endorsed: enthusiastically supported by those who want to entrench their dominant positions, yet powerless to prevent aberrant applications.
What we need is a scientific foundation that does not reduce behavior to the physics and chemistry of matter. So far we have failed to build one. This failure is reflected in the increasingly pathological, delusional nature of human life in the 21st century.
I hope we can wake up and choose the right path. The clarion call to disrupt science is not optional but an existential imperative.