It may seem that machines are appropriating intelligence. I argued in an earlier article that attributing intelligence to a machine is a mistake. That does not mean the word AI will become redundant. But as reliance on machines and their capabilities grows, it becomes more critical for wise individuals and organizations to adopt behaviors and policies that prevent their own redundancy.
Most techno-optimists present AI as inevitable and its advance as unstoppable, something the human race can do little about. This defeatist submission can be explained by AI's rapid "growth," a word usually reserved for organic beings. If AI were considered a bug, virus or bacterium, would we have found a vaccine, a drug or a rolled-up newspaper to squash it? The techno-optimists need to understand exponential growth, or what happens when machines become responsible for improving themselves.
The exponential growth of ChatGPT is astounding: 100 million users in two months! It is difficult for humans to comprehend this pace. Yet the techno-optimists still believe that machines will not obliterate jobs or work for humans.
They believe that anything intelligent will understand morality, ethics and the common good. But that is asking too much of an algorithm built to churn data into insights. Just because it can string together a sequence of words does not mean it has reached a level of understanding where it could judge the ethics of an action that affects human progress or jobs.
AI optimists and pessimists have different views of how AI will affect our lives. Technologists like Geoffrey Hinton, often called "the godfather of AI," who formerly worked at Google, are among the techno-pessimists. I count myself a techno-realist, not an anti-AI Luddite. Although I worry about some aspects of how AI is developing, it can benefit people significantly, but it demands responsibility from governments, corporations and consumers. See the debate here.
The belief of some that artificial intelligence will not cause any losses is based on the past outputs of technology; they cannot fathom the exponential nature of AI. For example, GPT-4, the large language model (LLM) behind ChatGPT, returns higher-quality answers than other systems because, all other things being equal, it was trained on a humongous corpus, reportedly around 45 terabytes of data. That's a significant portion of the internet! Human brains can't comprehend how an algorithm can process that much data; this appetite is fed by rapid growth in processing power and the availability of training data.
Humans do not get old because they stop learning; they get old because the framework they use to absorb new information gets old. In the case of artificial intelligence, the frameworks are still evolving. Therefore, its impact on jobs, society, science, programming or digital evolution cannot be predicted. The adage that every prediction will underestimate the power of the algorithm holds true for artificial general intelligence (AGI): the algorithm can rewrite or improve itself. Even as you read this, that effect is being felt in the generative AI space.
Even now, the original creators of artificial intelligence engines cannot fully explain how their machines arrive at conclusions. Theoretically, an artificial general intelligence (AGI) machine could perform any intellectual task that a human can: reasoning, planning and learning. AGI machines would understand natural language and solve problems much as humans do. Unlike narrow AI, AGI is not limited to specific domains or tasks and can exhibit general intelligence. The difference between the two is clearly explained here.
In Abundance, authors Peter Diamandis and Steven Kotler explain the human brain's inability to grasp exponential growth with a thought experiment: if I asked you to go out your front door and take 30 steps, I'll bet you could guess where you'd end up without even taking that short walk.
The human mind is good at estimating linear growth but tends to underestimate exponential growth. For example, if you take 30 steps, each twice as big as the previous one, where will you end up? The answer: you will have circled the Earth 26 times!
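The arithmetic behind the thought experiment can be checked in a few lines. This is an illustrative sketch, assuming the first step is one metre (the book's example uses a similar scale) and using the Earth's equatorial circumference of roughly 40,075 km:

```python
# 30 steps, each twice as long as the previous, starting from an
# assumed one-metre stride: the distances are 1, 2, 4, ... 2^29 metres.
STEP_1_METRES = 1.0              # assumption: first step is one metre
EARTH_CIRCUMFERENCE_KM = 40_075  # approximate equatorial circumference

total_metres = sum(STEP_1_METRES * 2**n for n in range(30))  # 2^30 - 1 metres
total_km = total_metres / 1000

print(f"Linear walk (30 x 1 m): 30 m")
print(f"Doubling walk: {total_km:,.0f} km")                       # ~1,073,742 km
print(f"Trips around the Earth: {total_km / EARTH_CIRCUMFERENCE_KM:.1f}")  # ~26.8
```

Thirty linear steps cover about 30 metres; thirty doubling steps cover over a million kilometres, which is the gap our linear intuition misses.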
The chart below best explains the inability of the average human mind to estimate exponential growth.
Old-timers used to talk about Moore's law: computing power doubling every two years. Some would say Moore's law became outdated as a measure of computing progress around 2012; an article from 2018, when Google was scaling up its use of TPUs, explains why even engineers within Google found the growth crazy. Even five years on, we struggle to accept the exponential growth AI has shown over that period.
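The gap between Moore's law and the pace of AI compute is itself a lesson in exponentials. The sketch below is illustrative only: the 24-month doubling period is Moore's law as stated above, while the six-month period is an assumed, faster rate of the kind reported for large AI training runs, not a figure from this article:

```python
# Compare multiplicative growth over a decade under two doubling rates:
# Moore's law (every 24 months) vs. an assumed faster AI-compute rate
# (every 6 months). Growth factor = 2 ** (elapsed time / doubling period).
def growth_factor(years: float, doubling_period_months: float) -> float:
    """Multiplicative growth after `years` at the given doubling period."""
    return 2 ** (years * 12 / doubling_period_months)

print(f"Moore's law over 10 years:     {growth_factor(10, 24):,.0f}x")  # 2^5  = 32x
print(f"6-month doubling over 10 years: {growth_factor(10, 6):,.0f}x")  # 2^20 ~ 1,048,576x
```

A factor of 32 versus a factor of a million over the same decade: even people comfortable with Moore's law can be off by four orders of magnitude when the doubling period shrinks.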
The problem with newfound experts postulating on the impact of AI and of AGI reaching singularity is that many of them are just Johnnies who know how to do a Google search, or who, like ChatGPT, can string together sentences that sound more intelligent and wise than they are.
The wiser among them would at least look at the research before commenting on when AGI will reach human-like intelligence. I urge anyone who claims to know about AI, AGI and their impact to read the collection of research at https://aiimpacts.org/. Please resist the urge to comment until you have seen what has been predicted about the future of AI over at least the last 23 years. The first predictions of AI capability are even older, dating to around 1960. Even Kurzweil's book "The Singularity Is Near" was published in 2005.
The above is a survey of experts in machine learning.
Here are the probabilities the 2022 survey assigned to different long-term scenarios (averages across respondents):
● Extremely good: 24%
● On balance good: 26%
● More or less neutral: 18%
● On balance bad: 17%
● Extremely bad: 14%
My view on regulation has always been the same: if you are late to regulate, don't worry; with technology, you will always be late. With AI or AGI, regulation will continue to lag as humans play catch-up with the technology until, well, only another superintelligence (God) knows when. Giving up on regulation at any stage is like refusing an antibiotic for a runny tummy because you did not take it at the start. The only problem with that approach is that you will shit yourself to death if you don't take it when you can.
While the debate over when AGI will surpass human skills and abilities continues, the real issue is the money humans are spending to wage this war against themselves. The few will decide the fate of the many. The few will derive value, while many minds atrophy for lack of a job or work. This is the dice humans are rolling against their future as a species. The many will dither, debate, and undefine their own purpose. Many will fail.
And yes, the superorganism that Daniel Schmachtenberger talks about will lose out to AI. And no, the metacrisis, triggered by the climate crisis and the rising energy use of AI data centers, will not be solved by the ingenuity of humans or the market. Not until the many decide to regulate it. Organizations have an important role to play here by developing policies in the interest of their stakeholders, including their employees. For example, agreeing not to use AI or AGI to replace humans, as the Indian outfit Zerodha recently did. Will all the big tech and IT services companies declare a clear policy that they will not fire, replace or stop hiring because of AI?
This would go against the very root of the market structure, which holds that efficiency or margins should come at any cost. The techno-optimists think the market will change its behavior. Or will the government step in with regulations to prevent job losses? That is the question the techno-pessimists are asking. The realists know these two forces will play out together, but wisdom rarely plays a role in a world awash with intelligence.
Yatish Rajawat is the founder of the Centre for Innovation in Public Policy, a think tank based in Delhi. His research covers everything digital affecting policy, people and the biosphere. Feedback or contact at [email protected].
Image credit: iStockphoto/Sylverarts; Charts: Yatish Rajawat