2022 ended with two big bangs. The first was the crypto implosion led by SBF's clumsy (or fraudulent) adventures. But equally loud was ChatGPT's entry into the public domain.
Countless articles have covered ChatGPT’s pros and cons, and my colleagues have published equally exploratory pieces, like the one here.
It’s undoubtedly a game-changer. But while we ponder its merits and deride its shortcomings, our collective reactions show that we aren’t ready for the next generation of AI models even though we are already using their narrow versions.
Chatbot remixed, retaught and rebooted
ChatGPT aspires to be what chatbots before it couldn’t. Talking to it is like talking to a fellow human being, or almost. If you have not tried it, you can access it here to see what the fuss is about.
And there is a humongous fuss. At launch, people were shocked at how well it replied and carried a conversation. It also knew far more things and, more importantly, admitted its mistakes (which it can learn from), challenged questionable premises and even rejected inappropriate requests.
ChatGPT (GPT stands for Generative Pre-trained Transformer) was launched by OpenAI in November 2022. It runs on the company’s GPT-3.5 family of large language models. The deep learning neural network model has over 175 billion parameters, eclipsing Microsoft’s Turing NLG model, which had only 17 billion.
It is also better at understanding context. For example, the predecessor InstructGPT model would accept “Tell me about when Christopher Columbus came to the U.S. in 2015” as a truthful query. ChatGPT instead gives you a “what if” scenario: how Columbus’s historic voyages might have played out had he landed in the U.S. of 2015.
Past chatbots used narrow, or weak, AI models. These ingest information quickly and can be retrained or pretrained easily with updated information. That simplicity also lets you scale them fast.
ChatGPT goes deeper. It is not satisfied with pointing you to the correct information; it wants to be the primary touchpoint for all conversations, requests, inquiries and comments. Think of it as your virtual contact point within a company.
Its wide conversational latitude adds more uses. Some journalists suggest ChatGPT can even be a therapist. Others see it as good enough to code, read, write, play tic-tac-toe, emulate Linux systems and even compose music.
Mindset also needs rebooting
The use cases that worry many relate to cheating, fraud and theft. One Atlantic article argued that the college essay is under threat: because ChatGPT can produce a passable essay, the article contends, it may encourage cheating.
“Cybercriminals are finding ChatGPT attractive. In recent weeks, we’re seeing evidence of hackers starting to use it [for] writing malicious code,” remarked Sergey Shykevich, threat intelligence group manager at Check Point Software Technologies.
“ChatGPT has the potential to speed up the process for hackers by giving them a good starting point. Just as ChatGPT can be used for good to assist developers in writing code, it can also be used for malicious purposes,” he added.
But all this talk about cheating, cybersecurity threats and fraud only shows how much traditional thinking remains our crutch. We’ve had years to prepare for the likes of ChatGPT (educational institutions especially), and now all they can do is ban it.
Instead, we should rewire our thinking on how bots can improve our productivity and lives and stop looking at them as adversarial.
Take coding, for example. Ask any software developer, and they will moan about the boilerplate code they have to write. If ChatGPT can take away that workload, they can focus on the more exciting and creative stuff (which ChatGPT can only suggest). But if you are a software developer who thrives on writing boilerplate code, well, the writing was on the wall even before ChatGPT.
The same goes for writers and journalists. ChatGPT cannot replace good investigative journalism and in-depth reporting. It is getting better at making correlations, but it remains bound by its data. Where data is sparse or of poor quality (as with ESG data sets), drawing correlations is hard. Even esoteric topics that seem obvious to us can trip it up.
ChatGPT is also only the beginning, and it is learning fast. With OpenAI making it public, you will see others joining in. We’ve yet to see the Big Tech response, which looks reactive and defensive at the moment. Whatever the future trends, we need to see more advanced AI as part of it.
Governments need to get on board
It is not just our mindsets that need to change; government approaches do, too.
Governments across the world have long talked about building an AI-driven economy. But they see this future from a non-AI point of view: manufacturing will be smarter, cities will be more intelligent, policing will be more efficient. But what happens next, when they all become AI-driven?
Then there are two data-related challenges that we’ve yet to address collectively. One is data-driven bias. ChatGPT, like any AI-driven chatbot, uses supervised and reinforcement learning to improve its outcomes. It is still too early to say whether its training data was biased, but there are already claims that it is.
AI, in general, is also bad at making inferences from facial microexpressions unless they are very obvious. Try finding an AI model that reliably distinguishes a smirk from a genuine smile. Reading such cues, something most of us pick up while growing up, would be a genuine breakthrough for AI.
Calling out this bias can be challenging when there is no fundamental legal framework: should you sue the developer, the company that uses the model, or the providers of the data used to pre-train it? And what are the “red lines” that AI models need to respect as they become more advanced and their impact on society more pronounced?
Are AI legislations too late?
There has been some progress in defining these “red lines.” The E.U.’s AI Act looks to regulate AI behavior across three risk categories: unacceptable risks, high-risk applications, and other applications that fall outside these categories and remain largely unregulated.
The U.S.’s Algorithmic Accountability Act (AAA) of 2022, proposed by Senators Ron Wyden and Cory Booker and Representative Yvette Clarke, would require AI developers to conduct algorithmic impact assessments. It is still in the draft stage. Meanwhile, we have an AI Bill of Rights and the Initiative on AI and Algorithmic Fairness, and the Federal Trade Commission is looking into AI system oversight.
The Responsible AI Institute (RAII) is mapping over 200 AI-related international principles, many of which are also monitored by the OECD AI Policy Observatory. RAII is also creating a certification program so companies can be certified against the appropriate ISO standards.
While there is some coordination, many of these efforts approach AI from different viewpoints, and none of them is universal. For example, using AI to run a social credit system may be acceptable in China but is an “unacceptable risk” in the E.U.
Some say it’s still early days. Yet ChatGPT and other new AI models are already coming online, opening an AI fault line that will raise issues and challenges in the future.
The AGI headache
AGI (artificial general intelligence) is the next major milestone in AI’s evolution: an AI model that understands or can learn any intellectual task a human can do. This is when our jobs will genuinely need to change, or we won’t have them.
ChatGPT is not AGI. However, Sam Altman, OpenAI’s co-founder and chief executive officer, said that is what the company aims for.
“There will be scary moments as we move towards AGI-level systems, and significant disruptions, but the upsides can be so amazing that it’s well worth overcoming the great challenges to get there,” he tweeted.
The company is already readying GPT-4 for release in the coming months, which looks to expand the use cases and functions. And predictably, we will be woefully unprepared for its impact. Again.
Winston Thomas is the editor-in-chief of CDOTrends and DigitalWorkforceTrends. He’s a singularity believer, a blockchain enthusiast, and believes we already live in a metaverse. You can reach him at [email protected].
Image credit: iStockphoto/Nuthawut Somsuk