Artificial Intelligence: The New Exponential Threat
With the recent release of ChatGPT, we are witnessing the exponential adoption of a new technology: “large language models” (LLMs) that will, without question, transform society. It calls to mind the famous computer character HAL from 2001: A Space Odyssey. We all recall the introduction of personal computers, the first web browsers, and the familiar dial-up access to AOL. Within years, the “dot-com” explosion followed.
We are on the precipice of a new and even more accelerated era. In November 2022, ChatGPT was released. A million people used it within one week; within two months, 100 million. Users began drafting letters, school papers, and speeches with it. In response, rival firms rushed to release competing chatbots.
Throughout human history, new technologies, even the printing press, have been viewed with skepticism and fear of change. But this new technology feels different given its scope and immediate impact. LLM systems have become a focal point for a more general fear of the future.
The fears are familiar: replacement of workers, loss of jobs, and even machine takeovers of humanity. In the past, new technologies have often displaced manual laborers. LLM systems, however, present a real possibility of replacing white-collar professionals.
At bottom, there is a fear of losing control of our “civilization.” As a result, some have advocated stringent regulation and even a temporary “pause” in deployment. Italy even temporarily banned ChatGPT.
The capabilities of LLM systems are remarkable and offer significant benefits. LLM systems will transform our relationship with computers. AI advocates claim these new capabilities can help develop new drugs, fight climate change, and transform worker productivity. New technologies have fueled significant productivity gains before: the Internet radically altered our society, economy, social interactions, and culture.
On the other hand, the new technology presents a number of real and substantive risks, fueled by fears that it will move beyond control. LLM systems create a significant risk of vast quantities of disinformation. The Internet has already become, in many corners, a refuge for disinformation, and it is easy to see how LLM systems could multiply the amount of false content exponentially. If false information is targeted at specific events, such as a local or national election, LLM systems raise a serious risk of further election disruption.
Aside from disinformation, existing AI systems can incorporate significant bias, with disastrous consequences when an AI system replaces human analysis and review. AI systems also threaten protections for intellectual property, copyrighted works, and images, as well as sensitive personal information. Furthermore, the accuracy of AI-generated information remains questionable, as evidenced by the attorney who submitted a court brief created by AI that cited fake cases.
Moreover, as with any new technology, fraudsters will quickly develop schemes using AI-generated voices, fake information, and fake images, produced rapidly and transmitted across the Internet. Criminals are imaginative, and access to LLM systems will quickly become another tool in the criminal’s toolbox.
Governments are wrestling with regulation. Some are leaning toward “light-touch” regulation, while others are considering new regulatory agencies, licensing schemes, and proactive risk mitigation. The European Union appears to be taking a tough line on future regulation. The EU’s framework includes a risk-classification system and a principles-based approach, and, just as in other regulated industries, the EU appears poised to require inspections and monitoring of AI systems.
In the United States, Congress has conducted hearings to explore the risks and need for regulation. Federal agencies are launching initiatives to address risks in specific regulated industries. Some argue for strict testing, pre-approval and licensing before public release.
The balancing of risks and benefits has to continue. A heavy-handed approach to regulation may stifle innovation and frustrate the achievable benefits of AI technology. ChatGPT has confirmed the explosion of new AI technology. LLM systems even enable users to generate their own code and learn how to build their own language-based systems.
Companies are rapidly developing new business ventures built on language-based systems. Media companies are poised to develop focused models for information consumption, and new start-ups with creative missions are growing rapidly.
The sheer capabilities of these new LLM systems are mind-boggling. Large processing capabilities, coupled with vast quantities of data, can threaten humans’ access to reliable and meaningful information. Computing power, however, costs money: electricity, skilled labor, training, and other resources.
Companies need to catch up quickly with this new technology. It will become part of every workplace, every corporate function, and human culture at large. The risks have to be identified and addressed. It is important to remember the old admonition: “The Future is Now.”