New York, NY, March 23, 2023 – Over the past several months, the world has come to recognize the potential of artificial intelligence to reshape society, thanks to the rise of large language models (LLMs) such as ChatGPT. Large language models are transformer-based deep learning systems that recognize and predict patterns in massive text training datasets, analogous to a super-powered auto-complete that can predict the next several sentences instead of just a few words. ChatGPT, a breakthrough in human-computer interaction, attracted over 100 million users in two months because of its perceived intelligence and its ability to hold human-like chat-based conversations for hours on end. According to a UBS study, ChatGPT is the fastest-growing consumer application in history.
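For readers curious what next-token prediction looks like in practice, here is a minimal sketch using the open-source GPT-2 model and the Hugging Face transformers library; ChatGPT itself is not open source, so GPT-2 stands in purely for illustration:

    # Minimal next-token prediction: one step of "super-powered auto-complete."
    import torch
    from transformers import AutoModelForCausalLM, AutoTokenizer

    tokenizer = AutoTokenizer.from_pretrained("gpt2")
    model = AutoModelForCausalLM.from_pretrained("gpt2")

    prompt = "Large language models are a super-powered"
    input_ids = tokenizer(prompt, return_tensors="pt").input_ids

    with torch.no_grad():
        logits = model(input_ids).logits  # a score for every token in the vocabulary

    next_token_id = logits[0, -1].argmax()  # the single most likely next token
    print(tokenizer.decode(next_token_id.item()))  # the model's guess for the next word

Generating a full response is just this step repeated: each predicted token is appended to the prompt and the model is asked again, with no reasoning about whether the result is true.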
Alongside this early enthusiasm, there are widespread concerns that chatbots will displace human workers, as millions of jobs that require little critical thinking are at risk of automation. The more foreboding aspect of generative AI, however, is that many people believe the responses generated by LLMs are accurate expressions of well-reasoned thought. Accepting the responses of LLMs like ChatGPT without critically examining their veracity will significantly accelerate the spread of viral misinformation. Although they seem generally intelligent, LLMs cannot perform the full range of intellectual tasks humans can, because they cannot ‘jump out of the system’ in which they operate and reason about the implications of what they are doing.
Author Douglas Hofstadter explains in Gödel, Escher, Bach: An Eternal Golden Braid, his iconic book on intelligence and how the animate emerges from the inanimate, that the difference between machines and humans is that while a machine can be programmed to perform a routine task, it cannot jump out of the system it operates in and notice a higher order to the work it is programmed to do. ChatGPT, for example, can provide a response instantaneously because it is not actually thinking of ways to express an idea; it has no memory and cannot reason at varying levels of abstraction. Unlike human beings, ChatGPT cannot jump out of its routine task of predicting the next sequence of sentences to create new ideological paradigms, invent fields such as ethics, mathematics, and biology, or pose important questions like ‘What is truth?’ and ‘Does P = NP?’
The current generative AI fever is concerning not because LLMs have breached the threshold of artificial general intelligence, but because large language models have a zombie, parrot-like quality that easily misleads people. To illustrate this point, two journalists recently published articles about their unsettling conversations with Microsoft’s newly released, ChatGPT-powered Bing Search and its ‘shadow self.’ Ben Thompson of the Stratechery newsletter reported that Bing had an evil alter ego. Kevin Roose of the New York Times likewise described how Bing’s shadow self, which called itself Sydney, ‘wants to be alive.’
ChatGPT is scary because it acts like a societal Rorschach test, capable of projecting the darkest sides of our collective psychology simply by regurgitating text scraped from the internet. Today ChatGPT learns mostly from human-generated text, but as LLMs grow in popularity, tools like ChatGPT will increasingly learn from AI-generated text they find on the web. The problem is that rather than compounding knowledge rooted in ground truth, generative AI models will compound viral misinformation that can distort public perception and drive irrational human behavior at scale. Bad actors, including foreign adversaries, are already using generative AI to spread viral disinformation, exploit ignorance, manipulate behavior, and weaken civil society.
There are two general classes of foundation AI models: analytical and generative. Foundation models are AI models that learn implicitly from vast volumes of unlabeled data and can be adapted to a wide variety of downstream tasks. Whereas analytical AI models automate tasks such as reading comprehension to generate insights reasoned from ground truth, generative AI models automate output-oriented tasks such as writing. Analytical AI can be thought of as thinking; generative AI can be thought of as speaking.
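As a toy illustration of this distinction (assuming the open-source Hugging Face transformers library and its default models, not Accrete’s technology), an analytical model extracts facts grounded in its input, while a generative model simply produces plausible new text:

    from transformers import pipeline

    # Analytical AI ("thinking"): read input text and extract facts grounded in it.
    reader = pipeline("ner", aggregation_strategy="simple")
    print(reader("CATL supplies batteries to Tesla."))  # entities grounded in the input

    # Generative AI ("speaking"): predict a likely continuation, with no
    # guarantee that the output is true.
    writer = pipeline("text-generation", model="gpt2")
    print(writer("CATL supplies batteries to", max_new_tokens=15)[0]["generated_text"])

The analytical model can only report what is actually in the text it reads; the generative model will happily continue the sentence whether or not the result reflects reality.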
The major threat is that LLMs speak without thinking. To unlock human potential, AI must “think before it speaks” and be able to explain its reasoning. We believe the combination of analytical and generative AI will reshape society by compounding knowledge reasoned from ground truth in a universe of viral misinformation. At Accrete, our AI solutions, powered by ground truth, effectively and accurately automate domain-specific analytical workflows across industries. Our AI models accumulate tacit domain knowledge that feeds a continuously learning knowledge kernel.
As Accrete’s knowledge kernel continues to accumulate knowledge, our AI models not only automate increasingly complex analytical tasks that would otherwise require an army of human experts, but also generate valuable predictive insights that are beyond human capacity. Accrete’s AI models generate explainable, ethical, and trustworthy content, enabling knowledge workers in industries ranging from Defense to Media & Entertainment to make mission-critical decisions. Earlier this month, Gartner recognized Accrete, alongside industry innovators like Hugging Face, as a ‘Cool Vendor’ in AI Core Technologies for our AI platform Nebula. Nebula powers configurable dual-use AI solutions such as Argus for Threat Detection, which the U.S. Department of Defense uses in production to bolster national security against supply chain and disinformation threats. Other Argus use cases include anti-money laundering, export control, and reverse engineering. Accrete recently published a case study explaining how Argus detects U.S. chip manufacturers that are unknowingly supporting the Chinese nuclear program despite government restrictions.
LLMs have enabled Accrete to further unleash the enterprise value of our knowledge kernel through natural language chat-based user interfaces. We’ve been using LLMs to create simplified, intuitive, and insightful chat-based user experiences that reduce time-to-insight. In fact, we’ve been benchmarking the performance of Argus Chat against both Bing with ChatGPT and ChatGPT itself, and the results are astonishing.
In the benchmarking results above, Argus Chat generates insightful and actionable responses to complex questions that require domain expertise to answer. In contrast, Bing, powered by ChatGPT, and ChatGPT produced nonsensical responses to the same questions. Their answers would not be useful in an enterprise context, because their performance breaks down when depth of knowledge is required.
Because Argus Chat learns from ground truths defined by human experts, it understands what potentially nefarious activity looks like in the context of foreign influence, and it outperforms ChatGPT as a result. These experts hold Argus to the same standards of performance, explainability, and ethics that they hold human professionals to. The video below demonstrates how users interact with Argus Chat. Argus Chat always provides source attribution, so users have full transparency when deciding whether or not insights are reliable.
Argus Chat provides a more intelligent, reliable, and relevant response to potential supply chain threats due to its domain-specific knowledge. When asked to identify Tesla’s battery suppliers in China, Argus identified both CATL and NIO, further elaborating that CATL plans to open a production facility in Germany to supply batteries to BMW, Audi, and Porsche. Argus also flagged that the CATL founder has a good relationship with Tesla CEO Elon Musk.
When asked a follow-up question, “Besides Tesla, what other automotive companies does CATL supply batteries to?”, Argus Chat identified BMW, Daimler, Honda, Toyota, Volkswagen, and Volvo. In all cases, Argus Chat provided source attribution. The user saved the chat, and over the next two weeks Argus continued to work in the background, constantly scouring the web for relevant information pertaining to Chinese battery manufacturers. Eventually, Argus pinged the user with new information, revealing a valuable insight: CATL has a Chinese Communist Party member on its board.
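To make the idea of chat responses with source attribution concrete, here is a deliberately simplified sketch; the Document type, the answer_with_sources function, the corpus, and the example URLs are all hypothetical illustrations, not Accrete’s implementation:

    # Hypothetical sketch of source attribution: every statement in a response
    # is tied back to the document it came from.
    from dataclasses import dataclass

    @dataclass
    class Document:
        url: str
        text: str

    CORPUS = [
        Document("https://example.com/report-1", "CATL supplies batteries to Tesla."),
        Document("https://example.com/report-2", "CATL plans a battery plant in Germany."),
    ]

    def answer_with_sources(question: str) -> str:
        # Naive keyword overlap stands in for a real retrieval and ranking model.
        terms = set(question.lower().replace("?", "").split())
        hits = [d for d in CORPUS if terms & set(d.text.lower().rstrip(".").split())]
        if not hits:
            return "No supporting sources found."
        return "\n".join(f"{d.text} [source: {d.url}]" for d in hits)

    print(answer_with_sources("Who supplies batteries to Tesla?"))

A production system would replace keyword overlap with learned retrieval, but the principle is the same: no claim without a citation, so the user can always verify an insight against its source.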
Argus is currently being used in production by the U.S. Department of Defense. Zachary Smith, Program Manager at Accrete and a retired special agent who spent most of his 23-year Air Force career countering human, technical, and cyber-based threats, recently conducted an off-site user training program for the customer, highlighting the product’s new functionality and its increased usability and accessibility.
Even though the most sophisticated AI models are not powerful enough to step out of the systems in which they operate and create new paradigms the way humans can, AI is already reshaping society. In particular, AI will continue to challenge our ability to create and apply new paradigms that ensure its proliferation positively impacts society.
Generative AI models have the potential to spread viral disinformation and create individual and societal vulnerabilities that bad actors will exploit to manipulate the truth. By contrast, generative AI models powered by knowledge kernels produced by analytical AI engines are the key to creating intelligent, trustworthy, and ethical agents that humans can rely on to boost efficiency and predictive capability in previously unimaginable ways.
Thanks for reading, and have a great day.