Beyond the Gaps of Weak AI: Deep Learning as a Path to Artificial General Intelligence

Image credit: DALL·E 2

“I have no explanation for complex biological design. All I know is that God isn’t a good explanation, so we must wait and hope that somebody comes up with a better one” — Richard Dawkins

Thinking Magically

First introduced in 1955 by Charles Alfred Coulson and later popularized by Richard Dawkins in his 2006 book "The God Delusion," the "God of the gaps" concept highlights the use of divine explanations to account for gaps in our scientific understanding. In the realm of artificial intelligence (AI) and artificial general intelligence (AGI), passionate debates often echo religious fervor, with some ascribing unique or even mystical qualities to the human brain's capacity for intelligence. Part of the resistance to the idea that Deep Learning can achieve AGI stems from the possibility that the human brain does not function like a Turing machine. Sir Roger Penrose, for instance, has proposed the Orch-OR theory, suggesting that quantum processes play a role in human consciousness and, consequently, intelligence. This essay explores AI and AGI through an open-minded lens, contemplating the possibility that Deep Learning and generative AI could lead to AGI, while also addressing their limitations and the complexities of human intelligence. We will not, however, examine security and malignant AGI; those topics are left to future essays.

AI Models and Deep Learning Challenges

Over a century ago, groundbreaking advances in physics led to remarkable discoveries in the atomic and subatomic domains. Today, quantum phenomena such as tunneling and quantum noise constrain processor technology, and the progress of Moore's law has decelerated under these limits and rising power consumption. In the meantime, deep learning, powered by the backpropagation algorithm, has emerged as the driving force behind AGI development. Nevertheless, these models face various challenges – which we address below – that preclude them from being AGIs. Whether these challenges will ultimately limit deep learning's ability to achieve AGI, or merely represent current gaps in our understanding of these algorithms, is not yet known.
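
Since backpropagation carries so much of the load here, a minimal sketch may help. The toy NumPy network below learns XOR by propagating error gradients backward through one hidden layer; the architecture, learning rate, and squared-error loss are illustrative assumptions, not a recipe from any particular system.

```python
# A minimal sketch of backpropagation: a tiny network learning XOR.
import numpy as np

rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

W1 = rng.normal(size=(2, 4)); b1 = np.zeros(4)   # hidden layer
W2 = rng.normal(size=(4, 1)); b2 = np.zeros(1)   # output layer

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

for step in range(5000):
    # Forward pass
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)
    # Backward pass: push the error gradient back layer by layer
    d_out = (out - y) * out * (1 - out)   # dLoss/dz2 for squared error
    d_h = (d_out @ W2.T) * h * (1 - h)    # dLoss/dz1 via the chain rule
    # Gradient-descent update
    W2 -= 0.5 * h.T @ d_out;  b2 -= 0.5 * d_out.sum(axis=0)
    W1 -= 0.5 * X.T @ d_h;    b1 -= 0.5 * d_h.sum(axis=0)

print(out.round(2))  # typically approaches [[0], [1], [1], [0]]
```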

AI models like GPT-4 can generate text and answer questions but struggle with what are known as hallucinations: cases where the model returns a wrong answer to a prompt with high confidence. To address this, researchers could develop more advanced methods for model interpretability, allowing AI systems to recognize when they should respond with "I don't know" rather than attempting to answer every query.
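
One simple way to picture the "I don't know" behavior is a classifier that abstains when its confidence is low. The sketch below is a toy illustration of that idea; the labels, logits, and threshold are invented for the example, and this is not how GPT-4 actually handles uncertainty.

```python
# A toy sketch of "answer or abstain" via a softmax confidence threshold.
import numpy as np

def softmax(logits):
    z = logits - logits.max()
    e = np.exp(z)
    return e / e.sum()

def answer_or_abstain(logits, labels, threshold=0.75):
    probs = softmax(np.asarray(logits, dtype=float))
    best = int(probs.argmax())
    if probs[best] < threshold:
        return "I don't know"   # abstain instead of guessing
    return labels[best]

labels = ["Paris", "Lyon", "Marseille"]
print(answer_or_abstain([4.0, 0.5, 0.2], labels))  # confident -> "Paris"
print(answer_or_abstain([1.1, 1.0, 0.9], labels))  # uncertain -> "I don't know"
```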

Generative AI also suffers from other issues, such as token constraints that limit the number of inputs these models can process and outputs they can generate, and difficulty maintaining context in long conversations. Could these be addressed with more powerful hardware? That question raises the related problems of high computational cost and the large amounts of data needed to train these models. The efficiency and sustainability of this approach deserve scrutiny: the human brain's ability to generalize from far less data suggests that more efficient learning algorithms may await discovery (assuming the brain is indeed a Turing machine). Yet, despite these limitations, the success of these models suggests that AI can be highly effective at solving complex problems without exactly replicating human thinking.
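
To make the context-window limitation concrete, here is a minimal sketch of one common workaround: dropping the oldest conversation turns until what remains fits a token budget. The whitespace "tokenizer" and the budget value are simplifying assumptions; production systems use model-specific tokenizers.

```python
# A minimal sketch of fitting a conversation into a fixed context window.
def count_tokens(text: str) -> int:
    return len(text.split())  # crude stand-in for a real tokenizer

def trim_history(turns: list[str], budget: int = 4096) -> list[str]:
    kept, used = [], 0
    for turn in reversed(turns):      # walk from the newest turn backward
        cost = count_tokens(turn)
        if used + cost > budget:
            break                     # oldest context gets dropped
        kept.append(turn)
        used += cost
    return list(reversed(kept))       # restore chronological order

history = ["user: hello", "bot: hi!", "user: summarize our chat so far"]
print(trim_history(history, budget=8))  # the oldest turn is dropped
```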

The Really Hard Problems

To better understand AGI and its development, let's recognize that human intelligence comprises various components, such as verbal reasoning, visual and dreaming faculties, conscious experience, and emotions. So far, the current Deep Learning approach has achieved much success in verbal reasoning, visual recognition, and dreaming faculties – as seen with generative AI – and some success in recognizing emotions (sentiment analysis; a toy sketch follows below). Yet little progress has been made on the hard problems of conscious experience and felt emotion.
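
As a toy illustration of the sentiment-analysis success just mentioned, the sketch below scores text against small word lists. The lexicons are illustrative assumptions; real systems learn these associations from data.

```python
# A toy lexicon-based sentiment scorer.
POSITIVE = {"great", "love", "wonderful", "happy"}
NEGATIVE = {"awful", "hate", "terrible", "sad"}

def sentiment(text: str) -> str:
    words = text.lower().split()
    score = sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)
    return "positive" if score > 0 else "negative" if score < 0 else "neutral"

print(sentiment("I love this wonderful essay"))  # positive
print(sentiment("What an awful take"))           # negative
```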

Practitioners and AGI researchers should consider these components when creating AGI. They should develop and combine AI systems that emulate these individual intelligence aspects. Incorporating these components into AGI research and development may lead to more comprehensive AI systems that can approach human-like intelligence without being exact replicas of human intelligence.

The debate about whether AGI requires consciousness, or agency for that matter, remains unresolved. Philosophers like John Searle argue that computers can never genuinely think, using thought experiments like the Chinese Room to illustrate the point. On the other hand, proponents of AI suggest that intelligence may not require consciousness, and that models like ChatGPT can be considered intelligent even without these qualities. For all intents and purposes, they are intelligent in verbal reasoning.

The challenges faced by Deep Learning models, such as hallucinations and the need for extensive data, should not discourage the pursuit of human-level AI. Advances in model interpretability, learning algorithms, and neuromorphic computing – which seeks to emulate the brain's neural functions – have the potential to address these limitations.
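
As a flavor of what neuromorphic hardware emulates, here is a minimal sketch of a leaky integrate-and-fire neuron, the basic unit in many spiking-network designs. The leak and threshold values are illustrative assumptions, not parameters of any particular chip.

```python
# A minimal sketch of a leaky integrate-and-fire (LIF) neuron.
def simulate_lif(inputs, leak=0.9, threshold=1.0):
    """Integrate input current each step; fire and reset at threshold."""
    v, spikes = 0.0, []
    for current in inputs:
        v = leak * v + current    # membrane potential decays, then integrates
        if v >= threshold:
            spikes.append(1)      # spike emitted
            v = 0.0               # reset after firing
        else:
            spikes.append(0)
    return spikes

print(simulate_lif([0.3, 0.3, 0.3, 0.3, 0.0, 0.9, 0.9]))
# -> [0, 0, 0, 1, 0, 0, 1]: slow inputs accumulate to a spike; strong ones fire sooner
```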

In the quest for AGI, a comprehensive understanding of human intelligence may inform our approach to creating artificial humans using silicon-based systems. Engineers and researchers will continue to push boundaries, developing increasingly useful and powerful AI applications. Speculation about AGI's potential will undoubtedly persist, but the primary focus should be on harnessing AI advancements to benefit humanity. We can progress toward AGI by examining AI from diverse perspectives, acknowledging its limitations, and understanding the complexities of human intelligence.

Do Submarines Swim?

On October 9th, 1903, The New York Times predicted that developing flying machines would take between one and ten million years and that attempts to create them would be futile and financially imprudent. Yet, a mere nine weeks later, the Wright brothers made their historic flight. Their success hinged on grasping the relationship between lift, weight, and air density. The Wright brothers revolutionized aviation using an internal combustion engine, and just over 30 years later, jet engines signaled the dawn of commercial flight. Notably, the mechanics of these human-engineered flying machines differ significantly from the natural evolution of flight in animals.

Examining the current generation of AI technology, such as generative AI, we must avoid interpreting its limitations as definitive proof that AGI is unattainable. Falling into that trap would be akin to committing the "God of the gaps" fallacy. Lacking an all-encompassing theory of "thinking," engineers should keep building and improving models, while parallel research into how the brain works continues to be funded. These known unknowns mean we may yet hit a barrier in AGI research, or we may solve it through the current Deep Learning approach. Should we overcome these challenges and unlock the full potential of AI and AGI, we could benefit humanity in unimaginable ways.

This essay was written in collaboration with #ChatGPT4, and here’s a link to how I used ChatGPT4. See the comments for more details.

Edits:

  • A previous version of the title said “The Path” instead of “A Path,” and the text said “unknown unknowns” instead of “known unknowns.” Both changes make more sense to me.
  • I added references as hyperlinks, mostly coming from ChatGPT.
