by Larry Chiang on July 8, 2025


### Key Points
– Research suggests Peter Voss’s third wave AI focuses on cognitive architectures mimicking human intelligence, aiming for Artificial General Intelligence (AGI) with systems that integrate memory, reasoning, and metacognition.
– It seems likely that second-wave AI, like GPT-3 and GPT-4, is limited by token constraints, restricting context handling and real-time adaptability.
– The evidence leans toward third-wave AI overcoming these limitations through graph-based knowledge representation, enabling flexible, long-term interactions.
### Peter Voss’s Third Wave of AI
Peter Voss, a noted AI pioneer, describes the third wave of AI as Cognitive AI, which seeks to emulate human intelligence through cognitive architectures. These systems aim for Artificial General Intelligence (AGI) by integrating components like short-term memory, long-term memory, goals, context, metacognition, and reasoning. A key feature is the use of a super high-performance graph database, reportedly 1000 times faster than commercial graph databases, enabling real-time, autonomous learning even with incomplete data. This approach is detailed in various articles, such as [Peter Voss – The Third Wave of AI](www.linkedin.com/pulse/third-wave-ai-peter-voss) and [Cognitive AI – by Peter Voss](petervoss.substack.com/p/cognitive-ai).
### Limitations of Second-Wave AI Due to Token Constraints
Second-wave AI, dominated by Deep Learning models like GPT-3 and GPT-4, processes data in tokens, with fixed context windows (e.g., 2048 tokens for GPT-3, 8192 for GPT-4). These token limitations restrict the models’ ability to handle long conversations or complex tasks, requiring workarounds like splitting inputs or summarizing, which can lose context. They also struggle with real-time updates and abstract reasoning, often leading to hallucinations. This is supported by resources like [Understanding Token Limits in Modern AI Models](medium.com/@t.sankar85/understanding-output-token-limits-in-modern-ai-models-99f6db54c7c6) and [What are tokens and how to count them? | OpenAI Help Center](help.openai.com/en/articles/4936856-what-are-tokens-and-how-to-count-them).
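To make the window arithmetic concrete, the sketch below counts tokens with OpenAI's open-source tiktoken tokenizer and checks a long input against the fixed windows quoted above. It is only a minimal illustration, assuming tiktoken is installed; the "gpt-3" and "gpt-4" keys are informal labels for the limits cited here, not API model identifiers.

```python
import tiktoken  # pip install tiktoken

CONTEXT_WINDOWS = {"gpt-3": 2048, "gpt-4": 8192}  # limits cited above

def fits_in_window(text: str, model: str) -> bool:
    # cl100k_base is the tiktoken encoding used by GPT-4-era models.
    enc = tiktoken.get_encoding("cl100k_base")
    n_tokens = len(enc.encode(text))
    print(f"{n_tokens} tokens vs. a {CONTEXT_WINDOWS[model]}-token window for {model}")
    return n_tokens <= CONTEXT_WINDOWS[model]

transcript = "word " * 10_000  # stand-in for a long meeting transcript
if not fits_in_window(transcript, "gpt-4"):
    print("Input exceeds the context window and must be split or summarized.")
```

Once an input fails this check, the only options within a second-wave model are the workarounds discussed below: truncate, split, or summarize.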
### How Third-Wave AI Addresses These Limitations
Third-wave AI overcomes token limitations by using a graph-based knowledge representation instead of fixed token sequences. This allows for flexible, efficient handling of large, dynamic contexts, enabling systems to reason over extensive datasets and maintain coherence in long-term interactions, addressing the shortcomings of second-wave AI.
### Detailed Survey Note: Exploring Peter Voss’s Third Wave AI and Second-Wave AI’s Token Limitations
This note provides a comprehensive analysis of Peter Voss’s concept of the third wave of AI, focusing on its characteristics and how it addresses the limitations of second-wave AI, particularly due to token constraints. The discussion is informed by recent online resources and aligns with the current understanding as of July 8, 2025.
#### Background on Peter Voss and AI Waves
Peter Voss, a serial entrepreneur and AI pioneer, has extensively written about the evolution of AI, categorizing it into three waves. His work, accessible through platforms like LinkedIn and Medium, positions him as a key figure in advancing Artificial General Intelligence (AGI). The first wave involved rule-based, symbolic AI, effective for specific tasks but lacking adaptability. The second wave, dominated by Deep Learning and statistical models, marked significant progress in natural language processing and image generation, exemplified by models like GPT-3 and GPT-4. However, Voss argues that the third wave, Cognitive AI, represents a paradigm shift toward systems that mimic human cognitive processes, aiming for AGI.
#### Characteristics of Third-Wave AI
Voss’s third-wave AI, or Cognitive AI, is characterized by cognitive architectures designed to emulate human intelligence. These architectures integrate various cognitive components, as outlined in his Substack post [Cognitive AI – by Peter Voss](petervoss.substack.com/p/cognitive-ai):
– **Short-term memory (STM)** and **long-term memory (LTM)** for storing and retrieving information.
– **Goals** and **context** to guide behavior and decision-making.
– **Metacognition** (self-awareness) and **reasoning** for higher-level, System 2 thinking, as defined in [System 2 Thinking](en.wikipedia.org/wiki/Thinking,_Fast_and_Slow).
A critical technological enabler is the use of a **super high-performance graph database**, claimed to be 1000 times faster than the best commercial graph databases, detailed in a whitepaper at [https://arxiv.org/pdf/2309.01622.pdf]. This database supports real-time, autonomous, goal-directed learning, even with incomplete or contradictory data, as noted in the same post. This integration is intended to enable systems to handle complex, long-term tasks, addressing limitations of previous waves.
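The component list above is abstract. Purely as a hypothetical illustration (Voss's actual architecture is not published in code form), the pieces might be wired together along these lines, with perception landing in short-term memory, consolidation into long-term memory, and a metacognitive self-report:

```python
from dataclasses import dataclass, field

@dataclass
class CognitiveAgent:
    """Toy sketch of the components Voss names; not a real AGI system."""
    short_term_memory: list = field(default_factory=list)   # recent observations
    long_term_memory: dict = field(default_factory=dict)    # consolidated knowledge
    goals: list = field(default_factory=list)                # active objectives

    def perceive(self, observation: str) -> None:
        # New input lands in STM; consolidation into LTM is a separate step.
        self.short_term_memory.append(observation)

    def consolidate(self) -> None:
        # Move items from STM into LTM, keyed by a crude "topic" (first word).
        while self.short_term_memory:
            item = self.short_term_memory.pop(0)
            self.long_term_memory.setdefault(item.split()[0].lower(), []).append(item)

    def metacognition(self) -> str:
        # Minimal self-monitoring: report what is known and what is pending.
        return (f"{len(self.long_term_memory)} topics in LTM, "
                f"{len(self.short_term_memory)} items pending, "
                f"{len(self.goals)} open goals")

agent = CognitiveAgent(goals=["answer user questions"])
agent.perceive("tokens limit second-wave context windows")
agent.consolidate()
print(agent.metacognition())
```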
Voss’s vision, as discussed in [Peter Voss – The Third Wave of AI](www.linkedin.com/pulse/third-wave-ai-peter-voss), emphasizes that third-wave AI can tackle global challenges like disease, poverty, and environmental issues, suggesting a broader application scope compared to earlier waves.
#### Second-Wave AI and Token Limitations
Second-wave AI, primarily Deep Learning models, relies on transformer architectures and statistical methods, processing data in units called tokens. Tokens are chunks of words or characters, and models like GPT-3 (2048 tokens) and GPT-4 (8192 tokens) have fixed context windows, as explained in [What are tokens and how to count them? | OpenAI Help Center](help.openai.com/en/articles/4936856-what-are-tokens-and-how-to-count-them). These limitations create several challenges:
– **Limited Context Handling**: The fixed token limit restricts the amount of information processed in a single interaction. For tasks requiring long conversations or extensive documents, models must split inputs, potentially losing context, as noted in [Understanding Token Limits, Pricing, and When to Use Large Context Models](medium.com/@hernanimax/understanding-token-limits-pricing-and-when-to-use-large-context-models-0dcb06e724d2). A sketch of this chunk-and-summarize workaround appears after this list.
– **Inefficiency in Real-Time Interaction**: These models cannot dynamically update their core knowledge in real-time, requiring retraining or fine-tuning, which is resource-intensive. This is highlighted in discussions on enterprise LLM inference, such as [Tokens Per Second is Not All You Need](sambanova.ai/blog/tokens-per-second-is-not-all-you-need).
– **Hallucination and Reasoning Issues**: Due to their statistical nature, second-wave models often generate incorrect information (hallucinations) and struggle with abstract reasoning and planning, as mentioned in Voss’s Substack post. This is a consensus in the field, as seen in [3 Strategies to Overcome OpenAI Token Limits](www.bretcameron.com/blog/three-strategies-to-overcome-open-ai-token-limits).
– **Resource Intensity**: The need for massive data and computational power, as discussed in [Explaining Tokens — the Language and Currency of AI](blogs.nvidia.com/blog/ai-tokens-explained/), limits scalability and practicality for real-world applications.
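As a concrete illustration of the "split inputs" workaround in the first bullet, the minimal chunker below (an illustrative helper, not any particular product's implementation) cuts a long document into window-sized pieces that are summarized one at a time and then stitched together. Each per-chunk call sees only its own slice, which is exactly where cross-references and context get lost.

```python
import tiktoken  # pip install tiktoken

def chunk_by_tokens(text: str, max_tokens: int = 2000) -> list[str]:
    """Split text into pieces that each fit a fixed token budget."""
    enc = tiktoken.get_encoding("cl100k_base")
    tokens = enc.encode(text)
    return [enc.decode(tokens[i:i + max_tokens])
            for i in range(0, len(tokens), max_tokens)]

def summarize(chunk: str) -> str:
    # Placeholder for a per-chunk model call; each call sees only its own
    # chunk, so references to other chunks are invisible to it.
    return chunk[:80] + "..."

long_doc = "Section text. " * 5000
partial_summaries = [summarize(c) for c in chunk_by_tokens(long_doc)]
final_summary = summarize(" ".join(partial_summaries))  # second pass over summaries
```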
These limitations are evident in practical scenarios, such as AI transcription tools (e.g., [The Best AI Transcription & Summarization Powered By GPT-4 & GPT-4o](www.supernormal.com/blog/free-gpt4-meeting-transcription)), where token limits necessitate workarounds like breaking text into smaller pieces.
#### Comparative Analysis: How Third-Wave AI Overcomes Token Limitations
Voss’s third-wave AI addresses these token-related shortcomings through its cognitive architecture and graph-based knowledge representation. The Substack post [Cognitive AI – by Peter Voss](petervoss.substack.com/p/cognitive-ai) details that, unlike second-wave AI’s token-based processing, third-wave AI uses a flexible system that integrates all cognitive components seamlessly. The graph database, reportedly far faster than commercial alternatives, enables the following (a toy sketch of the idea appears after this list):
– **Flexible Context Handling**: Instead of fixed token sequences, the system can reason over large, dynamic datasets, maintaining coherence across long-term interactions.
– **Real-Time Adaptability**: The architecture supports autonomous, goal-directed learning in real-time, addressing the second-wave’s inability to update incrementally, as noted in the post.
– **Enhanced Reasoning**: By integrating metacognition and reasoning, third-wave AI can handle abstract tasks and explain actions, reducing hallucination risks.
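Because the internals of Voss's graph database are only described at a high level, the following is a toy triple-store sketch of the general idea rather than the real system: facts are stored as subject-relation-object edges, new facts can be added incrementally with no retraining step, and a query retrieves only the relevant subgraph, so the usable context is bounded by relevance rather than by a fixed token window.

```python
from collections import defaultdict

class ToyKnowledgeGraph:
    """Illustrative triple store; not Voss's high-performance graph database."""
    def __init__(self):
        self.edges = defaultdict(list)  # subject -> [(relation, object), ...]

    def add_fact(self, subject: str, relation: str, obj: str) -> None:
        # Incremental, real-time update: no retraining or fine-tuning involved.
        self.edges[subject].append((relation, obj))

    def context_for(self, subject: str, depth: int = 2) -> list[tuple]:
        # Pull only the facts reachable from the query subject, however old.
        facts, frontier = [], [subject]
        for _ in range(depth):
            next_frontier = []
            for s in frontier:
                for rel, o in self.edges[s]:
                    facts.append((s, rel, o))
                    next_frontier.append(o)
            frontier = next_frontier
        return facts

kg = ToyKnowledgeGraph()
kg.add_fact("second-wave AI", "limited by", "token windows")
kg.add_fact("token windows", "cause", "context loss")
kg.add_fact("third-wave AI", "uses", "graph knowledge")
print(kg.context_for("second-wave AI"))
```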
This is supported by comparisons in Voss’s articles, such as [Why Machine Learning won’t cut it](medium.com/@petervoss/why-machine-learning-wont-cut-it-f523dd2b20e3), which highlight second-wave AI’s poor performance in language comprehension and reasoning compared to the third wave’s capabilities.
#### Supporting Evidence and References
The discussion is informed by multiple sources, including:
– DARPA’s Three Waves of AI framework: [https://www.darpa.mil/attachments/AIFull.pdf]
– Definitions of related concepts like GOFAI, Deep Learning, and Transformers: [https://en.wikipedia.org/wiki/Symbolic_artificial_intelligence], [https://en.wikipedia.org/wiki/Timeline_of_machine_learning], [https://en.wikipedia.org/wiki/Transformer_(deep_learning_architecture)]
– Practical implications in enterprise settings, as seen in [AI and your Bill of Materials: why token limits are nothing new](www.quickrelease.co.uk/insights/ai-and-your-bill-of-materials:-why-token-limits-are-nothing-new)
#### Conclusion
Peter Voss’s third-wave AI, through cognitive architectures and graph-based systems, offers a promising approach to overcome the token limitations of second-wave AI. By enabling flexible, real-time, and reasoning-capable systems, it addresses the constraints of context handling, adaptability, and resource intensity, paving the way for AGI and broader application in solving complex global challenges.
