The Engineering History of AI: Why Your LLM Hallucinations Are as Old as the 13th Century
The article traces the history of AI engineering, from Ramon Llull's 13th-century paper discs to modern large language models (LLMs) like GPT-4. It explores how combinatorial explosion, the abstraction trap, and the limits of both symbolic AI and neural networks underlie the hallucination problems in current AI systems.
Why it matters
Understanding the historical context and engineering challenges behind AI systems is crucial for developing more robust and reliable AI models that can overcome the limitations of current approaches.
Key Points
- Combinatorial explosion in early systems like Llull's paper discs parallels the "token soup" of modern LLMs (see the sketch after this list)
- Turing's abstraction trap creates a representation gap between symbols and real-world truth, which is why ImageNet classifiers fail on out-of-distribution data
- The symbolic AI of the 1980s was rigid and brittle, while today's neural nets are too fluid and prone to hallucination
- LLMs struggle with arithmetic and logical reasoning because of the limits of their token-based approach
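As a minimal sketch of the first point, the snippet below enumerates combinations of nine concepts of the kind Llull placed on his rotating discs (the concept list and disc counts are illustrative, not a reconstruction of the Ars Magna): each added disc multiplies the space, while nothing in the enumeration checks which combinations are meaningful.

```python
from itertools import product

# Nine concepts of the kind Llull put on his rotating discs (illustrative list).
concepts = ["goodness", "greatness", "eternity", "power", "wisdom",
            "will", "virtue", "truth", "glory"]

# Each extra disc multiplies the space of concept combinations by nine.
for n_discs in range(1, 5):
    combos = list(product(concepts, repeat=n_discs))
    print(f"{n_discs} disc(s): {len(combos):>5} combinations")

# 1 disc(s):     9 combinations
# 2 disc(s):    81 combinations
# 3 disc(s):   729 combinations
# 4 disc(s):  6561 combinations
# Generating candidates is cheap; deciding which ones are true is not.
```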
Details
The article traces the history of AI engineering, beginning with Ramon Llull's 13th-century paper discs, which rotated against each other to combine concepts and generate new ideas. That combinatorial explosion, producing candidates far faster than anyone can verify them, parallels the token soup of modern LLMs, which predict the next token probabilistically with no built-in truth filter. The article then turns to Turing's abstraction trap: once the world is reduced to symbols, a representation gap opens between those symbols and real-world truth, which is why ImageNet classifiers fail on out-of-distribution data.

It contrasts the symbolic AI of the 1980s, rigid and brittle, with today's neural nets, which are too fluid and prone to hallucination. Finally, it explains why LLMs struggle with arithmetic and logical reasoning, drawing a parallel to the XOR problem that stalled neural-network research in the late 1960s. The sketches below illustrate these points.
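To make the token-soup point concrete, here is a minimal, self-contained sketch of next-token sampling. The vocabulary and logits are invented, not taken from any real model, but the mechanism is the standard one: temperature-scaled softmax, then a random draw, with no step that checks the chosen token against reality.

```python
import numpy as np

# Toy next-token sampler: the model scores every candidate token, then we
# sample from the softmax distribution. Nothing in this loop checks whether
# the emitted continuation is true, only whether it is probable.
rng = np.random.default_rng(0)

vocab = ["Paris", "Lyon", "Berlin", "banana"]   # hypothetical candidates
logits = np.array([3.1, 1.2, 0.4, -2.0])        # made-up model scores

def sample_next_token(logits, temperature=1.0):
    """Sample one token id from temperature-scaled softmax probabilities."""
    z = logits / temperature
    probs = np.exp(z - z.max())
    probs /= probs.sum()
    return rng.choice(len(probs), p=probs), probs

token_id, probs = sample_next_token(logits, temperature=1.0)
print(vocab[token_id], dict(zip(vocab, probs.round(3))))
# Plausible-but-wrong tokens keep nonzero probability; fluency is rewarded,
# factuality is never checked, which is one way hallucinations slip through.
```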
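The arithmetic weakness is easiest to see at the tokenizer. The chunk vocabulary below is invented (real BPE vocabularies differ), but the greedy longest-match split shows the general failure mode: adjacent numbers get structurally different splits, so digit columns are never consistently aligned for the model.

```python
# Toy greedy longest-match tokenizer over an invented vocabulary of digit chunks.
# Real subword vocabularies differ, but the effect is similar: how a number is
# split depends on which chunks happen to exist, not on place value.
VOCAB = {"1", "2", "3", "4", "5", "6", "7", "8", "9", "0",
         "23", "57", "235", "99", "100"}

def tokenize(s: str, vocab=VOCAB, max_len=3):
    tokens, i = [], 0
    while i < len(s):
        # Take the longest chunk in the vocabulary starting at position i.
        for length in range(min(max_len, len(s) - i), 0, -1):
            piece = s[i:i + length]
            if piece in vocab:
                tokens.append(piece)
                i += length
                break
        else:
            tokens.append(s[i])   # fall back to a single character
            i += 1
    return tokens

print(tokenize("2357"))   # ['235', '7']
print(tokenize("2358"))   # ['235', '8']
print(tokenize("9923"))   # ['99', '23']
# Neighbouring numbers get structurally different splits, so column-wise
# arithmetic has no stable representation for the model to learn from.
```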
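And the XOR parallel: a single linear threshold unit cannot compute XOR because the function is not linearly separable (the observation in Minsky and Papert's 1969 Perceptrons that helped stall the field), while one hidden layer suffices. The weights below are hand-wired rather than learned, purely to show the separation.

```python
# XOR is not linearly separable: no single weighted sum plus threshold computes it.
# One hidden layer is enough; these weights are hand-wired, not learned.
def step(z):
    return int(z > 0)

def xor_net(a, b):
    h_or   = step(1.0 * a + 1.0 * b - 0.5)    # hidden unit 1: OR(a, b)
    h_nand = step(-1.0 * a - 1.0 * b + 1.5)   # hidden unit 2: NAND(a, b)
    return step(h_or + h_nand - 1.5)          # output: AND of the two = XOR

print([xor_net(a, b) for a, b in [(0, 0), (0, 1), (1, 0), (1, 1)]])  # [0, 1, 1, 0]
```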