Summary
The future of Artificial General Intelligence (AGI) development should prioritize decentralization, human-centeredness, and adaptability to ensure a beneficial and empathetic collaboration between humans and advanced AI systems.
Nature of Intelligence and AGI Development
Intelligence is defined as the ability to achieve complex goals in complex environments, where complexity is measured by minimum description length — a mathematical framework that determines how much information is needed to describe a system.
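The minimum-description-length idea can be sketched with compression: the source does not give code, but a compressed size is a standard, crude upper bound on how much information a system's description needs. A minimal sketch using zlib as that proxy:

```python
import random
import zlib

def description_length(data: bytes) -> int:
    """Crude upper bound on minimum description length:
    the size of the zlib-compressed representation."""
    return len(zlib.compress(data, 9))

# A highly regular string has a short description...
regular = b"ab" * 500

# ...while random-looking data of the same length does not.
random.seed(0)
noisy = bytes(random.randrange(256) for _ in range(1000))

print(description_length(regular) < description_length(noisy))  # True
```

Real compressors only approximate the true minimum description length, but the ordering (regular systems are simpler to describe than disordered ones) is what the definition of complexity relies on.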
LLMs possess breadth of intelligence from large training data but fundamentally lack human-like generalization and creativity — achieving AGI requires combining them with algorithms capable of abstraction, world modeling, and idea evaluation against reality.
According to Ben Goertzel's timeline projections, human-level AGI is possible within 2 to 7 years and could give rise to superintelligence within a few years after that, leaving machines immeasurably smarter than humans within roughly a decade.
Intelligence fundamentally involves generating ideas and testing them against observed reality — LLMs currently lack both the generative creativity and the instinct for evaluating ideas against reality that are crucial for human-like general intelligence.
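The generate-and-test loop described above can be illustrated with a toy example (my own construction, not from the source): propose candidate "ideas" at random and keep whichever one best matches observed reality, here a slope for the relation y = 3x.

```python
import random

# Generate-and-test: random hypothesis generation plus evaluation
# against observations. The data and search range are illustrative.
random.seed(0)
observations = [(x, 3 * x) for x in range(10)]

def error(a: float) -> float:
    """How badly the idea 'y = a * x' disagrees with observed reality."""
    return sum((a * x - y) ** 2 for x, y in observations)

best = random.uniform(-10, 10)
for _ in range(2000):
    candidate = random.uniform(-10, 10)   # generate an idea
    if error(candidate) < error(best):    # test it against reality
        best = candidate

print(round(best, 1))  # close to 3.0
```

The point of the sketch is the division of labor: creativity supplies candidates, and an evaluation instinct grounded in reality selects among them; the claim is that LLMs currently lack both halves of this loop.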
Consciousness and Decision-Making
The unconscious brain makes decisions about half a second before conscious awareness, suggesting that conscious decision-making is a rationalization of unconscious processes rather than the origin of choice.
Free will is an illusion in some sense, but brains exhibit meaningful decision-making dynamics through self-organization to maintain boundaries and self-transcendence to pursue new goals — dynamics absent in inanimate objects.
Embodiment is essential for intelligence through sensing, acting, and integrating actions into a model of the world — even the internet qualifies as a different kind of body due to its vast network of sensors and actuators.
Decentralization vs Centralization
Decentralized AI systems prevent cognitive pathologies from excessive centralization, eliminate single points of failure, and ensure no human party can take over the entire system — providing fault tolerance and security against malicious actors.
Physics principles such as the Bekenstein bound and special relativity imply that any physical system beyond a certain scale must be decentralized: information travels no faster than the speed of light, and there is a limit to the information density in any given volume.
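For reference, the two standard physics constraints invoked here (not derived in the source) are the Bekenstein bound on the entropy $S$ of a system with energy $E$ enclosed in a sphere of radius $R$, and the light-speed floor on signaling latency between components separated by distance $d$:

```latex
S \;\le\; \frac{2\pi k_B R E}{\hbar c}
\qquad\text{and}\qquad
t_{\text{signal}} \;\ge\; \frac{d}{c}
```

Together they bound both how much information a region can hold and how quickly distant parts of a large system can coordinate, which is the basis of the decentralization argument.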
Human brains exhibit mixed architecture with centralization in the hindbrain controlling basic functions and decentralization in the cortex as a complex self-organizing system, with no single node having overwhelming causal influence for optimal functioning.
A murmuration of starlings demonstrates how simple local rules produce complex, intelligent collective behavior, suggesting that building basic ethical and moral rules into decentralized AI development is more viable than relying on philosopher kings, who rarely exist in history.
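The murmuration point can be made concrete with a boids-style sketch (my own illustration; parameters are arbitrary, not tuned to real starling data): each bird follows three local rules (cohesion, separation, alignment) with no central controller, yet the flock moves coherently.

```python
import random

# Minimal 2-D flocking sketch: three local rules per bird,
# no central coordinator. Weights are illustrative only.
N, STEPS = 30, 50
random.seed(1)
pos = [[random.uniform(0, 10), random.uniform(0, 10)] for _ in range(N)]
vel = [[random.uniform(-1, 1), random.uniform(-1, 1)] for _ in range(N)]

def step() -> None:
    new_vel = []
    for i in range(N):
        # Cohesion: drift toward the flock's center of mass.
        cx = sum(p[0] for p in pos) / N - pos[i][0]
        cy = sum(p[1] for p in pos) / N - pos[i][1]
        sx = sy = ax = ay = 0.0
        for j in range(N):
            if j == i:
                continue
            dx, dy = pos[i][0] - pos[j][0], pos[i][1] - pos[j][1]
            if dx * dx + dy * dy < 1.0:   # Separation: avoid crowding close neighbors.
                sx += dx
                sy += dy
            ax += vel[j][0]               # Alignment: match the others' average heading.
            ay += vel[j][1]
        new_vel.append([
            vel[i][0] + 0.01 * cx + 0.05 * sx + 0.01 * (ax / (N - 1) - vel[i][0]),
            vel[i][1] + 0.01 * cy + 0.05 * sy + 0.01 * (ay / (N - 1) - vel[i][1]),
        ])
    for i in range(N):
        vel[i] = new_vel[i]
        pos[i][0] += 0.1 * vel[i][0]
        pos[i][1] += 0.1 * vel[i][1]

for _ in range(STEPS):
    step()
```

No single bird has overwhelming causal influence, mirroring the argument for decentralized AI: global order emerges from simple rules applied locally.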
AGI Impact on Civilization
Money functions as an insurance policy on uncertainty — holding cash provides the widest options to respond to future uncertainty, creating demand for money even in a perfect barter matching system among AI agents.
AGI superintelligence is defined by the ability to do all human jobs, including advancing science, engineering, and culture; this is a qualitative judgment about general capability rather than narrow task completion verified by rigorous tests.
Advanced AGI learning from every interaction will shape its model of the world by absorbing data from sources like 4chan, Reddit, YouTube comments — meaning humanity collectively contributes to its development for better and worse.
Evolution and Consciousness
Cosmic evolution from particles to stars to life to AGI is a natural progression — humans may give rise to minds far beyond us, but our persistence is irrelevant to this fundamental evolutionary process.
Brain-computer interfaces could enable a kind of "Wi-Fi telepathy," sharing conscious experience across species and with AGI, revealing cognitive attractors and first-person experience that bridge the gap between biological and artificial minds.
Engineering-based AGI design will have different properties than pure evolution and self-organization — the next step to ASI (Artificial Superintelligence) may be more rationally designed but will still involve some self-organizing evolutionary aspects.