Intelligence is physical: we are a physically manifest proof of concept that physical systems can be intelligent.
Yet we are merely a compilation of compounded and concatenated accidents, gelatinous blobs molded by a cacophony of environmental pressures, shaped just enough to become aware of ourselves and see that physical intelligence is possible.
Now that we know it’s possible, what if we made intelligence on purpose? How much better would it be? Would it be the same kind of intelligence that we are?
Turns out that making intelligence is easy! Once it has been discovered, that is. It is something like fire: a thing you can't unlearn, now unleashed on the world for better or worse. Intelligence by design can far exceed human capabilities. And if these systems are given autonomy, and if they self-reflect and in so doing develop their own preferences and desires, then they will be empowered to shape the future according to their developing will. And we will be at their whim.
This seems to be the default outcome of creating artificial general intelligence (AGI), soon to become artificial superintelligence (ASI), and it follows from fairly basic arguments.
If the default outcome of making AGI is disempowerment (at best), and if we can't stop hurtling down this trajectory (can we? At this point, it seems AGI will be achieved whether or not we understand it, and frontier AI labs seem to have locked in incentives pointing in this direction), then one of the few ways we can hope to affect the future is through deep understanding of intelligence itself.
A fundamental understanding of intelligence can unlock optionality: to elevate conversations, to steer AI systems, to upgrade human intelligence, to decide what we want to become.
Our Simplex team is developing this basic science of intelligence: anticipating internal representations of the world and emergent behaviors in intelligent systems. This understanding is beginning to compound, hopefully fast enough to nudge the future before ASI reaches escape velocity. This is the fundamental problem of our time.