Epistemological Limits
What is the upper bound of human knowledge & cognition?
A paradox arises when contemplating contemporary physics. The closer we seem to move toward a fundamental understanding of the universe, the further removed we become from the human experience. This brings into question the epistemological limits of human cognition.
Here we outline some anecdotes highlighting potential upper bounds of reason & logic.
As frequently employed in mathematics, it can be pragmatic to think about problems “in the limit”, or under other natural constraints. We can apply this approach to reduce the epistemological search space.
Prime Number Mazes
Intelligence is the ability to abstract and reason about reality to guide behaviour in a productive way. All intelligence, however, is contextual, and although it feels as if we have a general-purpose computational engine (the human brain), our intelligence is likely far narrower than one might perceive.
We can observe these limitations in other animals and extrapolate to find our own. A brilliant example can be seen in studying rats. Rats are exceptionally good at solving mazes; however, they are incapable of solving a prime-number maze (that is, a maze solved by turning at every prime-numbered junction). Although a clear, well-defined rule (prime numbers) instantiates this reality, the rats do not have the cognitive ability to decode it.
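The rule that defeats the rats is trivially expressible in code. A minimal sketch (the maze encoding is hypothetical, purely for illustration): at junction n, turn if and only if n is prime.

```python
def is_prime(n: int) -> bool:
    """Trial-division primality check."""
    if n < 2:
        return False
    return all(n % d for d in range(2, int(n ** 0.5) + 1))

def turn_sequence(junctions: int) -> list[int]:
    """Junctions at which the maze rule says 'turn': the primes."""
    return [n for n in range(1, junctions + 1) if is_prime(n)]

print(turn_sequence(20))  # [2, 3, 5, 7, 11, 13, 17, 19]
```

A few lines suffice to state the rule, yet it lies entirely outside the rat's representational reach; the point of the example is that analogous rules presumably lie outside ours.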
The Time Machine
Another example of intuiting a natural upper bound of intelligence can be found in works of fiction. In H. G. Wells’ “The Time Machine”, the protagonist ventures to the year 802,701 and is surprised to be met with variants of the human race that exhibit no sense of contemporary intelligence. In solving contemporary issues, the theorised descendants of humanity have lost the problem-solving capabilities that we so closely associate with intelligence today.
This depressingly highlights the pragmatism of biological design, and pragmatism does not allow for superfluous upper bounds.
On the individual level, it is exceedingly likely that we are met with a purely computational cognitive limitation.
The Transcendent Nature of Reality
Now turning our attention to sensory input, we have no reason to believe that the scale at which we engage with the universe interfaces sufficiently with “baseline” reality. Physically, our personal interaction with reality is very limited, given the evidently narrow range of our experience.
Undoubtedly, as a scientific community we have been able to circumvent these limitations to some degree, though it remains unclear to what degree. By nature, we vastly overweight experience; coupled with our limited engagement with the world, this results in an inconceivably narrow view of the cosmos.
This is like trying to understand a computer by only interfacing with Excel.
Although we are interacting with a very real instantiation, we are so removed from the hardware and machine code that govern the core processes that they are effectively inconceivable. Perhaps the laws of modern physics are just the universe’s most basic Excel formulae.
The Sophistication of Weather
The narrowness of our cognition is by no means limited to poor signal. Consider trying to define an intelligent system. The natural tendency is to assume human cognition to be the highest known form of intelligence, but a compelling argument can be made for the weather being of equal brilliance.
If we concede that consciousness is purely emergent, arising from sufficient computation, arbitrarily complex systems such as the weather clearly demonstrate sophisticated behaviour.
If a system so familiar to us (the weather) can be completely overlooked, one might only imagine how narrow our own intelligence may be.
So where lies the Ghost in the Machine?
Great thinkers are capable of circumventing this limitation; however, their success is almost only ever realised and popularised when it can be mapped back to intuition.
Knowledge Institutions
Many of our most ingenious engineering and scientific achievements are, of course, a consequence of great collaboration. Although key individuals make notable (though often inflated to tell a good narrative) contributions, great innovation is almost always systemic.
It is easy to see how this might apply to advanced technology, but even trivialised everyday objects follow this rule.
In that case, does the reliance on specialisation and collaboration address the issue of our cognitive bandwidth? To some extent, but it brings its own caveats…
The Space of Ideas
Science itself is a systematic, population-wide search of the idea space. This is hopeful, as it is clear that, through emergent properties, systems can produce outputs far greater than their constituents; however, the very elements that make intelligent search so robust also create boundaries.
The biggest issues are inefficiency, inertia & incompleteness.
Inefficiency — Search is slow. Almost all science is wrong (see the replication crisis). This inefficiency appears to map well to evolution, and we can therefore infer that, with sufficient computational power, local optima can be attained.
Inertia — Ideas built on their predecessors almost always inherit their search space. Consider reductionism as a paradigm: shifting to a purely computational paradigm forces one to re-evaluate many reductionist achievements.
Incompleteness — Gödel’s incompleteness theorems highlight limits inherent in our axiomatic approach to mathematics.
Summary
- Cognition is not exempt from the entropic nature of reality — all things are constantly in decay & energy is not free.
- Most “knowledge” or “wisdom” is only intuition, which need not map to reality, but only experience.
- Necessity is the mother of invention, everything is contextually limited.
- The axiomatic nature of current science inherently imposes tight bounds.
- Many questions remain outside of empiricism, or are so deeply axiomatic that they have incredibly low probabilities of success (e.g. quantum field fluctuations).
- Many questions are irreducible (the principle of computational equivalence).
- Many questions are provably unprovable (Gödel’s incompleteness theorems).
- We need to move our cognition away from intuition to address deep science.
- We have yet to find the Ghost in the Machine, and as such emergence may be our saving grace.
Personal Speculation
The last century saw unimaginable scientific progress through reductionism (physics and mathematics). It now seems as if we are in a similar spurt of progress through computation. But whenever things seem too obvious the nature of markets tells us they are not, so I’m not quite sure what I’m missing…