Modified: March 12, 2021
trapped priors
This page is from my personal notes, and has not been specifically reviewed for public consumption. It might be incomplete, wrong, outdated, or stupid. Caveat lector.
SSC link: https://www.astralcodexten.com/p/trapped-priors-as-a-basic-problem
(2021 thoughts)
- How general is this phenomenon?
- You have a belief
- Your belief colors your perception of something that doesn't inherently reinforce the belief
- You conclude that your perception is reinforcing your belief
- The belief grows even stronger
- You could see this as fundamentally about rounding. You don't have the capacity to notice that the prior plus the new perception is slightly less in favor of your belief than the prior alone. (there's a toy numeric sketch of this after the list below.)
- More specifically, maybe it's about the need to view things in discrete categories, which might somehow be a fundamental part of generalization?
- But it's not always a numeric phenomenon. It can also be about the structure of your conceptual scaffolding.
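A minimal numeric sketch of the rounding idea above (my own toy model, not anything from the SSC post): perception is a prior-weighted blend of the raw evidence, snapped to a coarse "that confirmed it" / "that contradicted it" verdict, and the agent then updates on the verdict. Even when every raw observation slightly disconfirms the belief, the rounded perception keeps confirming it. The weights and the ±2 verdict values are arbitrary choices for illustration.

```python
import math

def logit(p):
    return math.log(p / (1 - p))

def sigmoid(x):
    return 1 / (1 + math.exp(-x))

def ideal_update(prior, evidence_llr):
    """Exact Bayesian update: posterior log-odds = prior log-odds + evidence log-likelihood ratio."""
    return sigmoid(logit(prior) + evidence_llr)

def trapped_update(prior, evidence_llr, prior_weight=0.9):
    """Perception is colored by the prior, then rounded to a coarse verdict before updating."""
    # what actually gets "perceived" is mostly the prior, with only a little raw evidence mixed in
    perceived_llr = prior_weight * logit(prior) + (1 - prior_weight) * evidence_llr
    # "rounding": the perception is read off as a discrete verdict, losing the small disconfirmation
    verdict = 2.0 if perceived_llr > 0 else (-2.0 if perceived_llr < 0 else 0.0)
    return sigmoid(logit(prior) + verdict)

belief_ideal = 0.9     # strong prior that "things are bad"
belief_trapped = 0.9
evidence = -0.3        # each encounter is raw evidence slightly *against* the belief

for _ in range(10):
    belief_ideal = ideal_update(belief_ideal, evidence)
    belief_trapped = trapped_update(belief_trapped, evidence)

print(f"ideal reasoner: {belief_ideal:.3f}")   # drifts down toward uncertainty (~0.31)
print(f"trapped prior:  {belief_trapped:.3f}") # climbs toward certainty (~1.0)
```

Running it, the ideal reasoner drifts down toward roughly 0.3 while the rounding agent climbs toward certainty, which is exactly the "belief grows even stronger" loop above.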
(2026)
I'm now coming to see trapped priors as maybe the basic problem of trauma, samskaras, depression, and general human wellbeing?
the basic emotional belief is "things are not okay", "I am not safe", etc.
this kicks off a panic / tension reaction, or a freeze response. and in these states the nervous system isn't capable of doing the balanced investigation that could contradict that belief.
it's not just that the mind is doing the Bayesian inference process of adding prior + perception, and failing to notice that the perception slightly disconfirms the belief (as I speculated above). it's maybe more an issue of active inference (in the sense of treating perception as an action)? The mind is not automatically updating every possible hypothesis the way an idealized Bayesian reasoner would; that wouldn't be computationally tractable. we generally have to choose to look at something, and be open to it changing us. when we're not safe, we're not open. we close off.
is the ostrich sticking its head in the sand a model of this? it believes there is something bad, and that to actually perceive it would update its beliefs even further towards 'something bad'. fundamentally it wants to move towards a belief that things are not bad. the failure mode is that it does this, not by planning in an accurate world model, but by blocking perception.
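A similarly small sketch of the "choosing to look" point (again my own toy framing, not a real active-inference model, and the threshold and evidence values are made up): one agent updates on every encounter; the other only gathers evidence when its belief in danger is already below a safety threshold, so a scary enough prior means it never collects the mildly reassuring observations that would relax it. The second agent is the ostrich.

```python
import math

def logit(p):
    return math.log(p / (1 - p))

def sigmoid(x):
    return 1 / (1 + math.exp(-x))

def update(belief, evidence_llr):
    """Ordinary Bayesian update on the log-odds of 'things are bad'."""
    return sigmoid(logit(belief) + evidence_llr)

LOOK_EVIDENCE = -0.5      # each honest look mildly disconfirms "things are bad"
SAFETY_THRESHOLD = 0.7    # the avoidant agent only looks when belief in danger is below this

always_looks = 0.95       # both start with a strong "not safe" prior
looks_when_safe = 0.95

for _ in range(20):
    always_looks = update(always_looks, LOOK_EVIDENCE)
    if looks_when_safe < SAFETY_THRESHOLD:
        # feels safe enough to look, so it gets the evidence and updates
        looks_when_safe = update(looks_when_safe, LOOK_EVIDENCE)
    # otherwise: head in the sand -- no observation, no update, the prior stays trapped

print(f"always looks:    {always_looks:.3f}")    # relaxes toward 'things are fine'
print(f"looks when safe: {looks_when_safe:.3f}") # stuck at 0.950
```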