Wednesday, May 6, 2015

The Simulation Calibration Formulation

It's a fairly common practice among rationalists I'm familiar with to train epistemic calibration by making predictions, quantifying their confidence in those predictions, tracking their success rates, and betting actual money with each other along the way. The idea is that with repeated practice, you'll eventually get a gut sense that "sensation x", which you used to associate with "80% certainty", actually shows up more often with predictions that come true 60% of the time than with ones that come true 80% of the time. That is, when you feel "80% certain", the prediction comes true only 60% of the time, and this eventually causes that sensation to feel like 60% certainty instead of 80% certainty.
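To make the bookkeeping concrete, here's a minimal sketch of the tally this relies on (Python, with made-up prediction data): group your predictions by stated confidence, then compare against the observed hit rate.

```python
from collections import defaultdict

# Hypothetical prediction log: (stated confidence, did it come true?).
# In practice this would come from a prediction journal or a tracking site.
predictions = [
    (0.8, True), (0.8, False), (0.8, True), (0.8, False), (0.8, True),
    (0.6, True), (0.6, False), (0.6, True), (0.9, True), (0.9, True),
]

# Group outcomes by the confidence you stated at prediction time.
buckets = defaultdict(list)
for confidence, came_true in predictions:
    buckets[confidence].append(came_true)

# Compare stated confidence with how often those predictions actually came true.
for confidence in sorted(buckets):
    outcomes = buckets[confidence]
    hit_rate = sum(outcomes) / len(outcomes)
    print(f"stated {confidence:.0%} -> observed {hit_rate:.0%} ({len(outcomes)} predictions)")
```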

I'm going to call this approach "the observation correlation calibration formulation" (because I can). [The Credence Game](http://acritch.com/credence-game/) allows rapid-fire execution of the observation correlation calibration formulation.

But there's a second approach to epistemic calibration that I don't hear people talk about so much, and I think at this point in my development, it's more valuable to me.

From Luke's summary of How To Measure Anything:

"Suppose you’re asked to give a 90% CI for the year in which Newton published the universal laws of gravitation, and you can win $1,000 in one of two ways:

1) You win $1,000 if the true year of publication falls within your 90% CI. Otherwise, you win nothing.

2) You spin a dial divided into two “pie slices,” one covering 10% of the dial, and the other covering 90%. If the dial lands on the small slice, you win nothing. If it lands on the big slice, you win $1,000.

If you find yourself preferring option #2, then you must think spinning the dial has a higher chance of winning you $1,000 than option #1. That suggests your stated 90% CI isn’t really your 90% CI. Maybe it’s your 65% CI or your 80% CI instead. By preferring option #2, your brain is trying to tell you that your originally stated 90% CI is overconfident.

If instead you find yourself preferring option #1, then you must think there is more than a 90% chance your stated 90% CI contains the true value. By preferring option #1, your brain is trying to tell you that your original 90% CI is underconfident."

I call that the Simulation Calibration Formulation, and I think it's brilliant. Especially the part about how to identify underconfidence. It's relatively easy to humbly admit your overconfidence, but dropping your credence after that by exactly the right amount is hard.
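The comparison the quote describes is just an expected-value check. Here's a rough sketch of it (the function name and the example credences are mine):

```python
def preferred_option(true_credence, dial_win_prob=0.90, prize=1000):
    """Compare betting on your stated interval against spinning the dial.

    true_credence is your honest probability that the stated interval
    contains the answer -- the quantity the exercise is trying to elicit.
    """
    ev_interval = true_credence * prize  # option 1: paid only if the interval is right
    ev_dial = dial_win_prob * prize      # option 2: paid with a fixed 90% probability
    if ev_interval < ev_dial:
        return "prefer the dial: your stated 90% CI is overconfident (credence < 90%)"
    if ev_interval > ev_dial:
        return "prefer the interval: your stated 90% CI is underconfident (credence > 90%)"
    return "indifferent: your stated 90% CI really is a 90% CI"

print(preferred_option(0.65))  # a gut credence of 65% makes the dial look better
print(preferred_option(0.97))  # a gut credence of 97% makes the interval look better
```

The point of the exercise is that you run this comparison by gut feel rather than arithmetic; noticing which option you reach for is what reveals your real credence.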

I haven't tested this, but I expect I'd gain skill more quickly through a rapid-fire Simulation session than through a rapid-fire Observation Correlation session. You can also do a calibration simulation in any real-life instance where you might otherwise make a bet.

I think the Observation Correlation method assumes either that you already have pretty good reflective awareness of your credence-related subjective experiences, or, more likely, that reflective awareness of those experiences isn't all that important. Especially in the online-training version of Observation Correlation, improvement is expected to happen below the level of awareness. It's a quiet shifting of gut feelings.

I think reflective awareness of credence experiences is probably hugely beneficial. The simulation method trains exactly that, making it a good candidate for something earlier in a calibration training program than the observation method.

The other reason I suspect it should come before observation is that it isn't tied up with social feelings like wanting to protect your reputation, social stigma surrounding gambling, or personal insecurities about intelligence and ego. In the moments of real-world prediction and prediction-checking, any of those sensations is likely to be so salient that it blots out credence feelings both at and below conscious awareness. And when you turn out to be wrong, you'll probably be punished (in the behavioral psychology sense) for making a prediction in the first place, unless you're already very skilled.

If I'm right about these things, then it would be wise to practice Simulation Calibration until the mental movements of balancing overconfidence and underconfidence are fast, easy, and nearly automatic, and to do that before you get really serious about Prediction Book or similar tools. At that point, you'll be armed with sharper phenomenological weapons to cut through counterproductive ego preservation and the skeptical virtue ethics of 20th century science, and you'll actually be able to hear your "80% confidence" feeling ringing clear above the noise. You'll know what you're listening for, and you'll store the feelings in memory for later comparison.

You can practice this offline using the Credence Game I mentioned before, performing the simulation for each question, and not keeping score. When that gets easy, pay attention to the score again. And when that's easy, stop doing the simulation.

I don't mean that you should stop making real-world predictions if calibration simulation isn't easy yet. I just mean that early on, Simulation should be the focus of your epistemic calibration training, rather than Observation. I'm certainly going to make it the focus of mine.

1 comment:

Dan said...

I find it interesting and useful to offer odds for and odds against that I would happily accept, and to consider those as upper and lower bounds of my estimate, with the spread representing my confidence in my estimate.
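Read literally, the odds you'd happily lay for and against a claim pin down an interval of probabilities. A quick sketch of that conversion (my reading of the suggestion, with made-up odds):

```python
def credence_bounds(odds_for, odds_against):
    """Turn bets you'd happily take into a probability interval for a claim.

    odds_for = (a, b): you'd stake a to win b that the claim is TRUE.
      That bet is only favorable if P(claim) > a / (a + b)  -> lower bound.
    odds_against = (c, d): you'd stake c to win d that the claim is FALSE.
      That bet is only favorable if P(claim) < d / (c + d)  -> upper bound.
    """
    a, b = odds_for
    c, d = odds_against
    return a / (a + b), d / (c + d)

# e.g. happily lay 3:1 on the claim being true, and 1:4 on it being false
low, high = credence_bounds(odds_for=(3, 1), odds_against=(1, 4))
print(f"credence somewhere in [{low:.0%}, {high:.0%}]")  # [75%, 80%]
```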