Friday, June 22, 2007

A little bet...

Imagine someone offered you a bet - you pay $1 and pick an integer between 1 and 100. An integer in that range is randomly chosen, and if it matches your choice you win $200. Do you take it?

If you do, it's probably because you do this calculation in your head (whether you realize it or not): $200*(1/100)=$2. Since the expectation of the game is greater than $1, you're better off taking the bet.
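If you want to check that arithmetic, here's a rough sketch in Python (just an illustration, with made-up helper names) that simulates taking the bet many times and compares the average result with the analytic answer:

```python
import random

def play_once(stake=1, payout=200, choices=100):
    """Pay the stake, pick a number, and win the payout if the random draw matches."""
    pick = random.randint(1, choices)
    draw = random.randint(1, choices)
    return (payout if pick == draw else 0) - stake

trials = 100_000
average = sum(play_once() for _ in range(trials)) / trials
print(f"average net result per play: ${average:.2f}")             # hovers around +$1
print(f"analytic net expectation:    ${200 * (1 / 100) - 1:.2f}")  # $1.00
```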

Conversely, imagine someone offers you $1, but if they pick the right number (again out of 100), you owe them $200. Would you take that bet instead (presumably no one thinks it makes sense to take both)? Well, towards what I thought was the end of a discussion about economics, a friend of mine did take that bet. Three years later we're still talking about it, so it's time to share it with all of you.

A rational person is expected to undertake any action in which they believe the marginal benefit outweighs the marginal cost. If someone offers you $2 for something you value at $1, you'd be wise to sell it to them. But how do you value something that is not definite, and instead has some probability of taking on each of several values? The conventional answer is to calculate the expectation: the sum, over all possible outcomes, of the probability of that outcome times its value.
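In code, that calculation looks something like this (a minimal sketch; the expectation helper is just for illustration), applied to both sides of the bet described above:

```python
def expectation(outcomes):
    """Sum, over all outcomes, of probability times value."""
    return sum(p * v for p, v in outcomes)

# Taking the bet: pay $1, with a 1/100 chance of winning $200.
taker = expectation([(1 / 100, 200 - 1), (99 / 100, -1)])    # +1.0

# Offering the bet: collect $1, with a 1/100 chance of paying out $200.
offerer = expectation([(1 / 100, 1 - 200), (99 / 100, 1)])   # -1.0

print(taker, offerer)
```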

My friend claims that it is not only the expectation but also the distribution that matters in determining the actual value. In this case, 99% of the time he wins money. The expectation is closely approached only over a large number of trials, so if he plays only once he is quite likely to come out ahead. Needless to say, since we've argued about this for some time, I disagree with his reasoning. While I realize that the added information about the distribution may be valuable, I fail to see how. In the end you must make a yes/no decision, and there has to be some point at which your decision changes based on the particular probabilities and payouts. I claim that point is where the total expectation becomes negative. It's also a significant point that expectation has all sorts of nice properties, like additivity.
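To make the disagreement concrete, here's a rough sketch (my own illustration, nothing rigorous) of the friend's side of the bet at different numbers of plays: after one play he is ahead 99% of the time, but the mean result tracks the -$1-per-play expectation:

```python
import random

def friend_net(plays):
    """Friend collects $1 per play, but pays out $200 whenever the 1-in-100 number hits."""
    return sum(1 - (200 if random.randint(1, 100) == 1 else 0) for _ in range(plays))

trials = 5_000
for plays in (1, 10, 1000):
    results = [friend_net(plays) for _ in range(trials)]
    ahead = sum(r > 0 for r in results) / trials
    mean = sum(results) / trials
    print(f"{plays:5d} plays: ahead {ahead:.1%} of the time, "
          f"mean net {mean:+.1f} (expectation {-plays:+d})")
```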

I think the problem here is a cognitive bias: treating a small probability as if it were zero. I bet the calculation goes: 1/100 is approximately zero, so 0*$200=$0, which is less than $1. So he takes the bet. I wonder if there is some way to test this - at the very least it seems like there are some psych experiments in here somewhere.

If you agree with him, here's the question I'd most like answered: you must go through some decision making process - what's the formula you use to determine whether to play or not?

4 comments:

John Rice said...

You like the expectation because it "has all sorts of nice properties, like additivity". But so does -Expectation. I think the real criterion you're optimizing is the limit of the empirical average winnings, and it's usually true that the formula "integral of outcomes over probabilities" equals this limit. But what if the limit doesn't exist?

Here's such a game: you pay $1 per play, your payoff if you win on the nth play is 2^(n+9), and your probability of winning is 2^-(n+8). Every round seems "superfair" in that you're paying $1 for an expected payout of $2. But if you play a very large number of times N, are your winnings close to 2*N? Not at all. In fact, there's a 127/128 = 99.2% chance that you'll lose *every single play*, on out to infinity. Should you play an infinite number of times just because the expectation tells you to?

The reason the limit doesn't exist, and the reason the expectation misleads you in that game, is that the variance is simply too big compared to the number of plays (the variance grows exponentially while the number of plays grows linearly). Real-life games aren't so maliciously constructed, but they often have a variance that is huge compared to the number of "plays" you get, and for those decisions you can't hope to approach the limit of the empirical average. So why care about the average? It's just a lazy summary of the distribution that, in this case, has no bearing on your future.
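Here's a rough simulation of the game as described (a sketch only; I'm counting plays from n = 0 so that the per-play numbers and the 127/128 figure come out as stated). The point to notice is that the empirical average over a modest number of sessions typically sits far below the analytic expectation, because the expectation is carried by enormous, vanishingly rare payoffs:

```python
import random

def play_series(n_plays):
    """One session of the escalating game: play n costs $1 and pays
    2**(n + 9) with probability 2**-(n + 8), counting plays from n = 0."""
    net = 0
    for n in range(n_plays):
        net -= 1
        if random.random() < 2.0 ** -(n + 8):
            net += 2 ** (n + 9)
    return net

n_plays = 30

# Chance of winning nothing at all in n_plays plays.
p_lose_all = 1.0
for n in range(n_plays):
    p_lose_all *= 1 - 2.0 ** -(n + 8)
print(f"P(no win in {n_plays} plays) = {p_lose_all:.4f}")  # ~0.992, about 127/128

# Analytically each play is worth +$1 net, but the empirical average
# over a finite number of sessions rarely gets anywhere near that.
trials = 50_000
mean = sum(play_series(n_plays) for _ in range(trials)) / trials
print(f"mean net over {trials} sessions: {mean:+.1f} (analytic expectation {n_plays:+d})")
```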

dave hiller said...

"But if you play a very large number of times N, are your winnings close to 2*N? Not at all. In fact, there's a 127/128 = 99.2% chance that you'll lose *every single play*, on out to infinity."

The last sentence does not follow. Even if you are unlikely to win, the game can still have positive value. The best estimate for your winnings is in fact 2*N. I'm not sure I'd use the same words, but I agree you're optimizing something like the limit of the empirical average winnings. I fail to see how the expectation does not do that here, and furthermore I do not see what else should take its place.

Also, be careful using games with an infinite number of plays and rapidly escalating payouts. While we can imagine a person playing an arbitrarily large number of times, there is a limit on payouts; they cannot be arbitrarily large.

I have a paradox for you, perhaps you can find a flaw:

There are 100 people in a room who are told about the zero-sum bet I proposed originally. Assume for the moment that the side that receives the dollar has positive value because the expectation is not closely approached over one play. Everyone then wants to take that side of the bet. Happily, one person in the room decides to take the other 99 people up on the bet. This is reasonable for him, since he is playing a large number of times and therefore the expectation is valid, so the game has positive value for him as well. If every player in the game has positive value, then the game is not zero-sum and we've reached a contradiction. One of our assumptions must be incorrect; which one is left as an exercise to the reader. This sort of exercise demonstrates why I think additivity is a necessary, though not sufficient, characteristic.
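Here's a rough sketch of the room (my own illustration): whatever numbers come up, the 99 bettors' combined result is exactly the negative of the lone counterparty's, so the total is zero on every trial and not everyone can come out ahead on average.

```python
import random

def room_trial(n_bettors=99):
    """Each bettor takes the 'collect $1, 1/100 chance of owing $200' side;
    one counterparty takes the other side of all 99 bets."""
    bettor_total = 0
    counterparty = 0
    for _ in range(n_bettors):
        hit = random.randint(1, 100) == 1
        bettor_net = 1 - (200 if hit else 0)
        bettor_total += bettor_net
        counterparty -= bettor_net          # strictly zero-sum, bet by bet
    return bettor_total, counterparty

trials = 10_000
sum_bettors = sum_counterparty = 0
for _ in range(trials):
    bettors, counterparty = room_trial()
    assert bettors + counterparty == 0      # zero-sum on every single trial
    sum_bettors += bettors
    sum_counterparty += counterparty

print(f"avg combined bettor net: {sum_bettors / trials:+.1f}")       # around -99
print(f"avg counterparty net:    {sum_counterparty / trials:+.1f}")  # around +99
```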

Anonymous said...

Hi dave, how are you? Well, it's a pretty interesting thing. There has recently been a flurry of publications about this, including the types of studies where you play economics games while hooked up to an fMRI.

It occurred to me while reading the article that just when we are understanding how our grey lumps affect decision making, computers are taking over those same decisions.


http://tinyurl.com/yuwjdb [sciam.com]

dave hiller said...

Hey Pete. I think the psych experiments using fMRI are awesome. For instance, it has been shown that in the ultimatum game players have an emotional response (RR) when rejecting unfair offers, which presumably has evolved to keep us from being taken advantage of. I'm really interested in evolutionary psychology and there is a lot to be learned from these experiments.

 