# Riddle #3

``````
$halve = rand( 0, 1 );
if ( $halve ) {
    $flip = $face / 2;
} else {
    $flip = $face * 2;
}
// the flip side is either half or twice the facing side, 50/50
``````

I think this is the faulty part, and where I screwed up: if the range of possible values is finite, then there are some values for which the chance of the opposite face being double is zero. You can create pairs of ‘x’ and ‘2x’, but if you choose a card with a value of ‘2x’, there isn’t necessarily a ‘4x’, and so on.

E.g., if the distribution is from 1 to 10, a card with 3 on it could have 6 on the opposite side, but if you see the 6, there’s no way the other side could be 12. The person doesn’t know what the distribution is, but the restriction still exists.

Edit: My lame code attempt, which shows flipping loses in the end.

Winnings on stay: 750629914
Winnings on flip: 675263132

// the number with which you start is an int between 10 and 50

First create cards numbered 1 through N, assign x and 2x to each card from any arbitrary distribution, and then randomly choose an integer between 1 and N to simulate pulling a card out of the bag.
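That recipe can be sketched directly. This is a minimal Python simulation of my own, not the poster’s actual code; the base-value range 10–50 and the 10% flip fee are taken from the thread:

```python
import random

def simulate(trials=100_000, fee=0.10, seed=1):
    """Bag-of-cards game: each card carries x on one side and 2x on the other."""
    rng = random.Random(seed)
    stay_total = flip_total = 0.0
    for _ in range(trials):
        x = rng.randint(10, 50)              # base value, as in the thread's code
        sides = (x, 2 * x)
        face = rng.choice(sides)             # the side that happens to face up
        back = sides[0] if face == sides[1] else sides[1]
        stay_total += face
        flip_total += back - fee * face      # pay 10% of the facing value to flip
    return stay_total, flip_total

stay, flip = simulate()
print(f"Winnings on stay: {stay:.0f}")
print(f"Winnings on flip: {flip:.0f}")
```

With a fixed bag, stay and flip average the same before the fee, so the fee alone makes always-flipping the losing policy — consistent with the totals quoted above.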

Yes, I think that’s exactly the point. Let’s requote Phil’s paradox from above:

The following arguments lead to conflicting conclusions:

1. Let the amount in the envelope you chose be A. Then by swapping, if you gain you gain A but if you lose you lose A/2. So the amount you might gain is strictly greater than the amount you might lose.

2. Let the amounts in the envelopes be Y and 2Y. Now by swapping, if you gain you gain Y but if you lose you also lose Y. So the amount you might gain is equal to the amount you might lose.

That’s a paradox only because we fail to differentiate between open sequences and closed buckets.

Argument 1 applies if every new number was indeed generated completely randomly. You should always flip because the gain of 100% is double the loss of 50%, and still much bigger when you include the 10% fee.

Argument 2 applies to the original question. The numbers aren’t generated completely randomly, there’s a fixed bucket of cards. And in that case, you win or lose some fixed amount from that bucket with a 50% chance. That evens out in the end – except that you also lose the 10% fee. So don’t flip.

Right. If there were no fee, you would just break even on average.

The subjective assessment depends on the idea that “No matter what value I draw, I have an equal chance of doubling it, or halving it”. The thing is, the only way this can be true is if the billionaire picked a very specific distribution of values on the cards, has an infinite number of cards in the bag, and an infinite amount of money. Since he is a billionaire and not an infinitillionaire, flipping will on average cost you the flip fee.

People are confusing the odds of winning and losing with the cash result of such a thing.

Let’s say you get a card that says 100 on it.

Scenario 1, (Flip will double value):
Flipper gets 190.
Stay gets 100.
Flipper wins by 90.
Scenario 2, (Flip will halve value):
Flipper gets 40.
Stay gets 100.
Stay wins by 60.

You have one shot. All four outcomes are equally likely.
Since you always come out ahead, and only have one shot (no aggregates) it would be logical to maximize your potential outcome.

If you don’t flip you can feel smug about the fact that 100/40 is a bigger ratio than 190/100 (useful for aggregates), but that ignores the fact that the only way to win the max is to flip.

From a stay perspective, the flip offers you a 1/2 chance of winning 90 dollars or losing 60.

Just ignore the fact that the number is random, it doesn’t matter (Except for the unstated assumption that the number will be positive). That will play hell with your aggregator simulations, because you will weight each result randomly.
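The arithmetic behind this argument, under its own 50/50 assumption (a sketch using the \$100 card and the riddle’s 10% fee):

```python
face = 100
fee = 10.0                        # 10% of the facing value

# The post's assumption: the back is 2*face or face/2 with equal probability.
ev_flip = 0.5 * (2 * face) + 0.5 * (face / 2) - fee
ev_stay = face
print(ev_flip, ev_stay)           # 115.0 vs 100
```

This is exactly the step the rest of the thread disputes: the 50/50 assumption cannot hold for every face value drawn from a finite bag.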

The original problem gave an explicit amount (\$53), and stated that you could play this game only one time. In that case, I still believe my explanation at the top of page 2 holds. It breaks down if \$53 is the highest amount listed on all of the cards. In that particular instance, the probability is 0% that flipping will result in a higher number, and switching is bad.

Folks are changing the original situation by assuming you can play the game over & over again, & I suspect that may be where some of the paradoxes arise. I think a lot of the loss incurred by the flipping premium comes in the rare situation when the highest card is initially picked. On any given trial, that chance is slight, but when it does occur, your expected value is far lower when you switch. Over repeated trials, a policy of always switching will cost money.

I assumed that the range of possible values were finite since the riddle involved money (a finite item, even for a bajillionaire). As the original was worded, any card no matter the value on the facing side could have a value of either x/2 or 2x on the reverse. Hmmm…

Do you think that will change the results? I will try it when I get home from work today.

If, under repeated independent trials, it is a bad policy, then what makes it a good policy for one trial?

What if you have 100 different people play the game? Is it a good idea to flip for each of them, but a bad policy for all of them?

It breaks down if \$53 is the highest amount listed on all of the cards. In that particular instance, the probability is 0% that flipping will result in a higher number, and switching is bad.

Actually, it breaks down if there is no \$106 value.

Here’s a question: Instead of the problem as described, say you draw a card with the same value printed on each side, say \$53. You have a choice to flip a fair coin for \$5.70. If you get heads, you double it; if you get tails, you halve it.

Now, this scenario is very different for the game operator. From the game operator’s perspective, it’s always a good policy for the player to flip. That clearly wasn’t the case before.
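A quick expected-value check for this coin-flip variant (a sketch; the \$53 card and \$5.70 fee are as stated above):

```python
value = 53.0
fee = 5.70

# A fair coin: heads doubles the value, tails halves it.
ev_flip = 0.5 * (2 * value) + 0.5 * (value / 2) - fee
ev_stay = value
print(ev_flip - ev_stay)   # flipping gains about 7.55 on average
```

Here the 50/50 really is enforced by the coin, so the flip has positive expected value for the player no matter what the card says — which is why the operator’s view changes too.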

Yet, according to your analysis, it’s exactly the same situation for the player. Why does it change from one perspective (the perspective with more information) but not the other? This tells me there is something wrong with the first analysis. I think the problem is that ‘as likely to double as to halve for any value drawn’ amounts to an incoherent prior on the distribution of values on the cards. I.e., the only distribution of card values for which that is true is physically impossible.

That’s a far, far different problem Phil.

It’s the reason it’s fun to go to Vegas and put down a bet on roulette, but it’s fucking stupid to go to Vegas and put down 1000 bets on roulette. It’s much easier to buck the odds if you take a single sample. As you take multiple samples you’ll approach the underlying distribution. If you’re taking multiple samples you should always play to the underlying distribution. If you’re taking a single sample you should play to the expected value.

In roulette, the expected value is a loss. I don’t think there is a game where the expected value of a single trial is a gain, but the expected value of 1000 independent trials is a loss.
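That last claim can be pinned down with linearity of expectation: the expected value of 1000 independent trials is exactly 1000 times the single-trial expected value, so the sign can never flip. A toy check (hypothetical payouts, not roulette odds):

```python
# Hypothetical game: win +2 with probability 0.4, lose 1 with probability 0.6.
p_win, win, lose = 0.4, 2.0, -1.0

ev_single = p_win * win + (1 - p_win) * lose   # +0.2 per play
ev_1000 = 1000 * ev_single                     # linearity of expectation

print(ev_single, ev_1000)   # same sign, necessarily
```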

I also have no idea what you mean by “play the underlying distribution” versus “play the expected value”.

I don’t think there is a game where the expected value of a single trial is a gain

Being the owner of a roulette wheel is one.

You didn’t quote my whole statement. Bad grammar on my part I guess.

Find a game where the expected value of a single trial is a gain AND the expected value of 1000 independent trials is a loss.

Ah, you’re right.

See, that isn’t hard.

You are speaking nonsense.

But it’s obvious what he means. The longer you play, the more likely your payout is to close in on the average distribution, i.e. the house edge in a casino.

Roll two dice once, and there’s some chance your roll is a 12 or a 2. Roll two dice a hundred times, and your average roll has almost no chance at all of being 12 or 2.
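A small simulation of that point (a sketch; with one roll, 2 or 12 each comes up 1/36 of the time, while the 100-roll average piles up near 7):

```python
import random

rng = random.Random(0)

def avg_roll(n):
    """Average of n rolls of a pair of fair dice."""
    return sum(rng.randint(1, 6) + rng.randint(1, 6) for _ in range(n)) / n

print(avg_roll(1))     # anywhere from 2 to 12
print(avg_roll(100))   # almost certainly within a point of 7
```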

OK, Mike, I think I see your theoretical point.

In the real world, however, I believe a person playing this game multiple times could adopt the following rule:

1. if the amount on the card I draw is X or greater, I will stay, because I believe amount X is greater than the amount on the back of the card with at least a 51% probability.
2. if the amount on the card I draw is less than X, I will flip.

As long as the player makes a valid choice for X, he or she always gains by flipping cards less than X. Not enough information is given in the original question for us to make a valid choice for X.

For example, a player sets \$10000 for X.

Example 1 - The highest card is 400000/800000; the second highest card is 200000/400000. The player gains an average of 15% whenever she initially draws a card with a value of 9999 or lower and flips. She neither gains nor loses when she draws cards with values of 10000 to 400000. Over repeated trials, she gains more by flipping some of the time than she would never flipping. Note that if she had set X closer to 400001, she would have made more money over repeated trials, because she would also gain by flipping when she drew a value from 10000 to 400000.

Example 2 - The highest card is 4000/8000; the second highest card is 2000/4000. The player gains an average of 15% whenever she initially draws a card with a value of 4000 or lower and flips. Because her initial selection of X was too high, she costs herself 4800 in potential profit whenever she selects 8000 initially and flips. Over repeated trials, this loss will result in her profit being lower than if she had simply stuck with her initial selection every time.

I freely admit that this involves a lot more subjectivity than the initial situation’s conditions. I simply believe that in real-world situations, the average person could come up with a conservative estimate of X that resulted in some profit for switching.
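The threshold rule is easy to test on a concrete deck. Everything here is a hypothetical sketch — base values uniform on 1..1000, the 10% fee from the riddle, flip only below the threshold X:

```python
import random

def play(threshold, trials=100_000, fee=0.10, seed=2):
    """Average winnings of the policy: flip only when the face is below `threshold`."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(trials):
        x = rng.randint(1, 1000)          # hypothetical base-value distribution
        sides = (x, 2 * x)
        face = rng.choice(sides)
        back = sides[0] if face == sides[1] else sides[1]
        if face < threshold:
            total += back - fee * face    # pay the fee and take the back
        else:
            total += face                 # stay
    return total / trials

never = play(threshold=0)         # never flip
always = play(threshold=10**9)    # always flip
tuned = play(threshold=1001)      # flip only below the top of the base range
print(never, always, tuned)
```

On this deck the tuned threshold beats both blanket policies: never-flip beats always-flip (the fee), but flipping only values that still have room to double beats never flipping, which is the post’s point. How well it works depends entirely on guessing a good X for an unknown deck.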

Fair disclosure: I have been known to enter a casino from time to time.

Random aside: Casino games are the most boring games ever. What’s the point of ‘playing’ blackjack – for every combo of cards in front of you, there’s a single best move. You can fit the entire lookup table on a tiny index card (only the edge cases are hard to remember anyway). See a card, look up what you should do, then tell the dealer. You may as well be working in a factory – there’s no ‘game’ in it. You may as well just give the dealer some money and tell him to play for you. (Obvious exceptions like Poker or trying to count cards without looking like you’re counting cards.)

The mean value theorem.

On multi deck blackjack? You sure about that?

And no, there’s no “game” involved. It’s luck, not skill. The attraction is that you can sometimes be lucky, and winning money is fun to people.

I’m not questioning his math, I’m questioning his sense. The only difference between one roll and one hundred is the fact that on one roll you are only likely to lose \$n.

On 100 rolls you are likely to lose \$100n.

The only thing you gain from playing only once is the comfort of knowing you didn’t lose the other \$99n.