... In fact, it's too late for me to think this through carefully right now, so I'll look into it tomorrow.
Okay, let's try to settle this. My argument was:
The real justification for her conclusion, though, is symmetry. If we posed the same problem, asking about the probability that I have more tails, the answer has to be the same as the probability that I have more heads, since heads and tails are interchangeable. Since, as she says, we can't tie, I must be equally likely to win or lose, and that probability is 1/2.
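For reference, here is the bare skeleton of that argument in symbols (just a sketch; H and T are names I'm introducing here for the events "I toss more heads than you" and "I toss more tails than you"):

```latex
% H = "I toss more heads than you",  T = "I toss more tails than you"
\[ P(H) = P(T) \qquad \mbox{(heads and tails are interchangeable)} \]
\[ P(H) + P(T) = 1 \qquad \mbox{(exactly one of $H$, $T$ occurs)} \]
\[ \mbox{so} \quad P(H) = P(T) = \tfrac{1}{2} \]
```

The second line is the part that needs justifying, and it's where the analysis below comes in.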
So, what is "the same problem"? The original problem is, "I have n+1 coins, you have n. What is the probability that I toss more heads than you?" Clearly the probability that I toss more tails than you is the same.
So far, nothing is different if I have 100 coins and you have 2. But now both of those probabilities are very high; they certainly don't add up to 1, and in fact they add up to nearly 2. My having more heads and my having more tails are not mutually exclusive; that is, if "winning" means having more heads, having more tails isn't equivalent to "losing" (not winning). That is why the argument doesn't work for 100 vs 2.
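A quick simulation makes this concrete (just a rough sketch, not anything from Marilyn's column; the 100-vs-2 coin counts and the trial count are arbitrary illustrative choices):

```python
import random

def trial(my_coins, your_coins):
    """Toss fair coins; report whether I have strictly more heads / strictly more tails."""
    my_heads = sum(random.random() < 0.5 for _ in range(my_coins))
    your_heads = sum(random.random() < 0.5 for _ in range(your_coins))
    my_tails = my_coins - my_heads
    your_tails = your_coins - your_heads
    return my_heads > your_heads, my_tails > your_tails

trials = 100_000
more_heads = more_tails = both = 0
for _ in range(trials):
    h, t = trial(100, 2)
    more_heads += h
    more_tails += t
    both += h and t

print("P(more heads) ~", more_heads / trials)  # close to 1
print("P(more tails) ~", more_tails / trials)  # also close to 1
print("P(both)       ~", both / trials)        # far from 0: the events overlap
```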
But in the n+1 vs n case, they are mutually exclusive: if I have more heads than you, I cannot also have more tails than you, because the coin counts differ by only 1. To elaborate, suppose you have 10 coins and I have 11. If I have, say, 4 heads and you have 3, then I have 7 tails and so do you; I don't have more tails. Generalizing, if you have n coins and I have n+1, and I get x heads while you get y < x, then I get n+1-x tails while you get n-y tails. The difference is (n+1-x)-(n-y) = y-x+1, and since x and y are whole numbers with y < x, that is at most 0: I don't have more tails. If my surplus were k coins rather than 1, the difference would be y-x+k, which can be positive, and the conclusion would fail.
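And the n+1 vs n claim can be checked exactly by brute force, enumerating every equally likely toss sequence (again only a sketch; n is kept small so the enumeration stays cheap):

```python
from fractions import Fraction
from itertools import product

def check(n):
    """Enumerate all 2^(2n+1) equally likely outcomes: my n+1 coins vs your n."""
    wins = total = 0
    for mine in product((0, 1), repeat=n + 1):       # 1 = heads, 0 = tails
        for yours in product((0, 1), repeat=n):
            total += 1
            more_heads = sum(mine) > sum(yours)
            more_tails = (n + 1 - sum(mine)) > (n - sum(yours))
            assert more_heads != more_tails          # exactly one of the two occurs
            wins += more_heads
    return Fraction(wins, total)

for n in range(1, 7):
    print(n, check(n))    # prints 1/2 for every n
```

The assertion is the mutual-exclusivity (and no-tie) step; if the n + 1 is changed to n + k for any k other than 1, it fails for some outcome, which is exactly the point about replacing 1 with k.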
So now the question is, (a) how do we say this briefly and convincingly, and (b) does Marilyn's version somehow imply this but we missed it?