Disclaimer:
I am neither a professional statistician nor a native English speaker, so please excuse any clumsy wording.
Problem description:
There is an online football manager in which game results are based on team ratings plus randomness.
It has a “100 replays” function that gives a Win/Draw/Loss distribution for a match.
Questions:
1) How reliable (mathematically) are those 100 replays for assessing the true expected result?
2) How many replays would be needed before the result can be trusted?
And a bonus question / request for explanation. We got into a heated discussion about the following:
Suppose two games have the same expected result, say W/D/L of 70-20-10. Would “100 replays” for them always have the same “error”? Or, if one game were very volatile with many expected goals (say both teams field no defence) and the other were a very defensive, goalless game, would the 100 replays for the first one have a bigger “error”?
Hope my question makes sense.
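To make question 1 concrete, here is a minimal sketch (Python/NumPy) under my own assumption of how the engine might work: each replay is drawn independently from a fixed 70-20-10 W/D/L probability. It repeats the 100-replay experiment many times and measures how much the reported win rate scatters around the true value.

```python
import numpy as np

rng = np.random.default_rng(0)

p_true = np.array([0.70, 0.20, 0.10])  # assumed "true" W/D/L probabilities (illustrative only)
n_replays = 100                        # replays per batch, as in the game's feature
n_experiments = 10_000                 # how many 100-replay batches to simulate

# Each row holds the W/D/L counts observed in one batch of 100 replays
counts = rng.multinomial(n_replays, p_true, size=n_experiments)
win_rate_estimates = counts[:, 0] / n_replays

print("mean estimated win rate:", win_rate_estimates.mean())
print("std dev of the estimate:", win_rate_estimates.std())
print("analytic standard error:", np.sqrt(p_true[0] * (1 - p_true[0]) / n_replays))
```

Under this assumed setup the win estimate scatters by roughly sqrt(0.7 × 0.3 / 100) ≈ 0.046, i.e. about ±5 percentage points from one batch of 100 replays to the next.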