I haven't been able to figure out how to do any of this.
Here is my problem:
A scale is designed so that when items are weighed, the errors in the indicated weights are normally distributed with a mean of 0 g and a standard deviation of 1 g. If an item is randomly selected and weighed, what is the probability that it has an error between -0.5 g and 0.5 g?
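My best guess at a setup (and I'm not at all sure it's right) is to treat the error as a standard normal variable Z, since the mean is 0 and the standard deviation is 1, and then use the Z table, where Φ is the standard normal CDF:

P(-0.5 < Z < 0.5) = Φ(0.5) - Φ(-0.5) = 2Φ(0.5) - 1 ≈ 2(0.6915) - 1 = 0.3830

Is that the right approach, or am I missing something?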