A student wants to estimate the mean score of all college students for a particular exam. First use the range rule of thumb to make a rough estimate of the standard deviation of those scores. Possible scores range from 500 to 2200. Then use technology and the estimated standard deviation to determine the sample size corresponding to a 95% confidence level and a margin of error of 100 points. What isn't quite right with this exercise?
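Below is a minimal sketch of the computation the exercise asks for, assuming the usual textbook conventions: the range rule of thumb estimates the standard deviation as range/4, and the required sample size for estimating a mean is n = (z·σ/E)², rounded up. The variable names and the use of SciPy for the critical value are my own choices, not part of the exercise.

```python
from math import ceil
from scipy.stats import norm

# Rough estimate of sigma via the range rule of thumb: sigma ~ range / 4.
low, high = 500, 2200
sigma_est = (high - low) / 4            # (2200 - 500) / 4 = 425

# Sample size for estimating a mean: n = (z * sigma / E)^2, rounded up.
confidence = 0.95
E = 100                                 # desired margin of error, in points
z = norm.ppf(1 - (1 - confidence) / 2)  # two-sided critical value, ~1.96

n = ceil((z * sigma_est / E) ** 2)
print(f"estimated sigma = {sigma_est}, required sample size n = {n}")  # n ~ 70
```

With σ ≈ 425 and z ≈ 1.96 this gives n ≈ 70; the exercise then asks the reader to identify what is questionable about the setup itself.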