sample mean is ≤ 1. This is sometimes expressed by saying that the sample
mean is asymptotically efficient for estimating a normal mean. The sample
mean also enjoys a number of other optimal properties in this case. The
sample mean is unquestionably the preferred estimator for the normal 1-
sample location problem.
9.1.2 Hypothesis Testing
If à is known, then the possible distributions of Xi are
Normal(µ, Ã2) : -"
If à is unknown, then the possible distributions of Xi are
Normal(µ, Ã2) : -"
We partition the possible distributions into two subsets, the null and
alternative hypotheses. For example, if σ is known then we might specify
H0 = {Normal(0, σ²)} and H1 = {Normal(µ, σ²) : µ ≠ 0},
which we would typically abbreviate as H0 : µ = 0 and H1 : µ ≠ 0.
Analogously, if σ is unknown then we might specify
H0 = {Normal(0, σ²) : σ > 0}
and
H1 = {Normal(µ, σ²) : µ ≠ 0, σ > 0},
which we would also abbreviate as H0 : µ = 0 and H1 : µ ≠ 0.
More generally, for any real number µ0 we might specify
H0 = {Normal(µ0, σ²)} and H1 = {Normal(µ, σ²) : µ ≠ µ0}
if σ is known, or
H0 = {Normal(µ0, σ²) : σ > 0}
and
H1 = {Normal(µ, σ²) : µ ≠ µ0, σ > 0}
if σ is unknown. In both cases, we would typically abbreviate these hy-
potheses as H0 : µ = µ0 and H1 : µ ≠ µ0.
The preceding examples involve two-sided alternative hypotheses. Of
course, as in Section 8.4, we might also specify one-sided hypotheses. How-
ever, the material in the present section is so similar to the material in
Section 8.4 that we will only discuss two-sided hypotheses.
The intuition that underlies testing H0 : µ = µ0 versus H1 : µ ≠ µ0 was
discussed in Section 8.4:
• If H0 is true, then we would expect the sample mean to be close to the
  population mean µ0.
• Hence, if X̄n = x̄n is observed far from µ0, then we are inclined to
  reject H0.
To make this reasoning precise, we reject H0 if and only if the significance
probability
P = Pµ0(|X̄n − µ0| ≥ |x̄n − µ0|) ≤ α. (9.1)
The first equation in (9.1) is a formula for a significance probability.
Notice that this formula is identical to equation (8.2). The one difference
between the material in Section 8.4 and the present material lies in how one
computes P . For emphasis, we recall the following:
1. The hypothesized mean µ0 is a fixed number specified by the null
   hypothesis.
2. The estimated mean, x̄n, is a fixed number computed from the sample.
   Therefore, so is |x̄n − µ0|, the difference between the estimated mean
   and the hypothesized mean.
3. The estimator, X̄n, is a random variable.
4. The subscript in Pµ0 reminds us to compute the probability under
   H0 : µ = µ0.
5. The significance level α is a fixed number specified by the researcher,
   preferably before the experiment was performed.
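To make items 2 and 3 concrete, here is a minimal Monte Carlo sketch of
the significance probability in (9.1) for the case of known σ. It is not
from the text; the values of µ0, σ, n, α, and the simulated sample are
hypothetical, chosen only for illustration.

import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: known sigma, hypothesized mean mu0, level alpha.
mu0, sigma, n, alpha = 0.0, 1.0, 25, 0.05

# One hypothetical sample; once observed, xbar is a fixed number (item 2).
sample = rng.normal(loc=0.3, scale=sigma, size=n)
xbar = sample.mean()

# The estimator Xbar_n is a random variable (item 3): simulate many
# replications of it under H0: mu = mu0, where Xbar_n ~ Normal(mu0, sigma^2/n).
xbar_reps = rng.normal(loc=mu0, scale=sigma / np.sqrt(n), size=100_000)

# Monte Carlo approximation of (9.1): the probability, computed under H0,
# that Xbar_n falls at least as far from mu0 as the observed xbar did.
P = np.mean(np.abs(xbar_reps - mu0) >= np.abs(xbar - mu0))
print(P, P <= alpha)  # reject H0 if and only if P <= alpha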
To apply (9.1), we must compute P . In Section 8.4, we overcame that
technical difficulty by appealing to the Central Limit Theorem. This allowed
us to approximate P even when we did not know the distribution of the
Xi, but only for reasonably large sample sizes. However, if we know that
X1, . . . , Xn are normally distributed, then it turns out that we can calculate
P exactly, even when n is small.
Case 1: The Population Variance is Known
Under the null hypothesis that µ = µ0, X1, . . . , Xn ∼ Normal(µ0, σ²), and
therefore
X̄n ∼ Normal(µ0, σ²/n).
This is the exact distribution of X̄n, not an asymptotic approximation. We
convert X̄n to standard units, obtaining
Z = (X̄n − µ0) / (σ/√n) ∼ Normal(0, 1).
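As a quick check that this is an exact rather than a large-sample result,
the following simulation sketch (not from the text; µ0, σ, and the
deliberately small n are hypothetical) verifies that Z behaves like a
standard normal random variable even for a small sample size.

import numpy as np

rng = np.random.default_rng(1)
mu0, sigma, n = 10.0, 2.0, 5  # hypothetical values; n kept small on purpose

# Simulate many sample means Xbar_n from Normal(mu0, sigma^2) samples of
# size n, then convert each to standard units.
xbars = rng.normal(loc=mu0, scale=sigma, size=(200_000, n)).mean(axis=1)
z = (xbars - mu0) / (sigma / np.sqrt(n))

# Mean near 0, variance near 1, and P(Z <= 1.96) near 0.975.
print(z.mean(), z.var(), np.mean(z <= 1.96))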
The observed value of Z is
z = (x̄n − µ0) / (σ/√n).
The significance probability is
P = Pµ0(|X̄n − µ0| ≥ |x̄n − µ0|)
  = Pµ0(|X̄n − µ0| / (σ/√n) ≥ |x̄n − µ0| / (σ/√n))
  = P(|Z| ≥ |z|)
  = 2P(Z ≥ |z|).
In this case, the test that rejects H0 if and only if P ≤ α is sometimes called
the 1-sample z-test. The random variable Z is the test statistic.
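This computation is straightforward to carry out in software. Below is a
minimal sketch of the 1-sample z-test, assuming σ is known; the data, µ0,
σ, and level 0.05 are hypothetical, and the helper function z_test is ours,
not part of any particular library.

import numpy as np
from scipy.stats import norm

def z_test(sample, mu0, sigma):
    """1-sample z-test with known sigma: returns (z, P)."""
    n = len(sample)
    xbar = np.mean(sample)
    z = (xbar - mu0) / (sigma / np.sqrt(n))  # observed value of the test statistic
    P = 2 * norm.sf(abs(z))                  # P = 2 P(Z >= |z|), Z ~ Normal(0, 1)
    return z, P

# Hypothetical data and hypothesized mean.
sample = np.array([9.6, 10.4, 10.1, 9.9, 10.8, 10.3])
z, P = z_test(sample, mu0=10.0, sigma=0.5)
print(z, P, P <= 0.05)  # reject H0: mu = 10.0 at level 0.05 iff P <= 0.05

Here norm.sf(|z|) is the standard normal survival function, i.e. P(Z ≥ |z|),
so doubling it gives the two-sided significance probability derived above.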
Before considering the case of an unknown population variance, we re-
mark that it is possible to derive point estimators from hypothesis tests. For
testing H0 : µ = µ0 vs. H1 : µ ≠ µ0, the test statistics are
Z(µ0) = (X̄n − µ0) / (σ/√n).
If we observe X̄n = x̄n, then what value of µ0 minimizes |z(µ0)|? Clearly,