Some Comments about Chapter 2 of Hollander & Wolfe


Section 2.1

Since I believe that STAT 554 covers this material more thoroughly than H&W do, I hope you're already quite comfortable with the material in this section. Still, I'll add some comments below.
p. 21, line 1
While the test as described is a level alpha test, it is also a size alpha test, and indicating that the size is alpha gives more information (since a level alpha test can have size alpha or a size less than alpha).
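To make the level-versus-size distinction concrete, here is a small Python sketch (the values of n, p0, and alpha are made up, not taken from H&W) that computes the exact size of a nonrandomized upper-tailed binomial test. Because the binomial distribution is discrete, only certain sizes are attainable, and the attained size is typically below the nominal alpha unless alpha happens to be an attainable tail probability.

```python
# Exact size of the nonrandomized test "reject if B >= c" for an upper-tailed
# binomial test of H0: p = p0, where B ~ Binomial(n, p0) under H0.
from scipy.stats import binom

n, p0, alpha = 20, 0.5, 0.05   # illustrative values only

# P(B >= c | p0) for each candidate critical value c
attained = {c: binom.sf(c - 1, n, p0) for c in range(n + 1)}

# smallest critical value giving a level alpha test, and its exact size
c_star = min(c for c, size in attained.items() if size <= alpha)
print(c_star, attained[c_star])   # c = 15, size approximately 0.0207 < 0.05
```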
pp. 21-22
Note that they fail to use a continuity correction for their large sample approximation. I've found that the use of a continuity correction tends to make the normal approximation better.
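As a small illustration (the values of n, p0, and b below are made up, not from the text), here is the exact upper-tail p-value alongside the normal approximation computed with and without a continuity correction:

```python
from math import sqrt
from scipy.stats import binom, norm

n, p0, b = 20, 0.5, 14          # observed B = b, testing H0: p = p0 vs. p > p0
mean, sd = n * p0, sqrt(n * p0 * (1 - p0))

exact     = binom.sf(b - 1, n, p0)           # P(B >= b | p0)
approx    = norm.sf((b - mean) / sd)         # no continuity correction
approx_cc = norm.sf((b - 0.5 - mean) / sd)   # with continuity correction

print(exact, approx, approx_cc)
```

For these particular values the exact p-value is about 0.058, the uncorrected approximation is about 0.037, and the corrected approximation is about 0.059.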
p. 22, Example 2.1
I think that this is an odd choice for an initial example for Ch. 2. I would guess that p is not constant for all nutrient-poor and dry soil sites, and I would also worry about the lack of independence (thinking that there are dependencies related to the spatial orientation of the gaps in the clumps). One way to firm things up would be to define a fixed population of finite size, take a simple random sample from it, and make an inference about the population proportion using the hypergeometric distribution.
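A minimal sketch of that finite-population approach, with entirely hypothetical numbers:

```python
# Exact test about the number M of "success" sites in a fixed finite population
# of N sites, based on a simple random sample of n sites containing x successes.
# Under H0: M = M0, the sample count X is hypergeometric.
from scipy.stats import hypergeom

N, n, x = 500, 25, 12   # population size, sample size, observed count (hypothetical)
M0 = 150                # hypothesized number of success sites in the population

# scipy's argument order is (total population, number of successes, sample size)
p_value = hypergeom.sf(x - 1, N, M0, n)   # P(X >= x) under H0 (upper-tailed)
print(p_value)
```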
pp. 23-24, Comment 1
The sign test is covered in Ch. 3. When I cover this test about the distribution median there, I'll extend the scheme to handle tests about quantiles other than the median.
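As a preview, here is one standard way to do it (not necessarily the exact scheme I'll present in class): to test that the q-th quantile of a continuous distribution equals a specified value x0, count the observations exceeding x0; under the null hypothesis that count is binomial with success probability 1 - q, so the sign-test machinery carries over with 1 - q in place of 1/2.

```python
# Sign-test-style test about the 0.75 quantile (the data are simulated, purely
# for illustration).  Requires SciPy >= 1.7 for binomtest.
import numpy as np
from scipy.stats import binomtest

rng = np.random.default_rng(0)
data = rng.normal(size=40)      # placeholder data
x0, q = 0.5, 0.75               # hypothesized value of the q-th quantile

b = int(np.sum(data > x0))      # number of observations above x0
result = binomtest(b, n=len(data), p=1 - q, alternative="two-sided")
print(result.pvalue)
```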
p. 24, Comment 2
I find it to be extremely odd to refer to a test about the parameter of a specified parametric model as being a distribution-free test! By the same logic we could say that the t test is a distribution-free test about the mean of a normal distribution.
p. 25, between (2.15) and (2.16)
I refer to the symbol under consideration as a binomial coefficient, and a lot of people read it as "n choose b."
p. 26, Comment 9
Hopefully you're very comfortable with p-values. Still, I suggest that you read Comment 9 from the text very carefully as a review.
pp. 26-27, Comment 10
Note that the powers are rather low because the sample size is very small and the alternative under consideration isn't very different from the null hypothesis value. Also note that H&W use beta for the probability of a type II error, like a lot of elementary books do, while I use beta for power, like advanced books sometimes do.
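To see why the powers come out small, here is a quick computation (with made-up values of n and the alternatives, not H&W's exact setup) of the power of an exact upper-tailed binomial test at alternatives near the null value:

```python
# Power of the exact upper-tailed test of H0: p = 0.5, "reject if B >= c",
# evaluated at several alternatives p1; power is P(B >= c | p1).
from scipy.stats import binom

n, p0, alpha = 10, 0.5, 0.05
c = min(c for c in range(n + 1) if binom.sf(c - 1, n, p0) <= alpha)

for p1 in (0.6, 0.7, 0.8):
    power = binom.sf(c - 1, n, p1)   # I'd call this beta; H&W's beta is
    print(p1, round(power, 3))       # 1 minus it (the type II error probability)
```

With n = 10 the power is only about 0.05 at p1 = 0.6 and about 0.15 at p1 = 0.7.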
p. 28, Problem 1
I think that a two-sample test (of the kind that will be covered in Ch. 10) may be more appropriate, since one has two samples of binary observations. Still, there would be concerns about whether the assumptions are satisfied for such a test.
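I won't try to reproduce the Ch. 10 procedure here, but as a stand-in, one common exact way to compare two samples of binary observations is Fisher's exact test on the resulting 2x2 table (the counts below are hypothetical):

```python
from scipy.stats import fisher_exact

table = [[12,  8],   # sample 1: successes, failures  (hypothetical counts)
         [ 5, 15]]   # sample 2: successes, failures
oddsratio, p_value = fisher_exact(table, alternative="two-sided")
print(p_value)
```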
p. 29, Problem 9
I tend to get annoyed with silly situations for problems. What is the significance of the value 0.6 for this problem? It seems to me that an interval estimate of p would be more meaningful than a test with 0.6 as the null hypothesis value. Besides, since the voles were of different sizes, I think it would be better to study the relationship between vole body mass and the probability of success, and also to incorporate the sizes of the adders in the model (note that the title of the article on which the problem is based is "The advantage of a big head: swallowing performance in adders").
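Here is a sketch of the kind of model I have in mind (logistic regression with vole body mass and adder size as predictors). The data below are simulated placeholders, not the data from the article:

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)
n = 50
vole_mass  = rng.uniform(20, 60, size=n)    # made-up body masses
adder_head = rng.uniform(10, 20, size=n)    # made-up head sizes
logit = -4.0 - 0.10 * vole_mass + 0.45 * adder_head
success = rng.binomial(1, 1 / (1 + np.exp(-logit)))   # simulated outcomes

# logistic regression: P(success) modeled as a function of both predictors
X = sm.add_constant(np.column_stack([vole_mass, adder_head]))
fit = sm.GLM(success, X, family=sm.families.Binomial()).fit()
print(fit.summary())
```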

Section 2.2

Most of the material in this section is presented in STAT 554. I'll make a few comments below.
p. 29, (2.20) & (2.21)
The standard deviation of an estimator is often referred to as the standard error of the estimator. So (2.20) gives the standard error. (2.21) gives the estimated standard error. (Note: Some statistical software packages sometimes label an estimated standard error as a standard error. My guess is that this is done to make the output less cluttered.)
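Assuming (2.20) and (2.21) are the usual sqrt(p(1-p)/n) and its plug-in version, here is a tiny illustration of the distinction (the values are made up):

```python
from math import sqrt

n, b = 50, 18          # sample size and number of successes (made up)
p_hat = b / n

def se(p, n):          # standard deviation of p-hat, as a function of p
    return sqrt(p * (1 - p) / n)

print(se(0.5, n))      # the standard error if the true p were 0.5
print(se(p_hat, n))    # the estimated standard error, using p-hat
```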
p. 30, Comment 17
H&W don't do a good job of stating the good points of the alternative estimator reported on by Hodges and Lehmann in their 1950 paper. Although, asymptotically, the alternative estimator doesn't beat the standard estimator for any value of p when the MSE is used as the measure of goodness, for any finite n the alternative estimator is better for values of p in an interval that is symmetric about 0.5, with the width of the interval decreasing as n increases. For smallish n, the interval can be rather wide. For example, for n = 16, the alternative estimator has a smaller MSE than the standard estimator if p belongs to (0.2, 0.8).
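If, as I believe, the estimator in question is the minimax estimator p-tilde = (B + sqrt(n)/2)/(n + sqrt(n)), its MSE works out to the constant n/(4(n + sqrt(n))^2), and the n = 16 claim can be checked numerically:

```python
import numpy as np

n = 16
p = np.linspace(0.01, 0.99, 981)   # grid of p values

mse_standard = p * (1 - p) / n                                  # MSE of B/n
mse_tilde = n / (4 * (n + np.sqrt(n)) ** 2) * np.ones_like(p)   # constant MSE

better = p[mse_tilde < mse_standard]
print(better.min(), better.max())   # roughly 0.2 and 0.8 for n = 16
```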

Section 2.3

I'm very disappointed that H&W don't provide much information about the Clopper-Pearson confidence interval. During the last hour or so of the first lecture, I'll try to explain it to you and provide some more details. (I don't have a lot of comments here since copies of my overhead projector presentation pertaining to this section will be / were distributed in class.)
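As a preview, the Clopper-Pearson ("exact") interval can be written in terms of beta quantiles; here is a small sketch (the counts are made up):

```python
# Clopper-Pearson confidence interval for p based on b successes in n trials.
from scipy.stats import beta

n, b, conf = 30, 9, 0.95
alpha = 1 - conf

lower = beta.ppf(alpha / 2, b, n - b + 1) if b > 0 else 0.0
upper = beta.ppf(1 - alpha / 2, b + 1, n - b) if b < n else 1.0
print(lower, upper)
```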