Some Comments about Chapter 1 of Hollander & Wolfe


Section 1.1

As I imply in my comments about the preface, H&W tend to glorify nonparametric procedures and downplay the fact that they do have restrictions, even though a full parametric model doesn't have to be assumed. But nonparametric methods do have their good points, and p. 1 of the text describes some of them.

Section 1.2

This section introduces the notion of a distribution-free test statistic using the Wilcoxon rank sum test from Ch. 4 as an example. During the first lecture, I'll do something similar using the sign test from Ch. 3 as an example: the sign test is a distribution-free test about the median of a continuous distribution. Then I'll mention some other nonparametric tests for one-sample problems (or matched-pairs observations, which some refer to as paired samples), and point out that an additional assumption of symmetry has to be made if one wants to view them as being distribution-free tests about the distribution median/mean.
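To preview the idea in a bit of code, here is a minimal sketch of the sign test computation (the function name and the toy data are my own, not from H&W or StatXact):

```python
from math import comb

def sign_test_pvalue(data, m0):
    """Exact sign test of H0: median = m0 against H1: median > m0.

    For a continuous distribution, each observation exceeds m0 with
    probability 1/2 under H0, so B = #{x_i > m0} ~ Binomial(n, 1/2)
    no matter which continuous distribution generated the data.
    """
    signs = [x - m0 for x in data if x != m0]  # drop observations equal to m0
    n = len(signs)
    b = sum(1 for s in signs if s > 0)
    # upper-tail p-value: P(Binomial(n, 1/2) >= b)
    return sum(comb(n, k) for k in range(b, n + 1)) / 2**n

# Toy data; test H0: median = 0 vs H1: median > 0
data = [0.8, 1.2, -0.3, 2.1, 0.5, 1.7, -0.1, 0.9]
p = sign_test_pvalue(data, 0.0)  # B = 6 out of n = 8, so p = 37/256 ≈ 0.145
```

A two-sided p-value would combine the appropriate tail probabilities; Ch. 3 of H&W treats the sign test in detail.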

While H&W describe what is meant by a distribution-free procedure, they don't make a point of describing how nonparametric is not the same as distribution-free. Basically, a nonparametric procedure is one which is not based on a particular parametric model, and can be applied if any one of a large number of parametric models happens to be the appropriate one. A variety of density estimation methods are referred to as nonparametric because they can be applied the same way whatever the continuous distribution underlying the data may be. We can say that the simple sample median is a nonparametric estimate of the distribution median because it can provide a sensible estimate of the median whatever the underlying distribution is. Since the sampling distribution of the sample median depends on what the underlying distribution is, I wouldn't say that the sample median is a distribution-free estimator, but I would refer to it as being nonparametric. So you can think of distribution-free procedures as being a subset of nonparametric procedures.

Interestingly, although the null sampling distribution of a distribution-free test statistic doesn't depend on the distribution underlying the data, the sampling distribution of the test statistic when the alternative hypothesis is true does depend on the distribution(s) underlying the data (and so the power function of a distribution-free test is not distribution-free).
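To make the distinction concrete, here is a small simulation sketch (my own illustration, not from H&W). The sign statistic B has the same Binomial(n, 1/2) null distribution whether the data are normal or Cauchy, but the sampling distribution of the sample median has a different spread under the two distributions. The choices of n and the number of replications are arbitrary:

```python
import math
import random
import statistics

random.seed(1)

def sign_stat(sample, m0=0.0):
    # B = number of observations above the hypothesized median m0
    return sum(1 for x in sample if x > m0)

def cauchy():
    # standard Cauchy via the inverse CDF; median 0, very heavy tails
    return math.tan(math.pi * (random.random() - 0.5))

n, reps = 15, 20000

# Null distribution of B under two very different continuous distributions,
# both with median 0: distribution-free, both close to Binomial(15, 1/2).
mean_n = statistics.mean(sign_stat([random.gauss(0, 1) for _ in range(n)])
                         for _ in range(reps))
mean_c = statistics.mean(sign_stat([cauchy() for _ in range(n)])
                         for _ in range(reps))
# mean_n and mean_c are both close to n/2 = 7.5

# The sampling distribution of the sample median is NOT distribution-free:
# its spread depends on which distribution underlies the data.
sd_n = statistics.stdev(statistics.median([random.gauss(0, 1) for _ in range(n)])
                        for _ in range(reps))
sd_c = statistics.stdev(statistics.median([cauchy() for _ in range(n)])
                        for _ in range(reps))
# sd_c comes out noticeably larger than sd_n
```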


Section 1.3

While the examples given in this section are okay, without looking at the data it isn't really clear why nonparametric methods should be used as opposed to some other types of methods.

Note that Example 1.5 seems to deal with a categorical data analysis problem, as opposed to what is usually thought of as nonparametric statistics. In Ch. 2 and Ch. 10, H&W cover some settings that could be covered in a course in categorical data analysis. I plan to cover those chapters somewhat briefly, since they are a bit out of the mainstream of nonparametric statistics. But it certainly won't hurt for you to get a bit more comfortable with such categorical data analysis settings, and we'll see some connections with nonparametric statistics. (Plus, since StatXact does both categorical and nonparametric procedures, you'll get some experience in using the software for categorical data analysis problems.)


Section 1.4

This section of Ch. 1 describes what is covered in each of the subsequent chapters. Since the topics in the course syllabus closely match the chapters in the book, by reading this section you should get a fairly good idea about the types of things we'll cover this semester. But I do intend to deviate from the text a bit, mainly by adding some material. For example, in Chapters 4 and 5, H&W seem to assume that a treatment has a constant effect if it has any effect at all; that is, they assume that if the treatment does anything, it shifts the distribution of values by a constant amount. Since this assumption may not be true all of the time, I plan to discuss the use of the procedures in Chapters 4 and 5 in more general settings in addition to the simple shift model described in H&W.
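As a small illustration of what the shift assumption does and doesn't cover (my own sketch, not from H&W): under a shift model every quantile of the treated distribution moves by the same constant, while an effect on spread moves different quantiles in different directions.

```python
import random

random.seed(2)

delta = 1.0
x = [random.gauss(0, 1) for _ in range(5000)]   # control responses
y_shift = [xi + delta for xi in x]              # shift model: constant treatment effect
y_scale = [2 * xi for xi in x]                  # effect on spread, not location

def quantile(data, q):
    # simple empirical quantile: order statistic at position q*n
    s = sorted(data)
    return s[int(q * len(s))]

# Under the shift model, every quantile moves by the same amount delta:
d25_shift = quantile(y_shift, 0.25) - quantile(x, 0.25)  # = delta
d75_shift = quantile(y_shift, 0.75) - quantile(x, 0.75)  # = delta

# Under the scale change, the quantile differences are not constant
# (here the lower quartile moves down while the upper quartile moves up):
d25_scale = quantile(y_scale, 0.25) - quantile(x, 0.25)  # negative
d75_scale = quantile(y_scale, 0.75) - quantile(x, 0.75)  # positive
```

In the second case no single shift parameter describes the treatment effect, which is why it's worth discussing how the rank procedures behave beyond the simple shift model.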


Section 1.5

Since I intend to follow the text rather closely in this course, it'll be good for you to read this section in order to gain a better understanding of the format, organization, and philosophy of the text.

It can be noted that I, like H&W, at times like to compare approximate results with exact ones, based on the same data, in order to learn something about the accuracy of various approximations.
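For instance, one such comparison can be done by hand for the sign test: the exact binomial upper-tail p-value versus the normal approximation with a continuity correction. This is a generic sketch of my own, not a calculation taken from the text:

```python
from math import comb, erf, sqrt

def exact_upper(n, b):
    # exact P(Binomial(n, 1/2) >= b)
    return sum(comb(n, k) for k in range(b, n + 1)) / 2**n

def approx_upper(n, b):
    # normal approximation with continuity correction:
    # P(B >= b) ~ P(Z >= (b - 0.5 - n/2) / (sqrt(n)/2))
    z = (b - 0.5 - n / 2) / (sqrt(n) / 2)
    return 0.5 * (1 - erf(z / sqrt(2)))  # upper-tail standard normal probability

n, b = 20, 14
exact = exact_upper(n, b)    # ≈ 0.0577
approx = approx_upper(n, b)  # ≈ 0.0588
```

Here the approximation happens to be quite accurate; for smaller n or more extreme tail values the discrepancy can be larger, which is the sort of thing these comparisons reveal.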

In the Ties paragraph, H&W indicate that after adjusting for ties, the "adjusted procedure should then be viewed as an approximation." But with StatXact, sometimes ties can be handled in such a way that one is able to use an exact null sampling distribution for a test statistic even though there are tied values.
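The underlying idea — sketched here in generic code of my own, not StatXact itself — is to condition on the observed tie pattern: assign mid-ranks to the pooled data, then enumerate all equally likely assignments of the pooled observations to the two samples. That enumeration gives an exact conditional null distribution for the rank sum even with ties present:

```python
from itertools import combinations

def midranks(values):
    # average (mid-) ranks for tied values, 1-based
    order = sorted(range(len(values)), key=lambda i: values[i])
    ranks = [0.0] * len(values)
    i = 0
    while i < len(values):
        j = i
        while j + 1 < len(values) and values[order[j + 1]] == values[order[i]]:
            j += 1                      # extend over the run of tied values
        avg = (i + j) / 2 + 1           # average of ranks i+1, ..., j+1
        for k in range(i, j + 1):
            ranks[order[k]] = avg
        i = j + 1
    return ranks

x = [1.2, 3.4, 3.4, 5.0]   # sample 1 (toy data with ties)
y = [2.0, 3.4, 6.1]        # sample 2
pooled = x + y
r = midranks(pooled)
n1 = len(x)
w_obs = sum(r[:n1])        # observed rank sum of sample 1 (here 15.0)

# Exact conditional null distribution: under H0, every subset of size n1
# is equally likely to be the set of sample-1 positions.
sums = [sum(r[i] for i in c) for c in combinations(range(len(pooled)), n1)]
p = sum(1 for s in sums if s >= w_obs) / len(sums)  # exact upper-tail p-value
```

Full enumeration is only feasible for small samples, of course; the point is that the resulting p-value is exact (conditional on the ties), not an approximation.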


Section 1.6

Like H&W, I tend to use StatXact and Minitab to do most of my nonparametric statistics computing. While I think that StatXact is the best software package for standard nonparametric statistical methods, there is nothing special about Minitab except that it is easy to use, and it seems to be an adequate complement to StatXact. It should be noted that Minitab typically uses normal and chi-square approximations, with continuity corrections and adjustments for ties in some cases, for its nonparametric computations, with the exception of the sign test, which is done exactly. Some software packages for statistics, such as S-Plus and SAS, give some exact p-values for nonparametric procedures, but none of the others compare well to StatXact. Note that PROC-StatXact is not a SAS product, but rather a Cytel product that can be used with SAS (and since it's not a SAS product, a lot of places that use SAS won't have PROC-StatXact installed).


Section 1.7

This section gets into the history of nonparametric statistics, which I find to be interesting.

Although I think it's safe to say that the use of nonparametric procedures really took off in the mid 1940s with the publication of Wilcoxon's important paper, H&W point out that several nonparametric procedures were already in place by the 1930s. But to indicate that the true beginning of nonparametric statistics was 1936, as Savage did, seems a bit questionable. One can note that Spearman had a paper on rank correlation published in 1904, and the 1936 paper by Hotelling and Pabst suggests that prior to Spearman, Galton did some similar things. Also, Karl Pearson treated rank correlation in a 1907 book.

It can be noted that H&W include jackknifing and bootstrapping as nonparametric procedures (but some nonparametric books do not cover these important techniques). I covered these topics to a limited extent in my summer course (in Summer Session 2002), and I know that it takes a while to present even an elementary description of jackknifing and bootstrapping. My guess is that we won't cover these topics in general in STAT 657, but I will cover them to the extent that they are dealt with in H&W.


Section 1.8

This section is just a large table that is a nice complement to Section 1.4.