Some Comments About Ch. 7 of Text



On p. 300 of the text, they indicate that simple regression is a one-degree-of-freedom fit, even though there are two unknown parameters (the slope and the intercept). They do this because the predictor enters the model in a linear way, and has 1 df associated with it. (The other df is associated with the intercept.) In other places in the text and videos, they follow the same convention. For example, if we have just a single predictor variable, and fit it using a cubic spline with 3 knots, then the regression model has 7 df, but they refer to the variable as being represented with 6 df (since they don't count the intercept term). With more than one predictor, not counting the intercept makes perfect sense, because we only need one intercept no matter how many predictor variables we use. But it gets a bit confusing when there is just one predictor, because the regression model (which includes the intercept) has one more df than the df associated with the predictor variable.
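
To make the df bookkeeping concrete, here is a minimal sketch in Python (the data and knot locations are arbitrary illustrative choices, not from the text) that builds the truncated power basis for a cubic spline with 3 knots and counts its columns:

    import numpy as np

    x = np.linspace(0, 10, 50)
    knots = [2.5, 5.0, 7.5]  # 3 interior knots (arbitrary locations)

    # Truncated power basis for a cubic spline:
    # intercept, x, x^2, x^3, plus one truncated cubic per knot.
    columns = [np.ones_like(x), x, x**2, x**3]
    columns += [np.maximum(x - k, 0.0) ** 3 for k in knots]
    X = np.column_stack(columns)

    print(X.shape[1])      # 7 columns: df for the regression model
    print(X.shape[1] - 1)  # 6: df attributed to the predictor (intercept excluded)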



On p. 306, the text has "All but these k nearest neighbors get weight zero," but for the Epanechnikov quadratic kernel on p. 7-15 of the course notes, as well as for some other kernels, only k - 1 neighbors get nonzero weight: the bandwidth is taken to be the distance to the kth nearest neighbor, so that neighbor sits exactly at the boundary of the kernel's support, where the weight is zero.
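
This can be checked numerically. The following sketch (with made-up data) sets the bandwidth to the distance of the kth nearest neighbor, as in the nearest-neighbor bandwidth formulation, so that neighbor lands exactly where the Epanechnikov weight vanishes:

    import numpy as np

    def epanechnikov(t):
        # Epanechnikov quadratic kernel: (3/4)(1 - t^2) for |t| <= 1, else 0
        return np.where(np.abs(t) <= 1, 0.75 * (1.0 - t**2), 0.0)

    rng = np.random.default_rng(0)
    x = np.sort(rng.uniform(0, 10, 20))  # toy training inputs
    x0, k = 5.0, 6                       # target point and neighborhood size

    dist = np.abs(x - x0)
    h = np.sort(dist)[k - 1]   # bandwidth = distance to the kth nearest neighbor
    w = epanechnikov(dist / h)

    # The kth nearest neighbor is at distance exactly h, so its weight is
    # (3/4)(1 - 1) = 0; only the k - 1 closer points get nonzero weight.
    print(np.count_nonzero(w))  # prints 5, i.e., k - 1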



The Pros and Cons of GAMs section on pp. 309-310 of the text is a good one. The main con is that it is not easy to account for interactions, since a GAM is additive in the predictors. However, the methods of Ch. 8 can automatically capture interactions, so they should be kept in mind if one suspects that interactions may be important.
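
As a toy illustration of this point (a sketch with simulated data, not an example from the text): an additive spline fit, in the spirit of a GAM, misses a pure interaction between two predictors, while a tree-based method of the kind covered in Ch. 8 picks it up without any manual interaction term:

    import numpy as np
    from sklearn.ensemble import RandomForestRegressor
    from sklearn.linear_model import LinearRegression
    from sklearn.model_selection import train_test_split
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import SplineTransformer

    rng = np.random.default_rng(0)
    X = rng.uniform(-1, 1, size=(500, 2))
    y = X[:, 0] * X[:, 1] + rng.normal(scale=0.05, size=500)  # pure interaction
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

    # Additive model: separate spline basis for each predictor, no cross terms.
    additive = make_pipeline(SplineTransformer(degree=3, n_knots=5),
                             LinearRegression()).fit(X_tr, y_tr)
    forest = RandomForestRegressor(random_state=0).fit(X_tr, y_tr)

    print(additive.score(X_te, y_te))  # test R^2 near 0: interaction missed
    print(forest.score(X_te, y_te))    # much higher: interaction captured automatically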