Choosing between jackknife & bootstrap estimates of bias & standard error
You can never go too far wrong by choosing the bootstrap over the
jackknife, since in some cases the bootstrap is clearly superior, and
in the other cases both methods should give similar estimates. But in
cases for which they should give very similar values, there are some
reasons for preferring the jackknife estimates. For one thing, they
are nonrandom. For another, with small values of n they can be
quicker to compute. Lastly, it's often easier to work out a closed-form
solution for the jackknife estimate (an exact expression --- a function
of the x_i), since, with the exception of a sample moment (the sample
mean (sample 1st moment), the sample 2nd moment, the sample 3rd moment,
etc.), computing an ideal bootstrap estimate is generally quite tough
to do.
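The nonrandomness and closed-form points can be made concrete with a minimal sketch (the helper names `jackknife_se` and `bootstrap_se`, the toy data, and the resample count B are illustrative assumptions, not anything fixed above): for the sample mean, the jackknife estimate of standard error reproduces the familiar closed form exactly, while a bootstrap estimate varies from run to run.

```python
import math
import random

def jackknife_se(x, stat):
    """Jackknife estimate of the standard error of stat(x)."""
    n = len(x)
    # Leave-one-out replications theta_(i)
    thetas = [stat(x[:i] + x[i+1:]) for i in range(n)]
    theta_bar = sum(thetas) / n
    return math.sqrt((n - 1) / n * sum((t - theta_bar) ** 2 for t in thetas))

def bootstrap_se(x, stat, B=2000, seed=None):
    """Bootstrap estimate of the standard error of stat(x) from B resamples."""
    rng = random.Random(seed)
    n = len(x)
    reps = [stat([rng.choice(x) for _ in range(n)]) for _ in range(B)]
    mean_rep = sum(reps) / B
    return math.sqrt(sum((r - mean_rep) ** 2 for r in reps) / (B - 1))

mean = lambda x: sum(x) / len(x)
x = [2.0, 4.0, 7.0, 1.0, 9.0, 3.0]  # toy data for illustration

# For the sample mean, the jackknife SE is nonrandom and equals the
# closed form sqrt(sum((x_i - xbar)^2) / (n * (n - 1))) exactly.
n, xbar = len(x), mean(x)
closed_form = math.sqrt(sum((xi - xbar) ** 2 for xi in x) / (n * (n - 1)))
print(jackknife_se(x, mean))          # matches closed_form (up to rounding)
print(bootstrap_se(x, mean, seed=0))  # changes with the seed
```

Rerunning `bootstrap_se` with a different seed gives a different answer; rerunning `jackknife_se` never does, which is the nonrandomness advantage in action.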
The cases for which jackknife estimates are competitive with bootstrap
estimates are as follows:
- for estimates of standard error, the estimator has to be
  - smooth (so not a sample median, or a sample trimmed mean), and
  - linear (like the sample mean, or the sample 2nd moment);
- for estimates of bias, the estimator has to be
  - the plug-in estimator,
  - smooth (so not a sample median, or a sample trimmed mean), and
  - quadratic (noting that a linear plug-in estimator is guaranteed
    to be unbiased for the estimand (e.g., the sample 2nd moment is
    an unbiased estimator of the distribution's 2nd moment), and so
    no estimate of bias should be needed for a linear plug-in
    estimator).
(Ask me about linear and quadratic estimators if you don't understand
what they are.)
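The bias case can be sketched in the same spirit (the names `jackknife_bias` and `plugin_variance` and the toy data are invented for illustration): the plug-in variance is a smooth, quadratic plug-in estimator, and for any data set its jackknife bias estimate works out to exactly -s^2/n, so subtracting the bias estimate recovers the unbiased sample variance s^2.

```python
import math

def jackknife_bias(x, stat):
    """Jackknife estimate of the bias of the plug-in estimator stat."""
    n = len(x)
    theta_hat = stat(x)
    # Leave-one-out replications and their average theta_(.)
    thetas = [stat(x[:i] + x[i+1:]) for i in range(n)]
    theta_dot = sum(thetas) / n
    return (n - 1) * (theta_dot - theta_hat)

def plugin_variance(x):
    """Plug-in (divide-by-n) variance: a quadratic plug-in estimator."""
    m = sum(x) / len(x)
    return sum((xi - m) ** 2 for xi in x) / len(x)

x = [2.0, 4.0, 7.0, 1.0, 9.0, 3.0]  # toy data for illustration
n = len(x)
b = jackknife_bias(x, plugin_variance)

# Unbiased sample variance s^2 (divide by n - 1)
xbar = sum(x) / n
s2 = sum((xi - xbar) ** 2 for xi in x) / (n - 1)

# The jackknife bias estimate is exactly -s^2/n here, so the
# bias-corrected estimate plugin_variance(x) - b equals s^2.
print(plugin_variance(x) - b)
```

Note that no such correction is needed for a linear plug-in estimator like the sample 2nd moment: `jackknife_bias` applied to it returns (up to rounding) zero.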
Click here to see explanations for the answers to Quiz #5.
Click here to see explanations for the answers to Quiz #7.