Well, it’s a pilot study, so expectations are pretty low to begin with. Still, the results suggest that a larger study might be worthwhile. Certainly I think that acupuncture does *something*, even if I’m not sure exactly what.

- Alan

With a sample size of 17 you can’t really draw any conclusions. At the most it points to an avenue of further research.

It’d be nice to have the original paper available, but it seems you have to pay for access.

I was taught this over ten years ago by my first girlfriend. It works exactly as you described. I don’t use it much though as it seems much easier to just pop some meds and have a nap.

Had a girlfriend that was in a dance and theater company, where she learned this and passed it on to me. It is exactly as you describe and works every time. Not related to the acupuncture stuff I guess, just wanted to chip in about this method, pretty awesome. I’ll go back to the world cup thread now.

Re: statistically significant:

Claiming you can’t draw any conclusions with a sample size of 17 is incorrect in some cases. If you were testing whether a coin was unfairly weighted (a binomial test), 6 flips would be conclusive at an alpha of 0.05: 6 heads and 0 tails would meet that level of stringency. Small samples can yield significant results if the magnitude of the effect (a large odds ratio, or high penetrance) is large enough.
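A quick sanity check of that coin example, assuming a two-sided exact binomial test (no library needed, just enumerating outcomes):

```python
from math import comb

def binom_two_sided_p(k, n, p=0.5):
    """Exact two-sided binomial p-value: sum the probabilities of all
    outcomes at least as unlikely as the observed count k out of n."""
    probs = [comb(n, i) * p**i * (1 - p)**(n - i) for i in range(n + 1)]
    observed = probs[k]
    return sum(q for q in probs if q <= observed + 1e-12)

# 6 heads in 6 flips of a supposedly fair coin:
print(binom_two_sided_p(6, 6))  # 0.03125, significant at alpha = 0.05
# 5 heads in 5 flips doesn't make the cut two-sided:
print(binom_two_sided_p(5, 5))  # 0.0625
```

So 6 for 6 is indeed the smallest all-heads run that clears alpha = 0.05 two-sided.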

This test had non-overlapping confidence intervals: +32 ± 7 m and −1 ± 11 m; p < 0.01. So it’s statistically significant (i.e., we’d expect results like these to arise from random chance only about 1 time in 100).

It doesn’t explain the mechanism behind *why* this happened, and it could still be winner’s curse that this effect was observed (which also brings into question how many times this has been attempted, and what a suitable multiple hypothesis testing adjustment one would like to make.)

I’d say that the next step would be to fund a larger replication study. For an effect of this magnitude, 100 people would be great to really knock that p value out of the park. Chances are they got lucky and it will not replicate (or replicate with a much more modest effect size), but don’t say something can’t be significant because the sample size is 17.

PS: the abstract is free.

I didn’t say it’s generally impossible to draw conclusions from a sample of size 17; I said determining statistical significance is tricky with such small samples. It’s tricky because with n = 17 you cannot invoke a central limit theorem to make distribution-free inferences, so test statistics generally have unknown distributions under the null, distributions that asymptotic approximations may capture very poorly. There are many ways one could calculate the confidence intervals and p-values mentioned in the abstract, and some of them, including the most familiar ones, can give extremely misleading answers in very small samples.

Skedastic, you’re such a homo.

What I find most interesting is the result not cited in the abstract: the drop in the blood level of TNF alpha after acupuncture. If they can actually establish a deterministic, significant biochemical reaction to acupuncture, that would be much more decisive than wobbly external measures such as slightly higher exercise performance, let alone subjective statements about “feeling better”. And it would open the way for research to achieve the same effect without having to prick your skin with lots of needles.

Like anyone is going to trust German scientists anyway. You know they’re a unified country again now? Yeah, it’s true. I can’t believe it either.

Fair point, I’m only a low-level guy for stats. I’d think the Student’s t would be able to handle pretty low sample sizes, though. Isn’t that a fairly appropriate test for differences of means? I played around with “simulating” drawing 9 and 8 samples from the normal distribution with the means and SDs listed and applying the Student’s t.
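For what it’s worth, that simulation might look something like this. The group sizes (9 and 8) and the means/SDs (+32 ± 7 and −1 ± 11) are the figures quoted upthread; note the normal draws bake in exactly the normality assumption being questioned:

```python
import numpy as np
from scipy.stats import ttest_ind

rng = np.random.default_rng(0)

# Simulate the two groups under the (questionable) normality assumption,
# using the group sizes and summary statistics from the abstract.
treated = rng.normal(loc=32.0, scale=7.0, size=9)
control = rng.normal(loc=-1.0, scale=11.0, size=8)

# Welch's t-test, which does not assume equal variances
t, p = ttest_ind(treated, control, equal_var=False)
print(f"t = {t:.2f}, p = {p:.4g}")
```

With an effect this large relative to the SDs, the test comes out wildly significant on almost any draw, which is consistent with the “knock the p value out of the park” point above.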

One could also do permutation on the case/control labels…
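A minimal sketch of that permutation approach: pool the outcomes, repeatedly shuffle the case/control labels, and count how often the shuffled difference in means is at least as large as the observed one. The arrays here are hypothetical data, just chosen to match the 9-vs-8 group sizes discussed:

```python
import numpy as np

def permutation_p(x, y, n_perm=10_000, seed=0):
    """Two-sided permutation test for a difference in means: shuffle the
    pooled data n_perm times and count how often the shuffled |mean
    difference| meets or exceeds the observed one."""
    rng = np.random.default_rng(seed)
    pooled = np.concatenate([x, y])
    observed = abs(x.mean() - y.mean())
    count = 0
    for _ in range(n_perm):
        rng.shuffle(pooled)
        diff = abs(pooled[:len(x)].mean() - pooled[len(x):].mean())
        if diff >= observed:
            count += 1
    # +1 in numerator and denominator so p is never exactly zero
    return (count + 1) / (n_perm + 1)

# Hypothetical data with the group sizes discussed (9 vs 8):
x = np.array([39., 30., 25., 35., 28., 33., 40., 27., 31.])
y = np.array([-5., 4., -12., 10., -1., 7., -9., 2.])
print(permutation_p(x, y))
```

The nice thing is that this makes no distributional assumption at all about the noise, which sidesteps the small-sample objection above, at the cost of limited resolution (the smallest attainable p is 1/(n_perm + 1)).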

The t-stat based on the difference in means only follows the Student’s t distribution in small samples if the underlying noise is actually normal. There is usually no reason to believe that’s true in this sort of context, so in a sample of size 17, calculating p-values from the Student’s t distribution may be very misleading. Resampling methods are, as you say, an alternative.

But I’m robust and consistent in the presence of heteros.