Today I lectured fellow undergrads on the basics of null-hypothesis significance testing. It seems they actually grasped the concepts, too. How's that for a day?
Whoa. That's impressive. Null-hypothesis stats things make my head hurt unless I "just follow the rules". :)
I can imagine. Well, I'm a social scientist / researcher (and a programmer), so data analysis is what I do half of the time, including teaching people how to do it. People always confuse the null hypothesis with the alternative hypothesis, and what a P-value actually means. But yeah, it partly makes sense that it's difficult to grasp, because technically NHST isn't completely sound in terms of how it's used most of the time (Cohen, "The Earth Is Round (p < .05)").
I don't have the best knowledge of these things, but what always confused me was how inconsistent higher-level stats were about whether a low or a high number was good or bad. Or maybe I wasn't taught by the best people, I dunno. I'm also a learner by doing, and I was sitting in a class (Six Sigma) being told these things as rules rather than explanations, which is a sure way to make me not remember things. :|
I can imagine - learning without context, or without knowing what you'll have to do with it later, is pointless and makes it difficult to memorize and remember.
You probably find this the most boring topic ever (and that's OK of course), but basically the idea is that you have a research question, and you use data to check whether it's true. To this end, you formulate a null hypothesis (statement of no effect: your research question is false) and an alternative hypothesis (statement of an effect: your research question is right, and this is what you are interested in). Here's the twist: you calculate under the null hypothesis, but you want to say something about the alternative hypothesis. That is, you need the null hypothesis to calculate the statistical test and its associated P-value, even though your real interest is the alternative hypothesis, which represents your research question.
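To make that concrete, here's a minimal sketch in Python. The coin example and the numbers are my own hypothetical illustration, not something from the conversation above: H0 says the coin is fair (no effect), H1 says it's biased (an effect), and the P-value is computed entirely under H0.

```python
from math import comb

# Hypothetical illustration: is this coin fair?
# H0 (null): the coin is fair, P(heads) = 0.5      -> "no effect"
# H1 (alternative): the coin is biased, P(heads) != 0.5 -> "an effect"

def binom_pmf(k: int, n: int, p: float = 0.5) -> float:
    """Probability of exactly k heads in n flips, under H0 (p = 0.5)."""
    return comb(n, k) * p**k * (1 - p) ** (n - k)

def two_sided_p_value(heads: int, n: int) -> float:
    """Exact two-sided P-value: total probability, under H0, of every
    outcome at least as unlikely as the one actually observed."""
    observed = binom_pmf(heads, n)
    return sum(
        binom_pmf(k, n)
        for k in range(n + 1)
        if binom_pmf(k, n) <= observed + 1e-12  # tolerance for float ties
    )

# Suppose we flip 100 times and see 60 heads:
print(two_sided_p_value(60, 100))
```

Note that everything in the calculation assumes H0 is true; the alternative hypothesis never enters the math. It only enters at the end, when you decide whether the P-value is small enough to reject H0 in its favor.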
The P-value that you calculate for a statistical test indicates, roughly, how likely data like yours would be IF the null hypothesis were true (strictly speaking it's the probability of data at least as extreme as yours, assuming the null hypothesis, but this simplifies the explanation). In other words, a high P-value means your data look exactly like what you'd expect if there were NO effect. So if you find a P-value of .98, your data are ENTIRELY consistent with the null hypothesis: you do not reject it, and you conclude that you did not find an effect (which doesn't prove your research question is false, mind you - it just means you found no evidence for it).
When you find a P-value of .02, data like yours would be extremely UNLIKELY if the null hypothesis were true. You now reject the null hypothesis in favor of the alternative hypothesis and conclude that you most likely DID find an effect, i.e. that your research question is supported.
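The decision rule behind those two examples is genuinely just a cutoff comparison. A tiny sketch (the function name is made up; alpha = .05 is the conventional significance level):

```python
def decide(p_value: float, alpha: float = 0.05) -> str:
    """Hypothetical helper: the standard NHST decision rule.
    Reject H0 when the P-value falls below the significance level alpha."""
    if p_value < alpha:
        return "reject H0: evidence for an effect"
    return "fail to reject H0: no evidence for an effect"

print(decide(0.98))  # the first example above: do not reject
print(decide(0.02))  # the second example above: reject
```

This is also the answer to the "is a low or a high number good?" confusion earlier in the thread: for P-values specifically, low is what you're hoping for (if you want to show an effect), because low means "unlikely under no-effect".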
Mmm.. this paragraph turned out to be longer than I anticipated. I hope it wasn't too boring to read and that it made this difficult topic somewhat clearer!