The APA's Task Force on Statistical Inference has published an article in the *American Psychologist* in which they present the changes they plan to recommend for the APA Publication Manual. The citation is:

Wilkinson, L., & Task Force on Statistical Inference. (1999). Statistical methods in psychology journals: Guidelines and explanations. *American Psychologist*, *54*, 594-604.

IMHO, this article is required reading for anyone who teaches statistics, experimental psychology, or any course in which students are expected to read original research articles or write a research proposal. It will be required reading for students who take PSYC 6430 from me. I very strongly recommend it to anyone who is a producer of psychological research (that is, all of our faculty) and recommend it to anyone who is a consumer of psychological research, including undergraduate students.

Among the suggestions made in this report are the following:

- Do not use hypothesis testing unless that is a method appropriate for your research (and it probably is not for most). If you are one of the first to start researching a particular area, exploratory methods are more appropriate than is hypothesis testing. If many have come before you, a meta-analysis might contribute more than would collecting data to test again a hypothesis already tested to death.
- Document how you decided what sample size to use (this should include an *a priori* power analysis).
- After the data are analyzed, report confidence intervals. Do not report what the power would be for an effect of the size suggested by your sample data.
- Screen your data for outliers, problems with distributional assumptions, etc. prior to any other statistical analysis.
- Apply Occam's razor to your choice of analytic techniques -- use the simplest analysis that can adequately answer the questions you pose.
- Use graphical methods to evaluate assumptions; do not use tests of the significance of any apparent departure from those assumptions.
- If you must test hypotheses, report an exact *p*-value (like "*p* = .037"), not just a dichotomous "accept-reject" statement (like "*p* < .05" or "*p* > .05").
- Give effect size estimates in unstandardized form if the unit of measure is meaningful, in standardized form (such as **d** or *r*) if not.
- Provide confidence intervals for all important estimated parameters, including effect sizes.
- Do not use the "protected test" strategy, which is overly conservative. The protected test strategy involves first doing an ANOVA and then, only if that ANOVA is significant, conducting pairwise contrasts with something like Tukey's test or the REGWQ test (and you go straight to hell if you are still using obsolete procedures such as Newman-Keuls or Duncan range tests). These pairwise contrast procedures were designed to replace (not follow) ANOVA, and they control alpha familywise quite adequately without requiring a prior significant ANOVA. I would add that the protected test strategy is OK under one special circumstance -- when you have only three groups in a one-way layout. In that case, an ANOVA that, only if significant, is followed by simple pairwise contrasts among the means ("Fisher's LSD") both holds familywise error at its nominal rate and maximizes power.
- Consider using a set of planned comparisons (those that directly address the questions you pose) rather than all possible pairwise comparisons.
- Have your head examined if you think that statistics can really allow you to make firm causal attributions from nonexperimental data. Yes, I am speaking to those of you who abuse path analysis, structural equations modeling, and assorted other multivariate techniques applied to nonexperimental data.
- Take advantage of the various new graphical displays available in modern statistical software.
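To make the *a priori* power analysis suggestion concrete, here is a minimal sketch in Python. The Task Force does not prescribe any software; the use of statsmodels, and the particular effect size, alpha, and power values, are my choices for illustration:

```python
# A priori power analysis for a two-group independent-samples design.
# solve_power() returns the sample size per group needed to detect
# the hypothesized effect with the desired power.
from math import ceil

from statsmodels.stats.power import TTestIndPower

n_per_group = TTestIndPower().solve_power(
    effect_size=0.5,   # hypothesized Cohen's d (a "medium" effect)
    alpha=0.05,        # two-tailed Type I error rate
    power=0.80,        # desired probability of detecting the effect
)
print(ceil(n_per_group))  # 64 participants per group
```

The point is that this calculation happens *before* data collection, using a hypothesized effect size -- not the "observed power" computed from sample data after the fact, which the report tells you not to bother with.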
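The suggestions about exact *p*-values, unstandardized and standardized effect sizes, and confidence intervals can likewise be sketched in a few lines. The data and the use of scipy are my own illustrative assumptions, not from the article:

```python
import numpy as np
from scipy import stats

a = np.array([5, 6, 7, 8, 9], dtype=float)
b = np.array([3, 4, 5, 6, 7], dtype=float)

# Exact p-value from an independent-samples t test.
t_stat, p = stats.ttest_ind(a, b)

# 95% CI for the unstandardized difference between means,
# using the pooled standard deviation.
diff = a.mean() - b.mean()
df = len(a) + len(b) - 2
sp = np.sqrt(((len(a) - 1) * a.var(ddof=1)
              + (len(b) - 1) * b.var(ddof=1)) / df)
se = sp * np.sqrt(1 / len(a) + 1 / len(b))
t_crit = stats.t.ppf(0.975, df)
ci = (diff - t_crit * se, diff + t_crit * se)

# Standardized effect size (Cohen's d), for when the unit of
# measure is not itself meaningful.
d = diff / sp

print(f"t({df}) = {t_stat:.2f}, p = {p:.3f}")  # an exact p, not "p > .05"
print(f"95% CI for the mean difference: [{ci[0]:.2f}, {ci[1]:.2f}]")
print(f"d = {d:.2f}")
```

Notice that the confidence interval here includes zero even though it is centered well away from it -- a far more informative statement than the bare dichotomous verdict "*p* > .05."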
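And to illustrate going straight to pairwise contrasts without a "protecting" ANOVA: Tukey's HSD controls the familywise error rate on its own. A rough sketch, with made-up data (the groups here are taken from scipy's own documentation example, and the choice of scipy is mine):

```python
from scipy.stats import tukey_hsd

g1 = [24.5, 23.5, 26.4, 27.1, 29.9]
g2 = [28.4, 34.2, 29.5, 32.2, 30.1]
g3 = [26.1, 28.3, 24.3, 26.2, 27.8]

# Tukey's HSD handles all pairwise comparisons directly; no prior
# significant omnibus ANOVA is needed to keep alpha familywise in check.
res = tukey_hsd(g1, g2, g3)
print(res)  # mean differences and familywise-adjusted p-values

# Simultaneous 95% CIs for every pairwise difference -- report these,
# per the Task Force's advice on confidence intervals.
ci = res.confidence_interval(confidence_level=0.95)
print(ci.low)
print(ci.high)
```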

You really should read the original article; there is a lot more there than I have summarized here.