T-Crit.txt
========================================================================
Date: Sun, 18 Oct 98 11:04:28 EDT
From: "Karl L. Wuensch"
Subject: Re: using t when sample >30
To: michael zalanka,
edstat-l@jse.stat.ncsu.edu
In-Reply-To: Your message of Sat, 17 Oct 1998 22:46:59 -0500
Michael Zalanka asked:
>Why does it state in my social science statistics text that the z
>distribution and its appropriate cutoffs can be used for t when n>30,
>yet the cutoffs in the t table are still larger than those for z even
>when df=60?
One could argue that with n > 30, the differences between z and t are
so small as to be trivial. For example, consider a z or t of 1.96 on 60 df.
The two-tailed p is going to be about .05 in both cases, but if you want to
be more exact, look here (at some Minitab output):
MTB > cdf -1.96 (using the standard normal distribution)
-1.9600 0.0250
KARL> That is, after doubling to get a two-tailed value, p = .0500.
MTB > cdf -1.96; (using the t distribution)
SUBC> t 60.
-1.9600 0.0273
KARL> That is, p = .0546.
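Karl's Minitab comparison can be reproduced without Minitab. Here is a short Python sketch using only the standard library; the trapezoidal integration of the t density is this illustration's own device, not part of the original output, so the last digit is approximate:

```python
import math
from statistics import NormalDist

def t_cdf(x, df, steps=20_000):
    """P(T <= x) for Student's t on df degrees of freedom, via
    trapezoidal integration of the density (standard library only)."""
    c = math.gamma((df + 1) / 2) / (math.sqrt(df * math.pi) * math.gamma(df / 2))
    pdf = lambda t: c * (1.0 + t * t / df) ** (-(df + 1) / 2)
    h = abs(x) / steps
    # integrate the symmetric density from 0 out to |x|
    area = h * (0.5 * (pdf(0.0) + pdf(x)) + sum(pdf(i * h) for i in range(1, steps)))
    return 0.5 - area if x < 0 else 0.5 + area

p_z = 2 * NormalDist().cdf(-1.96)  # two-tailed p from the standard normal
p_t = 2 * t_cdf(-1.96, 60)         # two-tailed p from t on 60 df
print(f"normal: p = {p_z:.4f}")    # about .0500
print(f"t(60):  p = {p_t:.4f}")    # about .0546
```

The two printed values match the Minitab output above: a difference of about .005 in the two-tailed p.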
Now, would not any reasonable person agree that the difference between
a probability of .0500 and .0546 is quite small? Of course, if one has the
delusion that there is something special about a p of .05 or less (it
indicates a "significant" result), then the difference between the exact p
and that approximated from the standard normal distribution here becomes
large.
In any case, one usually uses a computer to get p values these days,
so one doesn't need to hassle with tables of t anyhow. Even if you have
61 df, why use the normal approximation when the computer will give you the
exact p? If the choice were between using a standard normal table to get the
approximate p and (by hand) integrating the t density on 61 df to get the
exact p, I could understand being willing to take the approximation.
Karl L. Wuensch, Associate Professor, Graduate Faculty,
Director of Psychology/Social Work Computer Labs
Dept. of Psychology, East Carolina Univ.
Greenville, NC 27858-4353, phone 252-328-4102, fax 252-328-6283
Bitnet Address: PSWUENSC@ECUVM1
Internet Address: PSWUENSC@ECUVM.CIS.ECU.EDU
Web Address: http://ecuvax.cis.ecu.edu/~pswuensc/klw.htm
========================================================================
Date: Mon, 19 Oct 98 17:40:18 EDT
From: "Karl L. Wuensch"
Subject: delusional?
To: edstat-l@jse.stat.ncsu.edu
"Carl C. Gaither" asked:
>I'm curious about your use of the term "if one has the
>delusion that there is something special about a p of .05 or less".
>
>If one sets a test of significance and for whatever reason makes the
>decision that a p of .05 or less is significant, is it delusional for them
>to say that a p of .0549 is not significant? Could you please
>indicate what value greater than a p of .05 is not delusional and can be
>accepted as significant?
1. For what reason does the typical user set a criterion of .05?
From consideration of the relative risks of Type I and Type II errors
or just because that is the most worn column in the table of critical
values?
2. If dichotomous decision making is reasonable, and if p = .05 can
be justified as the boundary between results that point me in one versus the
other direction, then I have no problem with thinking of p's LE .05 as special,
but I'm not so sure that dichotomous decision making is always reasonable
when tests of significance are employed, and I'm not so sure that much
thinking goes into the choice of the .05 boundary when it is used.
3. If Sally obtains p = .049 and Suzy, testing the same hypothesis,
obtains p = .054, should we think there is more than a trivial difference
between the two outcomes? Agreed, if each is reasonably using a .05 criterion,
and they are making their decisions independent of knowledge of other relevant
information (such as the outcomes of relevant research conducted by others),
then Sally and Suzy will make different decisions, and I suppose that makes
p's LE .05 special. But I still feel a lot more comfortable thinking of p
as a continuous (rather than dichotomous) index of how well the data fit with
the tested hypothesis. In common practice, when Sally gets that special
p LE .05, she (testing a nil hypothesis of no association between two
variables) concludes that there really is an effect, and Suzy, with a slightly
larger p, concludes that there is NO effect. Suzy may then generate a bunch
of BS to try to explain why she got no effect and Sally did: how her
experimental units differed from Sally's, how her manipulations or
measurements of the variables differed, or how the relationship
between X and Y was influenced by values of moderating variables that
differed from those in Sally's study. If these two researchers didn't think of p's LE
.05 as special, if they recognized that the difference between a p of .049 and
one of .054 is quite small, then they would recognize that the results of
their two studies are concordant, and the readers of their research reports
would suffer less from the toxic effects of BS (bad science).
========================================================================
Date: Mon, 19 Oct 98 17:50:11 EDT
From: "Karl L. Wuensch"
Subject: Re: using t when sample >30
To: "Neil W. Henry"
In-Reply-To: Your message of Sun, 18 Oct 1998 13:09:30 -0400
Nice post. You brought up one of my pet peeves when you mentioned
delusions about the CLT. My students are doing a little Monte Carlo
right now demonstrating that even when N is large enough to make the
distribution of sample means close to normal, that does not assure that the
t-test is robust to its normality assumption. I have them construct the
sampling distributions of both the mean and t from a very skewed parent
population. With increased N the means normalize, but the sample t's are
clearly not distributed as Student's t.
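The Monte Carlo exercise Karl describes can be sketched in a few lines of Python. The parent population (exponential), sample size, and replication count here are invented for illustration; they are not Karl's actual classroom settings:

```python
import math
import random
import statistics

random.seed(1)
n, reps = 30, 10_000
means, tstats = [], []
for _ in range(reps):
    sample = [random.expovariate(1.0) for _ in range(n)]  # very skewed parent, true mean 1
    m = statistics.mean(sample)
    s = statistics.stdev(sample)
    means.append(m)
    tstats.append((m - 1.0) / (s / math.sqrt(n)))  # one-sample t under the true mean

def skewness(xs):
    mu = statistics.mean(xs)
    sd = statistics.pstdev(xs)
    return sum((x - mu) ** 3 for x in xs) / (len(xs) * sd ** 3)

# The sample means are only mildly skewed (the CLT at work), while the
# t statistics remain clearly skewed in the opposite direction.
print(f"skewness of sample means: {skewness(means):+.2f}")
print(f"skewness of t statistics: {skewness(tstats):+.2f}")
```

The means drift toward normality as n grows, but the simulated t statistics are visibly left-skewed, which is exactly the point of the exercise.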
----------------------------- Original message -------------------------
Michael Zalanka wrote:
> Why does it state in my social science statistics text that the z
> distribution and its appropriate cutoffs can be used for t when n>30,
> yet the cutoffs in the t table are still larger than those for z even
> when df=60?
> Thanks. ~mike
Because the authors of your text are more concerned with having you
get the same answers on your test questions that they have in their notes
. . . to 4 decimal places . . . than they are with statistical education.
These texts, which have been influenced if not written by teachers who
learned their statistics from 40-year-old books (or older), ought to have
their mouths washed out with soap for lying to you.
The correct rule is "always use t", which is what you will realize
when you start reading the output from statistical software.
Does your book also say that you can safely use the Central Limit
Theorem to justify normality of the sampling distribution of the mean
whenever n > 30? That is also a lie that has nothing to do with
statistical practice and everything to do with the construction of
examination questions.
*************************************************
`o^o' * Neil W. Henry (nhenry@vcu.edu) *
-<:>- * Virginia Commonwealth University *
_/ \_ * Richmond VA 23284-2014 *
*(804)828-1301 x124 (math sciences, 2037c Oliver) *
*FAX: 828-8785 http://saturn.vcu.edu/~nhenry *
*************************************************
========================================================================
Date: Mon, 19 Oct 1998 11:06:46 -0400 (EDT)
From: Dave Krantz
Subject: Re: using t when sample >30
Michael Zalanka asks,
> Why does it state in my social science statistics text that the z
> distribution and its appropriate cutoffs can be used for t when n>30,
> yet the cutoffs in the t table are still larger than those for z even
> when df=60?
The t distribution is only a rough approximation to the true sampling
distribution in most real-world cases, and in addition, it takes into
account only sampling or measurement error, not the systematic error
that is bound to be present, if only to a small degree, in any real
study. Thus, one should not take the third significant figure at all
seriously, and even a small change in the second significant figure
does not matter much. If it matters to you whether you use 1.9 or 2.1
as the t-quantile, then you are probably inferring too much from your
results. That's why the quantiles for df=30 are good enough for most
purposes, if not quite all. I like having the t tables for 60 and 120
df available, just to show students how little this matters.
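How little the degrees of freedom matter past 30 is easy to check numerically. This stdlib-only sketch (bisection on a numerically integrated t CDF, an approximation of this illustration's own making, good to about three decimals) prints the .975 quantiles Dave is referring to:

```python
import math
from statistics import NormalDist

def t_cdf(x, df, steps=10_000):
    """P(T <= x) for x >= 0, via trapezoidal integration of the t density."""
    c = math.gamma((df + 1) / 2) / (math.sqrt(df * math.pi) * math.gamma(df / 2))
    pdf = lambda t: c * (1.0 + t * t / df) ** (-(df + 1) / 2)
    h = x / steps
    area = h * (0.5 * (pdf(0.0) + pdf(x)) + sum(pdf(i * h) for i in range(1, steps)))
    return 0.5 + area

def t_quantile(p, df):
    """Invert t_cdf by bisection (assumes p > 0.5)."""
    lo, hi = 0.0, 10.0
    for _ in range(40):
        mid = (lo + hi) / 2
        if t_cdf(mid, df) < p:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

print(f"z:          {NormalDist().inv_cdf(0.975):.3f}")  # 1.960
for df in (30, 60, 120):
    print(f"t, df={df:>3}: {t_quantile(0.975, df):.3f}")
```

The whole spread from df = 30 to the normal is roughly 2.04 down to 1.96, well inside the second significant figure Dave is talking about.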
Dave Krantz (dhk@columbia.edu)
========================================================================
Date: Mon, 19 Oct 1998 20:32:28 -0500
From: Charles Metz
Organization: The University of Chicago
To: "Karl L. Wuensch"
Subject: Re: delusional?
Well said, Karl.
Charles Metz
========================================================================
Date: Mon, 19 Oct 1998 18:00:35 -0500
From: "Carl C. Gaither"
To: PSWUENSC@ECUVM.CIS.ECU.EDU
Subject: p- value
Hi Karl--
I was just picking. Personally, I have no use for tests based upon a p
value where the researcher does not express why he has selected the
value. For example, virtually everyone thinks that a p of .05 or .01 is
what you have to do. But I say hogwash. Let's suppose that there is a
test to determine if a medicine will put a cancer in remission. I'd
venture to say that a p of .5 would be considered significant by the
person who has cancer and a poor prognosis. Hence a decision to use
the treatment at p = .5 would be a very good decision level.
My personal preference is to just present the p value obtained (using an
exact test if at all possible) and let the reader decide whether they
want that value to be significant. What may be one person's
significance may be another person's "huh?". The problem that I think
academia has to overcome, whether in the "icks" courses or the
"ology" courses, is producing students who believe that a p value makes
something significant. It doesn't. Significance is a human attribute.
Nature is just doing what comes naturally.
Anyway, I have to go now. I have a manuscript that I have to get to
the publisher in about a month, and I have more work than I know what to
do with in getting the bibliography in order.
Have a nice evening.
Glad you liked the signature line.
--
Carl C. Gaither & Alma E. Cavazos-Gaither (Authors)
Statistically Speaking: A Dictionary of Quotations
Physically Speaking: A Dictionary of Quotations on Physics and Astronomy
Mathematically Speaking: A Dictionary of Quotations
Practically Speaking: A Dictionary of Quotations on Engineering
http://www.angelfire.com/tx/StatBook/index.html
"If God had meant for Texans to ski he'd have made bullshit white."
========================================================================
From: Clay Helberg
To: PSWUENSC@ECUVM.CIS.ECU.EDU (Karl L. Wuensch)
Subject: Re: delusional?
Date: Sat, 24 Oct 1998 17:50:01 GMT
Organization: SPSS, Inc.
Karl--
Well said!
--Clay
Clay Helberg http://www.execpc.com/~helberg/
SPSS Documentation and Training chelberg@spss.com
Speaking only for myself....
========================================================================
Date: Sun, 25 Oct 1998 05:29:55 GMT
From: Eric Bohlman
Subject: Re: delusional?
To: edstat-l@jse.stat.ncsu.edu
David A. Heiser wrote:
> However, the basic thing that the customer (the one to whom the analysis
> goes) wants is for you, the expert, to tell him what is significant or
> not. You can't hide behind a hedge of, say, p = 0.045.
The customer may want that, but if he does, he has to give the
statistician enough information to determine what constitutes
"significant." It's reasonable to assume that something more than idle
curiosity has motivated the customer to have the statistician do the
analysis; in all probability the customer wants to make some decision
based on the results. The customer will make the right decision if the
analysis detects an effect that's actually there, or fails to detect an
effect that isn't there. The customer will make one kind of wrong
decision if the analysis detects a phony effect, and another kind of
wrong decision if the analysis fails to detect a real effect. One can
assume that making a wrong decision is more expensive than making a right
decision, that the costs for the two types of wrong decision are unequal,
and that whether or not the decision was right will not be known for
certain until after it has been made and committed to.
In this scenario, the only rational way to set a "significance" level is to
minimize the cost of making a wrong decision. It is very unlikely that
p=.05 or p=.01 will be that level. If different analyses result in
p-values that center around the rationally chosen significance level,
then the only reasonable conclusion is that the methodology being used is
incapable of answering the question.
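Eric's point about setting the level to minimize the cost of a wrong decision can be made concrete. In this sketch every input (the prior probability of a real effect, the two error costs, the effect size, and the use of a one-sided normal test) is invented purely for illustration:

```python
from statistics import NormalDist

N = NormalDist()
prior_effect = 0.3            # hypothetical P(the effect is real)
cost_fp, cost_fn = 10.0, 4.0  # hypothetical costs of the two wrong decisions
delta = 2.0                   # hypothetical effect size, in SE units

def expected_cost(alpha):
    z_crit = N.inv_cdf(1 - alpha)      # one-sided critical value
    power = 1 - N.cdf(z_crit - delta)  # P(reject | effect is real)
    false_pos = (1 - prior_effect) * alpha * cost_fp
    false_neg = prior_effect * (1 - power) * cost_fn
    return false_pos + false_neg

# scan a grid of candidate levels and keep the cheapest
best = min((a / 1000 for a in range(1, 500)), key=expected_cost)
print(f"cost-minimizing level: {best:.3f}")  # neither .05 nor .01
```

Change any of the invented inputs and the optimal level moves; the only point is that the costs, not convention, determine where it lands.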
The customer would, of course, find it easier to let the statistician
determine "significance" through a series of magic incantations, just as
the customer would like the statistician to detect "outliers" purely by
applying arcane formulae, just as the customer would like the
statistician to be able to "prove" causation via purely statistical
means. But such things cannot be wished into existence, nor can they be
conjured up by paying the statistician sufficient sums of money; the
customer would do well to heed the words of Mick Jagger and remember that
you can't always get what you want but if you try, sometimes, you'll find
you can get what you need.
I'm starting to sound like Herman Rubin here.