
lowest sample sizes


nonstoptaxi


Hi all,

Can anyone tell me the lowest sample size needed for an experiment? Coolican (Research Methods and Statistics in Psychology) suggests 25-30 as the lowest number. So does this mean having 25 in each condition, or 25 as the overall size? I ask because I have 19 in condition A and 100 in the other. I was unable to get matched sample sizes.

 

Please advise if at all possible.

 

Thanks,


It depends entirely upon what test you are running. For example, a t-test requires only two groups, but ANOVA generates cells, each of which holds a subgroup of your overall sample. So a 2x2 Two-Way ANOVA has four cells, but a 2x3 Two-Way has six cells, and your sample size needs to be increased accordingly.

 

Sample size is one factor influencing the power of your experiment (to detect the effect it is testing for). The other factors are effect size and Alpha. One of the uses of power analysis is to help researchers calculate their sample size; e.g., if you are looking for a large effect (0.8) at Alpha 0.05 and want an experimental power of 0.9 using a t-test, power analysis would tell you how large your sample needs to be to achieve that experimental power. I think Coolican covers power analysis. If not, look in Howell (Statistical Methods for Psychology).
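That calculation can be sketched in a few lines of Python. This is my own normal-approximation shortcut, not G*Power's exact routine (the approximation typically comes out a participant or two under the exact t-based answer), using the scenario above: large effect (0.8), Alpha 0.05, power 0.9.

```python
from math import ceil
from statistics import NormalDist

def n_per_group(d, alpha=0.05, power=0.8, tails=2):
    """Approximate per-group n for an independent-groups t-test.

    Normal approximation: 2 * ((z_alpha + z_power) / d)^2, which runs
    a participant or two below exact t-based tools such as G*Power.
    """
    z = NormalDist().inv_cdf          # inverse CDF of the standard normal
    z_alpha = z(1 - alpha / tails)    # critical value for the chosen Alpha
    z_power = z(power)                # value giving the desired power
    return ceil(2 * ((z_alpha + z_power) / d) ** 2)

# Large effect (d = 0.8), Alpha = 0.05, power = 0.9, 2-tailed:
print(n_per_group(0.8, power=0.9))   # → 33 per group
```

Note how the three inputs (effect size, Alpha, power) fully determine n, which is the point being made above.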

 

As for the issue of different sized groups, if your data are parametric (and of at least interval level of measurement), having different sized groups should not matter, as long as the smaller group has at least the minimum required.

 

Increasing a sample size when taking parametric measures simply means that the sample mean and SD will more closely match the population mean (and SD). If you have a sample size which adequately represents the population mean, then increasing the sample size will not increase the experimental power. If your sample size is smaller than that, then the sample mean will be further from the population mean and your experiment will lose power.

 

By this principle, the issue in your experiment is not that one group has n = 100, but that the other group has n = 19. If this value is lower than that required, then your experiment will lose power.

 

Coolican's recommendation refers to group sizes, not overall sample size; he means ~20 per group. So for an independent-groups t-test you need 40, for a 2x2 Two-Way between-subjects ANOVA you need 80, for a 3x2 Two-Way (between subjects) you need 120, and so on. However, his recommendation is an extremely broad 'rule of thumb', as sample size requirements depend upon the required experimental power (0.8 is acceptable), the size of the effect you are looking for (large, medium or small), where small effects require a much greater sample size than large effects, and Alpha (set by convention to 0.05). The larger the value for Alpha, the smaller the sample needed (it's never a good idea to increase Alpha, though).

 

I just ran a power analysis (out of interest). I set required experimental power at 0.8, Alpha at 0.05 and effect size at 0.8 (large). For a t-test (2-Tailed), the total sample (N) needs to be 52 (i.e. 26 per group). However, if your hypothesis is 1-tailed, the required total N drops to 42 (21 per group). I would say you need to add a few to your smaller group.
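The 1-tailed vs 2-tailed difference is easy to demonstrate with a normal-approximation sketch (my own helper function, not G*Power itself; the approximation lands slightly under the exact figures of 26 and 21 per group quoted above, but shows the same pattern):

```python
from math import ceil
from statistics import NormalDist

def per_group_n(d, alpha, power, tails):
    """Normal-approximation per-group n for an independent-groups t-test."""
    z = NormalDist().inv_cdf
    return ceil(2 * ((z(1 - alpha / tails) + z(power)) / d) ** 2)

# d = 0.8 (large), Alpha = 0.05, required power = 0.8:
print(per_group_n(0.8, 0.05, 0.8, tails=2))  # → 25 per group (50 total)
print(per_group_n(0.8, 0.05, 0.8, tails=1))  # → 20 per group (40 total)
```

A 1-tailed test spends all of Alpha in one tail, so the critical z is smaller and the required sample drops, exactly as in the G*Power run above.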


I would advise using as big a sample size as you can.

 

Limited, that is, by how much time and how much data you have available.

 

Also, if this is for a project, you can (as an after-project comment) say that the bigger the sample size used, the more reliable the data... however, obviously there is a limit to how much you can work with!


  • 2 weeks later...

Thanks for the responses Glider and 5614.

 

Glider, thanks for your in-depth response. You always come through on these stats questions! You nearly make stats palatable! :D You must be some psych. stats lecturer.

 

Yeah, I'm aware of power analysis, and I think 'g-power' is the free program circulating around my department, which I should really learn to use. But in terms of specifying the effect size...how does one do that? I know how to calculate an effect size post hoc for a t-test for instance. But how do I do it in advance? As for experimental power, it's 0.8 'cause Cohen said or something, and alpha is usually set at .05. So it's just effect size that is confusing me.

 

Thanks.

 

John


Thanks for the responses Glider and 5614.

 

Glider, thanks for your in-depth response. You always come through on these stats questions! You nearly make stats palatable! :D You must be some psych. stats lecturer.

Thank you. That's what my students say. :D

 

Yeah, I'm aware of power analysis, and I think 'g-power' is the free program circulating around my department, which I should really learn to use. But in terms of specifying the effect size...how does one do that? I know how to calculate an effect size post hoc for a t-test for instance. But how do I do it in advance? As for experimental power, it's 0.8 'cause Cohen said or something, and alpha is usually set at .05. So it's just effect size that is confusing me.

Well, this is where the whole 'rigid science' thing breaks down. There are really only two ways of specifying effect size for an a priori test (as you say, it's easy enough post hoc). You can either read around previous studies in the area and see what effect sizes they generated, or you can take a best guess: based on the effect you are looking for, what effect size would it be reasonable to expect?

 

Either way only gives you an estimate, but estimates are the only option, because clearly, you cannot calculate the size of an effect you haven't even tested for.
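To make the 'read around previous studies' route concrete: one common approach is to compute Cohen's d from the means and SDs an earlier paper reports, then feed that into the a priori power analysis. A minimal sketch, with made-up illustrative numbers:

```python
from math import sqrt

def cohens_d(m1, sd1, n1, m2, sd2, n2):
    """Cohen's d for two independent groups, using the pooled SD."""
    pooled_sd = sqrt(((n1 - 1) * sd1**2 + (n2 - 1) * sd2**2) / (n1 + n2 - 2))
    return (m1 - m2) / pooled_sd

# Suppose an earlier study reported means of 14.2 vs 11.5
# with SDs of 3.1 and 3.4 (n = 20 per group):
print(round(cohens_d(14.2, 3.1, 20, 11.5, 3.4, 20), 2))  # → 0.83 (large)
```

That estimated d then becomes the effect-size input to the power analysis; it is still only an estimate, as the post above says.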


Glider,

that makes sense. I guess as someone once said, stats can be our best friends or our worst enemies (or was that computers....or the tele?). I'm glad it's a matter of making a common sense judgement a priori, rather than conducting some bizarre statistical test :rolleyes:

 

thanks again!

 

J


Any time.

 

Stats are not as bad as people say. Most useful things, when misused, will bite you in the arse. Common sense (specifically, the lack thereof) can make a worst enemy of most friends. For example, using a really useful mains powered razor in the bath, or slapping a horse on the arse when it didn't know you were there.


  • 6 months later...

It depends on the experiment and the test you're using, but as a rule, if power analysis shows 25 to be the appropriate sample size, then using 30 won't make much difference to the power of the experiment. If power analysis has shown 30 to be the required sample size, then using 25 will weaken the experiment.

 

However, he is right insofar as, given the 'noisy' nature of some measures, a difference of 5 won't make much of a difference; but for the same reason, it's always best to err on the side of caution and go for the larger sample.


  • 2 weeks later...

Hi Glider and others,

in terms of conducting power analysis for one test, fair enough. But to your knowledge, can you conduct a power analysis for a correlational test?

 

 

If yes, what are the options for deciding what level of 'r' you would count as substantive when you get a number of zero-order correlations? As in, where do you draw the line and say 'I'll just consider those rs above a certain level'? If so, is there a reference for this as well?

 

Any advice greatly received :D

 

Jon


Yes, you can do power analysis for correlational tests. Generally you will be correlating two measures from one sample, so you won't have to double the sample size as for a two sample test like a t-test.

 

As far as the size of the correlation coefficients goes, you can use the conventions laid down by Cohen: r = 0.1 is 'small', r = 0.3 is 'medium' and r = 0.5 is 'large'. Cohen's conventions for r are generally accepted, and the precise values will be in any basic stats book.
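For what it's worth, a priori sample size for a correlation is usually approximated via Fisher's r-to-z transform. A minimal sketch (my own helper, an approximation rather than an exact routine, assuming a two-tailed test by default):

```python
from math import ceil, log
from statistics import NormalDist

def n_for_r(r, alpha=0.05, power=0.8, tails=2):
    """Approximate total N needed to detect a correlation of size r."""
    z = NormalDist().inv_cdf
    fisher_z = 0.5 * log((1 + r) / (1 - r))   # Fisher's r-to-z transform
    return ceil(((z(1 - alpha / tails) + z(power)) / fisher_z) ** 2 + 3)

# A 'medium' correlation (r = 0.3) at Alpha 0.05, power 0.8:
print(n_for_r(0.3))  # → 85
```

Note that this is the total N from one sample, which is why (as above) you don't need to double it the way you would for a two-group t-test.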


"The Chemist's rule: Never take more than three data points. There will always be some kind of graph paper on which they fall in a straight line."

"The Chemist's rule, first corollary: If you have only one kind of graph paper, never take more than two data points."

 

8-)

