Statistics for psychologists, sociologists, and political scientists

In terms of mathematical structure, psychologists, sociologists, and poli-sci folks all do the same experiment, over and over, and all use the same simple statistical calculation, the ANOVA, to determine its significance. I thought I’d explain that experiment and the calculation below, walking you through an actual paper (one I find interesting) in psychology / poli-sci. The results are true at the 95% level (that’s the same as saying p < 0.05) — a significant achievement in poli-sci, but that doesn’t mean the experiment means what the researchers think. I’ll then suggest another statistical measure, r-squared, that deserves to be used along with ANOVA.

The standard psychological or poli-sci research experiment involves taking a group of people (often students) and giving them a questionnaire or test to measure their feelings about something — the war in Iraq, their fear of flying, their degree of racism, etc. This is scored on some scale to get an average. Another, near-identical group of subjects is now brought in and given a prompt: shown a movie, or a picture, or asked to visualize something, and then given the same questionnaire or test as the first group. The prompt will be shown to have changed the average score, up or down, and an ANOVA (analysis of variance) is used to show whether this change is one the researcher can have confidence in. If the confidence exceeds 95%, the researcher goes on to discuss the significance, and submits the study for publication. I’ll now walk you through the analysis the old-fashioned way: the way it would have been done in the days of hand calculators and slide rules, so you understand it. Even when done this way, it only takes 20 minutes or so: far less time than the experiment.

I’ll call the “off the street score” for the ith subject Xi°. It would be nice if papers would publish these, but usually they do not. Instead, researchers publish the survey and the average score, something I’ll call X°-bar, or X°. They also publish a standard deviation calculated from the above, something I’ll call SD°. In older papers, it’s called sigma, σ. Sigma and SD are the same thing. Now, moving to the group that’s been given the prompt, I’ll call the score for the ith subject Xi*. Similar to the above, the average for this prompted group is X*-bar, or X*, and the standard deviation SD*.

I have assumed that there is only one prompt, identified by an asterisk, *: one particular movie, picture, or challenge. For some studies there will be different concentrations of the prompt (show half the movie, for example), and some researchers throw in completely different prompts. The more prompts, the more likely you get false positives with an ANOVA, and the more likely you are to need to go to r-squared. Warning: very few researchers do this, whether intentionally (and crookedly) or through complete obliviousness to the math. Either way, if you have a study with ten prompt variations and you are testing to 95% confidence, your result is meaningless: random variation will give you at least one false positive about 40% of the time (1 − 0.95¹⁰ ≈ 0.40; see the sketch below). A crooked researcher used ANOVA and 20 prompt variations “to prove to 95% confidence” that genetically modified food caused cancer; I’ll assume (trust) you won’t fall into that mistake, and that you won’t use the ANOVA knowledge I provide to get notoriety and easy publication of total, un-reproducible nonsense. If you have more than one or two prompts, you’ve got to add r-squared (and it’s probably a good idea with one or two). I’ll discuss r-squared at the end.
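To see where that number comes from, here is a minimal simulation sketch in Python. It is my own illustration, not from any study: it generates ten prompt groups with no real effect at all and counts how often at least one of them looks “significant” against the control. The group sizes and seed are arbitrary placeholders.

```python
# Sketch: false-positive rate when testing many prompt variations, all null.
# Assumes numpy and scipy are installed; all numbers here are illustrative.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n_trials, n_prompts, n_subjects = 10_000, 10, 30
false_positive = 0
for _ in range(n_trials):
    control = rng.normal(0, 1, n_subjects)            # no real effect anywhere
    prompts = rng.normal(0, 1, (n_prompts, n_subjects))
    # "significant" if any prompt group differs from control at p < 0.05
    if any(stats.ttest_ind(control, p).pvalue < 0.05 for p in prompts):
        false_positive += 1
print(false_positive / n_trials)   # in the neighborhood of 1 - 0.95**10 ≈ 0.40
                                   # (a bit lower, since the tests share a control)
```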

I’ll now show how you calculate X°-bar the old-fashioned way, as would be done with a hand calculator. I do this not because I think social scientists can’t calculate an average, nor because I don’t trust the ANOVA function on your laptop or calculator, but because this is a good way to familiarize yourself with the notation:

X°-bar = X° = (1/n°) ∑ Xi°.

Here, n° is the total number of subjects who take the test but who have not seen the prompt. Typically, for professional studies, there are 30 to 50 of these. ∑ means sum, and Xi° is the score of the ith subject, as I’d mentioned. Thus, ∑ Xi° indicates the sum of all the scores in this group, and multiplying by 1/n° gives the average, X°-bar. Convince yourself that this is, indeed, the formula. The same formula is used for X*-bar. For a hand calculation, you’d write the numbers 1 to n° on the left column of some paper, and each Xi° value next to its number, leaving room for more work to follow. This used to be done in a notebook; nowadays a spreadsheet will make it easier. Write the value of X°-bar on a separate line at the bottom.
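If you’d rather let a computer do the column work, here is the same formula as a minimal Python sketch; the scores are made-up placeholders of mine, not data from any study.

```python
# A minimal sketch of X°-bar = (1/n°) ∑ Xi°.
# The scores below are made-up placeholders, not real study data.
scores_control = [0, 1, 2, 2, 3]       # the Xi° values
n0 = len(scores_control)               # n°
x0_bar = sum(scores_control) / n0      # the average, X°-bar
print(x0_bar)                          # 1.6 for these placeholder scores
```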

T-table

In virtually all cases you’ll find that X°-bar is different from X*-bar, but there will be a lot of variation among the scores in both groups. The ANOVA (analysis of variance) is a simple way to determine whether the difference is significant enough to mean anything. Statistics books make this calculation seem far too complicated — they go into too much math-theory, or consider too many types of ANOVA tests, most of which make no sense in psychology or poli-sci but were developed for ball-bearings and cement. For two groups like these, the ANOVA reduces to the familiar t-test, and the only approach you need involves the T-table shown and the 95% confidence column (this is the same as the two-tailed p < 0.05 column). Though 99% is nice, it isn’t necessary. Other significances are on the chart, but they’re not really useful for publication. If you do this on a calculator, the table is buried in there someplace. The confidence level is written across the bottom line of the chart; 95% here is seen to be the same as a two-tailed P value of 0.05 = 5%, seen on the third line from the top of the chart. For about 60 subjects (two groups of 30, say) and 95% certainty, T = 2.000. This is a very useful number to carry about in your head. It allows you to eyeball your results.
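If your calculator’s buried table is hard to find, the same critical value can be looked up in code. A sketch, assuming scipy is installed:

```python
# Sketch: the two-tailed 95% critical T value, computed instead of tabled.
from scipy import stats

df = 60                                  # about 60 subjects' worth of freedom
t_crit = stats.t.ppf(1 - 0.05 / 2, df)   # 0.025 in each tail, two-tailed test
print(t_crit)                            # ≈ 2.000
```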

In order to use this T value, you will have to calculate the standard deviation, SD, for both groups, and the standard variation between them, SV. Typically, the SDs will be similar, but large, and the SV will be much smaller. First, let’s calculate SD° by hand. To do this, you first calculate its square, SD°²; once you have that, you’ll take the square root. Take each of the Xi° scores, each of the scores of the first group, and calculate the difference between each score and the average, X°-bar. Square each difference and divide by (n° − 1). These numbers go into their own column, each in line with its own Xi°. The sum of this column will be SD°². Put in mathematical terms, for the original group (the ones that didn’t see the movie),

SD°² = 1/(n° − 1) ∑ (Xi° − X°)²

SD° = √SD°².

Similarly, for the group that saw the movie, SD*² = 1/(n* − 1) ∑ (Xi* − X*)²

SD* = √SD*².

As before, n° and n* are the number of subjects in each of the two groups. Usually you’ll aim for these to be the same, but often they’ll be different. Some students will end up seeing only half the movie, and some will see it twice, even if you don’t plan it that way; these students’ scores cannot be used with the above, but be sure to write them down; save them. They might have tremendous value later on.
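Here is the same column-by-column SD calculation as a short Python sketch; again, the scores are placeholders of mine.

```python
# Sketch: SD° computed column by column, as described above.
from math import sqrt

scores = [0, 1, 2, 2, 3]                 # placeholder Xi° values
n = len(scores)                          # n°
x_bar = sum(scores) / n                  # X°-bar
column = [(x - x_bar) ** 2 / (n - 1) for x in scores]   # the extra column
sd_squared = sum(column)                 # SD°² is the sum of that column
sd = sqrt(sd_squared)                    # SD° = √SD°²
print(x_bar, sd)
```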

Write down the standard deviations, SD, for each group calculated above, and check that the SDs are similar, differing by less than a factor of 2. If so, you can take a weighted average, call it SD-bar, and move on with your work. There are formulas for this average (one standard choice is sketched below), and in some cases you’ll need an F-table to help choose the formula, but for my purposes, I’ll assume that the SDs are similar enough that any weighted average will do. If they are not, it’s likely a sign that something very significant is going on, and you may want to re-think your study.
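For reference, the usual textbook choice is the “pooled” standard deviation, which weights each group’s SD² by its degrees of freedom. A sketch (the function name and the numbers fed to it are mine):

```python
# Sketch: the standard "pooled" weighted average of two SDs.
# SD-bar² = [(n°-1)·SD°² + (n*-1)·SD*²] / (n° + n* − 2)
from math import sqrt

def pooled_sd(sd0, n0, sd1, n1):
    return sqrt(((n0 - 1) * sd0**2 + (n1 - 1) * sd1**2) / (n0 + n1 - 2))

print(pooled_sd(1.0, 11, 1.2, 10))   # placeholder SDs and group sizes
```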

Once you calculate SD-bar, the weighted average of the SDs above, you can move on to calculate the standard variation, SV, between the two groups (what statisticians call the standard error of the difference between the means). This is the size of difference between the two averages that you’d expect to see if there were no real differences: no movie, no prompt, no nothing, just random chance in who showed up for the test. SV is calculated as:

SV = SD-bar √(1/n° + 1/n*).

Now, go to your T-table and look up the T value for two-tailed tests at 95% certainty and N = n° + n*. You probably learned that you should be using degrees of freedom, in this case df = N − 2, but for the group sizes normally used, the T value will be nearly the same. As an example, I’ll assume that N is 80, two groups of 40 subjects, so the degrees of freedom are N − 2, or 78. If you look at the T-table for 95% confidence, you’ll notice that the T value for 80 df is 1.990. You can use this. The value for 62 subjects (60 df) would be 2.000, and the true value for 78 df is 1.991; the difference between 1.991 and 1.990 is the least of your problems. It’s unlikely your test is ideal, or your data normally distributed; such things cause far more problems for your results. If you want to see how to deal with these, go here.

Assuming random variation, and 80 subjects tested, we can say that, so long as X°-bar differs from X*-bar by at least 1.99 times the SV calculated above, you’ve demonstrated a difference with enough confidence that you can go for a publication. In math terms, you can publish if and only if: |X° − X*| ≥ 1.99 SV, where the vertical lines represent absolute value. This is all the statistics you need. Do the above, and you’re good to publish. The reviewers will look at your average score values, and your value for SV. If the difference between the two averages is more than 2 times the SV, most people will accept that you’ve found something.
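Putting the pieces together, here is a minimal sketch of the whole publish/don’t-publish criterion in Python; the hard-coded T of 1.99 follows the text above, and all the names are my own, not standard library functions.

```python
# Sketch: the whole decision rule, |X°-bar − X*-bar| ≥ T·SV, in one function.
from math import sqrt

def significant_at_95(x0_bar, sd0, n0, x1_bar, sd1, n1, t_crit=1.99):
    # pooled weighted-average SD, as discussed above
    sd_bar = sqrt(((n0 - 1) * sd0**2 + (n1 - 1) * sd1**2) / (n0 + n1 - 2))
    sv = sd_bar * sqrt(1 / n0 + 1 / n1)    # SV = SD-bar·√(1/n° + 1/n*)
    return abs(x0_bar - x1_bar) >= t_crit * sv
```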

If you want any of this to sink in, you should now do a worked problem with actual numbers, in this case two groups of 11 and 10 students. It’s not difficult, but you should at least try it with these real numbers. When you are done, go here; I will grind through to the answer. I’ll also introduce r-squared.

The worked problem: Assume you have two groups of people tested for racism, or political views, or some allergic reaction. One group was given nothing more than the test; the other group was given some prompt: an advertisement, a drug, a lecture… We want to know whether the prompt had a significant effect at 95% confidence. Here are the test scores for both groups, assuming a scale from 0 to 3.

Control group: 0, 0, 1, 1, 1, 1, 2, 2, 2, 2, 3.  These are the Xi°s; there are 11 of them.

Prompted group: 0, 1, 1, 1, 2, 2, 2, 2, 3, 3.  These are the Xi*s; there are 10 of them.
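Once you’ve ground through it by hand, you can check your arithmetic against scipy’s two-sample t-test, which does the same SD-bar and SV arithmetic internally (its default assumes equal variances, matching the approach above):

```python
# Sketch: checking the worked problem against scipy's two-sample t-test.
from scipy import stats

control  = [0, 0, 1, 1, 1, 1, 2, 2, 2, 2, 3]   # the Xi° scores
prompted = [0, 1, 1, 1, 2, 2, 2, 2, 3, 3]      # the Xi* scores
result = stats.ttest_ind(control, prompted)    # equal_var=True by default
print(result.statistic, result.pvalue)         # significant at 95% only if p < 0.05
```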

On a semi-humorous side note: here is the relationship between marriage and a PhD.

Robert Buxbaum, March 18, 2019. I also have an explanation of loaded dice and flipped coins; statistics for high school students.
