The Authoritarians



  Back to chapter 1

  5 If I were you, I’d be wondering how well my results, which are based mainly on my local Canadian samples, apply to the United States. I wondered that too, so I made a determined effort when I started out to repeat my studies with American samples. I almost always found the same things in Alabama and Pennsylvania and Texas and Indiana and New York and Wyoming and California that I had found in Manitoba. Once American researchers began using my measures, I could simply loll by my hearth and read what others turned up in Massachusetts and Kentucky and Michigan and Nebraska and Washington and so on. The bottom line: A strong record of replication has accumulated over time.

  Still, sometimes weird things happen. For example, a Colorado Ph.D. student recently told me she found no correlation between college students’ RWA scale scores and those of their parents—whereas correlations in the .40s to .50s have appeared quite routinely in the past. And naturally other researchers do not get exactly the same results I do in my studies. A relationship of .45 in my study might come in at .30 in an American one, or .60. But if I have found authoritarianism correlates significantly with something in a Manitoba-based study, then a significant correlation has appeared at least 90% of the time in American-based studies that tested the same thing. (That ain’t bad in the social sciences, and I think it’s mainly due to experienced researchers using good measures and careful methodologies.)


  6 The Wechsler Adult Intelligence Scale, probably the most widely used IQ test, has a reliability of about .90. So also does the RWA scale, and nearly all the other tests I have developed that are mentioned in this book. (The alpha coefficient, described in note 3, is often used as an index of reliability.) What does that “.90” mean? It tells you that the “signal to noise” performance of your test equals 9 to 1. Most of what you are getting is useful “signal,” and only 10% of it is meaningless, confusing “noise” or static. In these days of high definition television you would be all over your cable company if your TV picture was 10% “snow.” But the reliability of most psychological tests falls well short of .90, you’ll be disheartened to learn—especially after you’re denied a job because of your score on one. You can easily find journal articles that say .70 is “adequate” reliability.
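  For the curious, here is a minimal sketch of how the alpha coefficient is actually computed, using a tiny invented response matrix (real studies use hundreds of respondents; nothing below is actual RWA data):

```python
# Cronbach's alpha for a tiny invented response matrix (rows = respondents,
# columns = items answered on a -4..+4 scale). Nothing here is real RWA data.

def cronbach_alpha(responses):
    """alpha = (k/(k-1)) * (1 - sum of item variances / variance of totals)"""
    k = len(responses[0])          # number of items

    def variance(xs):
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)

    item_vars = [variance([row[i] for row in responses]) for i in range(k)]
    total_var = variance([sum(row) for row in responses])
    return (k / (k - 1)) * (1 - sum(item_vars) / total_var)

data = [
    [4, 3, 4, 4],
    [-2, -3, -1, -2],
    [0, 1, 0, -1],
    [3, 2, 4, 3],
    [-4, -3, -4, -3],
]
print(round(cronbach_alpha(data), 2))   # close to 1, because these answers
                                        # were invented to be very consistent
```

  With answers this consistent alpha comes out near 1; real tests settle lower, which is why a .90 is worth bragging about.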

  P.S. We’re going to have a lot of technical notes at the beginning of this chapter as I try to anticipate the questions that you might bring up—if you are the careful, critical reader everyone says you are. Eventually the sailing will get smoother. But you don’t have to read these notes, which you see can be rather tedious. They won’t be on the exam.


  7 This isn’t as big a problem with the RWA scale as it might be. Believe it or not, most people don’t writhe over the meaning of its statements. The items had to show they basically meant the same thing to most people to get on the test in the first place. If a statement is terrifically ambiguous, the answers it draws will be all over the lot, connect to nothing else reliably, and explain zilcho. I know because I’ve written lots of crummy items over the years.

  But I stubbornly plodded along until I got enough good ones. It took eight studies, run over three years, involving over 3000 subjects and 300 items to get the first version of the RWA scale in 1973. Then the scale was continually revised as better (less ambiguous, more pertinent) statements replaced weaker ones. Only two of the items you answered (Nos. 6 and 18) survive from the first version. The internal consistency of responses to the test is so high, producing its high alpha and reliability, because items that were too ambiguous fouled out of the game during all this testing. So the years spent developing the test paid off. Let’s hear it for fixation. (And can you see why I get so p.o.’d when some researchers chop up my scales?)
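  The winnowing described above can be sketched as an “item-rest” correlation check: an item earns its place only if answers to it track answers to the rest of the pool. A toy Python illustration, with invented responses and an illustrative cutoff (not my actual selection criteria):

```python
# Item screening by "item-rest" correlation. Responses below are invented
# (rows = respondents, columns = candidate items on a -4..+4 scale).

def pearson(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

def item_rest_correlations(responses):
    """Correlate each item with the sum of all the OTHER items."""
    k = len(responses[0])
    corrs = []
    for i in range(k):
        item = [row[i] for row in responses]
        rest = [sum(row) - row[i] for row in responses]
        corrs.append(pearson(item, rest))
    return corrs

responses = [
    [3, 2, -1, 4],
    [-2, -3, 2, -3],
    [1, 0, 3, 1],
    [4, 3, -2, 3],
    [-3, -4, 0, -4],
]
corrs = item_rest_correlations(responses)
keep = [i for i, r in enumerate(corrs) if r >= 0.30]   # cutoff is illustrative
print(keep)   # items 0, 1 and 3 survive; the "ambiguous" item 2 fouls out
```

  Item 2’s answers wander off on their own (its item-rest correlation is actually negative here), which is the statistical signature of a crummy, ambiguous statement.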

  But still, to any individual person, any item can mean something quite different from what I intend. And some people will consistently have “unusual” interpretations of the items. And the test, which was designed to measure right-wing authoritarianism in North America, will probably fall apart in markedly different cultures.

  While we’re on the subject of what the items on the RWA scale measure, people sometimes say “Of course conservatives (or religious conservatives) score highly on it; it’s full of conservative ideas.” I think this does a disservice to “conservative ideas” and to being “religious.” Take Item 16: “God’s laws about abortion, pornography, and marriage must be strictly followed before it is too late, and those who break them must be strongly punished.” Knowing what you do about the concept of right-wing authoritarianism, you can pretty easily see the authoritarian submission (“God’s laws…must be strictly followed”), the authoritarian aggression (“must be strongly punished”), and the run-away conventionalism in the underlying sentiment that everyone should be made to act the way someone’s interpretation of God’s laws dictates. The item appears on the RWA scale because responses to it correlate strongly with responses to all the other items on the scale, which together tap these three defining elements of right-wing authoritarianism.

  On the other hand the item, “Abortion, pornography and divorce are sins”—which you may agree reflects a conservative and religious point of view—would not make the cut for inclusion on the RWA scale because it does not ring the bells that identify a high RWA loudly enough. You could in fact sensibly agree with this statement and still reject Item 16, could you not? Item 16 isn’t just about being conservative and religious. It goes way beyond that.

  (My God! You’re still reading this!) To put it another way, an empirical way: if you look at how responses to Item 16 correlate with the other items on the RWA scale, and then also look at how it correlates with some measure of traditional religious belief, such as the Christian Orthodoxy scale that measures acceptance of the Nicene Creed (Journal for the Scientific Study of Religion, 1982, 21, pp. 317-326), you’ll find the former correlations are much stronger. Item 16 does not measure time-honored, customary religious sentiment so much as it measures right-wing authoritarianism dressed up in sanctimonious clothes. The same is true of all the other religion items on the RWA scale—most of which came onto the RWA scale relatively recently as authoritarianism in North America increasingly became expressed in religious terms. Furthermore, these items all individually correlate with the authoritarian behaviors we shall be discussing in this chapter.

  Unless you think that conservatives (as opposed to authoritarians) are inclined to follow leaders no matter what, pitch out the Constitution, attack whomever a government targets, and so on—which I do not think—this too indicates that the items are not revealing conservatism, but authoritarianism.


  8 The RWA scale is well-disguised. Personality tests are usually phrased in the first person (e.g., “I have strange thoughts while in the bathtub”) whereas attitude surveys typically are not (e.g., “Bath tubs should keep to ‘their place’ in a house”). So it is easy to pass off the RWA scale, a personality test, as yet another opinion survey. Most respondents think that it seeks “opinions about society” or has “something to do with morals.”


  9 For the same good reasons, it’s out of bounds to give the RWA scale to your loved ones, and unloved ones, to show them how “authoritarian they are.”

  By the way, chances are you have relatively unauthoritarian attitudes. You see, authoritarian followers are not likely to be reading this book in the first place, especially if their leaders told them it was full of evil lies, or schluffed it off as “scientific gibberish.” (This is not exactly a book that an authoritarian leader would want his followers to read. Don’t expect it to be featured as a prime selection by the Authoritarian Book of the Month Club.) Still, the real test of how authoritarian or unauthoritarian we are comes from how we act in various situations. And that, we shall see at the end of this book, is a whole different ball game than answering a personality test.

  I am, incidentally, taking a minor chance by letting you score your own personality test in this book. I conceivably could get kicked out of the American or Canadian Psychological Associations—if I belonged to them. And for good reason: people have a long history of over-valuing psychological test results—which I have tried to warn you about. A good example of this popped up on the internet right after John Dean’s book, Conservatives Without Conscience, was published. Almost immediately a thread was begun on the Daily KOS site by someone who had Googled “authoritarianism” and found (s/he thought) the research program summarized in Dean’s book. S/he described the theory and also placed the personality test at the heart of this program right in the posting. Tons of people immediately jumped in, talking about how low they had scored on the test, how relieved they were that they weren’t an authoritarian, and how the theory and the attitudes mentioned on the test seemed so amazingly true and reminded them of “definite authoritarians” they knew.

  Trouble was, they got the wrong research program and the wrong test. People were basing their analysis on a theory and scale developed during the 1940s, which has long been discredited and abandoned by almost all of the researchers in the field. So (1) Don’t pay much attention to your score on the RWA scale, and (2) Realize how easy it is to perceive connections that aren’t really there.


  10 One thing we haven’t discussed is why half of the statements on the RWA scale (and any good personality test) are worded in sort of the “opposite way” such that you have to disagree with them to look authoritarian. The answer, it turns out, is quite important if you care about doing meaningful research with surveys or if you want to be a critical consumer of surveys. People tend to say “Yes” or “Agree” when they (1) don’t understand a statement, (2) don’t have an opinion, or (3) (Horror!) don’t care about your survey. It’s similar to what happens to me when I’m walking down the street, and an acquaintance on the other side yells something at me. If I didn’t hear clearly what he said (an increasingly likely event, I confess) I’ll often just smile and nod and continue on my way. Now this may prove idiotic. Maybe the person yelled, “Bob, you’re walking on wet cement!” But I didn’t know what he said; I assumed it was just a greeting, so I smiled and nodded and moved on. Well sometimes people just smile and nod and move on when they’re answering surveys.

  Political party pollsters know this, and that’s why they word their surveys so that agreement will make their side look good, as in, “Do you think the governor is doing a good job?” If 50 percent of the public truly thinks so, the poll may well show 65 percent like the gov. But the trouble is, on some personality tests you can get so much smiling and nodding that people who are normal but indifferent will score abnormally high, invalidating the results. So it’s wise to balance a scale so that a person has to disagree half the time to get a high score. Balancing doesn’t stop the nodding and noodling, but meaningless agreement with the negatives cancels out the meaningless agreement with the positives and keeps the total score in the middle of the scale, where it can’t do much harm.
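  The cancelling-out can be shown in a few lines. This Python sketch uses an invented six-item balanced key: a yea-sayer who agrees “+2” with everything lands exactly at the neutral point, while a genuine response pattern does not:

```python
# Scoring a balanced scale on a -4..+4 agreement format. "Protrait" items are
# keyed +1 (agreement raises the score); reverse-worded "contrait" items are
# keyed -1 (agreement lowers it). The keys and answers are invented.

def score(responses, keys):
    return sum(r * k for r, k in zip(responses, keys))

keys = [+1, -1, +1, -1, +1, -1]      # a balanced six-item scale

yea_sayer = [2, 2, 2, 2, 2, 2]       # smiles and nods at every statement
true_high = [3, -3, 4, -2, 3, -4]    # a genuinely authoritarian pattern

print(score(yea_sayer, keys))   # 0: the meaningless agreement cancels out
print(score(true_high, keys))   # 19: a real response pattern survives
```

  The yea-sayer’s score sits harmlessly in the middle of the scale, exactly as described above.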

  (Beware: the last paragraph was the “fun part” of this note, so you can imagine what the rest is going to be like!) “Smiling and nodding” was at the heart of the hairy mess that early research on authoritarianism got itself into. All of the items on the first “big” authoritarian follower measure, something called the F (for Fascism) scale which came out of that 1940s research program mentioned in the previous note, were worded such that the authoritarian answer was to agree. So its scores could have been seriously affected by “yea-saying.” But other researchers said, “Maybe ‘yea-saying’ is itself part of being a compliant authoritarian follower. Let’s get some authoritarian followers and find out.” “Uh, how are we going to get them?” “Let’s use the F scale to identify them!” “But that’s what we’re trying to decide about!”

  Many researchers were swamped by this dog-chases-its-own-tail whirlpool of reasoning until the mess was eventually straightened out by a carefully balanced version of the F scale. It showed that the original version was massively contaminated by response sets. These studies led to the development of the RWA scale, which was built from the ground up to control yea-saying, and studies with the RWA scale have made it clear that authoritarian followers do tend to agree more, in general, with statements on surveys than most people do. It is part of their generally compliant nature. It only took me about twenty years to get all this untangled, and would you believe it, some people still think fixated researchers have no fun!


  11 What is a “high RWA”? When I am writing a scientific report of my research I call the 25% of a sample who scored highest on the RWA scale “High RWAs” with a capital-H. Similarly I call the 25% who scored lowest “Low RWAs,” and my computer runs wondrous statistical tests comparing Highs with Lows. But in this book where I’m describing results, not documenting them, I’ll use “high RWAs” more loosely to simply mean the people in a study who score relatively highly on the RWA scale, and “low RWAs” will mean those who score relatively low on the test.

  If I’ve made myself at all clear here, you’ll know that I am comparing relative differences in a sample. I am not talking about types of individuals, the way you might say Aunt Barbara is an extrovert while Uncle Jim is an introvert. High and low RWAs are different from one another but not opposites. It’s a matter of degree, not a hard “100% versus 0%” cut.
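  For concreteness, here is what the quartile split might look like in Python, with a dozen invented scores (real samples run far larger):

```python
# A quartile split on RWA scale scores: the top 25% are "High RWAs", the
# bottom 25% are "Low RWAs". The twelve scores are invented for illustration.

scores = [112, 87, 150, 95, 170, 60, 130, 75, 140, 101, 55, 165]

ranked = sorted(scores)
quarter = len(ranked) // 4
low_cut = ranked[quarter - 1]     # highest score still in the bottom quarter
high_cut = ranked[-quarter]       # lowest score already in the top quarter

lows = [s for s in scores if s <= low_cut]     # Low RWAs
highs = [s for s in scores if s >= high_cut]   # High RWAs
print(sorted(lows), sorted(highs))   # [55, 60, 75] [150, 165, 170]
```

  Everyone between the two cutoffs belongs to neither group, which is exactly why “High” and “Low” are relative labels and not types.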


  12 (As always, reading this note is purely voluntary and in this particular case may even be a sign of madness.) We need to talk about generalizations, don’t we. All of the findings I shall be presenting in this book are generalizations-with-exceptions, which means that whatever the issue, some high RWAs acted the way low RWAs typically did, and some lows acted like highs usually did. That’s the stuff that the social sciences crank out, journal article after journal article: general truths, but hardly perfect ones.

  Some generalizations have so many exceptions that you wonder why they’re worth the bother; a lot of gender differences, for example, turn out to be minuscule. Other generalizations have so few exceptions you can almost take them to the bank; I’ll show you a connection in Chapter 6 between RWA scale scores and political party affiliation among politicians that will knock your socks off—if you’re a social scientist (wearing socks).

  If you really want to know more about this (and you certainly don’t have to; this is going to take a while), let’s look at the fact that tall people tend to be heavier than short people. You compute correlations to get a fix on how well two things, like height and weight, go together. A correlation can go from 0.00 (no connection at all) to 1.00 (a perfect association). The correlation between height and weight among North American adults comes in at about .50, which means the two are “middlin’” connected. That’s important if you’re wondering how big to make the jackets for tall men. So the generalization is valid, and useful, but we all know some tall, skinny people and my wife knows a “Mr. Short and Dumpy” very well.

  As a generalization about generalizations, the RWA scale correlations I present in this book usually run between .40 and .60. Thus they’re about as solid as the connection between height and weight. But how good is that in absolute terms? [Warning: the next sentence will take you back to your high school algebra class, which may trigger unconscious memories of bizarre hair-dos and “meat loaf” in the cafeteria every Thursday. Proceed at your own risk.] Social scientists commonly square a correlation to get an idea of how much of the “Mystery of Thing X” you can explain by Clue Y. So if weight and height correlate .50, (.50 x .50 = .25, or) 25 percent of the difference in people’s weight can be explained by taking into account how tall they are. That’s rather good in this business, because our weight is affected by so many other things, such as how many Big Macs you stuff into yourself, and whether you jog or crawl to the fridge to get more Haagen-Dazs. (Some psychologists, I must confess, say you don’t have to square the correlation to see how much you have explained. Instead, the simple correlation itself tells you that. Bet you wish you were reading a book written by one of them, huh?)
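  The squaring step can be shown in runnable form. The height/weight points below are made up purely for illustration (they happen to correlate around .91, stronger than the real-world .50 figure):

```python
# Correlate two invented variables, then square r to get "variance explained".
# The seven height/weight pairs are made up; with these particular points r
# comes out around .91, stronger than the real-world .50 mentioned above.

def pearson(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

heights = [160, 165, 170, 175, 180, 185, 190]   # cm
weights = [62, 70, 68, 80, 77, 95, 88]          # kg

r = pearson(heights, weights)
explained = r * r   # share of the weight differences "explained" by height
print(round(r, 2), round(explained, 2))   # 0.91 0.83
```

  Note how quickly the “explained” share drops as r falls: a .50 correlation explains 25 percent, and a .30 correlation explains only 9 percent.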

  (Have you ever had so much fun in one note? It gets even worse.) Most relationships reported in psychology research journals can only explain about 5–10 percent of why people acted the way they did. I call those “weak”. If one thing can explain 10 to 20 percent of another’s variability (the statistical phrase is “they share 10 to 20 percent of their variance”), I call that a “moderate” connection. I call 20 to 30 percent a “sturdy” relationship, and 30 to 40 percent gets the designation “strong” in my book. Above 40% equals “very strong,” and you could call above 50% “almost unheard of” in the behavioral sciences.

  This may seem quite under-achieving to you, but it’s tough figuring people out and, as Yogi Berra might put it, everybody already knows all the things that everybody already knows. Social scientists are slaving away out on the frontiers of knowledge hoping to find big connections that nobody (not even your mother) ever realized before, and that’s practically impossible. Ask your mom.

  In terms of precise correlation coefficients, a correlation less than .316 is weak, .316 to .447 is moderate, .448 to .548 is sturdy, .549 to .632 is strong, .633 to .707 is very strong, and over .707 is almost unheard of. These are my own designations, and they probably set the bar higher than most behavioral scientists do. You can easily find researchers who call .30 “a strong correlation,” whereas I think it is weak. (I could have used labels like “hefty,” “stout,” and “a great big fat one!” But for some reason I don’t like these designations.)
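  Those cutoffs are just the square roots of the shared-variance bands from a couple of paragraphs back (the square root of .10 is about .316, of .20 about .447, of .30 about .548, and so on). As a sketch, the labels as a Python function:

```python
# Mapping a correlation coefficient onto the verbal labels defined in this
# note. The cutoffs are the square roots of the shared-variance bands.

def strength_label(r):
    r = abs(r)
    if r < 0.316:
        return "weak"
    if r < 0.448:
        return "moderate"
    if r < 0.549:
        return "sturdy"
    if r < 0.633:
        return "strong"
    if r <= 0.707:
        return "very strong"
    return "almost unheard of"

print(strength_label(0.30))   # weak
print(strength_label(0.50))   # sturdy (.50 squared = 25% of variance shared)
print(strength_label(0.75))   # almost unheard of
```

  So the typical RWA correlation of .40 to .60 lands in the “moderate” to “strong” range, even by these deliberately demanding standards.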