There is no sphere of life that polling hasn’t touched. The language of data is ubiquitous: X percent of Americans think this, Y percent identify as that. Even when they don’t actually say much, numbers carry a sheen of authority, which is part of why polling has become the unquestioned tool of choice for establishing Truth in the public sphere.
Not all statistics are created equal, of course. Many rely on tiny samples or skewed audiences or biased responses, or are produced by firms with a vested interest in reaching a certain conclusion. These biases can be hard to detect, or too much trouble to decode; and when statistics are reported in the media, they are often embraced with a misguided deference to factiness.
Bad stats are easy targets, though. Setting these aside, it’s much more difficult to wage a sustained critique of polling. Enter Robert Wuthnow, a Princeton professor whose new book, Inventing American Religion, takes on the entire industry with the kind of telegraphed crankiness only academics can achieve. He argues that even gold-standard contemporary polling relies on flawed methodologies and biased questions. Polls about religion claim to show what Americans believe as a society, but actually, Wuthnow says, they say very little.
In its worst form, this kind of critique can be self-indulgent, overly academic, and boring. It’s a micro-polemic: a big, philosophical attack on a topic that’s fairly narrow and small. Wuthnow’s critique, though, is ultimately concerned with how people derive knowledge about themselves, which is important. It speaks to the most basic project of public life: people collectively trying to figure themselves out, trading observations about the nature of existence as they all march steadily toward death.
Polls about religion impose neatness on this messy struggle with existence. They rely on tidy categories, offering reference points that help people see how others are like or unlike themselves. So if, as Wuthnow says, even the best polls are not that good, and they don’t tell us much, it’s worth considering: What fictions about belief get propagated when statistics are used for self-understanding?
For all its prominence today, modern American religion polling actually has a fairly short history. Wuthnow begins his narrative near the turn of the 20th century, when, “as near as anyone could tell,” he writes, “religion’s influence was declining.” For decades, the U.S. Census Bureau had been tracking data about religious groups; major denominations also tracked their membership and participation. But as the surge of church planting on the American frontier drew to a close, immigrants flooded the country, and congregations saw attendance decline, clergy faced a new challenge: getting people back into church. To figure this out, sociologists started conducting neighborhood surveys, sending volunteers from house to house with questionnaires. These investigations were also sometimes part of larger attempts to understand urban communities. W.E.B. Du Bois, for example, included interviews with clergy in a massive 1899 study about communal life in Philadelphia.
Looking back at some of these early surveys, it’s remarkable to see how all that’s old becomes new again. In 1926, a Presbyterian preacher named Charles Stelzle convinced 200 daily newspapers to print a questionnaire about religion, God, morality, and prayer; roughly 125,000 readers responded. This method of data-gathering wouldn't pass muster under modern standards—the respondents were likely not representative of the American population, and it would have been impossible to know who the questionnaire missed. But the results still uncannily echo modern times: Nearly a century ago, 91 percent of these poll respondents said they believe in God, compared to the 92 percent who said the same in a Pew poll just a few years ago.
One of today’s polling giants, Gallup, started asking survey questions about religion in 1939. (Again, slightly spooky results: In one poll, 37 percent of respondents said they “happened to go to church last Sunday,” which was the same percentage of self-reported weekly church attenders in a 2013 Pew poll. The methodological differences are crucial, but the coincidence is still striking.) Gallup’s engagement with religion polling was critical to the practice’s spread, Wuthnow argues, and the organization introduced a novel way of talking about faith. “Polls regularly asked people their opinion of religion,” Wuthnow writes. “They were asked if religion as a whole was increasing or losing its influence on American life. The question assumed they would somehow know, and that the results would somehow be meaningful.”
This is the echo chamber of public-opinion polling: People are asked about their perceptions of others’ beliefs, but those perceptions are likely shaped, at least in part, by what they’ve read and heard about poll results. Polls aim to show how people think about the world, but the tool is inherently distorted, attempting to simultaneously feed and record public opinion. If polls were just neutral instruments of truth-telling, maybe this wouldn’t matter, but they can also be an incredible source of tangible power. The main reason religion polling gets so much funding, Wuthnow argues, and the main reason it gets reported on, is that it has consequences for American electoral politics.
Wuthnow traces the strong tie between politics and polling to the 1970s, particularly following the election of Jimmy Carter. This “victory forced major media pundits who had little knowledge of grassroots religion to scramble,” he writes. “Were there really people in the heartland who considered themselves born again? If so, how many? And were they capable of becoming a significant factor in American politics?” Polls about the number and nature of evangelicals surged; magazines and newspapers began to cover religious affiliation as a crucial factor in voting patterns. Gradually, poll numbers became a fixture in punditry. Sometimes, the subjects of the polls bankrolled them: In the 1980s, Pat Robertson’s Christian Broadcasting Network funded a Gallup poll about school prayer, abortion, and feminism. Robert Schuller, the televangelist who hosted the famous Hour of Power broadcast, paid for another.
Even polling that wasn’t bought by evangelical Christians tended to focus on white, evangelical Protestants, Wuthnow writes. This trend continues today, especially in poll questions that treat the public practice of religion as separate from private belief. As the University of North Carolina professor Molly Worthen wrote in a 2012 column for The New York Times, “The very idea that it is possible to cordon off personal religious beliefs from a secular town square depends on Protestant assumptions about what counts as ‘religion,’ even if we now mask these sectarian foundations with labels like ‘Judeo-Christian.’”
For as long as polling has been a fixture in American politics, there have been dissenters. Wuthnow cites complaints from publications like Christianity Today and The Christian Century about the framing of poll questions, such as what it really means to be “born again.” A Democratic representative from Michigan, Lucien Nedzi, even called a Congressional hearing in 1972 on the trustworthiness of polls. But as the survey industry has become more prominent, these critiques have become fewer and further between—even as the standards of polling have arguably declined.
These standards are largely what Wuthnow’s book is concerned with: specifically, declining response rates across almost all polls; the short amount of time pollsters spend administering questionnaires; the racial and denominational biases embedded in the way most religion polls are framed; and the inundation of polls and polling information in public life. To him, there’s a lot more depth to be drawn from qualitative interviews than from quantitative studies. “Talking to people at length in their own words, we learn that [religion] is quite personal and quite variable and rooted in the narratives of personal experience,” he said in an interview.
At least some portion of Wuthnow’s critique can be understood as a form of academic infighting, or even suspicion of those who do scholarly-style work outside of the academy. Greg Smith, the associate director of research at the Pew Research Center—the organization that arguably represents the gold standard of non-academic polling—said his colleagues are aware of and rightly concerned about declining response rates. Often, pollsters will successfully reach only about 9 percent of the people they attempt to contact. This can skew survey results, especially if the people pollsters do reach don’t resemble the people they miss. One potential bias Pew has recognized, for example, is that “civically minded” people are more likely to participate in surveys. Since there are established correlations between civic participation and religious practice, it may well be that America’s non-believers are underrepresented in public polling about religion.
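A toy simulation can make the mechanics of that bias concrete. The Python sketch below uses entirely invented numbers—a 70-percent-religious population and response rates of 12 and 6 percent, none of which are Pew’s figures—to show how even a modest gap in who picks up the phone inflates the apparent share of believers:

```python
import random

random.seed(0)

# Hypothetical population of 10,000 people, 70% of whom are "religious."
# All numbers are invented for illustration; they are not Pew's figures.
population = [True] * 7000 + [False] * 3000

# Assume the civically minded (proxied here by the religious) answer
# surveys more often: a 12% response rate versus 6% for everyone else.
def responds(is_religious):
    rate = 0.12 if is_religious else 0.06
    return random.random() < rate

respondents = [person for person in population if responds(person)]

true_share = sum(population) / len(population)
observed_share = sum(respondents) / len(respondents)

print(f"True share religious:       {true_share:.1%}")      # 70.0%
print(f"Share among respondents:    {observed_share:.1%}")  # noticeably higher, around 82%
```

Under these made-up assumptions, the respondent pool overstates religiosity by more than ten points, even though nothing about the questionnaire itself is biased.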
Despite these liabilities, Smith argues, Pew’s results are on par with other surveys in the industry, including some of those conducted by government bureaus. (There’s no direct government comparison point for religion polling, though; since at least 1976, the U.S. Census Bureau has been legally forbidden from requiring anyone to disclose their religious beliefs or affiliation.) “Anybody trying to conduct surveys would love to be able to get response rates at 70 and 80 and 90 percent. That doesn’t seem to be tenable in this day and age,” Smith said. “So what we try to do is monitor what we are getting, assess the impact it has on the characteristics of our sample, and to continue to be cognizant of it. All the data suggests that these surveys are continuing to produce good, accurate information that we can use to understand religion and all other aspects of American society.”
In general, Smith said, a lot of what Wuthnow writes is “indisputable”: Polls can fall victim to misinterpretation; it’s gotten harder to execute them well; and there are some questions that just can’t be answered through a survey. “If your goal was to understand the totality of religious experiences and belief and the way that religion has meaning in one individual’s life, I don’t think a 20-minute multiple-choice survey is the right tool,” he said. But “it would be a shame if people came to the conclusion … that we can’t learn something meaningful from them.”
In particular, he pointed to issue-specific surveys, on topics with quickly shifting public opinion, such as gay marriage, or views that stay stubbornly constant, like attitudes on abortion. Somewhat grudgingly, Wuthnow agreed that incremental polling on these sorts of political issues can be valuable. “But even there, if you’re talking to someone in a qualitative interview, then of course, the complexity of thought and attitudes and experiences become much more nuanced and complex than pollsters are able to capture,” he said.
“What would we lose if we didn’t have Pew kinds of surveys? Frankly, not much,” he added. For most people who work in polling or media or politics, this probably sounds like an extreme position, and it is. The polling industry is not going away. Wuthnow’s proposed alternative—“an occasional survey that was really well-done, even if it costs a million dollars”—may be rosy-sounding, but it’s almost certainly impractical in today’s quickly churning public sphere. The relevant social-science soliloquy is not to poll or not to poll. The question is bigger and denser than that: If, as Wuthnow says, the public is over-reliant on polling and statistics in discussions about religion, what does that say about other tools for meaning-making that are missing from American public life?
* * *
In interviews, people rarely frame their own religious experiences in terms of statistics and how they compare to trends around the country, Wuthnow said. They speak “more about the demarcations in their own personal biographies. It was something they were raised with, or something that affected who they married, or something that’s affecting how they’re raising their children.”
This makes sense. Religion may affect people’s views on politics, but it’s often a guidebook for more immediate, tangible experiences—birth, death, love, relationships, the daily slog of life. Sociologists may have legitimate academic interests in how these experiences play out across demographic groups, but “it has been mostly through polling, and polling’s argument,” Wuthnow argued, that the broader public has “now become schooled to think that it is interesting and important to know that X percentage of the public goes to church and believes in God.”
Another way to put it: Polling has become the only polite language for talking about religious experience in public life. Facts like church attendance are much easier to trade than messy views about what happens to babies when they die, or the nature of sin, or whether people have literal soul mates. There’s an implicit gap between people’s private self-understanding of their own moral nature and the way those complex identities are reduced in the media and public-research reports, like the ones produced by Pew. This is what makes religion polling different from political polling: A question about who a person might vote for is relatively straightforward. A question about whether he or she believes in heaven or an afterlife is not.
This complexity is part of why religion is often not taken seriously at mainstream media outlets—it’s hard to talk about religion well. (If you doubt this, pick any major national newspaper and count its religion reporters. Then remember that nearly 80 percent of Americans identify as religious in some way, and that the remaining fifth are also humans who grapple with questions of life cycles and existence.) While religious diversity undoubtedly brings richness to American life, it also makes it harder to engage questions of religion, belief, and identity in public. When private religious views do bubble up into public life—a baker’s refusal to serve at gay weddings, a school district’s unpopular ban on Halloween in an attempt to respect religious minorities—they are often met with distrust, vitriol, and, sometimes, threats of violence.
All of this would probably be complicated with or without the existence of polls, if it’s even possible to imagine such a world. But following Wuthnow’s argument, it seems there are some aspects of religious life that polling can exacerbate or harmfully distort. Triumphalism among Christian conservatives over the shrinking church attendance of mainline Protestants. Narratives of religion’s decline, which are at once exaggerated, historically suspect, and part of the feedback loop that says religious questions are losing their importance in modern life. And though the best of polling attempts to carefully examine the distinctive experiences of minority religious groups, it more often reifies the idea of a monolithic norm: white Protestantism. Low response rates mean that minority populations, such as Jews and African Americans, can be underrepresented even in surveys with large sample sizes. Statisticians might weight those responses more heavily to compensate, but weighting can smooth over the differences between respondents and non-respondents, Wuthnow argues.
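A minimal sketch, again with invented numbers, shows why weighting only partly helps: it can restore a group’s share of the sample, but only by amplifying the members who happened to respond.

```python
# A minimal sketch of survey weighting; every figure here is made up for illustration.

# Suppose a minority group is 12% of the population but only 6% of respondents.
group_pop_share, group_sample_share = 0.12, 0.06
other_pop_share, other_sample_share = 0.88, 0.94

# Each respondent is weighted by (population share / sample share) for their group.
group_weight = group_pop_share / group_sample_share    # 2.0
other_weight = other_pop_share / other_sample_share    # roughly 0.94

# Weighting restores the group's share of the weighted sample...
weighted_group_share = (group_sample_share * group_weight) / (
    group_sample_share * group_weight + other_sample_share * other_weight
)
print(f"Weighted group share: {weighted_group_share:.0%}")  # back to 12%

# ...but it can only amplify the people who actually responded. If, say, 55% of the
# group's respondents attend services weekly versus 40% of the group members who
# never picked up the phone, the weighted survey still reports the respondents' figure.
attendance_respondents = 0.55      # what the pollster observes
attendance_nonrespondents = 0.40   # invisible to the pollster
actual = 0.5 * attendance_respondents + 0.5 * attendance_nonrespondents  # if split evenly
print(f"Survey reports {attendance_respondents:.0%}; a full count would give {actual:.0%}")
```

The gap between the reported 55 percent and the hypothetical 48 percent is exactly the kind of difference between respondents and non-respondents that weighting cannot see.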
Bias creeps in subtly, often in the way questions are asked. For example: African Americans are only somewhat more likely than white Americans to go to religious services every week; if that’s the test of religiosity used in a poll, black and white faith may not seem so different. But in a large national study, Wuthnow found, black respondents spent much more time than white respondents at the services they attended. They also expressed their faith in different ways, like praying for fellow congregants. No doubt, some polls capture this subtlety and account for minority believers. But Wuthnow argues that such care is the exception, mostly possible only in a context like the academy. This contributes to the lingering myth that white Protestants have ownership over American religious life.
Most of all, in the absence of robust alternative languages for talking about religion in public life, polls create the illusion that religious questions can actually be captured fully in surveys. This can be true even if pollsters aren’t hucksters, although they sometimes are; even if pundits mean well, which they sometimes don’t. In his book, Wuthnow seems to hint at malice or negligence on the part of pollsters—it’s not overt, but the whisper of conspiracy is there. There’s not really a corps of evil social scientists plotting to reduce human life to neat numbers, though. Polls themselves are not bad; it’s the absence surrounding them that’s more troubling.
So, keep reading Pew polls; they are valuable. It is okay to care about things like belief in God and rates of religious attendance; I, for one, will keep writing about them. If you mourn anything, mourn the meaningful Grappling With Existence that has to happen in private spaces, rather than public ones, an experience that’s not well-understood or often taken seriously. If the old gods have ascended from their graves, as a melancholy German once said, they have become statistics.