Can Incarceration Really Strip People of Racial Privilege?

Lance Hannon, Robert DeFina

Sociological Science, March 18, 2016
DOI 10.15195/v3.a10

We replicate and reexamine Saperstein and Penner’s prominent 2010 study, which asks whether incarceration changes the probability that an individual will be seen as black or white (regardless of the individual’s phenotype). Our reexamination shows that only a small part of their empirical analysis is suitable for addressing this question (the fixed-effects estimates), and that these results are extremely fragile. Using data from the National Longitudinal Survey of Youth, we find that being interviewed in jail/prison does not increase the survey respondent’s likelihood of being classified as black, and avoiding incarceration during the survey period does not increase a person’s chances of being seen as white. We conclude that the empirical component of Saperstein and Penner’s work needs to be reconsidered and that new methods for testing their thesis should be investigated. The data are provided for other researchers to explore.

This work is licensed under a Creative Commons Attribution 4.0 International License.
Lance Hannon: Department of Sociology, Villanova University.

Robert DeFina: Department of Sociology, Villanova University.

  • Citation: Lance Hannon and Robert DeFina. 2016. “Can Incarceration Really Strip People of Racial Privilege?” Sociological Science 3: 190-201.
  • Received: October 16, 2015.
  • Accepted: November 28, 2015.
  • Editors: Jesper Sørensen, Kim Weeden
  • DOI: 10.15195/v3.a10


4 Reactions to Can Incarceration Really Strip People of Racial Privilege?

  1. Aliya Saperstein March 18, 2016 at 9:04 am #

    We are flattered by the intense interest that Hannon and DeFina (hereafter H&D) have expressed in our work, both in this Sociological Science piece and in a forthcoming comment, with another colleague, in the American Journal of Sociology (AJS). We are in the midst of responding to the AJS comment, and encourage those interested to read the exchange, which is scheduled to be published later this year. It will include a more complete response to H&D’s re-examinations of our research than we have time or space to cover here. As our AJS reply shows, status-related factors, including having been incarcerated, are significant predictors of racial categorization in models with respondent fixed effects across a variety of subpopulations.

    We are also convinced that our NLSY results are not a “fragile” artifact of particular coding schemes because we find a reciprocal relationship between social status and racial categorization in other datasets, cohorts, and time periods. H&D fault us for not including models with respondent fixed effects in our analyses of arrest and racial classification using Add Health data, but do not mention that the results accounted for perceived skin color, a measure that is not available in NLSY79 and is frequently raised as a confounding factor (see Saperstein, Penner, and Kizer 2014). Nevertheless, models with respondent fixed effects confirm our finding that men who reported an arrest were significantly more likely to be subsequently classified as black than men who were never arrested. (Men who reported an arrest were also significantly less likely to be classified as Asian; results available upon request.) Saperstein and Gullickson (2013) also present fixed-effects models demonstrating the association between occupational status and racial classification in historical linked census data. Finally, we recommend that H&D and other interested readers consult Freeman et al. (2011), where we present our strongest causal evidence on the influence of status cues on racial categorization.

    More generally, it is important to note that we have a very different conception of the process of racial categorization, and of the relationship between race and privilege, than H&D. We would not be comfortable discussing “racially ambiguous” and “unambiguous” people without establishing the criteria for categorization that might justify these statements, and we maintain that future research would be better served by examining directly (rather than assuming) what information or which characteristics are influential when people make racial categorizations. Our research suggests that, in addition to using physical appearance or known ancestry, Americans also consider markers of social status when deciding who “fits” best in which racial category. However, we do not believe the potential for fluidity implied by using time-varying characteristics like social status to assign individuals to racial categories poses a challenge to the “durability” of racial privilege; quite the opposite. As we have argued elsewhere (e.g., Penner and Saperstein 2013), as long as racial fluidity is selective and consistent with current patterns of inequality, it will make racial privilege more durable in the aggregate and reinforce the very idea of racial difference.

    That said, we would be remiss if we did not acknowledge H&D on one point: case 1738 should have been labeled 1728. We regret the error and any confusion it may have caused.

    • Lance Hannon and Robert DeFina March 25, 2016 at 10:16 am #

      Our replication study focused on three main points: (1) most of the analyses presented in Saperstein and Penner’s Social Problems paper cannot begin to address the question of incarceration-driven fluidity because they do not remove the race-related selection effect regarding who is most likely to be subjected to imprisonment, (2) the few analyses presented that are potentially suitable for the issue at hand produce results that do not stand up to even slight adjustments in methodology, and (3) the best approach to documenting the necessary correlation for supporting Saperstein and Penner’s causal claims involves comparing a respondent’s racial classifications when interviewed in prison/jail to classifications when that same respondent is interviewed elsewhere.
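Point (3), the within-respondent comparison, can be illustrated with a minimal sketch. The data and variable names below are our own invention for illustration, not drawn from the NLSY files; the sketch simply shows what it means to compare a respondent's share of "white" classifications across interview contexts:

```python
# Toy illustration of a within-respondent comparison: for each respondent
# interviewed both in and out of prison/jail, compare the share of "white"
# classifications across the two contexts. Records are invented examples.
from collections import defaultdict

# (respondent_id, interviewed_in_prison, classified_white)
records = [
    (1, False, True), (1, False, True), (1, True, True),
    (2, False, True), (2, True, False), (2, True, False),
]

by_person = defaultdict(lambda: {True: [], False: []})
for rid, in_prison, white in records:
    by_person[rid][in_prison].append(white)

for rid, ctx in by_person.items():
    if ctx[True] and ctx[False]:  # requires variation in interview context
        diff = (sum(ctx[True]) / len(ctx[True])
                - sum(ctx[False]) / len(ctx[False]))
        print(rid, round(diff, 2))  # 1 -> 0.0, 2 -> -1.0
```

In this toy data, respondent 1 shows no change in classification across contexts, while respondent 2 shows a drop in white classification when interviewed in prison; a fixed-effects estimate aggregates exactly this kind of within-person contrast.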

      Saperstein and Penner’s reaction did not focus on these main points. Instead, they noted that in unpublished and forthcoming work significant effects emerge in respondent fixed-effects analyses for a variety of populations. However, this is not relevant for our critique regarding the published results in their Social Problems paper. Additionally, as we noted in our conclusion, even if one were to find a robust within-individual correlation with fixed-effects, doing so is just the first step for supporting their causal argument. A logical next step would involve testing whether the observed correlation varies in ways specifically predicted by the hypothesized causal mechanism. So, for example, one might expect that interviewers hearing directly about a respondent’s arrest history would be significantly more likely to classify a respondent a particular way than interviewers forced to infer an arrest history from subtle cues a respondent might give while privately entering data into a laptop (see our footnote 15).

      Saperstein and Penner admit to one small error. They note that case 1738 is actually 1728. Unfortunately, case 1728 does not match their table either. Before noting that we could not find the classification pattern (in our footnote 5), we searched the data for any cases where 7 of 9 pre-incarceration classifications were white. None exist. This case was not simply mislabeled; the date of incarceration is also off by one period. Given the other abnormalities that we uncovered (see, for example, our footnote 11), we encourage Saperstein and Penner to publicly provide the data and code used to produce their tables.

      In addition to our person-year dataset, we now provide a person-level file in our supplemental materials that summarizes the racial classification histories of all 620 ever-incarcerated respondents. In line with the results from our respondent fixed-effects analyses, these summary data suggest that within-individual variation in racial classification is not meaningfully related to within-individual variation in prison/jail interview context. For example, while 61 respondents saw their proportion of classifications as white decrease for the years they were interviewed in prison/jail (relative to before they were ever incarcerated), an equal number saw an increase in white classifications (of almost exactly the same magnitude).

      Beyond the general importance of replication for sociology’s standing as a social science, we believe our reanalysis is substantively important in that an exaggerated view of the permeability of racial boundaries in the United States may lead to an underestimation of the degree to which various populations face obstacles beyond their control as individuals. This is what drives our interest.

  2. Aliya Saperstein March 31, 2016 at 5:17 pm #

    As we noted above, the forthcoming exchange in AJS will be our definitive response. We are currently in the process of assembling a full replication package (designed to take people from the publicly available NLSY data all the way through to our AJS tables), and anticipate posting this as part of the website that will accompany my book (the manuscript for which is currently in progress).

    Finally, regarding case 1728: It is true that, in addition to mislabeling the case in our Table 5, the shift from pre- to post-incarceration presented there is also off by one position. However, those mistakes were confined to the translation of the data to the published table and, if anything, the incorrect cut in the racial classification history worked against our argument, not for it. The correct split is WWWWWWWO before prison and OOWWOOWOO after, which means the respondent was classified as white 88 percent of the time prior to incarceration and 33 percent of the time afterward (rather than the 78-percent white before and 38-percent white after that we reported).
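As a quick arithmetic check on the corrected split (the helper name below is our own, purely illustrative): 7 of the 8 pre-incarceration classifications and 3 of the 9 post-incarceration classifications are white:

```python
# Illustrative check of the percentages for case 1728, using the
# classification strings quoted above ("W" = white, "O" = other).
# The function name is ours; it is not part of any replication file.
def pct_white(history: str) -> float:
    """Percent of classifications in `history` coded as white."""
    return 100 * history.count("W") / len(history)

before = "WWWWWWWO"   # 7 of 8 white -> 87.5, rounds to 88
after = "OOWWOOWOO"   # 3 of 9 white -> 33.3, rounds to 33

print(round(pct_white(before)))  # 88
print(round(pct_white(after)))   # 33
```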

    • Lance Hannon and Robert DeFina April 13, 2016 at 12:32 pm #

      Our understanding is that the AJS comment and response section is meant for short comments and short replies specifically focused on work published in AJS. We hope that Saperstein and Penner will utilize the multiple response options in SocSci to address our distinct critique of their Social Problems paper.

      In our view, the correction for case 1728 actually makes it more of an extreme outlier, which is problematic given Saperstein and Penner’s claim that the selected cases “exemplify the pattern of results in both our descriptive findings and the multivariate analyses that follow” (p. 106). By extreme outlier, we mean that 99% of cases saw less of a decline in percent white classification pre-to-post than case 1728 did and that the mean difference is practically 0 (the median difference is exactly 0). We strongly encourage readers to examine our supplemental figures that illustrate the racial classification histories for all respondents with valid data.

      We are happy to hear that Saperstein and Penner foresee the eventual creation of a replication package for the AJS tables, but this will not address the multiple coding abnormalities we uncovered in their Social Problems paper. In particular, the unaddressed issue we raise in our note 11 is likely the direct result of the coding complications associated with an “ever” measure, especially when one is trying to be true to the statement that “some respondents are missing data on their type of residence at the time of the survey; we remove these cases from our analyses” (Saperstein and Penner 2010, p. 99).

      Saperstein and Penner summarize their Social Problems paper by saying, “In the NLSY study, we examined the effect on racial classification of being interviewed while incarcerated” (Saperstein, Penner, and Kizer 2014, p. 118). To us, this means a focus on incarceration at the time of the interview, not ever being incarcerated, and thus it is unnecessary to wrestle with what it means to be ever missing on respondent residence data. Still, like the results from our classification history analyses, the ever-incarcerated variable uniformly produced statistically insignificant results in our fixed-effects models (p > .10), and the estimates were particularly small when the sample was limited to observations with continuously valid residence data (never missing for an ever measure). Detailed results, data, and code are all provided on our data/code page.

      Saperstein and Penner have posed some very interesting questions in their work that will continue to be important for sociology. However, the evidence and conclusions in their Social Problems paper need to be reconsidered.