Moving to opportunity? Challenging an analysis of poverty, opportunity and PTSD.
12 September, 2016 | Sarah Theissen

Dr David Norris
A guest blog by David C. Norris, who together with Andrew Wilson recently published ‘Early-childhood housing mobility and subsequent PTSD in adolescence: a Moving to Opportunity reanalysis’ in our Preclinical Reproducibility and Robustness channel.
In the 1990s, Congress mandated the ‘Moving To Opportunity for Fair Housing Demonstration’ (MTO), a randomized, controlled social experiment. MTO enrolled 4604 households living in distressed inner-city public housing, and randomly assigned some of them to receive housing vouchers that could empower them to move. The overarching question was: what happens when families trapped in the worst public housing projects get a chance to move out?
A 2014 study report in the Journal of the American Medical Association (JAMA) examined adolescent mental health at 10-15 years’ follow-up, and found that giving families voucher-based opportunities to move had helped their girls, but hurt their boys. The effect on boys was remarkable not only for its direction, but also for its sheer size: vouchers tripled boys’ odds of developing Posttraumatic Stress Disorder (PTSD).
Why reanalyze this specific data set?
The 2014 report by Ronald C. Kessler et al. initially got my attention for the same reasons that JAMA’s editors probably wanted to bring it to their readership. Concentrated poverty, if you consider the magnitude of human potential it destroys, is probably the greatest ‘killer’ of children in the US. This report seemed to offer a useful biomedical perspective on the problem, and on related policy questions.
But reading the report, I began to worry that a narrowly biomedical perspective could actually promote a wrongheaded interpretation. I was already aware of findings from the Birmingham Youth Violence Study, where children’s exposure to community violence had a perversely protective effect: it ‘desensitized’ them to violence experienced at home and in school. If a similar desensitizing effect were at work in MTO, then by uncritically accepting the ‘PTSD’ of the Kessler report as a straightforward measure of ‘mental health harm’ we would risk tacitly conceding that desensitization to extreme violence is in some respects ‘healthy’. From a social justice perspective, it would be devastating to endorse the view that boys from certain socioeconomic backgrounds are best nurtured in violent housing projects. What would we be saying? That at least this toughens them up for lives destined to be “nasty, brutish and short”?
After writing about these ‘construct validity’ concerns in the JAMA Letters section (and, I must say, being somewhat curtly brushed off), I set to work specifying a protocol for detecting desensitization in the MTO data. But as I plodded through a preregistration on ClinicalTrials.gov, a new surprise arose. A footnote on page 38 of this document suggested (and Dr Kessler confirmed by email) that the report’s PTSD outcome had been imputed by a dubious method that was neither mentioned in its Methods section, nor documented anywhere else. As I wrote to Dr Kessler in November 2014, “Any question of construct validity now seems moot until this imputation procedure has withstood scrutiny of a more prosaic and purely statistical nature.”
The key findings
Our most important ‘finding’ was probably just finding out what was actually done in that 2014 JAMA paper – by reproducing the authors’ original results using code and data they shared with us.
MTO adolescents received an abridged, self-administered, computerized psychiatric interview; consequently, PTSD could be ruled out, but not actually ruled in. So, instead of the desired zeros and ones for a binary PTSD outcome, Kessler et al. had a data set that was basically zeros and question marks.
To convert those question marks to 1’s and 0’s, Kessler et al. extrapolated from another study, the National Comorbidity Survey Replication (NCS-R), in which a complete set of PTSD questions had been asked. The trouble is, the NCS-R sampled the general adult population: hardly the group to extrapolate from when imputing PTSD for inner-city MTO adolescents. Figure 1 (see below) in our paper shows these two samples scarcely overlapped in age; a sketch of the extrapolate-and-impute procedure follows the figure.

Figure 1. Age distributions of the NCS-R and MTO Youth surveys. The PTSD imputation model used in the Kessler report was estimated in the former population, and applied to the latter.
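To make this extrapolate-and-impute pattern concrete, here is a minimal sketch in Python. This is emphatically not the authors’ code: the data are synthetic, the variable names (screen_score and so on) are hypothetical, and an off-the-shelf scikit-learn logistic regression stands in for whatever model was actually fit. What the sketch shows is the shape of the procedure: fit a PTSD model in one (adult) sample, then stochastically impute the unresolved cases in another (adolescent) sample.

```python
import numpy as np
import pandas as pd
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(2014)  # the imputation hinges on seeds like this

# Synthetic stand-in for a reference survey with complete PTSD
# ascertainment in adults (cf. the NCS-R).
n_ref = 1000
ref = pd.DataFrame({
    "age": rng.normal(45, 15, n_ref).clip(18, 90),
    "screen_score": rng.integers(0, 10, n_ref),
})
logit = -4.0 + 0.5 * ref["screen_score"]
ref["ptsd"] = (rng.random(n_ref) < 1 / (1 + np.exp(-logit))).astype(int)

# Fit the extrapolation model in the reference survey.
model = LogisticRegression().fit(ref[["screen_score"]], ref["ptsd"])

# Synthetic stand-in for the target survey: the abridged interview can
# rule PTSD out (0) but never in, leaving "zeros and question marks".
n_tgt = 500
tgt = pd.DataFrame({
    "age": rng.normal(15, 2, n_tgt).clip(10, 20),  # adolescents, not adults
    "screen_score": rng.integers(0, 10, n_tgt),
})
tgt["ptsd"] = np.where(tgt["screen_score"] == 0, 0.0, np.nan)

# Stochastically impute the unresolved cases from the adult-fitted model.
# Note that age never enters the model at all, even though the two
# samples scarcely overlap in age (cf. Figure 1).
unresolved = tgt["ptsd"].isna()
p_hat = model.predict_proba(tgt.loc[unresolved, ["screen_score"]])[:, 1]
tgt.loc[unresolved, "ptsd"] = (rng.random(unresolved.sum()) < p_hat).astype(float)
```

Even in this toy version, the vulnerability is plain: nothing in the procedure checks whether the two samples resemble each other on the dimensions that matter.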
Compounding this already incredible error, the technical details of the PTSD imputation made it overly sensitive to several arbitrary choices in the original analysis. Figure 2 and Figure 3 of our paper show the results of prespecified bootstrapping experiments, which we initially hoped would let us estimate a sampling distribution for the imputed results. But we uncovered such pathological patterns of dependence, both on the choices of two pseudorandom number generator (RNG) seeds that fed into the imputation, and on which variables were included in the logistic regression model used for the extrapolation, that it seemed nonsensical even to speak of a sampling distribution.
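A toy version of that seed-sensitivity experiment might look like the following (again synthetic data and hypothetical names, not our preregistered protocol): hold the data fixed, re-run the stochastic imputation under many different RNG seeds, and watch how much the downstream effect estimate moves.

```python
import numpy as np

# Fixed synthetic 'data': a randomized arm indicator and an abridged
# screening score, as in the sketch above.
rng0 = np.random.default_rng(1)
n = 2000
voucher = rng0.integers(0, 2, n)
screen = rng0.integers(0, 6, n)
p_true = 1 / (1 + np.exp(-(-3.0 + 0.4 * screen)))

def log_odds_ratio(seed: int) -> float:
    """Redo the stochastic imputation under one RNG seed, then estimate
    the 'voucher effect' as a simple log odds ratio."""
    rng = np.random.default_rng(seed)
    ptsd = np.where(screen == 0, 0.0, rng.random(n) < p_true)
    p1 = ptsd[voucher == 1].mean()
    p0 = ptsd[voucher == 0].mean()
    return float(np.log(p1 / (1 - p1)) - np.log(p0 / (1 - p0)))

# Spread of the estimate across 200 imputation seeds; in a well-behaved
# analysis this spread would be negligible next to the reported effect.
estimates = np.array([log_odds_ratio(s) for s in range(200)])
print(f"mean {estimates.mean():+.3f}, sd {estimates.std():.3f}")
```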
At the end of the day, it must be said that the original claim of a ‘statistically significant effect’ on boys’ PTSD was still standing after the beating it took from our bootstrapping. On balance, though, especially considering its depantsing by Figure 1, you wouldn’t say that claim is striking a very dignified or credible pose at this point.
We restricted our reanalysis to questions and conjectures prespecified long before we set eyes on the data. But my coauthor Andy Wilson has noticed that nearly 25% of the boys were uninterviewed, yet were included in the final analysis, weighted identically to those interviewed, and contributed pivotally to PTSD counts. Andy is currently preparing a new analysis to investigate how sensitive the PTSD findings were to the inclusion and weighting methods for these uninterviewed boys.
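In outline, such a sensitivity analysis might look like the sketch below. Everything here is hypothetical: the data are synthetic, and the three weighting schemes are illustrative placeholders for whatever Andy’s analysis will actually specify.

```python
import numpy as np

rng = np.random.default_rng(42)
n = 1200
voucher = rng.integers(0, 2, n)
interviewed = rng.random(n) > 0.25      # ~25% of boys uninterviewed
# Synthetic PTSD indicators; for the uninterviewed these stand in for
# imputed values, as in the original analysis.
ptsd = (rng.random(n) < 0.08 + 0.04 * voucher).astype(float)

def weighted_or(weights: np.ndarray) -> float:
    """Odds ratio for voucher vs. control under a given weighting."""
    keep = weights > 0
    p1 = np.average(ptsd[keep & (voucher == 1)],
                    weights=weights[keep & (voucher == 1)])
    p0 = np.average(ptsd[keep & (voucher == 0)],
                    weights=weights[keep & (voucher == 0)])
    return (p1 / (1 - p1)) / (p0 / (1 - p0))

for label, w in [
    ("uninterviewed at full weight", np.ones(n)),
    ("uninterviewed excluded", interviewed.astype(float)),
    ("uninterviewed downweighted to 0.5", np.where(interviewed, 1.0, 0.5)),
]:
    print(f"{label}: OR = {weighted_or(w):.2f}")
```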
The Power of Open Data
Our findings demonstrate how social science can go wrong when done in a vacuum of closed data and hidden code. Credible, evidence-based public policy needs underlying science that is open to public scrutiny, and researchers must come to expect that their work will be routinely reproduced and reanalyzed. Considerable effort by Dr Kessler’s team and the National Bureau of Economic Research (NBER) was needed to make our reproduction and reanalysis possible. Such effort should probably be designed into all publicly funded research, with contracts specifically allocating funds for open data and open code, and for independent reanalysis by third parties.