“What is research but a blind date with knowledge?” -Will Harvey
How do you relax?
Some people enjoy a nice glass of wine.
Others enjoy binge-watching a new show (“Netflix and Chill”).
Me? I enjoy a good organizational case study.
(I’m a simple guy with simple pleasures.)
For many years, I traveled to congregations across North America, giving presentations on how to make synagogues more innovative. Early on, my colleagues taught me a golden rule: never give a congregation a case study that is impossible for it to imitate. Nothing makes a Jewish community angrier than a presenter who can only offer stories of success from congregations that are wealthier and bigger. Provide well-constructed stories of success, in contrast, and that same presenter builds instant credibility.
While this rule of thumb cultivated much-needed empathy early in my career, it also sparked a lifelong search to understand how we know whether something “really” works in the Jewish world. A few weeks ago, we discussed John List’s Voltage, an amazing analysis of why certain ideas scale and others do not. This week, we will explore one theory of why the Jewish organizational world lacks even a modicum of data on institutional effectiveness, and one possibility for how we might turn a corner.
Replication Crisis
Fortunately, a number of academics and quants subscribe to Moneyball Judaism. Many of them, and I imagine a number of you, are aware of an ongoing debate in many scientific fields about the accuracy of certain research findings, particularly in behavioral science, sometimes called the “replication crisis.” While some object to calling this issue a “crisis,” it’s important to define what it is and why it matters.
Every day, dedicated professionals work to produce research that someone will deem revolutionary. The stakes of producing revolutionary research are high: advances in medicine, economics, chemistry, psychology, etc., can benefit millions of people, and the researchers who produce or patent these findings can make billions of dollars or win prestigious prizes (e.g., Nobel Prizes, MacArthur Genius Grants, etc.).
However, original research is only the first step toward revolution; ultimately, when a researcher publishes the process behind their conclusion, someone with similar expertise should be able to follow those instructions and get roughly the same result (i.e., the findings should replicate).
And this is where we come to our problem.
Much has been written on this topic, and I will avoid writing about specific people whose careers have been rocked by this ongoing debate.1 Given my lack of expertise, I think it’s foolish for me to dip even a pinky toe into specific conflicts. But since we want to be informed, I’d recommend that you read two things to learn more about the replication crisis:
Meta-Analysis: John P.A. Ioannidis published a paper in 2005 entitled “Why Most Published Research Findings Are False,” where he concludes that we need to “acknowledge that statistical significance testing in the report of a single study gives only a partial picture, without knowing how much testing has been done outside the report and in the relevant field at large.”2 In particular, Ioannidis is a major proponent of something called a “meta-analysis,”3 which essentially combines multiple independent studies in the same field to evaluate bodies of research.
Reproducibility Project: Second, the Center for Open Science created the “Reproducibility Project,”4 which is essentially research that tries to reproduce other research. Here’s a video summary of their project. Yes, it’s an infomercial, but an essential one for the following reason: many researchers have neither the time nor the budget to reproduce other people’s research while also trying to produce their own original work. As such, a third party with different skin in the game can be valuable.
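To make the meta-analysis idea above a little more concrete, here is a minimal sketch of the most basic version of the technique, a fixed-effect meta-analysis with inverse-variance weighting. The function and the three “studies” below are invented for illustration; real meta-analyses (including those Ioannidis advocates) involve far more care about heterogeneity and bias.

```python
# Minimal sketch of a fixed-effect meta-analysis.
# Each study reports an effect size and a standard error;
# we pool them, weighting each study by 1 / SE^2 so that
# more precise studies count for more.

def fixed_effect_meta(effects, std_errors):
    """Combine study effect sizes using inverse-variance weights."""
    weights = [1 / se**2 for se in std_errors]
    pooled = sum(w * e for w, e in zip(weights, effects)) / sum(weights)
    pooled_se = (1 / sum(weights)) ** 0.5
    return pooled, pooled_se

# Three hypothetical studies of the same intervention:
effects = [0.30, 0.10, 0.22]      # estimated effect sizes
std_errors = [0.10, 0.05, 0.08]   # their standard errors

pooled, pooled_se = fixed_effect_meta(effects, std_errors)
print(f"pooled effect: {pooled:.3f} (SE {pooled_se:.3f})")
```

Note how the pooled estimate sits closest to the second study, which has the smallest standard error; this is exactly the “partial picture” problem Ioannidis describes, viewed from the other side: no single study’s number is the answer, but a weighted body of studies gets closer.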
But here’s where the situation is even worse for our readership…
On a macro level, the fields mentioned above do have research that has been replicated successfully, so there is plenty in the sciences that stands up to scrutiny.5 It’s doubtful we can say the same about research into Jewish organizational life. And on a micro level, it goes back to my opening anecdote: people want to know which experiments done by others will work for their organization, and they can smell when a story of success simply cannot be replicated. Analyzing how we might research these questions is critical because it is something people desperately need.6
Random Acts of Medicine
Since Jewish organizational life largely lacks peer-reviewed empirical studies, it’s hard to say that there is anything like a “replication crisis.” That said, the lack of experiments requires us to think about how we can learn something closer to an objective truth using the data we do have. For this reason and others, I hope you’ll consider reading Anupam (“Bapu”) Jena and Christopher Worsham’s Random Acts of Medicine.
In medicine, the gold standard of research is the randomized controlled trial. However, Jena and Worsham argue in the introduction that several questions are difficult to explore using randomized controlled trials, so their book focuses on “natural experiments,” which they define as experiments that “occur without the influence of any manipulating hand.”7 In particular, natural experiments can answer important questions where the only way to run a true experiment would be to put someone’s life in danger, an ethically dubious proposition. My favorite one they analyze is the relationship between marathons and mortality rates.
When a city hosts a large marathon, certain streets must be completely closed to allow thousands of runners to complete the course (for example, the New York Marathon closes off much of 5th Avenue and 2nd Avenue). One consequence of this decision is that the fastest routes to the city’s major emergency rooms become less accessible during the race, forcing ambulances, cars, cabs, etc. onto alternative routes. Jena and Worsham wanted to examine whether marathons affect the health of everyone in the host city, not just the runners.
In a study published in The New England Journal of Medicine, Jena and a group of researchers examined eleven different marathons in the United States over a decade and made two key findings:8
First, a similar number of patients were hospitalized for heart attacks and cardiac arrests on marathon days as on comparable non-marathon days, meaning there was no major increase or decrease in hospitalizations just because a marathon happened to be taking place.9
Second, and more controversially, they found that marathon days led to an “absolute increase of 3.3 percentage points in thirty-day mortality,”10 which they primarily attribute to longer ambulance transport times: an average of 18.1 minutes on marathon days versus 13.7 minutes on non-marathon days.11 In other words, the research shows a slightly higher risk of death on marathon days because it takes longer to reach the closest hospital.
As you can imagine, the second finding caught people’s attention, and I’d strongly suggest reading the entire chapter to understand the full arc of their research, especially because some hospitals are aware of this issue and plan accordingly. Still, the findings offer just one of several amazing examples of how natural experiments can yield insights that would be difficult, if not impossible, to find any other way.
Add this to the list of reasons that Jewish organizations need to get better at collecting data. Yes, it’s great when someone commissions original research to produce much-needed data. But imagine what someone could learn simply from analyzing well-kept data.
For a Quick Summary…
What I Read This Week
Habits of the Top 1% of Engineers: I know little about engineering, but the more I read about it, the more I realize its principles can be applied to many non-engineering fields. I loved this piece.
Yoel Roth’s Story That You Need to Hear: Yoel Roth’s story is one of courage and tragedy that everyone needs to read. I won’t repeat the title.
8 Friends You Need: I love thinking intentionally about friendship, and I never thought about the idea that we need different kinds of friends for different moments. Here’s what sparked my thinking.
The Importance of Mentorship: While it hardly seems useful to tell you that leaders need mentors, Tyler Cowen’s Marginal Revolution summarizes how much having the “right” mentor matters for someone's career. Read the post, and consider reading the full research.
The Rise of the Yom Kippur Appeal: I find Tablet hard to read for various reasons, but they occasionally have an article that jumps to the top of my list. This was one of them.
In particular, some of these debates around specific researchers involve a gender component, which is my main reason for not naming names.
I’d be irresponsible if I did not mention that an online publication called Data Colada has been one of the most frequent critics of research that allegedly cannot be reproduced. I’m avoiding citing them because they are central to several major conflicts on this topic, including one with significant legal implications. But regardless of the merits, one should know they exist.
Of course, one might argue that producing this kind of research is “impossible.” While that’s a plausible hypothesis, it seems to me like a cop-out: calling something “impossible” is often a way to avoid trying.
Ibid., 77.
Ibid., 80.