To Assess Evidence-Based Policy, First Look Inward
Steven Putansu, PhD, is a Senior Social Science Analyst at the US Government Accountability Office, a Lecturer at American University, USA, and the author of Politics and Policy Knowledge in Federal Education: Confronting the Evidence-Based Proverb (2020). Read the chapter “Moving Beyond the Evidence-Based Proverb” free on Springer Link until 24th June 2020.
Days before Christmas in 2018, I was frantically refreshing news feeds to follow progress on the Foundations for Evidence-Based Policymaking Act. The Act made it through Congress on December 21, 2018, was sent to the President on January 2, 2019, and was signed into law on January 14. I had been working on my book on evidence-based policy for nearly ten years, and was filled with a mix of excitement about potential improvements to how government produces and uses policy knowledge, skepticism about the details of implementation, disappointment that my work hadn’t contributed to the decision, and a bit of fear that my research might suddenly become irrelevant.
In that moment, I was guilty of many of the behaviors my research had set out to critique. My excitement was driven by the unchecked optimism that improvements to evidence-based policy would lead to better, less political decisions, while my skepticism was driven by the fear that political influence would hamstring effective implementation. I had abandoned a more nuanced view of the joint importance of politics and policy knowledge, relying instead on the evidence-based proverb that treats them as an either-or proposition. I had attended many of the Commission on Evidence-Based Policymaking meetings, provided comments on several drafts of its report, and offered feedback on some text of the law. I was surprised and excited by the bipartisan support for the bill, and felt that the Act was a major step forward. Nevertheless, I repeated a harmful pattern I had identified in my own work, treating the non-use of a single piece of policy knowledge (which was not even published) as a disappointing signal that the decision somehow lacked merit.
Over the next several weeks, I reassessed my research through the lessons I hoped it would provide to others. First, I assessed the purpose of my work: to offer applied frameworks for understanding the role of politics and policy knowledge in decision making, and to offer practical guidance for researchers and decision makers to improve the relevance and use of data, information, and evidence. I reconsidered how major changes to the cases I had been studying (Title I of the Elementary and Secondary Education Act and Federal Student Loans) had provided new insights for my work, and decided that the Evidence Act could be another such opportunity.
I started to take note of press releases about the Act, media coverage, and commentary from the academics and practitioners on my Twitter feed. I listened to presentations from agencies, the Office of Management and Budget, and scholars and activists who had pressed for the Act. From these, I concluded that the world had not suddenly changed, and the misperceptions of bright lines between facts and values, politics and policy knowledge, good and bad decisions all persisted. I could still offer some insights into how politics and policy knowledge interact, overlap, and blend together. There was still potential to move toward more evidence-based expectations about whether and how policy knowledge could support decisions about the effectiveness, efficiency, and equity of government action. This realization gave me an opportunity to consider the similarities and differences between the Act and the fifty years of evidence-based reforms that preceded it. Building on that comparison, I was able to tweak and reframe some of my analysis and conclusions. The prior ten years of work had not been in vain.
This experience was a reminder of the powerful siren’s song of the evidence-based proverb, and also of the ability of personal interest to influence how we experience and understand decisions. Even after ten years dedicated to studying the value of nuance and context in understanding decisions, my instinct was to think about my personal impact first, and then rationalize my dissatisfaction by pointing toward some imperfection in policy knowledge. This was a powerful lesson that humility is required not just of those trying to inform and influence decisions, or of those making them, but also of ourselves as we assess those actors.
For me, this was a reminder that the world is not always as simple as we want it to be. We are constantly processing opinion and fact in tandem, dealing with competing preferences and priorities, and confronting wicked policy problems that our current policy knowledge cannot fully explain or solve. When we take absolute positions of right and wrong, we often set ourselves up to overlook incremental progress, opportunities for compromise, and uses of policy knowledge to frame, lead, and negotiate decisions. The first step to improving the use of policy knowledge, as it turns out, may well be reassessing our own use of data, information, and evidence and the roles our own politics play in that use. If we can understand ourselves in this way, perhaps we can offer more realistic expectations for setting and prioritizing political goals, and for designing, implementing, and assessing the public policies and programs created to pursue them.