
Can mixed methods promote evidence-informed policy-making for women’s empowerment?


This blog by Innovations for Poverty Action summarises discussions from a researcher gathering on measuring women’s empowerment in impact evaluations. Read the summary blog on the IPA website.

Previously, Dr. Sarah Baird told us about the benefits and challenges of integrating qualitative methods into impact evaluations on women’s empowerment, particularly a multi-country longitudinal study on gender and adolescence.

Today, we’ll hear from the director of that study, Dr. Nicola Jones, to understand the qualitative researcher’s perspective. In this post, we discuss why qualitative methods are important for understanding the social norms that shape women’s lives. She also shares her experience using mixed methods studies to promote evidence-informed policymaking, with qualitative methods helping to illuminate the mechanisms behind quantified impact estimates. As we discuss below, this combination can allow researchers to think more critically about their findings, tailor dissemination to different stakeholders, and explain unexpected results.

Nellie:    Last week, Sarah Baird and I talked about how impact evaluations that are looking at women’s empowerment could benefit from incorporating qualitative methods, particularly in the DFID-funded Gender and Adolescence Global Evidence (GAGE) mixed methods study that you’re working on with her. Could you tell us why, in your experience, qualitative and participatory methods are particularly useful in evaluations that focus on women and girls?

Nicola:    Qualitative methods—particularly those that allow the participant to lead the conversation—can allow you to unpack the role that deeply-entrenched, often sticky, social norms have in perpetuating the barriers to women’s empowerment. Norms don’t necessarily disappear, but they can be quite malleable, as they come to be practiced in more covert or underground ways. Recently in Ethiopia we’ve seen that, while people might be more aware that child marriage is illegal, and even aware of the penalties of going against those legal bounds, they are still finding ways to continue practicing this tradition. Qualitative methods help us understand those nonlinear change pathways that won’t necessarily come up in quantitative survey data.

Nellie:    What we’re able to capture or not about social norms came up continually throughout the workshop that inspired this blog series. Could you expand on why quantitative methods alone might not be able to capture what is going on in social norms?

Nicola:    There is progress being made in potential survey modules on social norms, but there are limitations to what can be done quantitatively. First, it’s not clear whether surveys administered to individuals are capturing individual attitudes or really capturing norms that are operating at a community level. Understanding those levels remains tricky without making the survey incredibly long and cumbersome. The beauty of qualitative methods is being able to unpack whether it’s the sanction of nonconformity that is driving people’s behavior or whether it’s really a deep-seated belief that they indeed hold.

Nellie:    I imagine that this has come up in the work you’re doing in the Gender and Adolescence Global Evidence (GAGE) project. Could you briefly introduce what you’re studying and how your qualitative research on adolescent girls is strengthened by bringing together an interdisciplinary investigative team?

Nicola:    GAGE is a nine-year research initiative funded by DFID that aims to better understand the types of program interventions that enhance adolescent girls’ well-being over the course of the second decade of life and as they transition to early adulthood. We focus on low- and middle-income country contexts, some of which are conflict-affected. We have two sets of research questions. First, we’re trying to understand how adolescents perceive the changes they’re undergoing, as well as the broader social and political order in which their lives are playing out. The second set of questions is focused on the effects of interventions on a broad range of outcomes, from economic empowerment to education, voice and agency, psychosocial well-being, freedom from violence, and sexual and reproductive health. We’ve deliberately kept it broad to understand the ways in which programs—even if they’re targeting a particular dimension of adolescents’ well-being—have effects on the other dimensions of their lives.

We brought together an interdisciplinary investigative team, including psychologists, sociologists, and geographers, to understand these multiple dimensions of adolescent well-being. And perhaps less common in qualitative endeavors, we are working with several political scientists to help us unpack the politics behind the ways in which the programs are implemented and the ways in which they connect with broader structures and systems.

Nellie:    And specifically, what can an economist—Sarah, for example—bring to the table in a mixed methods study?

Nicola:    Having development economists on the team helps us to think in a much clearer and more precise way about impact: what are we measuring, and what claims can we make about change? A development economist brings questions and tools critical to distilling out the specific contributions of a program intervention to the broader experience of adolescence. For myself, and I think for other colleagues who tend to use more qualitative methods, it’s been valuable to see the creative ways that experienced development economists can exploit multiple treatment arms to tease out the contribution that different program components or variations make to adolescent lives. For example, in several of the countries in which we’re working, particularly Rwanda, Nepal, and Ethiopia, we’re examining the effects of an intervention working with very early adolescents versus adolescents in the middle of the second decade. There’s a treatment arm that lasts two years and one that lasts five years, both starting with 11-year-olds. Then there’s a third arm with a delayed start. We typically don’t exploit that kind of multi-arm design within qualitative research. In fact, good examples of qualitative methods that sit alongside multi-armed interventions are very hard to come by in the literature, beyond just tacking on focus group discussions in each group.

Nellie:    So, development economists can construct a design that unpacks the different components or variations of an intervention, but if, for example, the five-year program doesn’t actually work much better than the two-year program, the qualitative methods could help us understand why.

Nicola:    Exactly, that’s critical.

Nellie:    Given the potential benefits of integrating qualitative methods into impact evaluations looking at women’s empowerment, I naturally wonder why we don’t see more partnerships between researchers across disciplines in this space. Is there an example of a study that you worked on that could have benefited from a stronger quantitative component? What factors played into why you weren’t able to, or didn’t, include more robust quantitative methods in that study?

Nicola:    A couple of cases come to mind. In one study, we examined citizen perceptions of cash transfers for vulnerable populations in the Middle East and Africa. We found that it’s critically important to make sure transfer recipients have opportunities to engage in that programming, rather than taking a top-down approach. But while the findings that our country partners unearthed through the in-depth qualitative work were fascinating, receptiveness to those findings among policymakers was relatively limited, partly because we weren’t able to quantify some of the impacts that we were seeing. It also could have led us to think more critically about some of our qualitative findings if there wasn’t a convergence.

Nellie:    It sounds like in this case, policymakers may have had a bias toward quantified results. And so, providing context for your qualitative findings with quantitative work would have encouraged take-up of those findings on a local level.

Nicola:    Yes, ministries of finance especially are more receptive to quantitative findings. Also, when you are trying to shift discourse, multiple methods allow you to tailor arguments to different stakeholders. If you’re confirming what people want to hear, then they don’t interrogate your methods necessarily with the same depth. But when you are bringing new ideas, it becomes particularly pertinent to use robust mixed methods.

Nellie:    That makes a lot of sense. Finally, what would you say to encourage development economists to incorporate qualitative methods into their work? What are some of the challenges that you’d want them to be prepared for?

Nicola:    First, an understanding of the change pathways underpinning the numbers that they’re finding will be critical if they need to defend their findings, particularly if the results are disappointing or unexpected. Sometimes the program can have a very robust design but still might fail due to the political context. In those cases, qualitative work with a political science perspective could shed light on why the policy processes in which the programs are situated are either facilitating or hindering program effectiveness.

In terms of the drawbacks, mixed methods research is more time and resource intensive, both in data collection and in the analysis process, particularly because there’s not yet a universal consensus on what constitutes gold-standard mixed methods work. We are still learning the best way to weave together different approaches in a way that is most compelling and methodologically robust.

And sometimes participatory approaches, where there may be repeat interactions with individuals or communities, are more transformative for participants than the actual program that’s being evaluated. In the Gaza Strip, some of the adolescents involved in our participatory research said that the questions we were asking had transformed the way they thought about their current situation and their next steps more than the curriculum we had been evaluating. On the one hand, it’s really positive that we can have that kind of effect, but it’s important to then disentangle what the intervention is aiming to do from the effects more interactive evaluation methods might have on program beneficiaries.

Nellie:    Those are all excellent points. I think it’s important that researchers who want to use mixed methods go into it with eyes open and see the potential challenges they might face. Do you have any final comments?

Nicola:    I think we’ve covered most of the key points. I’d add that we need continued investment not only in assessing the content lessons we get from mixed methods research, but also, as you are doing through this blog series, in learning from the process of mixed methods and multi-disciplinary approaches. Until this is a much more developed science, continuing to learn, document, and share both the challenges and the opportunities of such a process is itself a really important endeavor, in my view.

Nellie:    On that point, I’m really glad that you were able to join in on this process and make time. GAGE sounds like a huge undertaking, so I really appreciate that you could also take time to be part of these conversations.