When I ask policy makers and practitioners what they would like to know about a particular intervention, they usually reply, almost as a knee-jerk response, that they want to know whether it is effective or whether it works. If I then ask what 'effective' or 'works' means for their intervention, they are usually unable to answer.
Creating a research question is probably the most difficult part of a research project and the key to knowledge development. A question directs the project, and finding an answer can be the most satisfying part of research, since it is a partial resolution of a problem that interests you. A research question can thus suggest a method and define the problem. A quality question indicates a deep interest in, and knowledge of, the topic, surprising and engaging the reader. The effectiveness question does none of this: by reaching for a generic formula that fits all social problems, it signals a lack of interest in the topic itself.
When I worked in charitable foundations as a policy officer, I would be sent glossy evaluation reports, and invited to their launches, showing how effective the grant holders' interventions had been. The publications consisted mostly of unsubstantiated claims about an underdefined notion of 'effectiveness'. This used to drive me mad: I was genuinely interested in the problems these interventions were addressing, so I would read the reports occasionally, but they showed a remarkable lack of curiosity about the clients and the problems they were facing. Reading the pamphlets, I learnt nothing about the interventions, the participants, or their problems, but I did learn how fantastic the agency was and why it should be re-funded. I came to regard these evaluations as part of the agencies' fundraising efforts, and, as you can imagine, I became quite cynical.
In my last funding job, I was able to design a grants programme that supported agencies in developing research questions that would help them understand something more about their interventions, and so improve their offering to their clients. Sounds sensible and easy, doesn't it?
It wasn't, and I lost my job. Some of the agencies developed interventions that failed, and were trusting enough to tell us. This did not go down well with the senior management at the Fund, who wanted all the projects to be effective. Never mind that failure can be a rich source of learning. In one instance, the learning from a project enabled a consultant to advise teams from other hospitals not to follow the route she had taken to prevent HIV transmission from husbands to their pregnant wives. Although the intervention was not successful, the knowledge gained from it helped her, and hopefully others, to design better interventions.
This attachment to effectiveness is prevalent amongst academics as well as practitioners. I left the voluntary sector to pursue an academic career. Foolishly, I imagined that academics would not fall into the effectiveness trap of using the word as cover for implicit assumptions we are all meant to understand, leaving us to do the researcher's work by filling in the gaps ourselves. I imagined that academics would be engaged in understanding social problems and independent of funding interests.
Recently, I have been working with a professor and an associate professor who seem happy to use the effectiveness or 'what works' tropes in their research questions without defining what they mean. As a result, it is unclear what problems these interventions are addressing, or even who they are for. In their reluctance to discuss the specifics or uncertainties surrounding this population, they display a remarkable lack of interest in the social problem itself.
Does our interest in effectiveness mean that we reduce all social interventions to some form of cost-benefit analysis, in which both the costs and the benefits are a matter of opinion for the more powerful? Is this form of research a product of capitalism and neoliberalism, and essentially conservative?