“We do good things, don’t we?”: considerations about social enterprises’ evaluation approaches

This blog was written by Dr Francesca Calò, Lecturer in Management at The Open University Business School (OUBS). It was originally published on 2 February 2021 on the blog of the OUBS Department of Public Leadership and Social Enterprise (PuLSE).

In 2009, Alex Nicholls titled his article on the emergent reporting practices of social enterprises with the question “We do good things, don’t we?” (Nicholls, 2009). More than ten years later, and in a context that has changed dramatically over the last ten months, the same question resonates in my mind when I think about social enterprises and, more broadly, non-profit organisations, specifically in relation to their potential role in the world after Covid-19.

Even before Covid-19, social enterprises were tasked with competing for, and delivering, health and social care contracts on behalf of the state (Alcock et al., 2012; Hall et al., 2012), based on the perception that they provide higher levels of innovation, cost-effectiveness and responsiveness (Bovaird, 2014). Despite this rhetoric, the evidence that provision by social enterprises is ‘better’ than the available alternatives is notoriously weak (Calò et al., 2018). While the evaluation of interventions in health and social care has arguably become increasingly sophisticated, this has not been the case where social enterprise is concerned. Yet evaluating the impact of social enterprises will become even more relevant after Covid-19, when resources should be invested in players that can do “good things”.

In a study that I carried out with colleagues at Glasgow Caledonian University (Calò et al., 2021), we assessed the potential of three methodological approaches common in health evaluation – systematic review, realist evaluation and quasi-experimental investigation – and applied them to the complex realm of social enterprise. Systematic reviews, which collate and assemble all the up-to-date empirical evidence to answer a specific research question (Shemilt et al., 2010), have been considered a robust form of research because, if undertaken correctly, they are believed to increase the generalisability of results and to assess the consistency of evidence (Mulrow, 1994). Realist evaluation focuses on how a specific intervention works, for whom, and in what circumstances (Pawson and Tilley, 1996), and is designed to identify the combination of generative mechanisms and contextual characteristics involved in achieving outcomes (Fletcher et al., 2016). Quasi-experimental investigation aims to assess the effectiveness of specific health interventions (Craig et al., 2008) and to determine what works in terms of measurable outcomes (Fletcher et al., 2016; Moore et al., 2015).
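To make the quasi-experimental logic concrete, here is a minimal sketch in Python. All numbers are invented for illustration, not data from our study; it shows how a difference-in-differences comparison uses a comparator group to approximate what would have happened without the social enterprise intervention:

```python
import numpy as np

# Toy difference-in-differences illustration. The "wellbeing scores" and
# group sizes below are hypothetical, not data from the study discussed.
rng = np.random.default_rng(42)

# Outcomes before and after the intervention period for people supported
# by a social enterprise, and for a comparator group receiving standard
# provision.
se_before = rng.normal(55, 10, 200)
se_after = rng.normal(63, 10, 200)
comp_before = rng.normal(54, 10, 200)
comp_after = rng.normal(57, 10, 200)

# The comparator group's change stands in for what would have happened to
# the supported group anyway (the counterfactual), under the strong
# assumption that both groups would otherwise have followed parallel trends.
effect = (se_after.mean() - se_before.mean()) - (
    comp_after.mean() - comp_before.mean()
)
print(f"Estimated intervention effect: {effect:.1f} points")
```

The estimate is only as credible as the comparator: if the two groups would not otherwise have followed parallel trends, the figure is biased. This is precisely the comparator problem discussed below.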

We faced a number of challenges when applying these approaches to the social enterprise realm. Two were common to all of the methods employed:

  • First, it was difficult to identify a suitable comparator for a social enterprise, and consequently to address the question of what would happen without the social enterprise intervention (the counterfactual that the sketch above approximates with a comparator group).
  • Second, the findings were difficult to generalise. In the systematic review, the studies included were heterogeneous in terms of contexts, interventions and beneficiaries (the sketch after this list gives a toy illustration of how such heterogeneity is quantified). In the realist evaluation, results generated by a single case study could not be generalised to the heterogeneous world of social enterprise. In the quasi-experimental investigation, very little could be said about how the findings might apply in new settings or among other populations.
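To illustrate the generalisability point, the sketch below (again with invented numbers, not results from our review) shows how a systematic review might pool effects across studies and use Cochran’s Q and the I² statistic to quantify heterogeneity. A high I² signals that the studies are too dissimilar for a single pooled figure to generalise:

```python
import numpy as np

# Hypothetical effect sizes (standardised mean differences) and variances
# from five studies of social enterprise interventions. Invented numbers,
# purely to illustrate how heterogeneity limits a pooled estimate.
effects = np.array([0.10, 0.45, -0.05, 0.60, 0.20])
variances = np.array([0.02, 0.03, 0.02, 0.05, 0.04])

# Fixed-effect weights, pooled estimate, and Cochran's Q for heterogeneity.
w = 1.0 / variances
pooled_fe = np.sum(w * effects) / np.sum(w)
Q = np.sum(w * (effects - pooled_fe) ** 2)
df = len(effects) - 1

# I^2: the share of total variability due to between-study heterogeneity.
I2 = max(0.0, (Q - df) / Q) * 100

# DerSimonian-Laird between-study variance and random-effects pooling.
tau2 = max(0.0, (Q - df) / (np.sum(w) - np.sum(w**2) / np.sum(w)))
w_re = 1.0 / (variances + tau2)
pooled_re = np.sum(w_re * effects) / np.sum(w_re)

print(f"Q = {Q:.2f} on {df} df, I^2 = {I2:.0f}%")
print(f"Pooled effect (random effects): {pooled_re:.2f}")
```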

Reflecting on the limitations of the three methods used in this study, we can draw two main lessons that could be useful for policy-makers, researchers and practitioners. The first concerns choosing the most appropriate comparator; the second concerns the possibility of generalising results, and with it the appropriateness of transferring methods commonly used in public health to the context of social enterprise.

  • First, we learned that there is no single ‘right’ comparator for all studies of social enterprises. Choosing an appropriate comparator requires recognising the diversity of existing service providers and community initiatives. The comparator group should reflect meaningful choices in policy and real-world practice, and should be selected according to the study question being addressed. Context is therefore key to deciding how to evaluate the intervention.
  • Second, some scholars have argued for creating ‘one size fits all’ tools to compare the social value generated by very different social enterprises, as a solution to the need for greater generalisability (Kroeger and Weber, 2014). However, contextual variables can greatly affect how a social enterprise ‘works’ and whether it achieves its outcomes. Tools that do not account for the complexity of interventions and the contextual variation involved will therefore be of limited use in adding to the evidence base and informing policymakers. Applying the same process, with the same tool, to all organisations and measuring the same outcomes could even have the perverse effect of undermining an organisation’s ability to pursue its social mission. It can change the nature of the social enterprise and reduce the organisation’s legitimacy, particularly when measurement is tied to accessing public funds. It could also create an instrument that promotes only specific areas of policy, or channels investment only to specific organisations, with a resultant narrowing of the wide variety of social enterprises.

If, in the future after Covid-19, policymakers aim to understand the added value of social enterprise organisations, an integrative, context-based research approach combining different research methods and designs should be implemented to improve generalisability. The selection of comparator groups (where a comparator is needed to address the research question) should be based on an in-depth analysis of the context in which the social enterprise is embedded, and on the policy aims the evaluation seeks to address. Evaluating whether “we are doing good things” does not come without a cost: each of the methods adopted in our paper was time-consuming and resource-intensive, and required the researcher to possess advanced skills. Public officials should therefore recognise the complexity and resource-intensive nature of such evaluations, and resource them accordingly. Only in this way will the question “we do good things, don’t we?” be addressed.

Read the original blog on the PuLSE website

References

  • Alcock, P., Millar, R., Hall, K., Lyon, F., Nicholls, A., Gabriel, M., 2012. Start-up and Growth: National Evaluation of the Social Enterprise Investment Fund (SEIF) (Report), Third Sector Research Centre. Third Sector Research Centre and Health Services Management Centre, Birmingham.
  • Bovaird, T., 2014. Efficiency in Third Sector Partnerships for Delivering Local Government Services: The role of economies of scale, scope and learning. Public Management Review 16, 1067–1090. https://doi.org/10.1080/14719037.2014.930508
  • Calò, F., Roy, M.J., Donaldson, C., Teasdale, S., Baglioni, S., 2021. Evidencing the contribution of social enterprise to health and social care: approaches and considerations. Social Enterprise Journal. In press.
  • Calò, F., Teasdale, S., Donaldson, C., Roy, M.J., Baglioni, S., 2018. Collaborator or Competitor: Assessing the Evidence Supporting the Role of Social Enterprise in Health and Social Care. Public Management Review 20, 1790–1814. https://doi.org/10.1080/14719037.2017.1417467
  • Craig, P., Dieppe, P., Macintyre, S., Michie, S., Nazareth, I., Petticrew, M., 2008. Developing and Evaluating Complex Interventions: the New Medical Research Council Guidance. BMJ 337, a1655. https://doi.org/10.1136/bmj.a1655
  • Fletcher, A., Jamal, F., Moore, G., Evans, R.E., Murphy, S., Bonell, C., 2016. Realist Complex Intervention Science: Applying Realist Principles Across All Phases of the Medical Research Council Framework for Developing and Evaluating Complex Interventions. Evaluation 22, 286–303. https://doi.org/10.1177/1356389016652743
  • Hall, K., Alcock, P., Millar, R., 2012. Start Up and Sustainability: Marketisation and the Social Enterprise Investment Fund in England. Journal of Social Policy 41, 733–749. https://doi.org/10.1017/S0047279412000347
  • Kroeger, A., Weber, C., 2014. Developing a Conceptual Framework for Comparing Social Value Creation. Academy of Management Review 39, 513–540. https://doi.org/10.5465/amr.2012.0344
  • Moore, G.F., Audrey, S., Barker, M., Bond, L., Bonell, C., Hardeman, W., Moore, L., O’Cathain, A., Tinati, T., Wight, D., Baird, J., 2015. Process Evaluation of Complex Interventions: Medical Research Council guidance. BMJ 350, h1258. https://doi.org/10.1136/bmj.h1258
  • Mulrow, C.D., 1994. Systematic Reviews: Rationale for Systematic Reviews. BMJ 309, 597–599. https://doi.org/10.1136/bmj.309.6954.597
  • Nicholls, A., 2009. ‘We do good things, don’t we?’: ‘Blended Value Accounting’ in social entrepreneurship. Accounting, Organizations and Society 34, 755–769.
  • Shemilt, I., Mugford, M., Vale, L., Marsh, K., Donaldson, C., 2010. Evidence-Based Decisions and Economics: Health Care, Social Welfare, Education and Criminal Justice. Wiley-Blackwell, Oxford.