Hans Peter Dietz
It is that season of the year again. The medical faculty of my university bids us to attend another celebratory dinner, an opportunity to hand out awards and congratulate ourselves on how well we are all doing. In recent times such events leave me a bit confused. On the one hand I agree. My university attracts many millions in competitive research funding and produces much world-class research. Yet I have the distinct feeling that we are on an international playing field that is no longer level, and it is tilting further and further by the year. The future does not look that bright.
I believe that an awareness of (and a positive attitude towards) productivity has to lie at the core of any activity, whether it be gardening, surgery, or research. As Paul Krugman (Nobel Prize in Economics 2008) has said: “Productivity isn’t everything, but in the long run it’s almost everything.” The secret of success is to put your resources where they will yield maximal effect, and to make sure the conditions are optimized for productivity. It does not matter so much how much money you put into research; what matters is the “bang for the buck,” i.e., what you can achieve with a given resource base of people, money, skills, and ideas. Unfortunately this concept is not commonly accepted, and that is evident from the way we measure research output (Dietz 2012).
From official reviews of biomedical research, and from conversations with colleagues in my country, Australia, as well as in the UK, the USA, and Japan, it has become apparent that most developed countries have become over-regulated: the regulatory environment has drifted beyond what is reasonable and is increasingly sapping productivity. Research governance in its widest sense was introduced to protect patients and research subjects from unscrupulous researchers, and nobody would doubt that this is a reasonable goal. It may be pursued through research governance institutions (Walsh, McNeil et al. 2005), mandatory education of clinical researchers (Babl and Sherwood 2008), compulsory self-audit (Crammond, Parker et al. 2011), and external audit of research conduct (Poustie, Taylor et al. 2006). There is little doubt that such investments in research governance have the potential to reduce the risk of harm to research participants due to research misconduct.
However, as with any other risk minimization technique, the relationship between upsides and downsides, between cost and benefit, tends to change over time. The low-hanging apples get picked first, and with time it gets harder and harder to obtain additional benefits without incurring exorbitant cost, whether in road traffic or biomedical research. People are injured or killed on the roads, and patients get hurt in research, although the latter is surely less common by several orders of magnitude. It is axiomatic that any regulatory effort will be subject to the law of diminishing returns. The more regulated a system is, and the more effort has already been expended to stop things from going wrong, the harder (i.e., the more expensive) it will be to obtain further improvement.
We now seem to have reached a stage where further reductions in risk are possible only at exorbitant cost, with a corresponding negative effect on productivity. This is a common property of complex man-made systems (Homer-Dixon 2008) and does not bode well for the health of the system itself. The phenomenon seems to affect our entire civilization (Costa 2010), involving every aspect of our working lives: from individual clinical practice to public hospitals, the main research funding organizations, federal and state governments, and of course the universities, the core providers of biomedical research.
It is time to admit that biomedical research is in trouble, mainly due to rampant over-regulation. Many developed countries have created a managerial class that uses opaque language derived from the social sciences, often of little relevance to real-world situations, and often used by people who in fact may not want to be understood. More and more individuals in our society are non-productive and thrive on regulation because it is frequently the only justification for their continuing existence. Hence it is not surprising (and this may be the worst aspect of our current predicament) that all attempts at reform seem to lead to greater administrative overheads due to greater centralization, bureaucratic control, and regulation (Dietz 2009). Parasitic managerial structures, excessive micro-management by increasingly intrusive political structures, and redundant, counterproductive processes make any system less able to do the job it is supposed to do. The commonly used argument that invokes a legal risk, however small, is a fallacy, because any risk management activity must itself be subject to risk-benefit analysis. The magnitude of any risk needs to be considered, yet those tasked with decision-making are frequently not those with the capacity to assess both risks and benefits objectively.
And all this comes at the worst possible time. It may be argued that we are currently experiencing a general slowing of technological progress, most evident in basic fields of research such as physics and chemistry, but increasingly obvious in applied fields such as biomedical research and aerospace engineering. Forty-five years after the first moon landing we do not have the capability of repeating this feat, let alone of travelling to the planets. We may have plucked most, if not all, of the low-hanging apples of the industrial revolution, as Tyler Cowen argues in “The Great Stagnation” (Cowen 2011).
We need to focus on productivity and make it our top priority. Anything that negatively impacts this priority ought to be avoided. Often the “cost” of such avoidance is zero or negative, conveying a clear net benefit, as in the case of duplicated governance processes (Vaughan, Pollock et al. 2012), a common problem in clinical research. Otherwise we may just as well pack up shop and get out of research and development altogether.
- Babl, F. and L. Sherwood (2008). “Research governance: current knowledge among clinical researchers.” MJA 188(11): 649-652.
- Costa, R. (2010). The Watchman’s Rattle. London, Virgin Books.
- Cowen, T. (2011). The Great Stagnation. New York, Penguin Group (USA).
- Crammond, B., et al. (2011). “Self-audit as part of a research governance framework for health research.” MJA 194(6): 310-312.
- Dietz, H. (2009). “Our public health system: an accident waiting to happen?” MJA 191(6): 345-346.
- Dietz, H. (2012). “Indicators for research performance evaluation: an overview (letter).” BJUI 109: E40-41.
- Dietz, H. and B. Stokes (2012). “A plea for professional independence.” MJA 196(2): 104-105.
- Homer-Dixon, T. (2008). The Upside of Down. Melbourne, Text Publishing Company.
- Poustie, S., et al. (2006). “Implementing a research governance framework for clinical and public health research.” MJA 185(11/12): 623-626.
- Vaughan, G., et al. (2012). “Ethical issues: The multi-centre low-risk ethics/governance review process and AMOSS.” Aust NZ J Obstet Gynaecol 52(2): 195-203.
- Walsh, M., et al. (2005). “Improving the governance of health research.” MJA 182(9): 468-471.
HANS PETER DIETZ, MD, PhD, FRANZCOG, DDU, CU, graduated from Heidelberg University, Germany, in 1988, obtaining an MD there in 1989. He arrived in Australia in 1997 and completed his training in Obstetrics and Gynaecology in 1998. Between 1999 and 2002, he undertook urogynaecology subspecialty training in Sydney and obtained a PhD from UNSW. Since 2009 he has been Professor in Obstetrics and Gynaecology at the Sydney Medical School Nepean, University of Sydney. He is the author of 223 peer-reviewed publications and 13 book chapters, and lives in the Blue Mountains west of Sydney with his wife Susanne and two sons.