Welcome to this Special Issue of EuroScientist on alternative research funding!
Like the wavy lines of the painting illustrating this issue, reinventing research funding may not follow a straight path. It may not happen overnight either.
In this special issue of EuroScientist, we explore the two facets of funding mechanisms that need to be revisited: the macro level, where R&D policy shapes the way research funding is allocated, and the micro level, where peer review shapes the way research funding is distributed.
At the macro level, the trouble is that research funding policy choices have, for years, been informed by static indicators of R&D performance. For as long as this was the only way of measuring research output, policy makers had to base their decisions on incomplete and imperfect information. But now that our modelling and simulation capabilities have evolved considerably, it is time for science policy to take advantage of the granular picture offered by improved analysis tools, based on a vast array of R&D input indicators, and to correlate them with their effects over time. This approach could ultimately help decision makers apply policies with a greater likelihood of yielding the desired research outcomes, in line with society’s most pressing needs.
At the micro level, the trouble is that peer review may no longer be the best mechanism to fund research. There is a growing consensus that alternatives are needed, moving away from the all-consuming bureaucratic exercise the process has become. At the very least, the current process needs to be adapted to encompass scientists’ achievements in all their complexity and multidimensional expression, rather than being limited to bibliometric indicators. Read our lead article by Anthony King to find out which alternative funding approaches have already been tested.
In response to readers’ demand, this special issue is now available as a single downloadable version, making it easy to access offline on a tablet or in print.
Editorial
Funding policy tools: up for revamping
By Sabine Louët, EuroScientist editor.
No one-size-fits-all approach
Lead article:
Alternative modes of research funding: exceptions or growing trend?
By Anthony King, science journalist, Ireland.
Exclusive Skype interview, a case study of the Virtual Liver Network, Germany:
Adriano Henney: experimenting with novel funding mechanisms
By Sabine Louët, EuroScientist editor.
Research funding: trust, freedom and long-term vision pay off
By Thomas Sinkjær, Danish National Research Foundation, Denmark.
Read also our previously published article:
Under the diktat of paperwork
By Anthony King, science journalist, Ireland.
Mentors, mates or metrics: what are the alternatives to peer review?
By Arran Frood, science journalist, UK.
Science policy harnesses 21st-century modelling capabilities
Predicting science policy outcomes with agent-based models
By Petra Ahrweiler, European Academy of Technology and Innovation Assessment, Germany.
Economic models: ever evolving target for adequate policy making
By Torben Andersen, University of Aarhus, Denmark.
Featured image credit: CC BY-NC-ND 2.0 by Mark Chadwick
:::
Funding policy tools: up for revamping
The research ecosystem is in constant evolution. Funding policy tools, however, have not evolved as fast as research activity itself. At the macroscopic scale, the policies shaping the way research funding is allocated could be improved with a more precise evidence base on how well policy choices achieve desired research objectives. Indeed, the science underpinning research funding policy—also known as the science of science policy—is in its infancy.
It is clear that many factors contribute to policy choices, which are often the object of compromise. On a purely practical level, however, having adequate tools to inform policy is of growing importance. The trouble is that research funding policy choices have, for years, been informed by static indicators of R&D performance. Funding policy decisions have thus been based on incomplete and imperfect information. It is therefore time for science policy to take advantage of the improved simulation and modelling tools that can be applied to analyse the vast array of R&D input and output indicators, and to take their evolution over time into account.
Using big data, complex systems and network analysis could, for example, support the achievement of specific research policy objectives, because such analytical tools make it possible to evaluate the conditions for realising them better than before. One of the most advanced models in this field, the agent-based SKIN model, has been tailored to analyse the effect of investment on the basis of historical performance. It can also be used to predict the effects of policy choices over time. So far, it has been tested on EU funding in the ICT sector.
It is likely that improvements to such policy models will emerge as the science of science policy matures. For example, to refine this bottom-up agent-based modelling, one option would be to combine it with a top-down optimisation algorithm. This, in turn, would improve the chances of getting the most from available resources, and thus get one step closer to reaching desired research objectives.
Such tools could provide the more precise evidence base that has been lacking to further refine R&D funding policies. In the context of increasingly limited resources, it is time to be more demanding about the quality of the evidence on which funding allocation is based.
Featured image credit: ceanelamerez via Flickr
Sabine Louët, Editor, The EuroScientist
:::
Alternative modes of research funding: exceptions or growing trend?
Peer review of projects dominates when it comes to decisions on how to allocate funding for science. But is it really the best way? Funders certainly think so. Over 95% of biomedical funding in the UK, for example, relied on peer-reviewed grant allocations, a 2012 report found. In the absence of tried and tested alternatives, peer review has become the default solution. But there is a clear demand for new and less onerous ways of funding research.
Now, alternatives to peer review are springing up. For example, crowdfunding websites such as petridish.org and experiment.com fund specific science projects through public donations. Some might argue that the public cannot necessarily be relied on to pick the best or most deserving science projects. In reality, crowdfunding is more a beauty pageant for popular areas of research, suitable for projects requiring smaller amounts of money. A project asking €3,600 ($5,000) to decode hyena calls in Kenya is sure to get funded. But less appealing projects with higher funding requirements, in disciplines traditionally less understood by the public such as parts of chemistry or physics, are less likely to attract cash. There is a real need for alternative means of allocating funding.
Distributed review, as an alternative to peer review
Recently, an article in EMBO Reports sketched out an alternative way of funding scientists, rather than projects, which would drastically cut down on the burden of grant writing. “If peer review is to happen, let’s involve all the peers,” says Johan Bollen, informatics scientist at Indiana University, Bloomington, USA, and an author of the report. So how would it work?
His distributed model involves every scientist receiving the same slice of funding from the original funding pie; everyone must then redistribute a set proportion of their slice, say half, to other scientists of their choice. This would allow money to circulate and funding decisions to be made collectively by many scientists.
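To make the mechanics concrete, here is a minimal, hypothetical simulation of such a scheme. All numbers are illustrative, and recipients are picked at random here, whereas in the actual proposal scientists would direct their donations to the peers they judge most deserving.

```python
import random

# Toy simulation of a distributed funding model in the spirit of
# Bollen's EMBO Reports proposal. All parameters are illustrative.

N = 100                 # number of scientists
BASE_GRANT = 100_000    # equal yearly slice from the funder (hypothetical)
SHARE = 0.5             # fraction each scientist must redistribute
YEARS = 10

funds = [0.0] * N
for _ in range(YEARS):
    # Step 1: the funder gives every scientist the same base amount.
    funds = [f + BASE_GRANT for f in funds]
    # Step 2: each scientist passes on a fixed share of their funds
    # to a peer (random here; merit-based in the real proposal).
    received = [0.0] * N
    for i in range(N):
        donation = funds[i] * SHARE
        funds[i] -= donation
        peer = random.choice([j for j in range(N) if j != i])
        received[peer] += donation
    funds = [f + r for f, r in zip(funds, received)]

print(f"min {min(funds):,.0f} / max {max(funds):,.0f} after {YEARS} years")
```

Even this toy version shows the key design property: the funder only sets the base grant and the redistribution fraction, while the allocation itself emerges from many small peer decisions.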
Bollen says the current US system, where the NSF grant proposal success rate is below 10%, wastes a huge amount of scientists’ time; this issue also concerns European scientists applying for EU funds. By contrast, his proposed way of allocating funding would take each scientist just a few minutes a year. And it would incentivise openness in research and change how scientists communicate with each other.
Some ask whether this approach would truly link funding with quality. And serious questions could arise for public agencies asked to be accountable for the outcomes of taxpayers’ funds allocated to researchers. “My concern is that people would allocate money only within their own field. Also, a researcher would have to spend a lot of effort to come to a reasonable judgement about who to give money to,” says Steven Wooding, lead author of a 2013 report by UK non-profit policy research organisation RAND Corporation, entitled ‘Alternatives to peer review in research project funding.’ He adds: “we need evidence of how it would work in practice.”
This RAND report summarises the many criticisms of grant peer review and outlines alternative approaches currently in use. For example, there is the so-called sandpit funding strategy, where a diverse group of experts comes together for a workshop and brainstorm, with the best proposal awarded funding. There is mentored funding, where all applications are mentored by the funding body and a multi-stakeholder committee, including end users. And there are online iterative methods, where panels review applications online over a number of rounds, rating, ranking and discussing them.
It is unlikely that there will be one best way to allocate research funding, according to Wooding. But “it seems unlikely that peer review will be the best solution in all circumstances and we haven’t really tested other approaches,” he notes.
Science of science policy under-developed
It is striking how little evidence there is that peer review is best. Wooding argues that we desperately need empirical evidence about what science funding mechanisms are most effective. “We should try to understand peer review better, but that is difficult without having something else to compare it to, so it’s important to experiment with more diversity,” he says.
“Some funding needs to be allowed to investigate what would be alternative methods of allocating money, but nobody wants to fund research on research,” points out research policy expert Merle Jacob at Lund University, Sweden. There is no incentive for funders to spend money scrutinising their own way of working, which might suggest that they need to improve, she adds.
Besides, Jacob is sceptical that the public could pick the best science projects, and believes the scheme outlined in EMBO Reports would quickly run aground on the reporting requirements of funding bodies. She describes the proposal as intriguing and says everyone genuinely struggles when asked to come up with an evidence-based alternative to what we have now.
The absence of research into alternatives boils down to funders’ attitudes. “The funders don’t want anyone telling them what to do,” says Jacob. She recently expressed concern about how smaller subjects are being squeezed out by competitive national and European funding. Instead, she suggested, basic research funding could be transferred from the national to the European level.
Jacob also co-authored a report for the OECD looking at how developing and emerging economies could embark on performance-based research funding. She says: “The one thing we found is that if the decision to move from block allocation to performance-based research funding is directed at individuals or groups, then it increases the cost of governing the system.” She concludes: “If your resources are limited, you are better off sticking to block allocation, funding institutions not individuals.”
Funded on one person’s whim
This runs counter to a rising trend in the US, where single donors have become ever more involved. A New York Times article on how billionaires are privatising American science quoted an AAAS policy analyst as saying that the practice of science is becoming “shaped less by national priorities or by peer-review groups and more by the particular preferences of individuals with huge amounts of money.” The rich pick the scientists who impress them: a new funding model. Is that the way ahead? They are successful business people, after all. Meeting an entrepreneur over lunch would cut down on paperwork, but it has its own pitfalls. What if they do not like you or your area?
That approach would run counter to the model of taxation and public funding of science that Europe has adopted, and it has downsides: it could skew research towards trendier sciences, for one. But for now Europe does not have many heavyweight philanthropists involved in science. Some non-profit funders, like the Wellcome Trust, are considered significant actors in Europe when it comes to funding sectors like biomedicine and health. To a lesser extent, the likes of the L’Oréal Foundation in France, the La Caixa Foundation in Spain, the Volkswagen Foundation in Germany and the Wallenberg Foundation in Sweden offer somewhat more limited research funding.
And although such funders might step outside the box and experiment with new funding paths, in the mainstream, conservatism rules. “It is ironic given the whole raison d’être of science is to understand how things work that scientists have been so unwilling to understand how their own systems work,” says Wooding.
But alternatives such as Bollen’s distributed model are worth considering. “The first response from people is this is crazy,” says Bollen, adding: “but I always joke the more you think about it the more attractive it seems compared to the present system.” Clearly, he is right about one thing: the need for more creative alternatives to peer review that could be tried and tested. Then it could be decided, on evidence, which systems are best for targeting funding to research.
Featured image credit: Jimmy via Flickr
Anthony King
Anthony is a freelance journalist based in Dublin, Ireland.
:::
Adriano Henney: experimenting with novel funding mechanisms
Adriano Henney has a medical background and many years of academic research experience in cardiovascular disease in London, Cambridge and Oxford. His interests have focused predominantly on atherosclerosis, with studies ranging from pathology, through molecular and cellular biology, to molecular genetics. Following his academic career, he went on to work in the pharmaceutical industry.
After more than ten years at AstraZeneca, where he eventually became involved in systems biology, Henney has been pursuing his interest in this topic as programme manager of a major German national flagship programme: the Virtual Liver Network (VLN). In this interview, he shares his views on alternative ways of organising research funding, based on his experience with the unique funding and management structure of the VLN.
Unique funding structure
“[The Virtual Liver Network] is quite a unique project in a number of ways, in the sense that it’s a sort of flagship; it’s a €50 million programme over 5 years,” Henney explains. It involves around 200 scientists across Germany, working on 44 projects. “It’s probably the largest systems biology programme in Europe and it is focused on a single country rather than being part of a Europe-wide consortium,” he adds.
“We have an integrated and unified vision towards the objective of creating this model of a liver. This requires scientists to work across teams, across groupings throughout Germany, in functional multi-disciplinary teams. So it’s not just individual groups working in different universities.”
But the way the project funding is structured is unique too. “The other interesting component of it is the specific ring-fencing within the funding for non-practising scientists with strong managerial experience.” He adds: “That liberates the senior scientists and the post-docs from worrying about the administrative burden of the programme, so they can concentrate on the delivery of the science, and that’s very, very different.”
And it also distinguishes itself by bringing in industry experience. “It has also focused on the acquisition, in my case, of management practice from industry, to help organise and drive forward a very complex [project]. It’s not a consortium. We are working as a distributed team, focused on how we can deliver across a wide geographic distribution.”
Alternative EU funding mechanisms
This raises the question of whether such funding structures could be applied to other pan-European research consortia. “I think very strongly that if we can do this sort of thing in one country… then if we could translate that onto a European scale to tackle some of the major challenges that we have in biomedicine and 21st-century healthcare, then potentially we could actually have quite a significant success.”
He also clarifies: “You could argue that what we are doing in Germany isn’t dramatically different from any EU grant… but actually it is different, in particular in the focus on having professional management recognising the complex interactions in a network of this type.” The differentiating factor from EU projects is the distributed team, which is truly a spider’s-web network, where teams with complementary skills come together.
Changing the policy covering the way research funding is allocated may not be that simple. “Having worked on both sides of the fence, academia and industry, I believe there’s still a reluctance to engage in applied research, and a sense that, in some ways, applied research is less pure than basic science, at least in the life sciences. It’s not so much like that in engineering, obviously… The challenges we are facing in healthcare are huge… and it does require the application of the best scientific thinking. And it requires much closer collaboration between industry and academia.
In that sense, it’s actually asking for the best scientists to apply themselves to the problem to go forward.” He also believes that under Horizon 2020 there is potential for strategic pre-competitive programmes to emerge. “But the key element of this is really to understand what we mean by impact. And it is not enough to say, at the end of the grant, this particular project would be expected to have this impact… I would argue it should be the other way around. You should say: this is the impact you want to have. Then you structure the proposal to try and ensure that you get as close as possible to that impact.” He adds: “We need to understand and ensure an equitable distribution between this kind of applied research and blue-skies research… We need to have an equitable approach to try and understand the balance between these things.”
Out of the box thinking
Finally, responding to whether the current funding structures at the national and EU level favour out-of-the-box thinking, Henney concludes: “I think a lot of the projects that go forward are innovative and they get funded, certainly at EU level, certainly within the research councils, within the UK and across Europe. Certainly, some of the real high-risk, potentially high-reward innovative ideas that may be happening in industry don’t get a chance to breed, simply because industry is focused on delivering to the bottom line.”
However, it may be true that “in some cases some really risky projects that have been put forward, based on ideas with a very limited amount of primary evidence to support the concept, won’t get funded.” Yet, it depends on many things: “how well the project is written, how well you argue your case, how well you can explain that, despite the risk, the relative merit of undertaking something is this, and this is the reason for bringing it forward, and trying to get an understanding of how you might share the risk with the funding body… The second thing, probably the more difficult one to overcome, is just how innovative and open-minded the review panel may be.” It may not be that black and white.
Featured image credit: Virtual Liver Network
:::
Research funding: trust, freedom and long-term vision pay off
A recipe for how to stimulate breakthrough research would include the following ingredients: long-term commitments, large flexible grants, trust, and the funding body’s continuing interest in the research. This is precisely the approach that the Danish National Research Foundation (DNRF) has adopted with its ‘Centres of Excellence’ concept, over the past 22 years. In short, the DNRF provides the very best researchers with sufficient and flexible base funding that Danish universities are not in a position to provide. This is, in reality, not much different from what the ETH in Switzerland and other top universities do.
The foundation’s core activity is to fund frontline research in highly creative environments. By recognising and trusting the talent of top researchers, the foundation expects them to deliver potentially ground-breaking results. And, in the process, the foundation hopes that this strategy will boost the international competitiveness and impact of Danish research. Each time there is a new call for bids to create a new Centre of Excellence, all research fields are eligible for funding.
Excellence, independence and flexibility
The foundation also ensures that all of its procedures are transparent throughout the application process. These procedures are driven only by criteria that seek out excellence. The selection of new centres is accomplished by a small board of trustees composed of internationally recognised researchers.
Compared with many public funding agencies in Europe, the DNRF stands out in the way it is managed. Its strengths rest on three pillars. First, the foundation is a truly independent body, established by an act of the Danish Parliament in 1991. As such, it has been able to focus on curiosity-driven research and to remain free from political influence.
Second, the DNRF’s activities do not depend on annual appropriation bills. The foundation has its own capital, which it invests in its Centres of Excellence. Third, the large, long-term and flexible funding, of up to 10 years per centre, ensures that researchers have a sustainable funding source. They can therefore address daring research questions with limited funding risk. They can also quickly adapt to new research questions in well-equipped and coherent research environments.
Recognised funding approach
DNRF’s achievements have been recognised by independent experts. In 2013, the Danish Minister of Science had the foundation evaluated by an international panel headed by Wilhelm Krull, general secretary of the German Volkswagen Foundation. The panel concluded: “One of the success factors is the DNRF strategy to focus on outstanding talents, to provide them with sufficient funds, a long term funding perspective and to grant a far-reaching autonomy with respect to the research agenda and the use of its funds. This enables researchers to venture into novel and often risky projects which may eventually lead to ground breaking results.”
As a past leader of a DNRF-supported Centre of Excellence, I recognise many of the strengths noted by the international evaluation panel. Our sense of having developed a very efficient and successful mode of supporting research has now been backed by substantial numerical evidence during this evaluation process.
Part of that evidence comes from the comprehensive bibliometric analysis conducted as part of the evaluation exercise.
This analysis demonstrates in different ways that the impact and quality of the research conducted in the centres is high. Furthermore, we see that the DNRF centres can compete with the very best research institutions in the world, including Stanford and MIT, when it comes to the impact of articles published in prestigious multidisciplinary journals such as Science, Nature and PNAS.
In addition, the commercialisation statistics for Danish research in 2007-2012 showed that approximately 15% of all spin-off companies and approximately 15% of all patent applications submitted from public research institutions came from a DNRF Centre of Excellence. These numbers exceed the share one would expect given that the DNRF allocates approximately 2% of all public research funds. This demonstrates a substantial potential for the application of research results, even though the foundation does not make this a criterion when selecting new centres or extending existing ones.
Thomas Sinkjær
Director of the Danish National Research Foundation, Copenhagen, Denmark.
:::
Predicting science policy outcomes with agent-based models
Today, investments in R&D—be it through higher education institutions or science-industry networks—are expected to produce high commercial returns immediately. Science policymakers, innovation managers and even the public are often disappointed, and raise legitimacy issues, when such returns fail to materialise promptly. These situations show the limits of the conventional steering, control and policy making associated with research funding.
Science policy experts often refer to the frustration of third parties, for whom the messy and complicated features of funding targets simply “do not seem to compute.” Better solutions to help improve the returns from research funding are therefore needed. Unfortunately, what is referred to as the science of science policy is still in its infancy. Modelling and simulation can help with R&D policy decisions concerning funding. Indeed, by harnessing the capabilities afforded by complex systems analysis tools, it is possible to gain unprecedented insights into the consequences of specific policy decisions. The question is how such tools can contribute to policy making in complex environments such as science, research and innovation.
Dealing with complexity
Without a doubt, socio-economic systems are confronted with a high degree of complexity. This is particularly true when it comes to the development of new knowledge, its diffusion, and its commercial application in innovation. Furthermore, the actors in such activities—which we can refer to as agents—are confronted with true uncertainty. This makes forecasts and predictions of innovation success or failure impossible. Any analytical approach that tries to offer guidance and support for policy decision makers has to acknowledge this intermingling of rich complexity and uncertainty.
Enter the agent-based SKIN model, which stands for Simulating Knowledge Dynamics in Innovation Networks. It has been designed to simulate knowledge generation and diffusion in inter-organisational research and innovation networks. Since its first prototype in 2001, it has been developed into a platform with many modules and applications. And it has since been adopted by a number of policy modelling studies that apply it to science policy.
The largest application of the SKIN model to date focuses on impact assessment and ex-ante evaluation of European funding policies in the Information and Communication Technologies (ICT) research domain. The corresponding version of the model, referred to as INFSO-SKIN, was developed in 2011 for the Directorate General Information Society and Media of the European Commission (DG INFSO). It was intended to help understand and manage the relationship between research funding and the goals of EU policy.
Testing future policy
The agents of the INFSO-SKIN application are research institutions such as universities, large diversified firms and small and medium-sized enterprises (SMEs). The model simulates the real-world process in which the funding calls of the European Commission specify the composition of consortia, the minimum number of partners, the length of the project, the deadline for submission and a range of capabilities, a sufficient number of which must appear in an eligible proposal, as well as the number of projects that will be funded.
The model implemented rules of interaction replicating the actual Framework Programme (FP) decision paths. To increase the usefulness of the model to policy makers, the names of the rules within the model closely matched FP terminology. For the Calls numbered 1 to 6 in FP7, the model used empirical information incorporating the number of participants and the number of funded projects, together with data on project duration, average funding and size; the latter measured through the number of participants.
Analysis of this information produced data on the functioning of actual FP collaborative networks and their internal relationships. Using this data in the model provided a good match with the empirical data from EU-funded ICT networks in FP7. Indeed, the model accurately reflected what actually happened. And it could be used as a test bed for potential future policy choices.
Changing parameters within the model is analogous to applying different policy options in the real world. The model could thus be used to examine the likely real-world effects of different policy options before they are implemented. Altering the elements of the model that equate with policy interventions—such as the amount of funding, the size of consortia, or incentives for specific sections of the research community—made it possible to use INFSO-SKIN as a tool for modelling and evaluating the outcomes of specific policy interventions and funding strategies on the agents, as the sketch below illustrates.
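To illustrate the idea of such ex-ante policy experiments, here is a minimal, hypothetical sketch of a parameter sweep. The `simulate_call` function is a deliberately crude stand-in, not the real INFSO-SKIN implementation, and the parameter names and the outcome measure are assumptions made for illustration only.

```python
import itertools
import random

# Hypothetical stand-in for an INFSO-SKIN-style policy experiment.
# simulate_call() is NOT the real model: it maps call parameters to a
# single made-up outcome measure, purely to show how sweeping over
# policy options works before implementing any of them for real.

def simulate_call(budget, min_partners, projects_funded, seed=0):
    """Return a toy 'collaboration network' score for one funding call."""
    rng = random.Random(seed)
    # Illustrative relation: more funded projects and larger consortia
    # yield a denser network, with noise and diminishing returns.
    return (projects_funded * min_partners ** 0.5
            * (budget / 1e8) * rng.uniform(0.9, 1.1))

# Each parameter combination corresponds to one policy option.
for budget, min_p in itertools.product([1e8, 2e8], [3, 5, 10]):
    score = simulate_call(budget, min_p, projects_funded=50)
    print(f"budget={budget:.0e} min_partners={min_p} -> score {score:.1f}")
```

In the real INFSO-SKIN studies, of course, the outcome measures come from the full simulation of proposal formation and project networks, not from a closed-form formula.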
Agent-based model
At its most general level, SKIN is an agent-based model. Its agents are knowledge-intensive organisations, which try to generate new knowledge through research—be it basic or applied—or to create new products and processes through innovation. These agents are located in a changing and complex social environment, which evaluates their performance. For example, what matters is market performance if the agents target innovation, or the judgement of the scientific community if they target publications through their research activities.
Agents have various options for action. Each agent has an individual knowledge base called its ‘kene,’ which it takes as the source and basis for its research and innovation activities. The agent’s kene is not static: the agent can learn on its own, by doing incremental or radical research, or it can learn from others, by exchanging and improving knowledge in partnerships and networks. The latter feature is important, because research and innovation happen in networks, both in science and in knowledge-intensive industries.
This is why SKIN agents have a variety of strategies and mechanisms for collaborative arrangements. They are able to choose partners, form partnerships, start knowledge collaborations, create collaborative outputs, and distribute rewards. In short, a SKIN application usually has agents interacting on the knowledge level and on the social level, with both levels interconnected. It is all about knowledge and networks, as the sketch below makes concrete.
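As a concrete illustration, here is a minimal sketch of such an agent, assuming a simplified kene represented as a set of capability tags. The real SKIN kene is considerably richer (encoding capabilities, abilities and expertise levels), so everything below is an assumption made for illustration, not the actual implementation.

```python
import random
from dataclasses import dataclass, field

# Minimal sketch of a SKIN-style agent with a simplified 'kene':
# here just a set of capability tags the organisation possesses.

@dataclass
class Agent:
    name: str
    kene: set = field(default_factory=set)   # the agent's knowledge base

    def do_research(self, domain):
        """Learn alone: pick up one capability from the wider domain."""
        self.kene.add(random.choice(sorted(domain)))

    def collaborate(self, partner):
        """Learn from a partner: each side adopts one capability the
        other has and it lacks (knowledge exchange in a network)."""
        for a, b in ((self, partner), (partner, self)):
            novel = b.kene - a.kene
            if novel:
                a.kene.add(random.choice(sorted(novel)))

# Usage: a university and an SME form a partnership; both kenes grow.
uni = Agent("university", kene={"genomics", "modelling"})
sme = Agent("SME", kene={"software", "sensors"})
uni.collaborate(sme)
print(uni.kene, sme.kene)
```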
Weak prediction
This general architecture is quite flexible, which is why the SKIN model has been called a platform. It has been tested in a variety of applications, ranging from small scale, such as simulating the Vienna biotech cluster, via intermediate size, such as simulations of the Norwegian defence industry, to large scale, such as INFSO-SKIN described above.
The SKIN model applications use empirical data and claim to be realistic simulations insofar as the aim is to derive conclusions by so-called inductive theorising. This means that the quality of the SKIN simulations derives from an interaction between the theory underlying the simulation and the empirical data used for calibration and validation.
These new approaches enable the modelling of science policy initiatives to take into account more parameters than previously possible. They also make it possible to perform simulations forecasting the potential impacts of proposed science policy measures. Yet it is still early days for this field of the science of science policy.
Looking to the future of SKIN model development, the establishment of this conceptual framework, combining empirical research, computational network analysis and agent-based modelling, will yield a more integrated and comprehensive understanding of science policymaking than has been achieved to date. In contrast to conventional methods of social research, this approach will be capable of dealing with the fact that research and innovation do not follow a linear path and are highly complex.
Petra Ahrweiler, director of the European Academy of Technology and Innovation Assessment in Bad Neuenahr-Ahrweiler, Germany
Featured image credit: CC BY-SA 3.0 by Nick Youngson from ImageCreator
:::
Economic models: ever evolving target for adequate policy making
The inability to predict the financial crisis has sparked a debate about an important part of the economists’ toolkit: economic models. How reliable and useful are they? To what extent can policy makers rely on model analyses in forming policies? And to what extent can they be used, for example, in science policy to ensure the most effective allocation of limited funding resources?
An economic model is a mathematical representation of the economy. There are many models, differing in their specific assumptions and in the specific purpose they serve. Different types of models are called for when, for example, making short-term forecasts or analysing the long-term consequences of ageing.
Economics deals with complex interdependencies and interactions between numerous decision makers, and a model is a way of keeping track of these. Making these elements explicit has the advantage of ensuring consistency in the assumptions made, and it enforces discipline. This is important in its own right. But it also makes it possible for outside observers to assess the ingredients built into a given model.
A primary purpose of a model analysis is to gain insight into, and quantification of, the likes of macroeconomic developments, but also the effects of more specific interventions such as tax reforms or R&D funding. Insights are obtained by assessing the role that various assumptions play in the outcome. Model builders spend much time on such exercises to understand their toolkit. Quantification is essential to assess the impacts and consequences of policy changes.
Empirical validation of models is essential. Are the specific structures and assumptions made consistent with available empirical evidence? This is an ongoing process within the profession. Evidence is accumulated, theories are tested, and models are reformulated.
Leaving aside the thorny question of statistical issues in model validation, one crucial caveat should be noted: empirical validation is inevitably backward-looking, since it depends on historical data. This is important information, but it misses new events like a financial crisis. This is why theoretical modelling is important, to explore possible events that have not been observed historically. This is an ongoing process within the profession, with progress but also shortcomings.
A case in point is the financial crisis. Mainstream models of the business cycle neglected financial factors, not because they were thought unimportant, but because they were seen as an add-on, not in themselves a source of problems and business cycles. The financial crisis has induced intensive research activity trying to reinstate the role of financial factors.
The outcomes of model analyses are inherently uncertain, and model builders and users are well aware of this limitation. When the media report a model forecast of, say, 2% output growth next year, the underlying analysis may actually say that, with 95% certainty, the growth rate will be between, say, 1.5% and 2.5%. The best point estimate is 2%, but it is uncertain. This kind of uncertainty is difficult to communicate, and the media abstain from doing so, demanding clear-cut and simple messages. In this sense, model outcomes are often misused or over-interpreted.
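As a minimal sketch of the arithmetic behind such an interval, assuming the forecast error is roughly normally distributed with a standard error of about 0.26 percentage points (a figure chosen only to reproduce the numbers in the text, not taken from any real model):

```python
# Reconstructing the interval from the text's example: a 2% point
# forecast with a 95% interval of roughly 1.5% to 2.5%.

Z_95 = 1.96            # normal quantile for a two-sided 95% interval
point_estimate = 2.0   # forecast output growth, in percent
std_error = 0.26       # assumed standard error of the forecast

low = point_estimate - Z_95 * std_error
high = point_estimate + Z_95 * std_error
print(f"best estimate {point_estimate}%, 95% interval [{low:.1f}%, {high:.1f}%]")
```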
Policy makers often find that model analyses of policy proposals are a straitjacket. But that is precisely their purpose: policies should be based on careful assessment and evaluation, not just beliefs. This is not to imply that models are perfect – they are not. They must constantly be updated and reformulated to capture the ongoing changes in society. In that sense, a good model is a moving target.
Torben Andersen, professor of economics at the Department of Economics and Business, Aarhus University, Denmark.
This post is sponsored by ESOF 2014. The role of economic modelling will be discussed in a session entitled ‘Fiscal austerity and growth: what does science say?’ at the ESOF 2014 conference, due to be held between 21 and 26 June in Copenhagen, Denmark.
Featured image credit: CC BY-SA 2.0 by Ten Keegardin