
Policy matters: transparency is rarely a bad thing

Dealing with the lack of transparency in policy advice and impact scoring for funding

In an era where everyone in research circles is calling for greater transparency, policy decisions can appear as opaque as ever. Indeed, transparency is relevant to everything from how scientific advice is used, or whether it is used at all, to how research impact is defined. Buzzwords like ‘horizon-scanning’, ‘benchmarking process’ and ‘evaluation metrics’ are repeatedly bandied about by governing bodies and funding agencies alike. But this pseudo-management speak doesn’t make it any clearer how policy makers really go about their business. Transparency is not enshrined in decision-making; if anything, opacity is tacitly encouraged. Is hoping for better mere wishful thinking? Or can policy decisions become more transparent in the near future? In this article, EuroScientist looks into how greater transparency is needed in two fields: evidence-based policy-making and research impact policy.

Scottish failure to account for evidence

Available scientific evidence does not always inform policy making. Take, for example, the recent decision by the Scottish Government to ban the cultivation of GM crops in the country, which was not based on scientific advice. Instead, “potential wider economic ramifications” for the food and drink industry prevailed, according to First Minister Nicola Sturgeon.

The move prompted an open letter to the Scottish minister for rural affairs, Richard Lochhead, signed by 28 research organisations, including the Scotland-based Roslin Institute (creator of Dolly the Sheep), the European Academies Science Advisory Council (EASAC) and Academia Europaea. The signatories complained that the decision was political and not based on any informed scientific assessment of risk. “It is an approach to evidence that surprises and disappoints many scientists and non-scientists alike,” it read.

The Herald Scotland also reported that ministers were unable to consult the Scottish Government’s Chief Scientific Adviser (CSA) because the post is currently vacant; a previous CSA for Scotland, Anne Glover, left the role to become Europe’s CSA. The on-off-on saga following that appointment has been previously covered in EuroScientist. (We also previously pondered over whether European countries need a chief scientific adviser at all.)

In the absence of a go-to person, some believe that the Scottish Government could have called on a group of scientists, via the Royal Society of Edinburgh for example. This would have allowed them to evaluate the actual and perceived risks of GM crops for Scottish consumers. “The Scottish Government has admitted that its decision was not based on science,” says Christine Diehl, executive director of EASAC. “Whilst we agree that scientific arguments are not the only ones that count, and that a science perspective can sometimes be outweighed by other considerations, we think the Scottish Government should have sought scientific advice in this case.”


Resolving EU opacity in policy

The GM crop ban in Scotland highlights not only the need to have scientific advice at hand, but also the need for that advice to be visibly part of transparent decision-making. This very issue is being addressed by the European Commission which, in May 2015, announced the formation of the new Scientific Advice Mechanism (SAM), a remodelled substitute for the former EU CSA position.

Combining the strength of the EC’s Joint Research Centre, as a key source of in-house scientific advice, with a “structured relationship” with scientific advisory bodies such as national academies in Member States, the new SAM will be independent and backed by a high-level group of seven scientists. They will provide advice on immediate issues, such as Ebola outbreaks, as well as on matters ranging from energy to food security.

The SAM is designed to ensure transparency about the topics it will address, the evidence provided by the academies and other bodies, and the final opinions of the high-level group, according to EC Spokesperson for Research, Science and Innovation Mirna Talko. “The details of the working modalities are however still being worked out and will be presented once SAM is set up,” notes Talko.

This new mechanism is one way to bring some transparency to the commission when it comes to decisions that need scientific knowledge, according to Lidia Borrell-Damian, director of research and innovation at the European University Association, Brussels, Belgium. “This approach is becoming more common,” she says, adding: “Many politicians are realising that decisions cannot just be taken based on ideology or certain political party rationale. They must take into account the existing knowledge in society.”

She thinks the SAM will be important, because it takes a group of very knowledgeable people to make judgments and advise to the best of their knowledge. “Policymakers and governments should seek the best scientific advice possible but be transparent about their discussions.” The SAM is expected to begin operating in autumn 2015.

Assessing the impacts

But this lack of transparency does not simply affect day-to-day policies at EU or national level. Another major area where greater transparency is often called for is in the policies governing how the impact of grant proposals is defined and measured.

Europe’s best pupil in this field is probably the European Research Council (ERC). Indeed, it publishes the minutes of the plenary meetings of its Scientific Council, the governing body that defines the funding strategy and methodologies, according to Marcin Monko, press adviser at the ERC. “When it comes to the evaluation of grant proposals, the names of the review panel chairpersons are made public before the evaluation starts,” Monko says. “The ERC also discloses the full composition of all 25 peer-review panels after the grant award decision is adopted.”

These are surely welcome measures to curious scientists. After all, the nitty-gritty world of grant applications is where careers are forged or foundered. But here the all-pervasive research impact agenda has some scientists wondering whether what is expected of them is really clear.

For a start, the definition of impact varies from country to country, between funding councils, and even between individual calls. The apparent shift in focus of elements of Horizon 2020 towards commercialisation is also causing concern. “Impact has emerged as an extremely influential policy driver, but what one should really understand as impact still seems to be very much up for grabs,” says David Budtz-Pedersen, an expert on research and innovation policies and associate professor at the University of Copenhagen, Denmark.

And when it comes to scoring impact, Budtz-Pedersen says that in the UK’s Research Assessment Exercise, research in the chemical and medical sciences that is closest to the marketplace has often scored higher in the final ranking. “So there still seems to be some kind of industrial company-level bias in the way UK researchers are evaluated in the impact assessments.”

Fairness and transparency

This could become a major issue for the research-centric institutes and departments that are strong in the social sciences and humanities. “Then I think the question of transparency becomes very urgent because for the people working in the social science or humanities, or even the more theoretical natural sciences, it might be quite intransparent what actually to list or describe as impact,” adds Budtz-Pedersen.

This is make-or-break stuff for academics. As never before, they are duty-bound to explain and quantify exactly why their projects should receive a share of the limited resources stemming from taxpayers’ funding. It’s a fair point: why should scientists be endowed with the public’s cash if their work has no value to other scientists, or cannot help policy-makers build a better society?

This opens the door to the whole new area of Responsible Research and Innovation (RRI), recently introduced into EU funding policies. RRI requires that scientists become more aware of their responsibility in science governance and expects them to engage with society concerning the choices and consequences of their research findings.

The human impact of assessment

Everyone wants a fair and transparent assessment system, including the policymakers who set the guidelines. Yet there is the unavoidable truth that, whatever the process and however transparent the guidelines, decisions and judgments are made by human beings.

The trouble is that value judgments are very difficult to avoid and that some are probably implicit or subconscious. That’s the central claim of a paper entitled Science, Policy, and the Transparency of Values, written by Kevin Elliott, a philosopher based at Michigan State University, USA, who specialises in science and ethics.

“My solution is for scientists to be as transparent as possible about the sorts of value judgments they’re making, so at least we can be aware of it and decide if we approve of the sorts of value judgments they might be making,” says Elliott. “Ideally, scientists could be 100% transparent in acknowledging whenever they make a value judgment, so the public could decide how they feel about the value judgment. But of course scientists often aren’t aware that they’re making them.”

As for policy makers, Elliott thinks they should be finding ways to require that scientists doing research on sensitive topics make their data more widely available. He cites two areas as especially important: pharmaceutical safety testing and industrial chemical safety testing.

And these are the precise areas where the EC has been dragged into controversy, namely in the regulation of bisphenol A and other endocrine disrupters. In this Special Issue of EuroScientist, Thomas Hartung, founder of the Evidence-based Toxicology Collaboration at the Johns Hopkins University Bloomberg School of Public Health, Baltimore, USA, accurately describes the issues associated with modernising toxicology tests on the basis of available evidence.

Inherent human bias is one thing when weighing different definitions of impact, for example. But there is little excuse for a lack of transparency in areas such as making corporate safety data available and logging how elected officials use it, or lose it. “It’s the responsibility of the policy developers to act to the best of their knowledge,” says Borrell-Damian. “But not to hide or ignore any available evidence.”

Featured image credit: Rawpixel via Shutterstock



Arran Frood
