How Can Science Be Governed and Evaluated?
Giorgio Sirilli, former chairman of the OECD Working Party of National Experts on Science and Technology Indicators (NESTI), is offering lectures to students in the Master’s programme in Governance of Science, Technology and Innovation (STI).
Giorgio Sirilli has been cooperating with the HSE Institute for Statistical Studies and Economics of Knowledge (ISSEK) since the mid-1990s, when Russia joined NESTI and began developing a national methodology for the statistical measurement of science, technology and innovation (STI) in line with international standards. The group traces its origins back to the 1960s and is one of the oldest expert groups in the OECD. Its primary goal is to develop internationally harmonised approaches to collecting and analysing STI statistics. Thanks to Russia’s participation in the work of NESTI, cross-country comparisons of these data are possible.
This January, Giorgio Sirilli visited HSE to deliver a guest lecture on science, technology and industry indicators to students in the HSE ISSEK Master’s programme and was invited back to give three more lectures in April. The topics of the new series of lectures included the role of the OECD as an intergovernmental discussion platform for developing S&T policy frameworks, the measurement of innovation, and research evaluation.
Professor Sirilli has a very positive impression of his interactions with the students of the programme, praising their strong command of English and the relevant questions they asked during the lectures. He noted that programmes such as HSE’s Governance of STI are not very widespread and are therefore unique in that they provide not only basic training in economics, management or engineering but also in-depth knowledge of how to manage technology and innovation at different levels, e.g., in private companies, in public research organisations or at the governmental level.
Measuring innovation
As Professor Sirilli explains, the statistical measurement of innovation started in the 1980s with some countries doing it in an ad hoc manner. Later on, an internationally organized and harmonized system was established. A lot of work has been done over a considerable period of time. Both quantitative and qualitative indicators that are currently being used in innovation surveys are based on definitions and examples provided to ensure comparability of data across different companies. As a result, time series for these data provide interesting material for international comparisons and analysis.
The questionnaires include questions on the introduction of innovations in products, processes, marketing, and organisation. In other words, innovation is understood not solely as a technological phenomenon, but also as an organisational and marketing one. An innovating company provides a lot of information about its objectives, hampering factors, collaboration, the costs incurred and relevant activities. It is worth noting that a company is deemed innovative if it actively uses innovative technology, whether technology resulting from the company’s own research or technology developed in collaboration with other organisations. Innovation is a complex phenomenon, and the sources of new ideas and new techniques may vary.
All member and observer countries follow the same methodology for measuring innovation, adhering to the single manual developed by the OECD and the EU and using a harmonised questionnaire. Of course, there are linguistic and cultural differences. For example, when answering questions that require an interpretation of a definition, respondents from the U.S. and Japan will give different, culturally conditioned answers. There is therefore a need to be aware of certain limitations of the statistics and to be very careful when formulating questions. On the other hand, we can and should make comparisons over time and across geographical regions, and adjust our conclusions based on the results of such comparisons to avoid falling into cultural traps.
Russia has adopted the recommendations that were developed by the OECD and the EU. Therefore, Russian statistical data are fully compatible with international standards. Each year, ISSEK provides data for the international organisations, filling out their questionnaires; these figures are processed and introduced into reports. Unfortunately, the data demonstrate that the level of innovation in Russia is not very high at present.
Governance of science
Professor Sirilli believes that changes in the governance of science happen very slowly. Additionally, resources for research and higher education have been decreasing in many countries since the 2008 crisis, which poses serious problems. In his view, the largest part of research activity in Russia has historically been conducted in the institutes of the Russian Academy of Sciences, and this was, and still is, an issue that has to be addressed. Research, however, should be done not only in research institutions but also in universities. In the Humboldt model, the university is not just a place where education and the dissemination of knowledge take place; it must also increase knowledge and teach students how to increase it. Such a close linkage between research and teaching is fundamental. However, universities have their own limitations in terms of subject coverage. For instance, no university can manage the European Organization for Nuclear Research (CERN); a special organisation is needed for that. So it is good for universities and academies of science to join forces to form laboratories or research centres.
A positive development would therefore be for universities to do more research. However, this cannot change overnight. On the other hand, academies have to take in young blood, young PhDs, and students should go there more often. Links with companies are also important. Overall, the system has to become more open and flexible.
Professor Sirilli stressed that changes in the system of governance should not be abrupt and should be implemented with the participation of the scientific community (as opposed to a top-down approach), otherwise collaboration and integration between science and society cannot be achieved. There should certainly be a relationship between the government and academia, as well as a degree of control over the resources given to the academies. However, it is important to maintain a proper balance between the money and resources coming from the government and the people who are actually doing research. These people should exert their intellectual and organisational autonomy in order to be productive and socially relevant.
Evaluating the efficiency of science
Professor Sirilli emphasizes that the scientific community itself evaluates achievements through a process of expert review. ‘Freedom is essential here; over the course of history, learned society has developed its own internal rules. Research in the university environment is very competitive; people go to conferences, give lectures and receive feedback. Researchers confer with each other and exchange ideas and opinions internationally.’ It is not a good idea to change the rules of the scientific community dramatically, as it needs a certain degree of security, coherence in planning and a long-term approach. A recent example is CERN, which has been working for a long time: a lot of money has been invested year after year, but fundamental results appear only occasionally, as in the discovery of the Higgs boson.
It is important to understand that there are clear indicators of efficiency in some cases, but in other cases, there are not. This applies not only to science, but also to other areas, such as public administration, foreign affairs or justice. In science, the outcome is quantifiable and measurable only to a certain extent.
Professor Sirilli notes that using rigid formal criteria might be harmful for the development of science. Somebody who publishes a lot is not necessarily a better scientist than somebody who publishes less frequently; perhaps the latter is focusing on a very profound subject that requires long-term research. Furthermore, the scientific arena is very skewed: there are very few success stories and many unsuccessful ones. Nevertheless, it is all part of the game. Timing is also an important element; quite often people are more creative when they are not under pressure to produce publications or results. Another issue is that in different sciences people tend to be more creative at different stages of their lives. Mathematicians, for instance, are likely to be more creative in their youth, whereas in the social sciences people produce more results at an older age. At other stages there will be fewer publications and fewer visible ‘results’.
In many countries, there is now a trend towards strengthening control and imposing formal evaluation criteria, such as the number of annual publications, spin-offs, patents, etc. As Professor Sirilli notes, ‘at the end of the day these are deceptive parameters because they stimulate the wrong kind of efforts – towards meeting these criteria on paper. Proper success in science takes time. Furthermore, having university technology transfer offices often comes at such a large cost that it exceeds the income generated from the commercial exploitation of research results.’
In Professor Sirilli’s opinion, most governments are now guided by short-term economic motives. This injects short-term values and a financial mentality into policies that demand value for money and accountability. This is not to say that such requests are unjustified; some of them are perfectly legitimate, but others are not and may end up damaging the overall system of science.
By Maria Besova
Source: HSE English language website