Innovations in Russia often remain on paper
Large state research organisations in Russia maintain an active presence in scientific journals, but visibly lag behind foreign companies in the actual results of their intellectual activity, such as new technologies registered as patents, utility models, and industrial designs. This was the topic of a report entitled “Assessing Results Produced by State Research Organisations in Russia”, presented at the “STI Policy” session of the HSE International Academic Conference “Foresight and STI Policy” by Konstantin Fursov, head of HSE ISSEK’s R&D Performance Analysis section.
For more than 15 years, investment in the Russian R&D sector has grown significantly, yet its productivity remains very low. Russia’s contribution to global science, measured as the number of international academic papers, reviews, and reports presented at leading international conferences, has increased in absolute terms but hardly exceeds 2% of the global publication flow. The number of patent applications for inventions has been stagnating since 2010. This contradiction points to the need for new science, technology and innovation policy tools based on an understanding of how the Russian R&D sector actually works.
Internationally, the productivity of research organisations and universities is assessed by both government agencies and academic communities. Organisations judged to be inefficient may be cut off from programme funding, stripped of their licences (in the case of universities), restructured, or closed altogether.
Since the early 2000s, a debate has been going on in Russia about the need for such assessments. A number of attempts have been made to measure the productivity, financial and organisational sustainability, and viability of research organisations. However, these projects were either limited to the local level or treated by government officials as a mere formality, conducted using only quantitative indicators and without involving a wide range of external experts. The results were never made public and had practically no consequences for research organisations.
Assessment will be more objective
At the HSE International Academic Conference “Foresight and STI Policy”, Konstantin Fursov presented the first findings of a project to put in place a large-scale system for monitoring and assessing the productivity of state research organisations.
“The new assessment system presented at the end of 2013 is an open, inter-departmental one, which makes assessments more objective by analysing a rather wide range of research organisations’ results”, Fursov explained. “It is based on a single information source and implies annual collection of data on the results obtained by research organisations and universities, with assessment performed every five years”. This means research organisations have an opportunity to figure out how the system works and prepare for assessment: knowing the main criteria used to measure their productivity, organisations can adjust their strategies accordingly.
Konstantin Fursov also noted that following the assessment, each research organisation should be placed in one of three groups: leaders, sustainable organisations, and outsiders. The system of indicators for monitoring and assessment includes 24 sets of measures grouped into blocks: research and financial productivity, development of human potential (training highly skilled staff), integration into the international environment (including participation in international projects), and others.
According to the approach currently being discussed, organisations should be assessed within reference groups defined by one or more relevant research areas and their intersections (40 such areas were identified by expert panels), and by their activity profiles (a simplified sketch of this grouping follows the list):
• “Knowledge generation”, expressed as the number of publications in international and Russian scientific journals;
• “Technology development”, measured as the number of intellectual property results created;
• “Provision of S&T services”, assessed by the overall volume of S&T works and services provided on a contractual basis.
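The grouping and tiering logic described above can be pictured with a short sketch. The code below is purely illustrative and rests on assumptions not given in the article: the organisation names, the use of a single indicator per profile, and the 25% cut-offs for “leaders” and “outsiders” are hypothetical, not the actual criteria of the monitoring system.

```python
from dataclasses import dataclass

@dataclass
class Organisation:
    name: str
    research_area: str      # one of the ~40 expert-defined research areas
    profile: str            # e.g. "knowledge_generation", "technology_development", "st_services"
    indicator_value: float  # hypothetical single score (publications, IP results, or contract volume)

def assess(orgs, leader_share=0.25, outsider_share=0.25):
    """Split each (research area, profile) reference group into three tiers.
    The 25% cut-offs are illustrative assumptions, not the real criteria."""
    # Group organisations into reference groups by area and activity profile
    groups = {}
    for org in orgs:
        groups.setdefault((org.research_area, org.profile), []).append(org)

    # Rank within each reference group and assign a tier
    tiers = {}
    for members in groups.values():
        ranked = sorted(members, key=lambda o: o.indicator_value, reverse=True)
        n = len(ranked)
        for i, org in enumerate(ranked):
            if i < n * leader_share:
                tiers[org.name] = "leader"
            elif i >= n * (1 - outsider_share):
                tiers[org.name] = "outsider"
            else:
                tiers[org.name] = "sustainable"
    return tiers

if __name__ == "__main__":
    sample = [
        Organisation("Institute A", "physical chemistry", "knowledge_generation", 120),
        Organisation("Institute B", "physical chemistry", "knowledge_generation", 45),
        Organisation("Institute C", "physical chemistry", "knowledge_generation", 12),
        Organisation("Institute D", "physical chemistry", "knowledge_generation", 3),
    ]
    print(assess(sample))
```

In the system described in the article, each reference group is evaluated against 24 sets of indicators rather than a single value, so the sketch only conveys the general idea of comparing organisations within notionally comparable groups.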
The bigger, the more productive
The first assessment results indicate that very inefficient research organisations are rather few in Russia (between 2% and 8%, depending on the research area). There is a clear correlation between basic research expenditure, organisation size, and output in terms of international publications.
The larger and better resourced an organisation is, the more opportunities it has to present its results. At the same time, parameters such as organisation type, legal status, and the share of public funding do not clearly differentiate results and cannot serve as criteria for dividing organisations into reference (notionally comparable) groups.

The study also assessed the connections between organisations’ research areas and their success in obtaining particular kinds of results. For example, the natural science areas with the largest shares of leaders in both the “Knowledge generation” and “Provision of S&T services” profiles were “Physical chemistry, biophysics, polymers” and “Geology, mineralogy, and mining engineering”, with 17% and 17.1%, respectively. In terms of formal quantitative indicators, research organisations specialising in these areas combine contractual work with basic research more successfully. Among “knowledge generators” alone, the largest shares of efficient organisations were found in astronomy (22%), mineralogy and mining engineering (29%), and physiology, general and organismic biology (28%).
Still, Konstantin Fursov stressed that these are just preliminary estimates based on 2014 data, so no systemic conclusions can be drawn yet. The observed distributions may reflect current (short-term) results achieved by organisations active in the relevant research areas. Reliably estimating the specific features of particular research areas will require continuing to accumulate dynamic data and analysing distributions in subsequent periods.
Though the new approach to measuring the productivity of research organisations is certainly more adequate than those employed previously (at the very least, it does not involve comparing “physicists with lyricists”), it still has certain problems with analysing research productivity.
“The current version of the system is very sensitive to data quality and only allows overall results to be measured”, concluded Konstantin Fursov. Some organisations, especially universities, may have more than one activity profile, so it is not always clear how to assess them. The analysis of quantitative characteristics must be supplemented by evaluation performed by independent experts.
Full text of the presentation (PDF)
By Ekaterina Shokhina, for OPEC.ru