How to tell a good scientist from a bad one? In the absence of basic trust in researchers, scientometric practices have been widely introduced in Russia.
Katerina Guba
Photo: Russian science officials are constantly looking for reliable indicators of the effectiveness of their scientists. (Photo by Markus Spiske on Unsplash)
Russian science has become a vivid example of the dominance of scientometrics, which has permeated not only the offices of high-level university managers in charge of important decisions but also academics’ day-to-day activities.
The Russian experience indicates that scientometrics serves two objectives in research management:
- to evaluate a given project, researcher, or organization in the process of resource allocation; and
- to confirm the status of a researcher, scientific organization, or journal.
How did scientometrics come to dominate Russian science?
Calculating University Effectiveness
University reporting has long relied on scientometric indicators, which universities are required to disclose annually. One of the most important of these is the number of publications and citations per 100 faculty members in three different research databases. Since 2012, this form of reporting has been used to monitor the effectiveness of educational organizations.
Project 5-100 also places special emphasis on scientometrics. Selected universities receive considerable state resources in exchange for improving their rankings and the relevant indicators. The most important benchmark has turned out to be the number of publications and citations indexed in international databases.
Earning Effectiveness Points
The new benchmarks not only affected the two dozen institutions participating in Project 5-100 but were picked up by other universities, which likewise began to encourage their faculty to intensify publication activity through the use of bonuses and so-called effective employment contracts.
Before such contracts were introduced, professors could put off thinking about publishing until close to the end of their term of employment. Temporary employment agreements were renewed almost by default, although faculty still had to go through a formal procedure and present a certain number of published works.
With the advent of effective contracts, publications now play the principal role in determining professors’ employment.
Effective contracts stipulate that a portion of a faculty member’s salary is calculated in accordance with a points-based system. Publishing is not the only way to earn points, but it is often the one that guarantees the highest number of points.
At the same time, journal articles have different weights depending on the “value” of a given journal.
- Publications indexed by the Russian Science Citation Index (RSCI) are worth the least.
- Articles in journals from the Higher Attestation Commission’s (VAK) list are valued relatively more than those in the RSCI.
- Journals indexed in Scopus and Web of Science (WoS) are the most valued.
This leaves some room for maneuver for those who are not ready to publish in international journals, but placing an article in a Scopus- or WoS-indexed journal does bring additional financial benefits.
How Research Institutes’ Effectiveness Is Evaluated
Scientometrics is used to evaluate not only universities, but also research institutes, which undergo an expert evaluation once every five years.
Research institutes submit annual data on several dozen indicators, which serve as the basis for calculating threshold performance values against which each institute's own figures are then compared.
At the next stage, the institute is evaluated by an expert, who draws on scientometric variables, among other inputs.
As a result, organizations are divided into three categories:
- leaders;
- stably operating; and
- laggards.
According to the results of a 2017 expert evaluation, institutes in the first category—leaders—have 0.68 research publications per researcher in journals indexed by WoS.
Institutes in the second category have just over half as many—0.36.
Finally, institutes in the laggards category have only 0.13.
The results of these assessments are considered during funding allocation. Institutions that lag behind can expect closure or reorganization.
In Search of Excellence
In response to publication pressure from management, scientists commonly resort to dishonest practices. Administrators, in turn, keep refining their scientometric instruments, and the race grows ever more sophisticated.
For example, at first it was enough to publish in indexed journals. Today, however, what matters for the calculation of bonuses is not only the fact of publication, but also the quartile of a given scientific journal. Publishing one article in a first-quartile journal is equivalent to publishing two articles in lower-quartile journals.
The new Holistic Publishing Performance Score for research institutes will consider co-authorship and multiple affiliations in addition to quartiles.
Confirming a Scientist's Status
There are various situations in which a scientist might need to confirm her research achievements—for instance, to become an expert at a foundation, join an examination committee, or earn a place on the editorial board of a journal.
This confirmatory function of scientometrics is especially prominent when degrees are awarded.
In the late 1990s, defending a dissertation was not a Herculean task. Aside from submitting the manuscript itself, it required passing three examinations and publishing the results of the study in an academic journal, a requirement that could even be satisfied with conference abstracts.
Since the early 2000s, however, this simple procedure has been complicated by successive amendments and clarifications, including a number of scientometric requirements.
Examination committee members are required to report how many publications and citations they have according to citation databases.
Editors of VAK journals must provide data on the number of articles published by reviewers and editorial board members, specify how many of them are academicians, corresponding members, doctors of science, or candidates of science, and report the journal's scientometric data.
Dissertation research results are accepted only if they were published in journals with confirmed status.
Regulations dating back to 1994 contained a vague note that dissertation results must be published in an academic journal, without specifying any characteristics that the journal needed to have. Furthermore, patents and abstracts were accepted as equivalent to articles.
In 2006, a new directive prescribed a mandatory number of publications and created a special list of designated journals under the VAK.
In 2013, the number of required publications rose:
- To defend a doctoral dissertation in the social sciences and humanities, no fewer than 15 publications were required (for other disciplines, no fewer than 10);
- To defend a candidate of science dissertation, three and two, respectively, were required.
The criteria for VAK journals have also been further formalized: journals must now provide quantitative data about their operations.
How Scientometrics Assesses Research Project Leaders
The Russian Science Foundation uses scientometric benchmarks to judge research competitions. Academics in charge of research projects must provide evidence of articles in indexed journals to confirm their status as published authors.
Moreover, the publication threshold has risen significantly over time. In 2014, research project leaders had to have published no fewer than three articles in indexed journals over the preceding three years. In 2017, this number rose to five articles over five years, and in 2021 it reached eight.
Social scientists and humanities researchers can no longer rely on articles published in journals indexed by the Russian Science Citation Index.
Scientometrics as a Substitute for Trust
In many cases, scientometrics substitutes for the expert opinion of scientists themselves.
We can seek our colleagues’ advice on who should be part of an examination committee or editorial board of a journal. But given the lack of basic trust in academics, managers want additional data.
The situation is especially dire in the social sciences and humanities, as evidenced by a new decree requiring academics in these disciplines to publish at least 50 articles in first-, second-, and third-quartile journals over the course of 10 years. For other academic fields, the number is no fewer than 30, but only in first- and second-quartile journals.
Such unexpected discrimination can easily be explained by government suspicion that social scientists and humanities researchers are more likely to produce low-quality research—hence the need to set such a high bar.
Scientometric indicators therefore serve as a guarantee of sorts, eliminating the risk of collusion among academics.
* * *
As long as this distrust of scientists persists, scientometrics will continue to permeate academic management practices.
At the same time, broad reliance on scientometrics provokes dishonest practices intended to circumvent the system. Plagiarism, publishing in predatory journals, and fake co-authorship are just a few examples.
These practices in turn serve to confirm officials’ belief that scientists cannot be trusted. The search for reliable metrics continues, threatening to become a vicious cycle.