ISSN : 1598-7248 (Print)
ISSN : 2234-6473 (Online)
Industrial Engineering & Management Systems Vol.19 No.1 pp.273-288
DOI : https://doi.org/10.7232/iems.2020.19.1.273

Economic and Mathematical Methods for Ranking Eastern European Universities

Gennady Osipov, Svetlana Karepova, Vadim Ponkratov*, Alan Karaev, Andrey Masterov, Marina Vasiljeva
Institute of Socio-Political Research of Theoretical and Applied Sociology of the Russian Academy of Sciences, Moscow, Russian Federation
Financial University under the Government of the Russian Federation, Moscow, Russian Federation
Atlantic Science and Technology Academic Press, Boston, USA; Autonomous Non-Profit Organization “Publishing House Scientific Review” (Nauchnoe Obozrenie), Moscow, Russian Federation
*Corresponding Author, E-mail: ponkratovvadim@yandex.ru
Received: November 12, 2019; Revised: November 25, 2019; Accepted: February 3, 2020

ABSTRACT


Assigning accurate qualitative ratings to Eastern European higher education institutions is critical in the face of high student mobility and competition with similar institutions in the USA, Central Europe, and Canada. This article aims to develop a methodological economic and mathematical approach to rating higher education institutions in Eastern European countries. An indicator system of the integral assessment of the education quality in Eastern Europe, considering the specific characteristics of national higher education systems, was formed during the research with the help of experts and taxonomic methods. A group of rating indicators of universities and their significance in determining the university’s ranking was substantiated. Using regression analysis, a multifactor model was developed to determine the rating of higher education institutions in the countries studied. The analysis demonstrates that the universities of Russia, the Czech Republic, and Poland are dominant in Eastern Europe. The study’s practical implementation can serve as a reference point and incentive for encouraging higher education institutions to provide a higher quality of education.





    1. INTRODUCTION

    In the context of progressive knowledge globalization and higher education system competitiveness, an independent assessment of the quality of Eastern European higher education institutions becomes an indicator of an institution’s academic potential at national and global levels (Collins and Park, 2016; van der Wende, 2017). Ratings and their criteria systems are increasingly acquiring the characteristics of global and regional trends to improve the competitiveness of universities and higher education systems (Hussein et al., 2017). Ratings have become the quality measure of a higher education institution’s scientific potential, since they record the preparation quality of human resources for their professional activities and the social and intellectual contribution of universities to the development of society’s scientific and technological progress (Nelson, 2018; Rauhvargers, 2013; Jusuf et al., 2020).

    Updating higher education institution ratings is conditioned by the information needs of a global economy that requires highly qualified specialists. They are expected to be capable of continuous intellectual development of high professional skills, adaptation to uncertainty, independence and creativity in decision-making, and involvement in lifelong education, which can be ensured, first, by the higher education system through a qualitative learning process (Ivančević and Luković, 2018). If a university is ranked among the top 500 universities globally, its attractiveness is significantly increased not only for prospective students but also for employers and investors. Moreover, this can also ensure additional state subsidies (Irkhin, 2013). The position of a higher education institution in global rankings contributes toward a country’s image and toward improving its position in the system of other important indicators (for example, the human development index and country competitiveness). Thus, for example, by attracting foreign students, the USA and Australia annually earn around $28 billion from educational services, which is more than 18% of the worldwide income in this field (Dodd, 2017). A reasonable focus on the indicators of the world’s best universities contributes to the improvement and modernization of the higher education systems of Eastern European countries, the quality of educational processes, the fundamental nature of research, attracting foreign students, etc. (Corrente et al., 2018; Millot, 2015). In addition, the membership of most Eastern European countries (Hungary, Slovakia, Russia, Poland, Bulgaria, Moldavia, and Ukraine) in the World Trade Organization (2019) heightens the urgency of this problem for the higher education community.

    Eastern European countries are characterized by one of the highest mortality rates in the world; according to current information from the World Health Organization (n.d.), the average mortality rate in Eastern Europe is 12.3 per 1,000 people. This demographic factor reduces the number of students in Eastern European national higher education institutions. The current situation has also been accompanied by growth in the population’s prosperity over the past 20 years (GDP in Eastern Europe has increased 4.5-fold), which has led to an increase in student mobility (The World Bank, 2018). This has provided the opportunity for education abroad, both in Western European countries and beyond. Today, Eastern European universities compete not only within the national framework but also with foreign institutions, such as those of the USA, where the cost of education (without living expenses) can reach $34,220 per year, and Great Britain (around $30,800 per year), which is expensive for Eastern European students. Additionally, Eastern European universities are also competing with their counterparts in Asia, where education costs less (from $3,900 annually) (Martin, 2017). Therefore, the preparation and participation of Eastern European universities in national and global rankings and the promotion of scientific papers in world information and analytical databases are becoming increasingly important at this stage of the development of the higher education system and the information economy. Because the largest share of student migration in Eastern Europe stays within this region (United Nations Educational, Scientific and Cultural Organization, 2018), it is necessary to improve the approach to the rating of Eastern European universities.

    This study analyzed the representativeness of the current indicator system of methods to rate higher education institutions. It substantiated the structure of indicators to assess the quality of Eastern European education and determined representative indicators for ranking Eastern European universities. The authors ranked the leading universities in Eastern European countries based on a representative selection of education quality indicators. The paper is organized as follows: Section 2 reviews the literature; Section 3 describes the research methodology and data collection; Section 4 presents the data analysis and results; and Section 5 summarizes the conclusions of this study.

    2. LITERATURE REVIEW

    Currently, there are four main global ratings of universities, which most applicants and students tend to consider.

    QS (Quacquarelli Symonds) is a global university ranking assessing the efficiency of a higher education institution according to its research, training, employment, and internationalization of students (QS Top Universities, 2018). The Academic Ranking of World Universities (ARWU) (2018) assesses the efficiency of a university in the field of research and lists the world’s most famous researchers. The Times Higher Education (THE) World University Rankings assess the world’s academic institutions in the fields of training quality, research, knowledge transfer, and international perspectives (Times Higher Education, 2018). The US News & World Report (2018) has maintained its high prestige in educational rankings for over 30 years; its Best Global Universities rating provides a comprehensive assessment of research universities around the world (US News & World Report, 2018).

    The undoubted advantage of the presented approaches to ranking the world’s universities is that they provide aggregated information on education quality. This allows such information to be used when choosing a university, both for education and for the development of joint projects by business structures. However, the construction of world university rankings is not without shortcomings, which serve as a constant source of criticism. This applies primarily to three aspects of the methodology: the choice of indicators reflecting the quality of universities, the quality of data collection on them, and the choice of the indicator aggregation method. The main types of indicators used in the construction of world ratings are bibliometric indicators that measure publications and citations, and indicators that measure the reputation of universities in the academic community and among employers. The use of these types of indicators raises complaints about how the data collected on them are interpreted, which, according to a number of researchers, are assigned an informative purpose they cannot support. In view of this, the main and largely insoluble problem regarding the quality and adequacy of university ratings is determining a set of indicators that reflects the most important characteristics of a university and does not contain insignificant indicators (Barron, 2017; Perez-Esparrells and Orduna-Malea, 2018). This, in turn, leads to a distorted assessment of the quality of education at universities, which affects their well-being and reputation. Thus, the issue with rating Eastern European universities is that the results of ranking universities in world ratings differ significantly depending on the methodical approach. This is due to the different sets of indicators and methods of calculation (Stack, 2016; Thompson-Whiteside, 2016). Depending on the rating, the positions of universities in the top 1,000 table and their status as the most prestigious university in their respective country vary. The positions of the universities of these countries in world rankings for 2018 are shown in Table 1 (Academic Ranking of World Universities, 2018; QS Top Universities, 2018; Times Higher Education, 2018; US News & World Report, 2018).

    Thus, the highest-ranked university in Belarus, Belarusian State University, is not among the top 300 ranked universities according to the QS World University Rankings, and it is not even among the top 1,000 institutions according to other systems. The situation is similar in Bulgaria, Russia, Slovakia, and the Czech Republic (Academic Ranking of World Universities, 2018; QS Top Universities, 2018; Times Higher Education, 2018; US News & World Report, 2018).

    The values of the variation coefficient indicate a strong deviation in the rating data of 21.2%-56.3% for countries such as Belarus, Hungary, Poland, Russia, Ukraine, and the Czech Republic. An average variation coefficient of 14.1%-14.7% is observed in Bulgaria and Romania (Table 1). The variability level is rather high, which confirms the heterogeneity of the information that rating agencies provide regarding higher education institutions in Eastern Europe. Due to the varied rating results, the assessment of education quality in Eastern European countries is problematic (Barron, 2017; Corrente et al., 2018; Irkhin, 2013; Kara-Murza, 2013; Katsarova, 2015; Marope et al., 2013; Tremblay et al., 2012). This is connected with the system of performance indicators used by each rating in question.

    Education quality is characterized by the education level index and the national education system performance index, which includes the previous index (Pearson, 2016). However, neither of these indicators reflects the higher education quality in countries that determine university rankings (Hanushek and Kimko, 2000; Roser et al., 2019). The education level index is based on the adult literacy rate and the total proportion of students receiving primary, secondary, and higher education (Times Higher Education, 2018). The impossibility of using this as a leading indicator in characterizing higher education quality is due to the fact that population literacy and the proportion of students are the results of the functioning of the national education systems in Eastern Europe and are also largely dependent on the quality of education received by students in universities of other countries, which is becoming relevant under the conditions of high student mobility. For developed countries whose universities are ranked in the top 100-500 globally, the education level index is the education quality indicator, as students have no incentive to study abroad in less prestigious universities. However, as ranking positions of universities decline, the indicator representativeness in the rating decreases.

    For Eastern European countries, whose universities are at the bottom of global rankings (with the exception of Lomonosov Moscow State University in Russia), the education level index is less informative, since a significant part of acquired competencies and, accordingly, the population literacy level are also formed in more prestigious foreign universities. Thus, 0.9% of the total number of students in Russia, 0.5% in Poland and Bulgaria, 0.2% in Hungary, 0.5% in Belarus, 0.7% in Romania, 1.5% in Ukraine, 0.6% in Slovakia, and 0.3% in the Czech Republic receive their education from the universities of other countries (United Nations Educational, Scientific and Cultural Organization, 2018). Every year these indicators increase.

    The second indicator is the national education system performance index, which is based on the education level index and the indicator of cognitive skills (Pearson, 2016). The second component of the indicator does not reflect higher education quality, since it characterizes the quality of primary and secondary schools (the level and quality of reading and understanding of texts by primary school students; the competence level of secondary school students in the field of natural science and mathematics; the literacy level of secondary school students; and the ability of students to put school knowledge and skills into practice) (Pearson, 2016). Thus, one may state that the education quality in Eastern European countries is not reflected in the leading world rankings of universities under study.

    3. DATA AND METHODS

    3.1 The System of Indicators Used to Assess the Quality of Education in Eastern Europe

    The list of indicators for building an index of higher education quality in Eastern European countries is based on the following logic. The key indicator of education level is the adult literacy rate (% of people aged 15 years and older; X1). In the framework of this study, the quality of education is characterized by a set of basic competencies formed by students in the educational process. This indicator depends on the quality of teaching and the set of academic disciplines formed by the curricula of educational areas. The literacy of the population itself results from the functioning of both national and foreign higher education systems (Pearson, 2016). The more students study abroad, the less influence the national education system has on the population’s education quality. The number of students studying abroad is inversely related to the quality of higher education in a country because, as a rule, students go abroad to receive a better education than in their own countries (Redden, 2018). Therefore, the second assessment indicator, which has an inversely proportional effect, is the indicator of outgoing mobility in the higher education system (X2). The indicator of incoming mobility in the higher education system, i.e., the number of foreign students studying in a given country, has the opposite impact on education quality compared with the outgoing mobility indicator (X3).

    Student mobility (indicators X2, X3) is affected not only by the relative education quality factor but also by the cost of education. Therefore, instead of indicators of the total number of students studying abroad and foreign students, this analysis employed the proportion of students studying in a different country with a higher cost of education out of the total number of people registered in the national higher education system (X2'), and the percentage of foreign students who came from countries with a lower cost of education out of the total number of people enrolled in higher education institutions (X3'). The application of these indicators allows for the offsetting of the economic factor and the focusing on education quality.

    To maintain the competitiveness of an education system under conditions of dynamic development, it is necessary to apply means of innovation, which requires adequate funding (Aleksandrov, 2017). The higher the level of education financing, the greater the material, technical, informational, communicative, programmatic, and personnel support, all of which have a positive effect on education quality. The indicator of public expenditure per student in USD (X4) is directly proportional to education quality.

    The purpose of education is to acquire the competencies required for obtaining employment; therefore, the unemployment rate expressed as a percentage (X5), which has an inversely proportional influence, was used in the study as an indicator of education quality.

    The representativeness of these indicators for assessing higher education quality in Eastern Europe is confirmed by an expert estimate. Even when used under conditions of limited information, this method provides high reliability and validity of conclusions, which is confirmed by the experts’ competence (Gutsykova, 2011). An expert group formed to assess the representativeness of education quality indicators included representatives from the Ministries of Education and Science of Eastern European countries. The survey involved 10 specialists in higher education quality assessment and university ranking from each of the Eastern European countries included in the study: Russia, the Czech Republic, Poland, Hungary, Belarus, Romania, Slovakia, Ukraine, and Bulgaria. The total number of experts included in the group was 90. The professionalism and competence of the experts is ensured by the fact that they are dedicated specialists from the Ministries of Education and Science of Eastern European countries who are knowledgeable in the subject matter and have at least 7 years of experience. The statistical indicator confirming the competence of experts is the competence coefficient, calculated by the formula (Tikhomirova and Matrosova, 2016):

    $$K_i = \frac{\sum_{j=1}^{m} e_{ij}}{m}$$
    (1)

    where Ki is the coefficient of the i-th expert’s competence; eij is the expert’s rating, equal to “0” if the j-th expert considers another (the i-th) expert incompetent and does not deem it expedient to include them in the expert group, and “1” if the j-th expert expresses the need to include the other (i-th) expert in the group; and m is the number of experts.
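
    As an illustration of formula (1), the following Python sketch computes competence coefficients from a matrix of binary peer evaluations for one country's group of 10 experts; the matrix values are randomly generated placeholders, and treating self-evaluation as inclusion is an assumption made only for the example.

```python
import numpy as np

# Hypothetical binary peer-evaluation matrix for one country's group of 10 experts:
# e[j, i] = 1 if the j-th expert deems it expedient to include the i-th expert, else 0.
rng = np.random.default_rng(0)
e = rng.integers(0, 2, size=(10, 10))
np.fill_diagonal(e, 1)  # assumption: self-evaluation counts as inclusion

# Formula (1): K_i = (sum over j of e_ij) / m
m = e.shape[0]
K = e.sum(axis=0) / m

# Experts with K_i >= 0.5 are admitted to the working group (Gutsykova, 2011).
print("Competence coefficients:", np.round(K, 2))
print("Admitted experts:", np.where(K >= 0.5)[0] + 1)
```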

    Since the experts are representatives of the ministries of different countries, they do not know each other and cannot evaluate each other’s competence. Thus, the evaluation was carried out individually, by country.

    When evaluating their colleagues, each participant provided a binary evaluation of the expedience of having other experts included in the group. A “0” score refers to the incompetence of an expert being evaluated and the reluctance of another expert to include them in the group; “1” indicates high competence and the need to include the expert in the group.

    The competence coefficient calculated by formula (1) is measured within the range [0, 1]. The higher the coefficient, the more desirable the involvement of the expert in the survey. The threshold value of the competence coefficient sufficient for an expert’s inclusion in the working group is 0.5 (Gutsykova, 2011). Table 2 presents the competency coefficients of experts E1-E10 by country and the weighted average competency level of each country’s expert group (K), calculated using formula (1) from the binary scores. E1-E10 denote the experts’ serial numbers (E).

    The calculated coefficient of experts’ competence by country is as follows: for Belarus, 1; for the Czech Republic, Poland, and Slovakia, 0.9; and for Russia, Hungary, Ukraine, Romania, and Bulgaria, 0.8. The values of the coefficients exceeded 0.5 (Gutsykova, 2011), which indicates high competence levels among all experts.

    In order to evaluate the representativeness of the indicators, the experts were asked to assess each indicator by the following features: 1) does the indicator specify higher education quality? (Yes/No) and 2) is the proposed list of indicators sufficient to describe the country’s higher education quality? In response to the second question on indicator selection sufficiency, the experts’ scores ranged from 0 to 5. The higher the score given by an expert, the higher, in their opinion, the level of sufficiency. The expert assessment was carried out remotely. According to the results of the expert evaluations, all indicators (X1, X2', X3', X4, X5) are designated as indicators of higher education quality. All experts gave positive answers to the first question. The average percentage of selection sufficiency was calculated as the ratio of the sum of expert scores ($\sum_{i=1}^{n} b_i$, n = 90) to the maximum possible sum (Table 3), where bi is the evaluation of the i-th expert regarding the selection sufficiency, set within the range of 0-5, and n is the number of experts.

    Table 3 shows the scores (b) set by the experts (E1-E90), which characterize the sufficiency of the sample of indicators (X1, X2', X3', X4, X5) for assessing the quality of education in Eastern Europe. For 90 experts, given that the scores were in the range of 0 to 5, the maximum possible sum was 450. The coefficient of indicator selection sufficiency (Cs) totaled 88.9%. This kind of expert estimation can be compared to multidimensional factor analysis, which also selects the optimal number of indicators (factors) from the set of indicators describing the system under study. The criterion for the factor analysis quality is the factorization percentage, an indicator demonstrating how fully the developed system of indicators describes the studied system, i.e., a sufficiency indicator. In factor analysis, a sufficient level of factorization is 80% (Jolliffe, 2002). Accordingly, for this expert estimate, the percentage of indicator selection sufficiency, equal to 88.9%, can be considered sufficient to describe the system of higher education in Eastern Europe. The consistency of expert opinions is indicated by the values of the coefficient of variation of the estimates, which does not exceed 9%.
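
    The arithmetic behind the sufficiency coefficient Cs can be reproduced with a short sketch; the individual scores below are hypothetical and are merely chosen so that their sum (400 out of a maximum of 450) yields the reported 88.9%.

```python
import numpy as np

# Hypothetical scores b_i (0-5) of 90 experts on the sufficiency of the indicator selection.
b = np.array([4] * 50 + [5] * 40)   # illustrative scores summing to 400
max_sum = b.size * 5                # 90 experts x 5 points = 450

Cs = b.sum() / max_sum * 100        # coefficient of indicator selection sufficiency, %
print(f"Cs = {Cs:.1f}%")            # 88.9%, above the 80% sufficiency level used in factor analysis
```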

    Since the experts represented all the countries whose education systems were evaluated, and the discussion of the indicators for assessing the education quality in the countries lasted until a consensus was reached among representatives of all countries (consistency of experts’ opinions), the results of the expert assessment can be considered unbiased and objective.

    Since the quality of higher education was expressed by a variety of indicators (X1-X5), an integral indicator was calculated to improve the ranking of Eastern European universities based on this criterion. The construction of the integral indicator was based on taxonomic analysis. The use of this method is justified by the initial absence of a value for the resulting indicator (education quality in the country) and of the relative importance of particular indicators (X1-X5). In an attempt to determine the relative importance of particular indicators by the expert method, consensus was reached at the level of equal significance of the indicators (≈0.2 each). The statistical basis for constructing the integral indicators comprises the values of indicators X1-X5 for 2011-2018 in the aforementioned Eastern European countries. Since the population literacy rate is not evaluated every year, its previous year’s value was used for the study by country (Knoema, 2018). Since the indicators of the quality of education have different units of measurement, standardized values of the indicators (X1-X5) were used in calculating the integral indicator of higher education quality to bring them into a commensurable form (formulas (2)-(4)) (Rousseau et al., 2018):

    $$X_{st\,ij} = \frac{X_{ij} - \bar{X}_{ij}}{\sigma_{ij}},$$
    (2)

    where Xst ij is the standardized value of the i-th indicator in the j-th country; Xij is the actual value of the i-th indicator in the j-th country; X̄ij is the average value of the i-th indicator in the j-th country; and σij is the mean-square deviation of the i-th indicator in the j-th country. The standardized values of X1-X5 are presented in Table 4.

    Taxonomic analysis involves the calculation of a reference vector, which corresponds to the potentially highest level of education quality in a country and is calculated based on the optimal (maximum or minimum, depending on the nature of the effect) values of the indicators for the sample of Eastern European countries. The reference vector was built by classifying the indicators into stimulants and disincentives and taking the maximum value of the stimulant indicators and the minimum value of the disincentive indicators (over the sample of Eastern European countries). Stimulants are indicators whose increase contributes to education quality; disincentives are indicators whose increase leads to a decrease in education quality. The interval between the actual state of the study subject (the level of education quality in the j-th country) and the reference state (corresponding to the reference vector) was calculated using the formula:

    $$d_j = \sqrt{\sum_{i} \left(X_{st\,ij} - X_{0\,ij}\right)^2}$$
    (3)

    where dj is the interval between the actual level of education quality in the j-th country and the reference value, and X0 ij is the reference value of the i-th indicator for the j-th country.

    The integral indicator of the education quality in the country was calculated using the formula:

    $$I_{eq\,j} = 1 - \frac{d_j}{d_0}, \qquad d_0 = \bar{d} + 2\sigma$$
    (4)

    where Ieq j is the integral indicator of education quality in the j-th country; d̄ is the average value of the intervals between the actual states of the subject and the reference value; and σ is the mean-square deviation of the intervals between the actual states of the subject and the reference value.
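
    A compact sketch of the taxonomic procedure in formulas (2)-(4) follows; the country rows and indicator values are hypothetical, and the stimulant/disincentive labels follow the description above (X1, X3', and X4 raise education quality, while X2' and X5 lower it).

```python
import numpy as np

# Rows: countries; columns: X1 (literacy), X2' (outgoing mobility), X3' (incoming mobility),
# X4 (public spending per student, USD), X5 (unemployment, %). All values are hypothetical.
X = np.array([
    [99.7, 0.9, 4.8, 3500.0, 4.8],
    [99.0, 0.3, 1.1, 4247.0, 2.5],
    [99.8, 0.5, 1.0, 2900.0, 5.0],
])
stimulant = np.array([True, False, True, True, False])

# Formula (2): column-wise standardization of the indicators.
X_st = (X - X.mean(axis=0)) / X.std(axis=0)

# Reference vector: maximum of the standardized stimulants, minimum of the disincentives.
X0 = np.where(stimulant, X_st.max(axis=0), X_st.min(axis=0))

# Formula (3): distance of each country from the reference point.
d = np.sqrt(((X_st - X0) ** 2).sum(axis=1))

# Formula (4): integral indicator of education quality, with d0 = mean(d) + 2*std(d).
d0 = d.mean() + 2 * d.std()
I_eq = 1 - d / d0
print(np.round(I_eq, 3))
```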

    3.2 The System of Indicators for University Ratings

    When building international rankings, various rating agencies use a wide range of indicators, i.e., evaluation criteria, which determine differences in the ranking positions of universities. Since the total number of universities forms a country’s national education system, its position in the international university rankings should reflect the national education system’s quality. With this in mind, a correlation-regression analysis was conducted to identify the most representative indicators in university rankings that reflect the quality of education in a country; i.e., the correlation coefficients between university ranking criteria used in world rankings and the integral indicator of education quality in the country were considered (Table 4, Figure 1). The dependent variable for constructing correlation-regression models is the calculated integrated assessment of the education quality by country for 2011-2018.

    The education quality indicator (the dependent variable) was calculated at the country level, with each country represented by its universities in the rankings. The larger the number of universities represented in world rankings and the higher their positions, the higher the country’s education quality ought to be.

    Using the ranking criteria of a country’s most prestigious university as independent variables for constructing models for the quantitative expression of the ranking would not give a reliable score. This is because a country could be represented by a single university at which an insignificant number of students study while the majority of young people remain uneducated. Using the average criteria, weighted for the number of universities included in the top 100, top 500, top 1,000, etc., also does not provide an objective estimate, since a large number of universities in world rankings may reflect a low quality of education owing to the low ranks these universities occupy. The number of students in national universities relative to the number of young people should therefore be considered.

    The results of the World University Rankings, QS World University Rankings, and ARWU rankings are represented by a university’s position (rating), and the ranking criteria values are presented in numerical form. For these ratings, the higher the numerical value of an indicator, the higher the university’s position in the ranking (Academic Ranking of World Universities, 2018; QS Top Universities, 2018; Times Higher Education, 2018; US News & World Report, 2018).

    Thus, to build models of the impact of university ranking criteria on the quality of education in Eastern Europe and to determine representative criteria reflecting education quality at the university level, a weighted average of the partial criteria used in forming the ratings was applied. For the assessment of education quality according to the World University Rankings (Times Higher Education, 2018) and QS World University Rankings (QS Top Universities, 2018), the indicator takes the form:

    $$Z_k = \frac{Z_{i1} S_1 + Z_{i2} S_2 + \ldots + Z_{in} S_n}{S}$$
    (5)

    where Zk is the indicator of education quality in the k-th country, calculated by the i-th criterion; Zin is the value of the i-th criterion for the n-th university of the k-th country; Sn is the number of resident students who study at the n-th university of the k-th country; and S is the number of young people in the k-th country.

    The method to determine the representative indicators used to build university rankings reflecting education quality involves the calculation of the Z indicator for all Eastern European countries and all THE World University Rankings, QS World University Rankings, and ARWU ranking criteria. For US News & World Report’s Best Global Universities, partial criteria are expressed by position in the ranking (Academic Ranking of World Universities, 2018; QS Top Universities, 2018; Times Higher Education, 2018; US News & World Report, 2018). The higher the quality of education, the smaller the numerical value of the position in the ranking; that is, there is an inverse relationship between these indicators. Therefore, the formula used to determine the education quality indicator calculated according to the Best Global Universities ranking criteria takes the form:

    $$Z_k = \frac{\frac{1}{R_1} S_1 + \frac{1}{R_2} S_2 + \ldots + \frac{1}{R_n} S_n}{S}$$
    (6)

    where Zk is the indicator of education quality in the k-th country, calculated by the i-th criterion; Rn is the ranking position of the n-th university of the k-th country upon the i-th criterion; Sn is the number of resident students who study at the n-th university of the k-th country; and S is the number of young people in the k-th country.
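
    A sketch of formulas (5) and (6) for a single country and a single criterion follows; the university criterion values, ranking positions, student counts, and the size of the young population are all hypothetical placeholders.

```python
import numpy as np

# Hypothetical data for one country's universities appearing in the world rankings.
criterion_values = np.array([82.0, 64.0, 51.0])    # Z_in: criterion values (THE/QS/ARWU style)
rank_positions = np.array([120, 380, 640])         # R_n: positions in a rating given as ranks
students = np.array([35_000, 21_000, 15_000])      # S_n: resident students at each university
young_people = 1_500_000                           # S: number of young people in the country

# Formula (5): criterion expressed in numerical form, weighted by student numbers.
Z_k_numeric = (criterion_values * students).sum() / young_people

# Formula (6): criterion expressed as a ranking position, so 1/R_n captures the
# inverse relationship between position and education quality.
Z_k_rank = ((1.0 / rank_positions) * students).sum() / young_people

print(round(Z_k_numeric, 4), round(Z_k_rank, 6))
```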

    The indicator Z was calculated for all Eastern European countries using all the criteria of the Best Global Universities rankings. The term 1/Rn is used to reflect the inverse relationship between education quality and the numerical value of a university’s position in the rankings. In addition to the indicators of the university rating methods outlined above, the study considered the indicators of The Guardian University Guide, The Complete University Guide (The Guardian, 2018), and Rating Agency Expert (2018).

    Thus, a statistical sample for constructing models for determining representative criteria that reflect education quality at the university level was formed:

    The dependent variable (Y) is the integrated assessment of the education quality by country (Ieq) for 2011-2018 (Table 4);

    The independent variables (Zi) are the indicators of education quality calculated by formulas (5) and (6) from the university ranking criteria of the world ratings.

    The number of observations for analysis is N = 72.

    Initially, a correlation analysis was performed for the indicated data set. Representative indicators for developing rankings of Eastern European universities that reflect a country’s education quality were identified from the correlation analysis. Representative indicators are those whose pair correlation coefficients with the integral indicator of higher education quality (Table 4) exceed |0.7|. These indicators are as follows: academic reputation (Z1); reputation among employers (Z2); percentage of foreign teachers (Z3); percentage of foreign students (Z4); research reputation (Z5); graduates awarded Nobel Prizes and medals (Z6); and employees awarded Nobel Prizes and medals (Z7). The characteristics of these indicators are presented in Table 5. In order to ensure the objectivity of the research results, the model for determining university rankings, built on the Z1-Z7 indicators, was tested for multicollinearity. The results showed that the paired correlation coefficients between the indicators do not exceed |0.3|, which indicates the absence of multicollinearity and the adequacy of the model.
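
    The selection step can be sketched as follows: compute the pair correlation of each candidate criterion with the integral education quality indicator, keep those with |r| > 0.7, and check the retained set for multicollinearity (pairwise |r| below 0.3). The data here are random placeholders, so the selected set will generally be empty; with the study's actual data it would be Z1-Z7.

```python
import numpy as np

rng = np.random.default_rng(1)
N = 72                                 # observations: 9 countries x 8 years (2011-2018)
Y = rng.normal(size=N)                 # integral education quality indicator (placeholder)
Z = rng.normal(size=(N, 12))           # candidate ranking criteria (placeholder)

# Pair correlation of each candidate criterion with Y; keep |r| > 0.7.
r_with_Y = np.array([np.corrcoef(Z[:, k], Y)[0, 1] for k in range(Z.shape[1])])
selected = np.where(np.abs(r_with_Y) > 0.7)[0]
print("Selected criteria indices:", selected)

# Multicollinearity check among the selected criteria: pairwise |r| should stay below 0.3.
if selected.size > 1:
    R = np.corrcoef(Z[:, selected], rowvar=False)
    off_diag = np.abs(R[~np.eye(selected.size, dtype=bool)])
    print("Max pairwise |r| among selected criteria:", off_diag.max())
```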

    The proof of the representativeness of criteria Z1-Z7 for constructing a ranking of universities in Eastern Europe that reflects the education quality in the country is based on one-factor regression models of the dependent variable (Y) on each of the independent variables (Z1-Z7) (Table 6). The adequacy of the constructed regression models is evidenced by the sufficiency of the sample (N = 72); the normal distribution of the variables; and the statistical significance indicators: the multiple correlation coefficient (R→1), the determination coefficient (R² ∈ [0.79; 0.90]), the Fisher F-test, whose calculated values (9.57-59.49) for all models exceed the tabulated value of 3.98, and Student’s t-test, whose calculated values (3.97-8.98) for all models exceed the tabulated value of 1.9944 at p = 0.05. Thus, with a 95% probability, one can state the statistical significance of the representativeness of the criteria (Z1-Z7) for constructing a ranking of universities in Eastern Europe that reflects the education quality in the country.
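
    The adequacy checks for one of the one-factor models can be sketched with ordinary least squares on N = 72 observations, comparing R², the F-statistic, and the slope's t-statistic against the tabulated thresholds cited above (3.98 and 1.9944 at p = 0.05); the data are synthetic placeholders.

```python
import numpy as np

rng = np.random.default_rng(2)
N = 72
z = rng.normal(size=N)                        # one ranking criterion (placeholder)
y = 0.6 * z + rng.normal(scale=0.4, size=N)   # integral education quality (placeholder)

# One-factor OLS: y = a + b*z
A = np.column_stack([np.ones(N), z])
coef, *_ = np.linalg.lstsq(A, y, rcond=None)
resid = y - A @ coef
ss_res = (resid ** 2).sum()
ss_tot = ((y - y.mean()) ** 2).sum()

R2 = 1 - ss_res / ss_tot
F = R2 / (1 - R2) * (N - 2)                                    # F-statistic, df = (1, N-2)
se_b = np.sqrt(ss_res / (N - 2) / ((z - z.mean()) ** 2).sum())
t_b = coef[1] / se_b                                           # t-statistic of the slope

print(f"R2 = {R2:.2f}, F = {F:.2f} (tabulated 3.98), t = {t_b:.2f} (tabulated 1.9944)")
```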

    The method used to calculate indicators Z1-Z7 was similar to the methods of the world ranking estimates (Academic Ranking of World Universities, 2018; QS Top Universities, 2018; Times Higher Education, 2018; US News & World Report, 2018). The expediency of using these indicators and the existing methods for their calculation is due to the significant correlation coefficients. The relative significance of the indicators for building the ranking was determined using the expert method. In this case, priority is given to expert assessment since, unlike statistical methods, it gives a more reasonable assessment of the significance of the indicators: significance is based on the knowledge and experience of experts. In addition, when statistical methods are used, a very high closeness of the connection between certain independent indicators and the resultant indicator for one university may compensate for a weak connection for other universities, which distorts the real significance of the indicators.

    The same group that had been involved in evaluating each country’s higher education quality indicators was involved. The experts were asked to rank the significance of the indicators on a 7-point scale, the highest value of the indicator being 7 and the lowest being 1. The reliability of the expert estimate results was confirmed by a concordance coefficient amounting to 0.81, against a sufficient level of 0.7-0.75.
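
    The concordance coefficient is commonly computed as Kendall's W; the sketch below does so for a hypothetical matrix of 7-point significance ranks assigned by 90 experts (the rank matrix is randomly generated around an illustrative "typical" opinion and is not the study's data).

```python
import numpy as np

rng = np.random.default_rng(3)
base = np.array([6, 7, 3, 4, 5, 2, 1])   # illustrative "typical" significance ranks of Z1-Z7
# 90 experts, each assigning ranks 1..7 (7 = most significant) with some random disagreement.
ranks = np.array([np.argsort(np.argsort(base + rng.normal(scale=0.8, size=7))) + 1
                  for _ in range(90)])

m, n = ranks.shape                        # m experts, n indicators
R = ranks.sum(axis=0)                     # total rank of each indicator
S = ((R - R.mean()) ** 2).sum()
W = 12 * S / (m ** 2 * (n ** 3 - n))      # Kendall's coefficient of concordance
print(f"W = {W:.2f}")                     # the study reports 0.81 against a 0.7-0.75 threshold
```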

    A methodological approach to improving the ranking methodology of universities in Eastern Europe, on the basis of the methodology described above, is presented in Figure 2.

    This study analyzed the positions of Eastern European universities in the most prestigious world rankings of higher education institutions: The World University Rankings (Times Higher Education, 2018), QS World University Rankings (QS Top Universities, 2018), Best Global Universities (US News & World Report, 2018), and Academic Ranking of World Universities (2018). The rating was compiled for the 54 largest universities in Eastern Europe.

    4. RESULTS

    Figure 1 presents the findings obtained as a result of the implementation of the taxonomic analysis method.

    Thus, according to the resulting education quality ranking for Eastern Europe, Russian universities held the top position in 2018 (the integrated index value is 0.244; Table 4, Figure 1). It is necessary to highlight that Russia’s educational sector receives government funding of around 4% of the gross national income (GNI). This is comparatively lower than in Central European countries but above the average level of government funding in Eastern Europe. However, we must admit that incoming student mobility is a significant factor contributing to Russian leadership. A total of 243,752 students attending Russian universities were foreign (4.8% of the total number of students) (Knoema, 2018; United Nations Educational, Scientific and Cultural Organization, 2018); in other Eastern European countries, this rate ranges from 1% to 1.1% (United Nations Educational, Scientific and Cultural Organization, 2018).

    The Czech Republic, which has the highest level of government funding for education ($4,247 per student, corresponding to 3.9% of the GNI) and the lowest unemployment rate (2.5%), takes second place in the ranking (0.234) (Knoema, 2018). Poland’s high position in the ranking (0.233) is partly caused by incoming mobility exceeding outgoing mobility by 0.6%, a high level of government funding (4.7% of the GNI), and a lower unemployment rate (5%) than in Bulgaria, Russia, Ukraine, and Slovakia (Knoema, 2018). Hungary ranks fourth in the education quality ranking with an integrated index of 0.231 (Table 4, Figure 1). The main factor influencing the development of the higher education system in Hungary is the high level of government funding, which amounts to $2,968.72 per student (4.5% of the GNI) (Knoema, 2018).

    The authors highlight that the ranking estimation results, presented in the current study, differ significantly from the Global Index of Cognitive Skills and Educational Attainment and the Education Index data (Knoema, 2018). According to the world education effectiveness and quality ranking, Russia holds a lower position than the Czech Republic, Poland, and Hungary (QS Top Universities, 2018). The research results are based on the leveling of the economic factor in education (the cost of education) and the academic competence factor (the unemployment rate), which makes adequate estimation of education quality in different countries possible. This approach, unlike its existing theoretical and practical counterparts (Academic Ranking of World Universities, 2018; QS Top Universities, 2018; Times Higher Education, 2018; US News & World Report, 2018), provides an opportunity to estimate the quality of national education systems without regard for the cost of education, which varies significantly even within Eastern Europe and does not guarantee quality (Ronstadt, 2009).

    In Belarus (ranked fifth: 0.224) and Romania (ranked sixth: 0.137), the students’ outgoing mobility exceeds incoming mobility, which suggests that studying abroad is more attractive. Belarus has the lowest level of government funding for education (4.82% of GDP) and the lowest unemployment rate (0.8% of the total labor force), with almost all university graduates employed (Knoema, 2018). In Romania, the amount of funding per student exceeds that of Belarus by 2.8%, while its unemployment rate is higher at 9.8% (Knoema, 2018). Slovakia ranks seventh in international education quality, with an integrated index value of 0.133. Meanwhile, the highest unemployment rate and one of the lowest levels of government funding are in Ukraine, which has an integrated index value of 0.043. These data demonstrate the ineffectiveness of state macroeconomic policy, which is supposed to contribute to higher education quality. The worst quality of higher education is in Bulgaria (0.027), which has a high unemployment rate, a low level of government funding, and outgoing mobility that exceeds incoming mobility by 2.2% (Knoema, 2018).

    Table 5 presents the characteristics of the indicators, determined by the correlation and regression analysis method, and their relative significance, determined by expert judgment. Given the list of indicators and their relative significance, the ranking model of universities in Eastern Europe is as follows:

    $$I = 0.23 Z_1 + 0.24 Z_2 + 0.11 Z_3 + 0.13 Z_4 + 0.18 Z_5 + 0.06 Z_6 + 0.04 Z_7$$
    (7)

    where I is the rating evaluation of the universities.

    The most significant indicators determining the ranking of a university, which reflect the quality of education in the country, are Z2, “Opinion of employers on graduates and quality of education” (the significance of the indicator is 24%), and Z1, “Opinion of academic experts on the educational process in a university” (significance of 23%). The greater significance of these indicators is explained by the fact that the immediate purpose of the university is to ensure an effective educational process, ensure the quality of education, and create a positive rating among employers and academic experts. Less significant indicators are Z5, “The volume of university research, its research reputation and research income” (18%), Z4, “Proportion of foreign students among university students” (13%), and Z3, “Proportion of foreign teachers in faculty composition” (11%). These indicators characterize the activity of universities in the scientific field, which determines the scientific reputation of universities and their capacity for self-financing and development (indicator Z5), as well as their attractiveness to foreign students (indicator Z4) and teachers (indicator Z3). The Z3 indicator, in addition to reflecting the attractiveness of the university, has a positive effect on the university’s ranking from the perspective of improving the quality of education through the foreign experience of the faculty, providing opportunities for greater student mobility, and developing communicative competencies. According to the results of the expert evaluation, indicators Z6 and Z7 are less significant due to the small number of universities in Eastern Europe with these characteristics.

    The higher the value of the indicator I calculated by formula (7), the higher the position of the university in the ranking.
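
    A short sketch applying formula (7) to score and sort a few universities follows; the university names and Z1-Z7 values are hypothetical placeholders, not the data behind Table 7.

```python
import numpy as np

weights = np.array([0.23, 0.24, 0.11, 0.13, 0.18, 0.06, 0.04])  # significance of Z1-Z7

# Hypothetical, already commensurable Z1-Z7 values for three universities.
universities = {
    "University A": np.array([90.0, 88.0, 25.0, 30.0, 70.0, 10.0, 12.0]),
    "University B": np.array([75.0, 70.0, 15.0, 20.0, 55.0, 2.0, 3.0]),
    "University C": np.array([60.0, 65.0, 10.0, 12.0, 40.0, 0.0, 1.0]),
}

# Formula (7): weighted sum of the indicators; a higher I means a higher ranking position.
scores = {name: float(weights @ z) for name, z in universities.items()}
for name, score in sorted(scores.items(), key=lambda kv: kv[1], reverse=True):
    print(f"{name}: I = {score:.2f}")
```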

    The ranking of Eastern European universities made in accordance with the proposed method is presented in Table 7. The developed ranking includes the most prestigious universities in Eastern Europe that appear in international ratings; the ranking was composed so that the obtained results could be compared with the international rankings studied within the scope of this article.

    The calculated integrated index value for each Eastern European country contributes to the development of the higher education ranking model for the countries in this study. According to the results obtained, the most prestigious university in Eastern Europe is Lomonosov Moscow State University (MSU) in Russia, whose education quality index (61.02) is 47.88 points higher than the average index of the other Eastern European universities (Table 7). The ranking results coincide with the QS Emerging Europe and Central Asia University Rankings for 2018 (QS Top Universities, 2018). MSU is the only university in Russia that has fulfilled the President’s objective of entering the list of the top 100 universities in the world. The university ranks first among universities in Moscow and Russia (Lomonosov Moscow State University, n.d.).

    The given achievements confirm the correct vector of the MSU Development Program based on the intensification of interdisciplinary research and the implementation of the most modern scientific means and methods in the educational process. Around 12% of all scientific discoveries registered in the former USSR were made by MSU scientists. Presently, research is conducted in 350 scientific fields. There are 11 Nobel Prize Laureates and six Fields Medal Laureates among MSU (n.d.) graduates.

    Further, it is necessary to indicate that six Russian universities are ranked among the top 10 Eastern European universities, as presented in the current study. In addition to MSU, these include the following: Novosibirsk State University (ranked third), Tomsk State University (fourth), the Moscow Institute of Physics and Technology (MIPT/Moscow Phystech; fifth), National Research Nuclear University MEPhI (Moscow Engineering Physics Institute; sixth), and Saint Petersburg State University (eighth; Table 7). Moreover, these universities hold leading positions in international rankings. Other universities ranked among the top 10 include Charles University (Czech Republic; ranked second), the Czech Technical University in Prague (seventh), Masaryk University (Czech Republic; ninth), and the University of Warsaw (Poland), which ranks 10th. There is a wide margin (more than 10 positions) between the education quality ratings of the aforementioned universities and the average indicator value of the other Eastern European universities presented.

    5. DISCUSSION AND CONCLUSION

    Thus, the presented methodological approach to university ratings is based only on Eastern European universities, which makes it possible to address a shortcoming of modern rating systems: taking into account the specific factors and quality features of the national education systems (Marope et al., 2013; Tremblay et al., 2012; Corrente et al., 2018; Irkhin, 2013; Roser et al., 2019). Since the totality of universities forms the national education system, a country’s position in international university ratings should reflect the quality of the national education system and consider student mobility rates in Eastern Europe. The representativeness of the education quality rate in Eastern Europe is implemented using the integrated index calculated over the following aspects: population literacy; incoming and outgoing mobility rates in the higher education system; the proportion of students studying in another country with more expensive education compared with the total number registered in the national higher education system; the proportion of foreign students from countries with cheaper education compared with the total number of students enrolled in universities; government funding per student; and the country’s unemployment rate. The given approach presents a complex estimation of the national higher education system and levels out the influence of the economic factor (the cost of education) and the quality of the foreign education factor. The representative system of indicators for the Eastern European university ranking model is created with consideration of the similar functioning conditions of the higher education systems in the countries under study. These indicators include the following: academic reputation; reputation among employers; proportion of foreign lecturers; proportion of foreign students; research reputation; graduates awarded the Nobel Prize and medals; and Nobel Prize and medal laureates among employees. The given approach demonstrates that the universities of Russia, the Czech Republic, and Poland are dominant in the Eastern European university rankings, which correlates with the results of the integrated regional education quality estimation. The indicated system of indicators has a statistically significant effect on the quality of education in the studied countries; the multiple correlation coefficient is in the range 0.89-0.95. This approach solves the problem of the sufficiently high level of variability, indicating the heterogeneity of information provided by world-renowned ratings for higher education institutions in Eastern Europe (Barron, 2017; Irkhin, 2013; Vasiljeva et al., 2019; Katsarova, 2015; Marope et al., 2013; Tremblay et al., 2012). The quality of education is predetermined primarily by higher education institutions; therefore, their ratings should also correlate with the quality of education in the country.

    The presented methodological approach to university ratings is based only on Eastern European universities. In current conditions, this is of great practical importance, owing to high student mobility and the steady growth in the number of Central Asian students in these countries. The developed approach will provide university administrations and government officials with a clearer and more adequate picture of the quality of higher education and its institutions in Eastern European countries, compared to assessments based solely on accreditation data. In addition, these rating scores can be a guide and an incentive, encouraging universities in Eastern Europe toward a higher quality of education and research. They can also serve as a useful tool in terms of problem awareness and planning.

    However, it should be considered that the presented ranking method is based on objective data on education quality without taking into account differences in education cost or the university information available online, which may define a university’s popularity. In our opinion, these criteria are sufficiently substantial to merit separate research and detailed treatment: for example, the determination of cost criteria and the features of their inclusion in the rating system, and the justification of university search systems on the Internet and of their quality, among others. These factors should be included in future scientific research on improving higher education rankings for Eastern Europe.

    It is necessary to indicate that the rankings of universities from the Republic of Moldova are not presented in this study, as official ranking data were not available. The limitations of the study were also partly caused by the unavailability of annual population literacy rates in Eastern Europe. As the purpose of this study is the improvement of the methodology for ranking Eastern European academic institutions, its practical significance is not diminished by these limitations.

    Figure

    Figure 1. Ranking of education quality in Eastern Europe, 2018.

    Figure 2. Improving ranking methods for universities in Eastern Europe.

    Table

    Table 1. Positions of Eastern European universities in world rankings for 2018

    Table 2. The value of the experts’ competency by country

    Table 3. Sample sufficiency indicators for assessing the education quality in Eastern Europe

    Table 4. Standardized values of indicators of education quality in Eastern Europe for 2018

    Table 5. Significance of indicators for the development of Eastern European universities’ rankings

    Table 6. Regression models for determining the representativeness of university ranking criteria (dependent variable: integrated assessment of the education quality by country)

    Table 7. Ranking of the top universities in Eastern Europe, 2018

    REFERENCES

    1. Academic Ranking of World Universities (2018), cited 2019 Dec 5, Available from: http://www.shanghairanking.com/.
    2. Aleksandrov, A. (2017), The increase of higher education institution competitiveness as the investment attractiveness factor, The Bulletin of the Economics, Law, and Sociology, 4, 11-14.
    3. Barron, G. R. S. (2017), The Berlin principles on ranking higher education institutions: Limitations, legitimacy, and value conflict, Higher Education, 73(2), 317-333.
    4. Collins, F. L. and Park, G. S. (2016), Ranking and the multiplication of reputation: Reflections from the frontier of globalizing higher education, Higher Education, 72, 115-129.
    5. Corrente, S. , Greco, S. , and Słowiński, R. (2018), Robust ranking of universities evaluated by hierarchical and interacting criteria, Multiple Criteria Decision Making and Aiding, 274, 145-192.
    6. Dodd, T. (2017), Education exports are worth $28 billion a year, nearly 20pc more than we thought, Financial Review, Available from: https://www.afr.com/leadership/education-exports-are-worth-28-billion-a-year-nearly-20pc-more-than-we-thought-20171005-gyvc8v.
    7. Gutsykova, S. V. (2011), Method of Expert Assessment: Theory and Practice, Institute of Psychology of the Russian Academy of Sciences, Moscow.
    8. Hanushek, E. A. and Kimko, D. D. (2000), Schooling, labor force quality, and the growth of nations, American Economic Review, 90(5), 1184-1208.
    9. Hussein, K. , Buhari, S. , Tsaramirsis, G. , and Basheri, M. (2017), A new methodology for ranking international universities, Indian Journal of Science and Technology, 10(36), 1-21.
    10. Irkhin, Y. V. (2013), World university rankings as a management factor in systems of higher education, Ars Administrandi, 1, 97-113.
    11. Ivančević, V. and Luković, I. (2018), National university rankings based on open data: A case study from Serbia, Procedia Computer Science, 126, 1516-1525.
    12. Jolliffe, I. (2002), Principal Component Analysis, Springer, New York.
    13. Jusuf, E. , Herwany, A. , Kurniawan, P. S. , and Gunardi, A. (2020), Sustainability concept implementation in higher education institutions of Indonesia, Journal of Southwest Jiaotong University, 55(1), cited 2020 Feb. 5, Available from: http://jsju.org/index.php/journal/article/view/488.
    14. Kara-Murza, S. G. (2013), On ineffective Russian universities: Methodological problems, Social and Humanitarian Knowledge, 1, Available from: https://cyberleninka.ru/article/n/16568169.
    15. Katsarova, I. (2015), Higher education in the EU. Approaches, issues and trends, European Parliamentary Research Service, Available from: https://www.europarl.europa.eu/EPRS/EPRS-IDA-554169-Highereducation-in-the-EU-FINAL.pdf.
    16. Knoema (2018), World data atlas, Available from: https://knoema.com/atlas.
    17. Lomonosov Moscow State University (n.d.), cited 2019 Dec 5, Available from: https://www.msu.ru/.
    18. Marope, P. T. M. , Wells, P. J. , and Hazelkorn, E. (2013), Rankings and accountability in higher education, Uses and Misuses, UNESCO Publishing, Available from: https://unesdoc.unesco.org/ark:/48223/pf0000220789.
    19. Martin, E. (2017), Here’s how much it costs to go to college in 25 countries around the world, Available from: https://www.cnbc.com/2017/10/13/cost-of-college-tuition-around-the-world.html.
    20. Millot, B. (2015), International rankings: Universities vs. higher education systems, International Journal of Educational Development, 40, 156-165.
    21. Nelson, A. (2018), Ranking university innovation: A critical history, Entrepreneurship Education, 1, 1-10.
    22. Pearson (2016), National education systems effectiveness ranking, Available from: https://gtmarket.ru/ratings/global-index-of-cognitive-skills-and-educational-attainment/info.
    23. Perez-Esparrells, C. and Orduna-Malea, E. (2018), Do the technical universities exhibit distinct behavior in global university rankings? A Times Higher Education (THE) case study, Journal of Engineering and Technology Management, 48, 97-108.
    24. QS Top Universities (2018), QS world university rankings, cited 2019 Dec 5, Available from: https://www.topuniversities.com/university-rankings.
    25. Rating Agency Expert (2018), Rating agency expert, Available from: https://raexpert.ru/rankings/vuz/method.
    26. Rauhvargers, A. (2013), Global university rankings and their impact: Report II, European University Association, Brussels.
    27. Redden, E. (2018), For international students, shifting choices of where to study, Inside Higher Ed, Available from: https://www.insidehighered.com/news/2018/08/24/international-enrollments-slowing-or-declining-some-top-destination-countries-look.
    28. Ronstadt, R. (2009), High tuition doesn’t equal quality, Available from: https://www.forbes.com/2009/03/10/college-debt-smart-shoppers-personal-finance-retirement-ronstadt.html#4e6120e9127b.
    29. Roser, M. , Nagdy, M. , and Ortiz-Ospina, E. (2019), Quality of Education, Available from: https://ourworldindata.org/quality-of-education.
    30. Rousseau, R. , Egghe, L. , and Guns, R. (2018), Statistics. In: R. Rousseau, L. Egghe, and R. Guns (eds.), Becoming Metric-Wise, Chandos Publishing, Oxford, chapter 4, 67-97.
    31. Stack, M. (2016), Visualizing excellence: The times higher education ranking, Global University Rankings and the Mediatization of Higher Education, Palgrave Macmillan, London, 51-69.
    32. The Guardian (2018), The Guardian, Available from: https://www.theguardian.com/education.
    33. The World Bank (2018), World Bank open data, Available from: https://data.worldbank.org.
    34. Thompson-Whiteside, S. (2016), Zen and the art of university rankings in art and design, She Ji: The Journal of Design, Economics, and Innovation, 2(3), 243-255.
    35. Tikhomirova, A. and Matrosova, E. (2016), Peculiarities of expert estimation comparison methods, Procedia Computer Science, 88, 163-168.
    36. Times Higher Education (2018), World university rankings 2019: Methodology, cited 2019 Dec 5, Available from: https://www.timeshighereducation.com/world-university-rankings/methodology-world-university-rankings-2019.
    37. Tremblay, K. , Lalancette, D. , and Roseveare, D. (2012), Assessment of Higher Education Learning Outcomes, Feasibility Study Report. OECD, Available from: http://www.oecd.org/education/skills-beyondschool/AHELOFSReportVolume1.pdf.
    38. United Nations Educational, Scientific and Cultural Organization (2018), Global flow of tertiary-level students, cited 2019 Dec 5, Available from: http://uis.unesco.org/en/uis-student-flow.
    39. US News & World Report (2018), U.S. news education rankings, cited 2019 Dec 5, Available from: https://www.usnews.com/best-colleges/rankings.
    40. van der Wende, M. C. (2017), Opening Up: Higher Education Systems in Global Perspective, Centre for Global Higher Education, London.
    41. Vasiljeva, M. V. , Ivleva, M. I. , Volkov, Y. G. , Karaev, A. K. , Nikitina, N. I. , and Podzorova, M. I. (2019), The development of meta-competencies in undergraduate students using personality development theory, Opcion, 35(Special Issue 23), 1524-1543.
    42. World Health Organization (n.d.), Crude death rate per 1,000 population, Available from: https://gateway.euro.who.int/en/indicators/hfa_22-0070-crude-death-rateper-1000-population/visualizations/#id=18829&tab=table.
    43. World Trade Organization (2019), cited 2019 Dec 5, Available from: https://www.wto.org/english/thewto_e/whatis_e/tif_e/org6_e.html.