Productivity and Drivers of Research
The relationships between inputs (Resources and Environment) and outcomes (Output and Connectivity) provide measures of the productivity of the higher education sector. Through capable management or strongly motivated academics, a country’s tertiary institutions may be able to offset limited funding or onerous government controls to some degree.
Current outcomes need to be compared with past input settings to allow for lags in behaviour. Increases in research funding take several years to be reflected in publications and citations, and the full effect of government funding for an increase in the participation rate is felt only when the first cohort graduates. While we now have five years of U21 ranking data, substantial improvements were made in the early years to data quality and to the variables included. The oldest comparable data are those for the 2013 rankings for Resources and the 2015 data for Environment.

We use the original outcomes data as described in section 3, except that in measuring productivity it is appropriate to omit total publications (O1) from the Output module and confine the empirical work to measures standardised for country size. But outcomes also include domestic and international connectivity. Our second outcome measure therefore combines results for both the Output and Connectivity modules, using weights of 40 and 20 per cent respectively. (This year’s OECD correction of the data for the United Kingdom has been backcast to 2013.)
In order to measure productivity we first regress each of the two outcome measures on the scores for Resources and Environment. The results are as follows (standard errors are in parentheses beneath the coefficients):
Output = -29.80 + 0.756 Resources + 0.506 Environment,  R² = 0.731, n = 50
         (13.0)   (0.085)            (0.171)
Output + Connectivity = -37.56 + 0.785 Resources + 0.638 Environment,  R² = 0.760, n = 50
                        (13.0)   (0.085)            (0.171)
The regression results show that around three-quarters of the variations in the outcomes are explained by the inputs, with both the Resources and the Environment scores significant at the 1 per cent level. The effect of resource levels is quantitatively a little larger than that of environment. (All variables are scaled to a maximum value of 100.) Interestingly, Connectivity exhibits a very similar relationship with Resources and Environment as does Output.
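The cross-country regression described above can be sketched in code. The snippet below is illustrative only: it fits an outcome score on Resources and Environment by ordinary least squares and computes R², using synthetic data generated from the reported coefficients rather than the actual U21 country scores.

```python
import numpy as np

# Synthetic, illustrative data: 50 "countries" with scores generated
# from the reported equation plus noise (not the actual U21 data).
rng = np.random.default_rng(0)
n = 50
resources = rng.uniform(20, 100, n)
environment = rng.uniform(40, 100, n)
output = -29.80 + 0.756 * resources + 0.506 * environment + rng.normal(0, 8, n)

# OLS: regress Output on a constant, Resources and Environment.
X = np.column_stack([np.ones(n), resources, environment])
beta, *_ = np.linalg.lstsq(X, output, rcond=None)

# R² = 1 - residual sum of squares / total sum of squares.
fitted = X @ beta
r2 = 1 - np.sum((output - fitted) ** 2) / np.sum((output - output.mean()) ** 2)
print(beta, round(r2, 3))
```

With real country scores in place of the synthetic arrays, the same two lines of linear algebra reproduce the coefficient estimates and the share of variation explained.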
The predicted values from these equations give an estimate of the average outcome score for a given level of resources and policy environment settings. The actual score for each country can then be compared with the value predicted from that country’s resource levels and policy settings. Actual scores above predicted values indicate above-average efficiency of the nation’s higher education institutions, and conversely for actual scores below predicted values. A timing issue nevertheless remains for some measures where the lags are very long, such as the qualifications of the workforce. The results also depend crucially on what we include in our measures of the policy environment and outcomes. In particular, insofar as our Environment module omits relevant policy variables, the productivity results will reflect, in part, government policy as well as institutional productivity.

With this and other caveats in mind, we present the results only by quintiles; within each quintile countries are arranged alphabetically. Countries appearing in the top quintile for both definitions of outcomes are, in alphabetical order, Australia, Germany, Greece, Israel, Italy, Slovakia, South Africa and the United Kingdom. Interestingly, the top quintile includes Greece, which ranks last for Environment, and South Africa, even though its scores are low on long-term outcome measures such as the qualifications of the workforce.
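The efficiency comparison described above amounts to ranking countries by their residuals and grouping them into quintiles. A minimal sketch, using hypothetical country labels and scores:

```python
# Hypothetical predicted and actual outcome scores for five countries.
predicted = {"A": 55.0, "B": 60.0, "C": 48.0, "D": 70.0, "E": 52.0}
actual    = {"A": 62.0, "B": 58.0, "C": 54.0, "D": 66.0, "E": 53.0}

# Residual = actual - predicted: positive means above-average efficiency.
residuals = {c: actual[c] - predicted[c] for c in predicted}

# Rank countries by residual, most efficient relative to inputs first.
ranked = sorted(residuals, key=residuals.get, reverse=True)

# Split into quintiles (here: 5 countries, so 1 per quintile) and list
# countries alphabetically within each quintile, as in the text.
k = max(1, len(ranked) // 5)
quintiles = [sorted(ranked[i:i + k]) for i in range(0, len(ranked), k)]
print(quintiles)
```

Reporting only quintile membership, rather than the residuals themselves, reflects the caveats noted above about what the deviations actually capture.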
We conclude by quantifying the drivers of research performance. Specifically, we look at how research publications and their impact (as measured by citations) are related to research expenditure and the policy environment. We first use as an outcome measure the sum of publications and citations. But research expenditure may be financed by industry, and our second measure adds joint publications with industry in an attempt to pick up this industry effect. The expenditure series used as an explanatory variable is research expenditure as a share of GDP (R4). Specifically, the outcome variables are:
PUBS 1 = publications per head (O2) and average citation rates (O3) added together using our earlier weights (scaled to maximum value = 100).
PUBS 2 = PUBS 1 plus the percentage of publications co-authored with industry (C6), again using our earlier weights (the aggregate is scaled to maximum value = 100).
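The construction of these composites can be sketched as a weighted sum rescaled so that the best-performing country scores 100. The weights and scores below are illustrative assumptions, not the report's actual values.

```python
# Hypothetical component scores for three countries: publications per
# head (O2), citation rate (O3), industry co-authorship share (C6).
scores = {
    "X": {"O2": 80.0, "O3": 90.0, "C6": 40.0},
    "Y": {"O2": 60.0, "O3": 70.0, "C6": 95.0},
    "Z": {"O2": 95.0, "O3": 85.0, "C6": 30.0},
}
weights = {"O2": 0.4, "O3": 0.4, "C6": 0.2}  # assumed, for illustration

# Weighted aggregate, then rescale so the maximum country scores 100.
raw = {c: sum(weights[v] * s[v] for v in weights) for c, s in scores.items()}
top = max(raw.values())
pubs2 = {c: 100.0 * r / top for c, r in raw.items()}
print(pubs2)
```

Dropping the C6 term (and renormalising the weights) gives the corresponding PUBS 1 construction.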
The estimated equations, using R4 data from the 2013 rankings, are given below (R4 data for Brazil and Saudi Arabia are not available; all coefficients are significant at the 1 per cent level):
PUBS 1 = -24.8 + 0.715 R4 + 0.676 Environment,  R² = 0.795, n = 48
         (13.3)  (0.068)    (0.168)
PUBS 2 = -15.0 + 0.697 R4 + 0.556 Environment,  R² = 0.742, n = 48
         (14.6)  (0.074)    (0.184)
The results show that research funds and the policy environment explain around three-quarters of the national variations in research performance. At mean values, a 10 per cent increase in research funding is estimated to increase publications and citations (PUBS 1) by nearly 5 per cent.
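The elasticity claim follows from evaluating the linear equation at mean values: the proportional effect of a funding change is the R4 coefficient times the ratio of mean R4 to mean PUBS 1. The mean values below are assumed for illustration, since they are not published here.

```python
# Back-of-envelope elasticity check for a linear regression evaluated
# at sample means. b is the reported coefficient on R4; the means are
# hypothetical placeholders.
b = 0.715            # estimated coefficient on R4 (from the equation above)
mean_r4 = 40.0       # assumed mean R4 score
mean_pubs1 = 60.0    # assumed mean PUBS 1 score

elasticity = b * mean_r4 / mean_pubs1      # dy/dx * (x/y) at the means
effect_of_10pct = 10.0 * elasticity        # per cent change in PUBS 1
print(round(effect_of_10pct, 1))
```

With these assumed means the implied response to a 10 per cent funding increase is a little under 5 per cent, consistent with the statement in the text.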
We can again measure outcomes relative to inputs by looking at deviations around the regression line. When research performance is measured by PUBS 1, the top ten performers relative to inputs are, in alphabetical order, Australia, Belgium, Greece, India, Italy, Slovakia, Slovenia, South Africa, the United Kingdom and the United States. When joint publications with industry are added to the research performance measure (PUBS 2), the top ten countries are Belgium, Croatia, Greece, Hungary, Italy, Korea, Slovakia, Slovenia, South Africa and the United Kingdom; the United States is eleventh.