Background
In 2018, the Australian Research Council (ARC) conducted the fourth Excellence in Research for Australia (ERA) evaluation. The evaluation collected data regarding the quality of research activity undertaken at all eligible higher education research institutions (Appendix 1) during the ERA 2018 reference periods. These data were then evaluated by eight Research Evaluation Committees (RECs), each comprising distinguished and internationally recognised researchers with expertise in research evaluation.
The ERA data—from the current and previous rounds—provides Government, universities, industry, and prospective students with valuable information about research performance in Australian higher education research institutions. For example, ERA data is regularly used to inform a range of policy advice and initiatives across various portfolios of Government. It also assists institutions with their strategic planning and decision making, and helps with their research promotional activities in Australia and internationally. Finally, through ERA, students, industry and other stakeholders have access to rigorous and fine-grained, discipline-based information about research performance that is not readily available through other means.
With this fourth round of ERA now complete, the longitudinal data available is extensive, covering up to 14 years of research activity. It clearly shows the significant growth and improvement in the quality of research produced by Australian higher education research institutions across that time. This National Report provides a comprehensive view of the data collected and the outcomes of the ERA 2018 evaluation process.
Objectives of ERA
ERA aims to identify and promote excellence across the full spectrum of research activity, including both discovery and applied research, within Australian higher education institutions.
The objectives of ERA are to:
- establish an evaluation framework that gives government, industry, business and the wider community assurance of the excellence of research conducted in Australia’s higher education institutions
- provide a national stocktake of discipline-level areas of research strength and areas where there is opportunity for development in Australia’s higher education institutions
- identify excellence across the full spectrum of research performance
- identify emerging research areas and opportunities for further development
- allow for comparisons of Australia’s research nationally and internationally for all discipline areas.
Definition of Research
For the purposes of ERA, research is defined as the creation of new knowledge and/or the use of existing knowledge in a new and creative way so as to generate new concepts, methodologies, inventions and understandings. This could include synthesis and analysis of previous research to the extent that it is new and creative.
Institutions must ensure that all research outputs submitted to ERA meet this definition of research. Outputs that do not meet this definition may be excluded from submissions during the ERA submission process or, where they are not excluded from submissions, their inclusion may adversely affect the quality rating assigned by RECs during the evaluation process.
Fields of Research (FoR) Codes
For the purposes of ERA, disciplines are defined as the two- and four-digit Fields of Research (FoR) codes identified in the Australian and New Zealand Standard Research Classification (ANZSRC) 2008, released by the Australian Bureau of Statistics and Statistics New Zealand. The ANZSRC provides 22 two-digit FoR codes, 157 four-digit FoR codes, and an extensive range of six-digit codes.
The FoR codes as used in ERA 2018 are listed in Appendix 2. ERA undertakes evaluation at both the two- and four-digit FoR code level. Institutions submitted data to ERA at the four-digit level and these were aggregated to form the two- and four-digit Units of Evaluation (UoEs).
The two-digit FoR code is the highest level of the ANZSRC hierarchy; it relates to a broad discipline field, for example, Physical Sciences (02) or History and Archaeology (21). A two-digit FoR code consists of a collection of related four-digit FoR codes.
The four-digit FoR code is the second level of the ANZSRC hierarchy and relates to a specific discipline field within a two-digit FoR code, for example, Astronomical and Space Sciences (0201) or Archaeology (2101).
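Because the hierarchy is positional, aggregating four-digit submissions into two-digit UoEs is a simple roll-up by the first two digits of each code. The following Python sketch illustrates this using the two example codes above; the apportioned counts are invented for illustration and are not ERA data:

```python
# A minimal sketch of the ANZSRC FoR hierarchy as used in ERA: a four-digit
# code's two-digit parent is simply its first two digits. All counts below
# are hypothetical.
from collections import defaultdict

# Hypothetical apportioned output counts submitted at the four-digit level.
four_digit_outputs = {
    "0201": 72.5,   # Astronomical and Space Sciences
    "2101": 55.25,  # Archaeology
}

def parent_code(four_digit: str) -> str:
    """Return the two-digit parent FoR code (the first two digits)."""
    return four_digit[:2]

# Roll four-digit submissions up into two-digit Units of Evaluation.
two_digit_outputs = defaultdict(float)
for code, count in four_digit_outputs.items():
    two_digit_outputs[parent_code(code)] += count

print(dict(two_digit_outputs))  # {'02': 72.5, '21': 55.25}
```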
ERA 2018 Reference Periods
ERA 2018 data was collected for defined reference periods. The research outputs reference period covered the six years from 1 January 2011 to 31 December 2016; research income and applied measures were collected for the three years from 1 January 2014 to 31 December 2016; and eligible researchers were counted at the staff census date of 31 March 2017.
ERA 2018 Evaluation Process
ERA 2018 Research Evaluation Committees (RECs)
The ERA 2018 evaluations were undertaken across the following eight Research Evaluation Committees (RECs):
- Biological and Biotechnological Sciences (BB)
- Humanities and Creative Arts (HCA)
- Economics and Commerce (EC)
- Education and Human Society (EHS)
- Engineering and Environmental Sciences (EE)
- Mathematical, Information and Computing Sciences (MIC)
- Medical and Health Sciences (MHS)
- Physical, Chemical and Earth Sciences (PCE).
See Appendix 2 for a full list of FoR codes and RECs for ERA 2018 purposes.
ERA 2018 Indicators
ERA is based on the principle of expert review informed by indicators. The ERA 2018 evaluations undertaken by RECs were informed by three broad categories of indicators:
- Indicators of research quality—considered on the basis of a publishing profile, citation analysis, ERA peer review, and peer reviewed Australian and international research income
- Indicators of research activity—considered on the basis of research outputs, research income and other research items within the context of the profile of eligible researchers
- Indicators of research application—considered on the basis of research commercialisation income, patents, plant breeder’s rights, registered designs, and National Health and Medical Research Council (NHMRC) Endorsed Guidelines. Some other measures, such as publishing behaviour and some other categories of research income, can also provide information about research application.
The ERA indicators are underpinned by the ERA Indicator Principles, which were developed by the ARC in accordance with international best practice, informed by the ERA Indicator Development Group, and refined through analytical testing of data from the Australian higher education sector. These eight principles have guided the development of the indicator suite. In addition, at all times throughout the ERA development process, the ARC has been cognisant of the burden of data collection placed on submitting institutions. Each of the ERA indicators has regard to the following criteria:
- Quantitative—objective measures that follow a defined methodology and will reliably produce the same result, regardless of when and by whom the methodology is applied.
- Internationally recognised—while not all indicators will allow for direct international comparability, the indicators must be internationally recognised measures of research quality. Indicators must be sensitive to a range of research types, including research relevant to different audiences (e.g. practitioner-focused, internationally relevant, and nationally and regionally focused research). ERA will include research published in non-English language publications.
- Comparable to indicators used for other disciplines—while ERA evaluation processes will not make direct comparisons across disciplines, indicators must be capable of identifying comparable levels of research quality across disciplines.
- Able to be used to identify excellence—indicators must be capable of assessing the quality of research, and where necessary, focused to identify excellence.
- Research relevant—indicators must be relevant to the research component of any discipline.
- Repeatable and verifiable—indicators must be repeatable and based on transparent and publicly available methodologies. This should allow institutions to reproduce the methodology in-house. All data submitted to ERA must be auditable and reconcilable.
- Time-bound—indicators must be specific to a particular period of time as defined by the reference period. ERA does not assess research activity outside of the reference period other than to the extent it results in the triggering of an indicator during the reference period.
- Behavioural impact—indicators should drive responses in a desirable direction and not result in perverse unintended consequences. They should also limit the scope for special interest groups or individuals to manipulate the system to their advantage.
The ERA indicator suite was developed to align with the research behaviours of each discipline. For this reason, there are differences in the selection of indicators. The indicators that apply to each discipline (as defined by two- or four-digit FoRs) are shown in the ERA 2018 Discipline Matrix.
Unit of Evaluation (UoE)
The Unit of Evaluation for ERA is the research discipline for each institution as defined by FoR codes. Evaluations occurred at the two- and four-digit FoR code levels for UoEs that met the low volume threshold. UoEs do not correspond to named disciplines, departments or research groups within an institution.
National-level profiles of disciplines aggregated across institutions at the two- and four-digit FoR code level include information from all submitting institutions, including from those which did not meet the low volume threshold and were therefore not assessed.
Low Volume Threshold
An institution is evaluated in ERA in a two- or four-digit discipline only if the number of research outputs submitted reaches the low volume threshold, ensuring there is a meaningful level of data to evaluate.
For disciplines where citation analysis was used, the low volume threshold was 50 apportioned indexed journal articles. No evaluation was conducted for the FoR at a given institution if the submitted number of apportioned indexed journal articles over the six-year research outputs reference period was fewer than 50 in any two- or four-digit FoR.
For disciplines where peer review was used, the low volume threshold was 50 apportioned weighted outputs. For these disciplines, books were given an effective weighting of 5:1 compared with other research outputs. Books were weighted only for the purposes of determining the low volume threshold; in all other instances in ERA, a book was regarded as a single research output. Portfolios were counted as one output for the purposes of determining the low volume threshold. No evaluation was conducted for an FoR at a given institution where, over the six-year research outputs reference period, fewer than the equivalent of 50 apportioned weighted research outputs were submitted.
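The two threshold tests can be summarised in a short sketch. The record structure and field names below are hypothetical, not the ERA submission format:

```python
# Sketch of the low volume threshold logic described above.
LOW_VOLUME_THRESHOLD = 50.0
BOOK_WEIGHT = 5.0  # books count 5:1, but only for the threshold test

def meets_threshold(outputs: list, uses_citation_analysis: bool) -> bool:
    """Return True if a UoE's submitted outputs reach the low volume threshold.

    Each output is a dict like {"type": "book", "apportionment": 0.5};
    'apportionment' is the institution's share of the output.
    """
    if uses_citation_analysis:
        # Citation disciplines: count only apportioned indexed journal articles.
        total = sum(o["apportionment"] for o in outputs
                    if o["type"] == "journal_article" and o.get("indexed", False))
    else:
        # Peer review disciplines: weight books 5:1; portfolios and all other
        # output types count as a single (apportioned) output.
        total = sum(o["apportionment"] * (BOOK_WEIGHT if o["type"] == "book" else 1.0)
                    for o in outputs)
    return total >= LOW_VOLUME_THRESHOLD

# Hypothetical UoE: 9 whole books plus 9 half-apportioned indexed articles.
uoe = [{"type": "book", "apportionment": 1.0},
       {"type": "journal_article", "indexed": True, "apportionment": 0.5}] * 9
print(meets_threshold(uoe, uses_citation_analysis=False))  # 9*(5.0+0.5)=49.5 -> False
```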
For some FoRs at some institutions, there was insufficient research volume to undertake a valid analysis at the four-digit FoR level, but sufficient research volume at the two-digit FoR level. In these instances, evaluation took place at the two-digit FoR level only.
Where the low volume threshold was not met, the UoE for a given institution is reported as 'n/a', i.e. 'not assessed due to low volume'. This means that data submitted on research outputs, research income, and applied measures for the relevant two- or four-digit FoR for that institution was collected but not evaluated under ERA 2018. The institution was therefore not considered research active in that discipline for the purposes of ERA 2018. However, the data submitted still contributed to the construction of the ERA benchmarks, and all ERA data was aggregated for national-level reporting irrespective of whether any FoRs within a specific institution met the low volume threshold.
ERA Rating Scale
ERA utilises a five-point rating scale. The rating scale is broadly consistent with the approach taken in research evaluation processes in other countries to allow for international comparison.
Notes on the Rating Scale
- 'World Standard' refers to a quality standard. It does not refer to the nature or geographical scope of particular subjects, the locus of research, or its place of dissemination.
- Each point within the rating scale represents a quality ‘band’. For example, one UoE might be rated highly within the '4' band and another rated lower within the same band, but the rating for both will be a '4'. Only whole ratings are given (not 4.2, 4.5, etc.).
- The 'banding' of quality ratings assists RECs in determining a final rating (illustrated in the sketch following these notes). If, for example, a Unit of Evaluation has a preliminary rating at the top margin of the '4' band based on the assessment of the quality of the research outputs, other indicators (e.g. income measures) may be sufficient to raise the rating into the '5' band. The lack of such indicators will not, however, be used to lower a rating.
- The ERA evaluation measures research quality, not scale or productivity. Volume information is presented to the RECs for the purposes of providing context to the research.
- The methodology and rating scale allow for UoEs with different volumes of output to achieve the same rating. So, for example, a UoE with a small number of outputs can achieve a rating of 5 where it meets the standard for that rating point, just as a UoE with a large number of outputs can.
- Each UoE is assessed against the absolute standards of the rating scale, not against other UoEs. One of the key objectives of ERA is to identify excellence across the full spectrum of research performance.
- REC members exercise their knowledge, judgment and expertise to reach a single rating for each UoE. In reaching a rating, RECs take account of all of the supporting evidence which is submitted for the UoE. RECs do not make comment about the contributions of individual researchers.
- The rating for each UoE reflects the REC’s expert and informed view of the characteristics of the UoE as a whole. In all cases the quality judgments relate to all of the evidence, including the entire indicator suite, and the ERA rating scale. In order to achieve a rating at a particular point on the scale, the majority of the output from the UoE will normally be expected to meet the standard for that rating point. Experience has demonstrated that there is normally a variety of quality within a UoE.
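Purely as an illustration of the banding note above, the uplift rule might be sketched as follows. The actual ERA rating is an expert judgment by the REC, not the output of a formula:

```python
# Illustrative only: supporting indicators can lift a preliminary rating
# into the next band, but their absence never lowers it.
def final_rating(preliminary: int, indicators_support_uplift: bool) -> int:
    """Return a whole-number rating on the five-point scale."""
    if indicators_support_uplift and preliminary < 5:
        return preliminary + 1   # e.g. top of the '4' band lifted to '5'
    return preliminary           # never lowered for lack of indicators
```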
Additional Reporting for ERA 2018
Gender Data
Institutions were required to submit gender data for each eligible researcher. Gender data was used for aggregate reporting and analysis purposes only. This data was not made available to peer reviewers or Research Evaluation Committees (RECs) and did not form part of the evaluation process.
Open Access
Institutions were required to state whether a research output is available in an open access repository. Open access data was used for aggregate reporting and analysis purposes only. This data was not made available to peer reviewers or Research Evaluation Committees (RECs) and did not form part of the evaluation process.
For the purposes of ERA 2018, an open access repository is defined in accordance with the ARC's Open Access Policy.
Key ERA 2018 Documents
There are several documents that provide more detailed information about various aspects of the ERA 2018 evaluation. These include:
- ERA 2018 Submission Guidelines—provide guidance to institutions about ERA 2018 submission rules and components
- ERA 2018 Discipline Matrix—shows the indicators that apply to each FoR code
- ERA 2018 Evaluation Handbook—provides detailed information about the ERA 2018 indicators, evaluation approach and process
- ERA 2018 Submission Journal List—lists the journals eligible for submission to ERA 2018
See ERA 2018 key documents for further information.
Use of the ERA National Report
The ERA National Report presents the data submitted to ERA 2018 as part of a comprehensive, discipline-level assessment of research quality and research activity within Australia’s higher education institutions.
Coverage
ERA retrospectively evaluates the quality of research conducted within the specific reference periods (as shown previously). As the ERA 2018 research outputs reference period ended on 31 December 2016, research quality may have changed since that time.
Comparison across Data Items
Each UoE is assessed against the ERA rating scale. As no comparisons are made between UoEs, ERA ratings cannot be used as a ranking device. Further, as each ERA rating point might include a range of performances, and the gap between rating points is not defined, it is not appropriate to average ratings even within disciplines.
ERA has been designed to provide flexibility for, and recognition of, discipline-specific research behaviours at both the two- and four-digit FoR levels rather than comparison between disciplines or disciplinary clusters. ERA evaluations are conducted by discipline experts interpreting the indicators for each UoE in the context of their own expert knowledge of the discipline. Different indicators apply to each discipline, as outlined in the ERA 2018 Discipline Matrix. For this reason it is not appropriate to make productivity statements about or comparisons between disciplines.
Where possible, the data presented in this report is de-duplicated. It therefore does not represent the exact data submitted to ERA 2018 for the purposes of evaluation, which potentially contained duplicate outputs submitted by multiple institutions.
Please note: numbers are generally rounded to one decimal place in tables throughout this report and income is rounded to whole dollars. Totals may differ from the sum of their parts due to rounding, as illustrated below.
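A small, purely hypothetical example of how rounding can make a table's parts disagree with its total:

```python
# Hypothetical values (in $ millions) showing why rounded parts
# may not add up to the rounded total.
parts = [10.14, 10.14, 10.14]
rounded_parts = [round(p, 1) for p in parts]   # each displays as 10.1
sum_of_rounded = round(sum(rounded_parts), 1)  # 30.3
rounded_total = round(sum(parts), 1)           # 30.4
print(sum_of_rounded, rounded_total)           # 30.3 30.4
```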