In 2018, the Australian Research Council (ARC) conducted the fourth Excellence in Research for Australia (ERA) evaluation. The evaluation collected data regarding the quality of research activity undertaken at all eligible higher education research institutions (see Appendix 1) during the ERA 2018 reference periods. These data were then evaluated by eight Research Evaluation Committees (RECs), each comprising distinguished and internationally recognised researchers with expertise in research evaluation.
The ERA data—from the current and previous rounds—provides Government, universities, industry, and prospective students with valuable information about research performance in Australian higher education research institutions. For example, ERA data is regularly used to inform a range of policy advice and initiatives across various portfolios of Government. It also assists institutions with strategic planning and decision making, and supports their research promotion activities in Australia and internationally. Finally, through ERA, students, industry and other stakeholders have access to rigorous and fine-grained, discipline-based information about research performance that is not readily available through other means.
With this fourth round of ERA now complete, the longitudinal data available is extensive, covering up to 14 years of research activity. It clearly shows the significant growth and improvement in the quality of research produced by Australian higher education research institutions over that time. Drawing on this longitudinal data, extended analyses on selected topics aim to provide further insight into the Australian higher education research sector.
Objectives of ERA
ERA aims to identify and promote excellence across the full spectrum of research activity, including both discovery and applied research, within Australian higher education institutions.
The objectives of ERA are to:
- establish an evaluation framework that gives government, industry, business and the wider community assurance of the excellence of research conducted in Australia's higher education institutions
- provide a national stocktake of discipline-level areas of research strength and areas where there is opportunity for development in Australia's higher education institutions
- identify excellence across the full spectrum of research performance
- identify emerging research areas and opportunities for further development
- allow for comparisons of Australia's research nationally and internationally for all discipline areas.
Definition of Research
For the purposes of ERA, research is defined as the creation of new knowledge and/or the use of existing knowledge in a new and creative way so as to generate new concepts, methodologies, inventions and understandings. This could include synthesis and analysis of previous research to the extent that it is new and creative.
Fields of Research (FoR) Codes
For the purposes of ERA, disciplines are defined as two- and four-digit Field of Research (FoR) codes as identified in the Australian and New Zealand Standard Research Classification (ANZSRC) 2008 released by the Australian Bureau of Statistics and Statistics New Zealand. The ANZSRC provides 22 two-digit FoR codes, 157 four-digit FoR codes, and an extensive range of six-digit codes.
The FoR codes as used in ERA 2018 are listed in Appendix 2. ERA undertakes evaluation at both the two- and four-digit FoR code level. Institutions submitted data to ERA at the four-digit level and these were aggregated to form the two- and four-digit Units of Evaluation (UoEs).
The two-digit FoR code is the highest level of the ANZSRC hierarchy; it relates to a broad discipline field, for example, Physical Sciences (02) or History and Archaeology (21). A two-digit FoR code consists of a collection of related four-digit FoR codes.
The four-digit FoR code is the second level of the ANZSRC hierarchy and relates to a specific discipline field within a two-digit FoR code, for example, Astronomical and Space Sciences (0201) or Archaeology (2101).
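The hierarchy described above implies that the two-digit parent of any four-digit FoR code is simply its first two digits, which is how four-digit submissions roll up to the broader discipline level. A minimal sketch of this relationship, using only the example codes named in the text:

```python
from collections import defaultdict

# Illustrative sketch of the ANZSRC 2008 FoR hierarchy described above.
# Only the example codes named in the text are included here.
two_digit = {
    "02": "Physical Sciences",
    "21": "History and Archaeology",
}

four_digit = {
    "0201": "Astronomical and Space Sciences",
    "2101": "Archaeology",
}

def parent(code: str) -> str:
    """Return the two-digit FoR code that contains a four-digit FoR code."""
    return code[:2]

# Aggregating four-digit codes under their two-digit parent, as is done
# when four-digit submissions are rolled up to the two-digit level.
by_parent = defaultdict(list)
for code in four_digit:
    by_parent[parent(code)].append(code)

print(by_parent["02"])  # ['0201']
```

This is only an illustration of the classification structure; the actual ERA aggregation of submission data into Units of Evaluation involves much more than the code mapping shown here.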
ERA 2018 Reference Periods
ERA 2018 Evaluation Process
ERA 2018 Indicators
ERA is based on the principle of expert review informed by indicators. The ERA 2018 evaluations undertaken by RECs were informed by three broad categories of indicators:
- Indicators of research quality—considered on the basis of a publishing profile, citation analysis, ERA peer review, and peer reviewed Australian and international research income
- Indicators of research activity—considered on the basis of research outputs, research income and other research items within the context of the profile of eligible researchers
- Indicators of research application—considered on the basis of research commercialisation income, patents, plant breeder’s rights, registered designs, and National Health and Medical Research Council (NHMRC) Endorsed Guidelines. Some other measures, such as publishing behaviour and some other categories of research income, can also provide information about research application.
The ERA indicators are underpinned by the ERA Indicator Principles, which were developed by the ARC in accordance with international best practice and informed by the ERA Indicator Development Group and by analytical testing of data from the Australian higher education sector. The eight principles listed below guided the development of the indicator suite. In addition, throughout the ERA development process the ARC has remained cognisant of the data-collection burden placed on submitting institutions. Each of the ERA indicators has regard to the following criteria:
- Quantitative—objective measures based on a defined methodology that will reliably produce the same result, regardless of when and by whom the methodology is applied.
- Internationally recognised—while not all indicators will allow for direct international comparability, the indicators must be internationally recognised measures of research quality. Indicators must be sensitive to a range of research types, including research relevant to different audiences (e.g. practitioner-focused, internationally relevant, nationally- and regionally-focused research). ERA will include research published in non-English language publications.
- Comparable to indicators used for other disciplines—while ERA evaluation processes will not make direct comparisons across disciplines, indicators must be capable of identifying comparable levels of research quality across disciplines.
- Able to be used to identify excellence—indicators must be capable of assessing the quality of research, and where necessary, focused to identify excellence.
- Research relevant—indicators must be relevant to the research component of any discipline.
- Repeatable and verifiable—indicators must be repeatable and based on transparent and publicly available methodologies. This should allow institutions to reproduce the methodology in-house. All data submitted to ERA must be auditable and reconcilable.
- Time-bound—indicators must be specific to a particular period of time as defined by the reference period. ERA does not assess research activity outside of the reference period other than to the extent it results in the triggering of an indicator during the reference period.
- Behavioural impact—indicators should drive responses in a desirable direction and not result in perverse unintended consequences. They should also limit the scope for special interest groups or individuals to manipulate the system to their advantage.
The ERA indicator suite was developed to align with the research behaviours of each discipline. For this reason, there are differences in the selection of indicators. The indicators that apply to each discipline (as defined by two- or four-digit FoRs) are shown in the ERA 2018 Discipline Matrix.
Unit of Evaluation (UoE)
The Unit of Evaluation for ERA is the research discipline for each institution as defined by FoR codes. Evaluations occurred at the two- and four-digit FoR code levels for UoEs that met the low volume threshold. UoEs do not correspond to named disciplines, departments or research groups within an institution.
National-level profiles of disciplines aggregated across institutions at the two- and four-digit FoR code level include information from all submitting institutions, including from those which did not meet the low volume threshold and were therefore not assessed.
ERA Rating Scale
ERA utilises a five-point rating scale. The rating scale is broadly consistent with the approach taken in research evaluation processes in other countries to allow for international comparison.
Notes on the Rating Scale
- 'World Standard' refers to a quality standard. It does not refer to the nature or geographical scope of particular subjects, nor to the locus of research or its place of dissemination.
- Each point within the rating scale represents a quality 'band'. For example, one UoE might be rated highly within the '4' band and another rated lower within the same band, but the rating for both will be a '4'. Only whole-number ratings are given (not 4.2, 4.5, etc.).
- The 'banding' of quality ratings assists RECs in determining a final rating. If, for example, a Unit of Evaluation has a preliminary rating at the top margin of the '4' band based on the assessment of the quality of the research outputs, other indicators (e.g. income measures) may be sufficient to raise the rating into the '5' band. The lack of such indicators will not, however, be used to lower a rating.
- The ERA evaluation measures research quality, not scale or productivity. Volume information is presented to the RECs for the purposes of providing context to the research.
- The methodology and rating scale allow for UoEs with different volumes of output to achieve the same rating. So, for example, a UoE with a small number of outputs can achieve a rating of 5 where the UoE meets the standard for that rating point, similar to a UoE with a large number of outputs.
- Each UoE is assessed against the absolute standards of the rating scale, not against other UoEs. One of the key objectives of ERA is to identify excellence across the full spectrum of research performance.
- REC members exercise their knowledge, judgment and expertise to reach a single rating for each UoE. In reaching a rating, RECs take account of all of the supporting evidence which is submitted for the UoE. RECs do not make comment about the contributions of individual researchers.
- The rating for each UoE reflects the REC's expert and informed view of the characteristics of the UoE as a whole. In all cases the quality judgments relate to all of the evidence submitted, the entire indicator suite and the ERA rating scale. In order to achieve a rating at a particular point on the scale, the majority of the output from the UoE will normally be expected to meet the standard for that rating point. Experience has demonstrated that there is normally a range of quality within a UoE.
Key ERA 2018 Documents
There are several documents that provide more detailed information about various aspects of the ERA 2018 evaluation. These include:
- ERA 2018 Submission Guidelines—provide guidance to institutions about ERA 2018 submission rules and components
- ERA 2018 Discipline Matrix—shows the indicators that apply to each FoR code
- ERA 2018 Evaluation Handbook—provides detailed information about the ERA 2018 indicators, evaluation approach and process
- ERA 2018 Journal List—lists the journals eligible for inclusion in ERA 2018 submissions
See ERA 2018 key documents for further information.