Anchoring effects in the assessment of papers
In this study, we empirically examine the assessment of cited papers within the framework of the anchoring-and-adjustment heuristic. We are interested in whether citation decisions are (mainly) driven by the quality of the cited references.
We will survey corresponding authors with an available email address in the Web of Science database, asking them to assess the quality of papers they cited in previous papers. Some authors will be assigned to one of three treatment groups, each receiving additional information alongside the cited paper: citation information, information on the publishing journal (its journal impact factor), or a numerical access code for entering the survey. The control group will not receive any additional information.
In the analyses, we estimate how (strongly) respondents adjust their quality assessments of the cited papers toward the anchor value (citation, journal, or access code). We are thus interested in whether such adjustments can be produced not only by quality-related information (citation or journal) but also by a number unrelated to quality, i.e., the access code.
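The planned comparison can be sketched as follows. This is a minimal illustration, not the study's actual analysis: the group names, sample sizes, rating scale, and simulated effect sizes are all assumptions made for the example, and the adjustment toward each anchor is approximated as the difference in mean ratings between a treatment group and the control group.

```python
import random
from statistics import mean

random.seed(42)

# Hypothetical simulated ratings on a 1-5 quality scale; the shifts per
# group are invented for illustration and carry no empirical meaning.
groups = ["control", "citation", "journal", "access_code"]
shift = {"control": 0.0, "citation": 0.4, "journal": 0.3, "access_code": 0.1}

ratings = {
    g: [min(5, max(1, round(random.gauss(3 + shift[g], 1)))) for _ in range(200)]
    for g in groups
}

control_mean = mean(ratings["control"])
for g in groups[1:]:
    # Adjustment toward the anchor, estimated as the mean difference
    # between the treatment group and the control group.
    diff = mean(ratings[g]) - control_mean
    print(f"{g}: mean={mean(ratings[g]):.2f}, diff vs control={diff:+.2f}")
```

In the actual study, such group differences would be tested with appropriate inferential statistics (e.g., a regression of ratings on treatment-group indicators) rather than raw mean comparisons.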
The results of the study may have important implications for researchers' quality assessments of papers and for the role of numbers, citations, and journal metrics in assessment processes.
1 Science Policy and Strategy Department, Administrative Headquarters of the Max Planck Society, Munich,
Germany, 2 Department of Sociology