

Research – Research cluster





Evaluation Practices in Science and Higher Education

Evaluation practices take centre stage in science because continuous assessments of scientific quality guarantee science's claim to produce universally accepted knowledge. Scientific knowledge production has therefore always depended on measures of quality assurance conducted by the scientific community itself, rather than relying on external assessments (Hirschauer 2004; Hirschauer 2005; Lamont 2009; Reinhart 2012; Pontille and Torny 2015). Though peer review has received a fair share of criticism (Reinhart 2012: 49ff.), it has nonetheless been recognized as the gold standard of scientific quality assessment (Weingart 2001: 284ff.).

Yet scientific peer review is not as homogeneous as the debates about it might suggest. First, peer review features in a number of different contexts, including manuscript selection at journals, funding decisions, university appointment committees, evaluations of university performance, and academic social media platforms. Second, procedures may vary considerably within these contexts: for instance, with regard to reviewer selection, the (technological) infrastructure used, the indicators or metrics applied, or the regimes of visibility governing reviewers and reviewees.

In light of this diversity of evaluation procedures, we tackle the question of how scientific quality assurance is interpreted and practised across different situations and contexts by looking at (1) the practical organization of evaluation procedures in science and higher education, (2) the conceptions of scientific quality implied in these procedures, and (3) the resulting consequences for scientific knowledge production.

In doing so, we employ approaches from the sociology of valuation and evaluation (Lamont 2012): this "booming field" (Meer and Lamont 2016: 8) addresses questions such as how the value of material and immaterial goods, of specific practices and routine actions, and of individuals' performance or characteristics is determined in the context of different orders of worth (Boltanski and Thévenot 2006), or through comparison (Heintz 2016) with each other or with a commonly accepted norm (Krüger and Reinhart 2016). This research perspective offers a wide range of linkages to recent literature on evaluation practices in science and higher education, such as university rankings (Espeland and Sauder 2007) and the underlying measurement of scientific performance (Rushforth and De Rijcke 2015; De Rijcke et al. 2016), or the valuation of specific procedures and evaluative categories in peer review processes (Lamont 2009; Reinhart 2012). In addition, it offers ways to contribute to the existing literature by explicitly focusing on crucial practices, situations and (technological) infrastructures of value attribution and value assessment, as well as on underlying orders of worth and their reflection by the actors themselves (Krüger and Reinhart 2017).

Sources

  • Boltanski, L., & Thévenot, L. (1999). The Sociology of Critical Capacity. European Journal of Social Theory 2 (3), pp. 359–377.
  • De Rijcke, S., Wouters, P., Rushforth, A., Franssen, T., & Hammarfelt, B. (2016). Evaluation practices and effects of indicator use – a literature review. Research Evaluation 25 (2), pp. 161–169.
  • Espeland, W., & Sauder, M. (2007). Rankings and Reactivity: How Public Measures Recreate Social Worlds. American Journal of Sociology 113 (1), pp. 1–40.
  • Heintz, B. (2016). "Wir leben im Zeitalter der Vergleichung". Perspektiven einer Soziologie des Vergleichs. Zeitschrift für Soziologie 45 (5), pp. 305–323.
  • Hirschauer, S. (2004). Peer Review Verfahren auf dem Prüfstand. Zum Soziologiedefizit der Wissenschaftsevaluation. Zeitschrift für Soziologie 33 (1), pp. 62–83.
  • Hirschauer, S. (2005). Publizierte Fachurteile. Lektüre und Bewertungspraxis im Peer Review. Soziale Systeme 11 (1), pp. 52–82.
  • Krüger, A., & Reinhart, M. (2016). Wert, Werte und (Be)Wertungen. Eine erste begriffs- und prozesstheoretische Sondierung der aktuellen Soziologie der Bewertung. Berliner Journal für Soziologie 26 (3–4), pp. 485–500.
  • Krüger, A., & Reinhart, M. (2017). Theories of Valuation. Building Blocks for Conceptualizing Valuation between Practice and Structure. In Krenn, K. (Ed.), Markets and Classifications. Special Issue. Historical Social Research 42 (1), pp. 263–285.
  • Lamont, M. (2009). How Professors Think. Inside the Curious World of Academic Judgment. Cambridge, Mass.: Harvard University Press.
  • Lamont, M. (2012). Toward a Comparative Sociology of Valuation and Evaluation. Annual Review of Sociology 38 (1), pp. 201–221.
  • Meer, N., & Lamont, M. (2016). Michèle Lamont: A Portrait of a Capacious Sociologist. Sociology 50 (5), pp. 1012–1022.
  • Pontille, D., & Torny, D. (2015). From Manuscript Evaluation to Article Valuation. The Changing Technologies of Journal Peer Review. Human Studies 38, pp. 57–79.
  • Reinhart, M. (2012). Soziologie und Epistemologie des Peer Review. Baden-Baden: Nomos.
  • Rushforth, A., & De Rijcke, S. (2015). Accounting for Impact? The Journal Impact Factor and the Making of Biomedical Research in the Netherlands. Minerva 53 (2), pp. 117–139.
  • Weingart, P. (2001). Die Stunde der Wahrheit? Zum Verhältnis der Wissenschaft zu Politik, Wirtschaft und Medien in der Wissensgesellschaft. Weilerswist: Velbrück Wissenschaft.

This research cluster therefore aims to contribute (1) to the comparative empirical analysis of evaluation procedures in science and higher education and (2) to current theoretical debates within the sociology of valuation and evaluation.

Related projects:

Publications:

* = with peer review

  • Reinhart, M. (2017).
    Policing Misconduct: More Data Needed on Scientific Misconduct. Nature, 549(7673), 458. *
  • Kleimann, B., & Klawitter, M. (2017).
    An Analytical Framework for Evaluation-based Decision-making Procedures in Universities. In Huisman, J., & Tight, M. (Eds.), Theory and Method in Higher Education. Bingley: Emerald Publishing, pp. 39–57.
  • Schendzielorz, C., Hoffmeister, A., & Marguin, S. (2017).
    Feldnotizen 2.0. Digitalität in der ethnographischen Beobachtungspraxis. Wie Digitalität die Geisteswissenschaften verändert. Neue Forschungsgegenstände und Methoden, Zeitschrift für digitale Geisteswissenschaften (ZfdG) (Sonderband 3) [in press]. *


Presentations:

* = with peer review

  • Hartstein, J., Isigkeit, T., & Sörgel, F. (2017, December).
    Algorithmic science evaluation and power structure: the discourse on strategic citation and 'citation cartels'. Presentation at the 34th Chaos Communication Congress (34C3), 27–30 December 2017, Messe Leipzig.
  • Niggemann, F., & Oberschelp, A. (2017, November).
    Verfahren der Leistungsmessung in der Lehre – Deutschland im internationalen Vergleich. Presentation at the annual conference "Netzwerk Wissenschaftsmanagement", Wissenschaftszentrum Bonn.
  • Heßelmann, F., & Krüger, A. (2017, November).
    Sichtbarkeit in Bewertungsverfahren am Beispiel des Journal Peer Review. Presentation at the conference "Kulturen der Bewertung" of the DGS sections Kultursoziologie and Wissenssoziologie, Cologne.


Start of the project: 01-Sept-2017
