May 12, 2025
Education and research

How public judgment is redirecting research priorities

New research shows how non-expert evaluations influence scientific funding decisions, reshaping the future of science.
[Illustration: a balance scale weighing "Expert Judgment" (a scientist in a lab coat under a microscope symbol) against "Societal Impact" (three diverse people, one raising a fist).]


In their 2024 book How and When to Involve Crowds in Scientific Research, ESMT Professor Henry Sauermann and coauthor Marion K. Poetz (Copenhagen Business School) laid out a practical, research-based guide for scientists eager to engage the public. Whether through data collection or co-creation of research questions, Sauermann and Poetz demonstrated that broad public involvement – commonly referred to as crowd science – can drive scientific discovery while fostering transparency and trust.

Yet one question remained largely unexplored: Can citizens also judge science itself?

A new study published in Research Policy provides fresh insights. Authored by Sauermann, Chiara Franzoni (Politecnico di Milano), and Diletta Di Marco (Politecnico di Milano), the research investigates how non-experts evaluate scientific proposals and influence which projects receive funding.

The findings challenge traditional notions of expertise, equity, and public participation – and signal new opportunities and pitfalls for research funding as citizens gain influence.

From contributors to evaluators

“Our book was really about how scientists can open research to contributions from non-professionals,” says Sauermann, who holds the ESMT Chair in Entrepreneurship at ESMT Berlin. “This new study looks at what happens when citizens go beyond contributing and start making evaluative judgments traditionally reserved for experts.”

The researchers analyzed data from more than 2,300 crowd members who reviewed four scientific proposals in the medical sciences, biology, and social sciences. Participants first rated the proposals on criteria such as scientific merit, team capabilities, and potential societal impact. They then gave a final verdict through two different mechanisms: recommending projects for funding to an agency and donating their own real money to projects they felt were worth pursuing.
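To make the two verdict mechanisms concrete, here is a minimal sketch in Python. It is an illustration only: the field names, rating scale, and numbers are hypothetical, not data or code from the study.

```python
# Illustrative sketch only; names, scales, and numbers are hypothetical,
# not data from the study.
from statistics import mean

# Each crowd member rates a proposal on the criteria described above
# (here on a 1-7 scale) and then gives a verdict via both mechanisms.
reviews = [
    {"merit": 6, "team": 5, "impact": 7, "recommend": True,  "donation_eur": 5.0},
    {"merit": 4, "team": 4, "impact": 3, "recommend": False, "donation_eur": 0.0},
    {"merit": 5, "team": 6, "impact": 6, "recommend": True,  "donation_eur": 2.5},
]

# Mechanism 1: share of the crowd recommending that an agency fund the project.
recommendation_rate = mean(r["recommend"] for r in reviews)

# Mechanism 2: real money pledged, i.e., costly "skin in the game" support.
total_donated = sum(r["donation_eur"] for r in reviews)

print(f"Recommendation rate: {recommendation_rate:.0%}")
print(f"Total donated: EUR {total_donated:.2f}")
```

The contrast matters because the second mechanism is costly: recommending is cheap, while donating reveals how much a participant actually values a project.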

The study revealed how laypeople evaluate scientific research proposals.

Judging by impact – and by wallet

The study found that crowd participants did not make random or purely emotional decisions but carefully weighed different aspects of the proposals. They placed considerable weight on scientific merit – much as expert reviewers do. But they placed similarly high weight on societal impact – a criterion that experts find notoriously difficult to judge and that tends to receive less weight in traditional evaluation processes.

However, the data revealed important nuances:

  • Personal relevance played a key role. Citizens tended to support projects they found personally meaningful or interesting. This appears to partly reflect that they overestimate the societal impact of topics they find personally relevant.
  • Wealth and education levels affected participation in crowdfunding. Those with higher income and educational attainment were more likely to voice their preferences through the crowdfunding mechanism – which required them to spend their own money. This inequality did not arise under the recommendation mechanism.
  • Patterns of crowd evaluations differ across contexts and are difficult to predict. For example, societal impact played a greater role than scientific merit when crowd members decided whether to donate to a medical proposal but played a smaller role when they evaluated a social science proposal.

These results highlight how public evaluations can surface different priorities than expert assessments, offering both opportunities to align science with societal values and challenges related to potential biases and inequalities.
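As a stylized illustration of what such criterion "weights" mean, one can regress evaluators' verdicts on their criterion ratings and read the coefficients as implicit decision weights. The sketch below uses entirely synthetic data; the paper's actual models are more involved.

```python
# Synthetic illustration of "decision weights"; all numbers are invented
# and the study's actual models are more involved.
import numpy as np

rng = np.random.default_rng(0)
n = 500

# Standardized criterion ratings for n simulated crowd evaluators.
merit = rng.normal(size=n)
impact = rng.normal(size=n)
relevance = rng.normal(size=n)  # personal relevance of the topic

# Assume, for illustration, that verdicts load on merit and impact about
# equally, with personal relevance also leaking into the judgment.
verdict = 0.4 * merit + 0.4 * impact + 0.2 * relevance + rng.normal(scale=0.5, size=n)

# Least squares recovers the implicit weight each criterion carries.
X = np.column_stack([merit, impact, relevance])
weights, *_ = np.linalg.lstsq(X, verdict, rcond=None)
for name, w in zip(["scientific merit", "societal impact", "personal relevance"], weights):
    print(f"{name:>18}: {w:+.2f}")
```

In this simulation, merit and impact carry roughly equal weight and personal relevance a smaller one – mirroring, in caricature, the pattern described above.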

Rethinking expertise – and the future of funding

In the book, the authors discussed five main reasons to involve crowds in research activities:

  1. When sheer numbers are needed
  2. When one is looking for outlier ideas but does not know where to find them
  3. When unusual expertise might exist outside the academy
  4. When crowds can collaborate to solve complex problems
  5. When crowd members bring different biases and errors to evaluative judgments than experts do

“Now we see that some of these rationales also work when citizens help evaluate which research should be funded in the first place,” says Sauermann.

Major funding bodies are exploring how to involve the public in research decisions. The U.S. National Science Foundation includes public engagement requirements in some grants, while Horizon Europe promotes citizen science across its €95.5 billion framework. These efforts generally treat citizens as contributors, not evaluators.

This study provides new data for funders to consider. Crowd assessments can uncover societal priorities that experts might overlook, but they also risk amplifying the influence of wealthier and more educated participants.

"Crowd evaluations can highlight important social concerns," Sauermann notes, "but funders must carefully consider who participates, how feedback is elicited, and what safeguards prevent bias."

Relevance for business and innovation

Companies now routinely use crowdsourcing and other open innovation mechanisms for product development and idea generation. Engaging crowds in evaluations and critical decisions, however, is less common. This study suggests that:

  • Crowdsourced evaluations could inform R&D project selection, highlighting market relevance or social acceptability.
  • Stakeholder engagement in project evaluation may uncover customer priorities or societal risks overlooked by expert panels.
  • Biases in crowd input – such as overrepresentation of affluent or highly educated participants – must be managed carefully (see the reweighting sketch after this list).
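One standard safeguard against such overrepresentation – a common survey-research technique, not a method taken from the study – is post-stratification: reweight crowd responses so that each group counts in proportion to its share of the target population. A minimal sketch with hypothetical numbers:

```python
# Post-stratification by education level; population shares and ratings
# are hypothetical, for illustration only.
population_share = {"no_degree": 0.60, "degree": 0.40}  # target population
sample = [  # (education, rating) pairs from a hypothetical crowd evaluation
    ("degree", 6), ("degree", 7), ("degree", 6), ("degree", 5),
    ("no_degree", 3), ("no_degree", 4),
]

# Share of each group in the actual (skewed) sample.
counts = {g: sum(1 for e, _ in sample if e == g) for g in population_share}
sample_share = {g: counts[g] / len(sample) for g in counts}

# Weight each response by (population share / sample share) for its group.
weights = {g: population_share[g] / sample_share[g] for g in counts}
weighted_mean = sum(weights[e] * r for e, r in sample) / sum(weights[e] for e, _ in sample)
unweighted_mean = sum(r for _, r in sample) / len(sample)

print(f"Unweighted mean rating: {unweighted_mean:.2f}")  # dominated by degree holders
print(f"Reweighted mean rating: {weighted_mean:.2f}")    # closer to population view
```

Here the raw mean (about 5.2) is pulled up by the overrepresented degree holders, while the reweighted mean (4.5) better reflects the target population.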

For example, a technology firm soliciting feedback on early-stage product concepts might discover that crowds emphasize user-friendliness and accessibility – factors that internal R&D teams or expert panels might undervalue. Crowds may also provide richer insights into what “user-friendliness” would concretely mean in a particular context. Understanding these perspectives can help firms better align innovations with market expectations.

Building on a decade of research

The new study marks another step in Sauermann’s exploration of crowd science – a journey that has blended rigorous empirical research with public scholarship and practical tools for scientists for more than a decade. In addition to the open-access book, he and co-author Poetz launched sciencewithcrowds.org, a resource hub featuring case studies, templates, and guidelines for engaging the public in research.

That work, largely focused on how and when to involve the crowd, now expands to include the equally important question of how and when to trust the crowd’s judgment.

“It’s about rethinking the boundaries between science and society and leveraging the contributions of the broader public,” Sauermann reflects. “We know that diverse participation can make research more innovative and impactful. Now we’re learning how it can also shape which science gets done in the first place.”

Read the study in Research Policy (Volume 54, Issue 5, June 2025, 105214) or hear Sauermann speak about citizen science on episode #40 of the Campus 10178 podcast.

Tammi L. Coles
Senior Editor, ESMT Berlin