In their 2024 book How and When to Involve Crowds in Scientific Research, ESMT Professor Henry Sauermann and coauthor Marion K. Poetz (Copenhagen Business School) laid out a practical, research-based guide for scientists eager to engage the public. Whether through data collection or co-creation of research questions, Sauermann and Poetz demonstrated that broad public involvement – commonly referred to as crowd science – can drive scientific discovery while fostering transparency and trust.
Yet one question remained largely unexplored: Can citizens also judge science itself?
A new study published in Research Policy provides fresh insights. Authored by Sauermann together with Chiara Franzoni and Diletta Di Marco (both Politecnico di Milano), the research investigates how non-experts evaluate scientific proposals and influence which projects receive funding.
The findings challenge traditional notions of expertise, equity, and public participation – and signal new opportunities and pitfalls for research funding as citizens gain influence.
“Our book was really about how scientists can open research to contributions from non-professionals,” says Sauermann, who holds the ESMT Chair in Entrepreneurship at ESMT Berlin. “This new study looks at what happens when citizens go beyond contributing and start making evaluative judgments traditionally reserved for experts.”
The researchers analyzed data from more than 2,300 crowd members who reviewed four scientific proposals in the medical sciences, biology, and social sciences. Participants first assessed the proposals against criteria such as scientific merit, team capabilities, and potential societal impact. They then gave a final verdict in two ways: by making funding recommendations to funding agencies and by donating their own money to projects they felt were worth pursuing.
How do laypeople evaluate scientific research proposals? The study found that crowd participants did not make random or purely emotional decisions but carefully weighed different aspects of the proposals. They placed considerable weight on scientific merit – much as expert reviewers do. But they placed a similarly high weight on societal impact – a criterion that experts find notoriously difficult to judge and that tends to receive less weight in traditional evaluation processes.
The data also revealed important nuances. Overall, the results highlight how public evaluations can surface different priorities than expert assessments, offering both opportunities to align science with societal values and challenges related to potential biases and inequalities.
In the book, the authors discussed five main reasons to involve crowds in research activities.
“Now we see that some of these rationales also work when citizens help evaluate which research should be funded in the first place,” says Sauermann.
Major funding bodies are exploring how to involve the public in research decisions. The U.S. National Science Foundation includes public engagement requirements in some grants, while Horizon Europe promotes citizen science across its €95.5 billion framework. These efforts generally treat citizens as contributors, not evaluators.
This study provides new data for funders to consider. Crowd assessments can uncover societal priorities that experts might overlook, but they also risk amplifying the influence of wealthier and more educated participants.
"Crowd evaluations can highlight important social concerns," Sauermann notes, "but funders must carefully consider who participates, how feedback is elicited, and what safeguards prevent bias."
Companies now routinely use crowdsourcing and other open innovation mechanisms for product development and idea generation, but engaging crowds in evaluations and critical decisions is less common. This study suggests that crowd input could be valuable in those decisions as well.
For example, a technology firm soliciting feedback on early-stage product concepts might discover that crowds emphasize user-friendliness and accessibility – factors that internal R&D teams or expert panels might undervalue. Crowds may also offer richer insight into what “user-friendliness” concretely means in a particular context. Understanding these perspectives can help firms better align innovations with market expectations.
The new study marks another step in Sauermann’s exploration of crowd science – a journey that, for more than a decade, has blended rigorous empirical research with public scholarship and practical tools for scientists. In addition to the open-access book, he and coauthor Poetz launched sciencewithcrowds.org, a resource hub featuring case studies, templates, and guidelines for engaging the public in research.
That work, largely focused on how and when to involve the crowd, now expands to include the equally important question of how and when to trust the crowd’s judgment.
“It’s about rethinking the boundaries between science and society and leveraging the contributions of the broader public,” Sauermann reflects. “We know that diverse participation can make research more innovative and impactful. Now we’re learning how it can also shape which science gets done in the first place.”
Read the study in Research Policy (Volume 54, Issue 5, June 2025, 105214) or hear Sauermann speak about citizen science on episode #40 of the Campus 10178 podcast.