
James Zou
Biomedical Data Science, Stanford University
Strategic and incentive problems are ubiquitous in academia. These problems not only supply a rich set of applications, such as peer review and peer grading, but also provide a testing ground for more general Econ-CS theories. This workshop seeks to bring together researchers working on academic applications where accounting for strategic reasoning is important (both Econ-CS researchers and those from other fields such as HCI, EdTech, education, OR, and TCS) and Econ-CS researchers whose theories might be tested in academic applications.
The organizing committee is looking forward to your contributions to the 2nd Workshop on Incentives in Academia, held in conjunction with EC’25 on July 10th at Stanford University. We encourage submissions of research papers or position papers whose methods, models, or key insights are relevant both to (1) a scenario in academia and (2) strategic behavior or incentives.
For example, relevant empirical research may include data-driven or experimental design papers aiming to understand incentives in academic contexts such as peer review, grading, and learner-sourcing. We also encourage theoretical work: for example, insights from information elicitation could enhance the honesty and quality of grading reports, auction theory might optimize peer review processes, and matching market design could improve student admissions, paper bidding, and faculty job market strategies.
In summary, topics of interest include but are not limited to:
Thursday, July 10, 2025
Room TBD
Event | Time | Information |
---|---|---|
Posters | Morning | Craig Fernandes, James Siderius, and Raghav Singal |
Invited Talk | 13:30 - 14:10 | Speaker: James Zou. Title: A large-scale assessment of the impact of LLM feedback on peer reviews. Abstract: Peer review is stressed by rapidly rising submission volumes, leading to deteriorating review quality and increased author dissatisfaction. To address these challenges, we developed Review Feedback Agent, a system leveraging multiple large language models (LLMs) to improve review clarity and actionability by providing automated feedback on vague comments, content misunderstandings, and unprofessional remarks to reviewers. Implemented at ICLR 2025 as a large randomized controlled study, our system provided optional feedback on more than 20,000 randomly selected reviews. I will present findings from this ICLR deployment and discuss the impact of LLM feedback on reviews. |
Invited Talk | 14:10 - 14:50 | Speaker: Nihar Shah. Title: LLMs meet Peer Review: The Good, The Bad, and The Ugly. Abstract: As LLMs become increasingly integrated into academic workflows, their influence on peer review is both promising and concerning. This talk will explore three facets of this evolving intersection. The Good: LLMs' ability to execute aspects of peer review that are challenging for human reviewers. The Bad: Vulnerabilities in the review process to fraud such as collusion rings. The Ugly: Reviewers using LLMs to generate reviews, and detection of such usage. |
Coffee Break | 15:00 - 15:30 | |
Spotlight | 15:15 - 15:30 | Title: Publication Design with Incentives in Mind. Speaker: Ravi Jagadeesan |
Spotlight | 15:30 - 15:45 | Title: Auctions and Peer Prediction for Academic Peer Review. Speaker: Siddarth Srinivasan |
Panel Discussion | 15:45 - 16:30 | Topic: The Role of LLMs in Peer Review. Moderator: Kevin Leyton-Brown |
Biomedical Data Science, Stanford University
Carnegie Mellon University
School of Information, University of Michigan
Computer Science, University of British Columbia
Computer Science, Carnegie Mellon University
DIMACS, Rutgers University