Overview

Strategic and incentive problems are ubiquitous in academia. This not only provides a set of applications, such as peer review and peer grading, but also a testing ground for more general Econ-CS theories. This workshop seeks to bring together both researchers working on academic applications where accounting for strategic reasoning is important (whether from Econ-CS or from other fields such as HCI, EdTech, education, OR, and TCS) and Econ-CS researchers whose theories might be tested in academic applications.

Call For Papers

The organizing committee is looking forward to your contributions to the 2nd Workshop on Incentives in Academia, held in conjunction with EC’25 on July 10th at Stanford University. We encourage submissions of research papers or position papers whose methods, models, or key insights are relevant both to 1) a scenario in academia and 2) strategic behavior or incentives.

For example, relevant empirical research may include data-driven or experimental design papers aiming to understand incentives in academic contexts such as peer review, grading, and learner-sourcing. We also encourage theoretical work: for example, insights from information elicitation could enhance the honesty and quality of grading reports, auction theory might optimize peer review processes, and matching market design could improve student admissions, paper bidding, and faculty job market strategies.

In summary, topics of interest include but are not limited to:

  • Incentives in peer review
  • Mechanisms for grading
  • Learner-sourcing
  • Mechanisms for teaching evaluations
  • Incentivizing innovation
  • Market design for admissions
  • Data markets for research data
  • Incentives in grant funding
  • (Dis)incentivizing the use of AI in academic settings

Important Dates

  • Submission Deadline: June 6th
  • Author Notification: June 13th
  • Workshop Date: July 10th

Schedule

Thursday, July 10, 2025
Room TBD

Posters (morning)
  • Peer Review Market Design: Effort-Based Matching and Admission Control
         Craig Fernandes, James Siderius, and Raghav Singal
  • Binary-Report Peer Prediction for Real-Valued Signal Spaces
         Mary Monroe, Rafael Frongillo, and Ian Kash
  • Aligned Textual Scoring Rule
         Yuxuan Lu, Yifan Wu, Jason Hartline, and Michael Curry
  • Detecting Collusion in Peer Review via Game-Theoretic Approach
         Rica Gonen and Asaf Samuel
  • From Crowds to Codes: Minimizing Review Burden in Conference Review Protocols
         Xingbo Wang, Fang-Yi Yu, and Yichi Zhang

Invited Talks
  • 13:30 - 14:10 Speaker: James Zou
    Title: A large scale assessment of the impact of LLM feedback on peer reviews
    Abstract:
    Peer review is stressed by rapidly rising submission volumes, leading to deteriorating review quality and increased author dissatisfaction. To address these challenges, we developed Review Feedback Agent, a system leveraging multiple large language models (LLMs) to improve review clarity and actionability by providing automated feedback on vague comments, content misunderstandings, and unprofessional remarks to reviewers. Implemented at ICLR 2025 as a large randomized control study, our system provided optional feedback to more than 20,000 randomly selected reviews. I will present findings from this ICLR deployment and discuss the impact of LLM feedback on reviews.
  • 14:10 - 14:50 Speaker: Nihar Shah
    Title: LLMs meet Peer Review: The Good, The Bad, and The Ugly
    Abstract:
    As LLMs become increasingly integrated into academic workflows, their influence on peer review is both promising and concerning. This talk will explore three facets of this evolving intersection.
    The Good: LLMs' ability to execute aspects of peer review that are challenging for human reviewers.
    The Bad: Vulnerabilities in the review process to fraud such as collusion rings.
    The Ugly: Reviewers using LLMs to generate reviews, and detection of such usage.

Coffee Break 15:00 - 15:30

Spotlights
  • 15:15 - 15:30 Title: Publication Design with Incentives in Mind
    Speaker: Ravi Jagadeesan
  • 15:30 - 15:45 Title: Auctions and Peer Prediction for Academic Peer Review
    Speaker: Siddarth Srinivasan

Panel Discussion 15:45 - 16:30
  Topic: The Role of LLMs in Peer Review
  • Using LLMs to support reviewers or generate reviews.
  • Using LLMs to filter out (clearly) low-quality submissions.
  • Improving paper-reviewer matching with LLMs.
  • Assisting SPCs/ACs with discussion moderation and decision-making.
  • Using LLMs to evaluate historical reviewer contributions and decide who to invite in the future.
  Panelists: TBD
  Moderator: Kevin Leyton-Brown

Invited Speakers

James Zou

Biomedical Data Science, Stanford University

Organizers