Overview

Strategic and incentive problems are ubiquitous in academia. This not only provides a rich set of applications, such as peer review and peer grading, but also a testing ground for more general Econ-CS theories. This workshop seeks to bring together researchers working on academic applications where accounting for strategic reasoning is important (both within Econ-CS and in other fields such as HCI, EdTech, education, OR, and TCS) and Econ-CS researchers whose theories might be tested in academic applications.

Call For Papers

The organizing committee looks forward to your contributions to the 1st Workshop on Incentives in Academia, held in conjunction with EC’24 on July 8, 2024, in New Haven. We encourage submissions of research papers or position papers whose methods, models, or key insights are relevant to both 1) a scenario in academia and 2) strategic behavior or incentives.

For example, relevant empirical research may include data-driven or experimental design papers aiming to understand incentives in academic contexts such as peer review, grading, and learner-sourcing. We also encourage theoretical work: for example, insights from information elicitation could enhance the honesty and quality of grading reports, auction theory might optimize peer review processes, and matching market design could improve student admissions, paper bidding, and faculty job market strategies.

In summary, topics of interest include but are not limited to:

  • Incentives in peer review
  • Strategic behaviors and incentives in classrooms
  • Learner-sourcing
  • Mechanisms for teaching evaluations
  • Mechanisms for grading
  • Job market design and analysis

We look forward to your contributions and an engaging exchange of ideas at the workshop!

Important Dates

  • Submission Deadline: June 3, 2024
  • Author Notification: June 16, 2024
  • Workshop Date: July 8, 2024 (afternoon)

Schedule

Monday, July 8, 2024
Room 4200

Posters (13:00 - 14:00)
  • Fair and Welfare-Efficient Resource Allocation under Uncertainty
         Cyrus Cousins, Elita Lobo, Justin Payan, Yair Zick
  • Who You Gonna Call? Optimizing Expert Assignment with Predictive Models
         Cyrus Cousins, Sheshera Mysore, Neha Nayak-Kennard, Justin Payan, Yair Zick
  • Academic Publication Competition
         Haichuan Wang, Yifan Wu, Haifeng Xu
  • Carrot and Stick: Eliciting Comparison Data and Beyond
         Yiling Chen, Shi Feng, Fang-Yi Yu
  • The Isotonic Mechanism for Recommending Best Paper Awards
         Gang Wen, Natalie Collina, Weijie Su
  • Grantmaking, Grading on a Curve, and the Paradox of Relative Evaluation in Nonmarkets
         Marco Ottaviani, Jerome Adda
  • Analysis of the ICML 2023 Ranking Data: Can Authors’ Opinions on Their Own Papers Assist Peer Review in Machine Learning Conferences?
         Buxin Su, Jiayao Zhang, Natalie Collina, Yuling Yan, Didong Li, Jianqing Fan, Aaron Roth, Weijie Su
  • Deploying Fair and Efficient Course Allocation Mechanisms
         Paula Navarrete Diaz, Cyrus Cousins, George Bissias, Yair Zick
  • Spot Check Equivalence: an Interpretable Metric for Information Elicitation Mechanisms
         Shengwei Xu, Yichi Zhang, Paul Resnick, Grant Schoenebeck
  • Collusive and Adversarial Replication
         Adam Bouyamourn

Contributed Talks (14:00 - 15:00)
  • 14:00 - 14:30
    Title: Grantmaking, Grading on a Curve, and the Paradox of Relative Evaluation in Nonmarkets
    Speaker: Marco Ottaviani
  • 14:30 - 15:00
    Title: Deploying Fair and Efficient Course Allocation Mechanisms
    Speaker: Paula Navarrete Diaz

Coffee Break (15:00 - 15:30)

Invited Talk (15:30 - 16:30)
    Title: Incentives for Experimentation
    Speaker: Gustavo Manso
    Abstract:
    This talk will analyze the design of incentive plans for employees and researchers engaged in novel tasks, where performance relies on experimenting with different approaches. We will integrate the canonical model of experimentation, known as the bandit problem, into a principal-agent framework, allowing us to compare incentives for exploring new, untested methods with incentives for exploiting well-established techniques.
    Our findings indicate that incentive schemes encouraging exploration are fundamentally different from traditional pay-for-performance schemes, which have previously been shown to be effective in motivating exploitation in routine, repetitive tasks. We will demonstrate that granting freedom to experiment, tolerating early failures, providing long evaluation periods, and offering timely performance feedback foster creativity and innovation in these environments.
    The results will be corroborated by a laboratory experiment and empirical evidence on the funding of academic research in the life sciences. We will also cover recent findings in the context of genetics research on how data and information provision influence exploration in multi-agent settings.
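
    Background (a minimal illustrative sketch, assuming a standard two-armed bandit formulation; not necessarily the exact model presented in the talk): each period, an agent chooses between a safe arm with known per-period payoff s and a risky arm that succeeds with unknown probability θ, where p_t denotes the current belief that θ is high. With discount factor δ, the value function satisfies

        V(p_t) = max{ s / (1 - δ),  E[θ | p_t] + δ · E[V(p_{t+1}) | p_t] },

    where p_{t+1} is the Bayesian posterior after observing the risky arm's outcome. Choosing the risky arm (exploration) trades short-run payoff for information about θ; the principal-agent question referenced in the abstract is how compensation schemes affect the agent's willingness to make that trade.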

Panel Discussion (16:30 - 17:30)
    Topic: How can we (e.g., course instructors, conference organizers, department chairs, etc.) improve specific aspects of academia using insights or methods from EC research?
  • Incentives in student evaluation and peer grading.
  • Incentives in research funding.
  • Incentives for conducting high-risk research.
  • Forecasting the impact of completed research.
  • Incentives in peer review.
    Panelists: Yiling Chen, Jason Hartline, Gustavo Manso, Marco Ottaviani
    Moderator: Kevin Leyton-Brown
