Yiling Chen
Computer Science, Harvard University
Strategic and incentive problems are ubiquitous in academia. This ubiquity not only supplies a set of applications, such as peer review and peer grading, but also provides a testing ground for more general Econ-CS theories. This workshop seeks to bring together researchers working on academic applications where accounting for strategic reasoning is important (both within Econ-CS and in other fields such as HCI, EdTech, education, OR, and TCS) and Econ-CS researchers whose theories might be tested in academic applications.
The organizing committee looks forward to your contributions to the 1st Workshop on Incentives in Academia, held in conjunction with EC’24 on July 8th in New Haven. We encourage submissions of research papers or position papers whose methods, models, or key insights are relevant to (1) a scenario in academia and (2) strategic behavior or incentives.
For example, relevant empirical research may include data-driven or experimental-design papers that aim to understand incentives in academic contexts such as peer review, grading, and learnersourcing. We also encourage theoretical work: for instance, insights from information elicitation could improve the honesty and quality of grading reports, auction theory could optimize peer review processes, and matching market design could improve student admissions, paper bidding, and faculty job market strategies.
In summary, topics of interest include but are not limited to:
Monday, July 8, 2024
Room 4200
| Event | Time | Information |
|---|---|---|
| Posters | 13:00 - 14:00 | Cyrus Cousins, Elita Lobo, Justin Payan, Yair Zick |
| Contributed Talk | 14:00 - 14:30 | Title: Grantmaking, Grading on a Curve, and the Paradox of Relative Evaluation in Nonmarkets. Speaker: Marco Ottaviani |
| Contributed Talk | 14:30 - 15:00 | Title: Deploying Fair and Efficient Course Allocation Mechanisms. Speaker: Paula Navarrete Diaz |
| Coffee Break | 15:00 - 15:30 | |
| Invited Talk | 15:30 - 16:30 | Title: Incentives for Experimentation. Speaker: Gustavo Manso. Abstract: This talk will analyze the design of incentive plans for employees and researchers engaged in novel tasks, where performance relies on experimenting with different approaches. We will integrate the canonical model of experimentation, known as the bandit problem, into a principal-agent framework, allowing us to compare incentives for exploring new, untested methods with incentives for exploiting well-established techniques. Our findings indicate that incentive schemes encouraging exploration are fundamentally different from traditional pay-for-performance schemes, which have previously been shown to be effective in motivating exploitation in routine, repetitive tasks. We will demonstrate that granting freedom to experiment, tolerating early failures, providing long evaluation periods, and offering timely performance feedback fosters creativity and innovation in these environments. The results will be corroborated by a laboratory experiment and empirical evidence on the funding of academic research in the life sciences. We will also cover recent findings, in the context of genetics research, on how data and information provision influence exploration in multi-agent settings. |
| Panel Discussion | 16:30 - 17:30 | Topic: How can we (e.g., course instructors, conference organizers, department chairs) improve specific aspects of academia using insights or methods from EC research? Moderator: Kevin Leyton-Brown |
Computer Science, Harvard University
Computer Science, Northwestern University
Haas School of Business, UC Berkeley
Economics, Bocconi University
University of Michigan
University of British Columbia
Carnegie Mellon University
University of Michigan