Speaker: Zhichao Jiang (UMass Amherst)
Time: 2021-11-17, 10:30–12:00
Venue: Tencent Meeting (979 3153 9451)
Abstract:
Despite an increasing reliance on fully automated algorithmic decision-making in our day-to-day lives, human beings still make highly consequential decisions. As frequently seen in business, healthcare, and public policy, recommendations produced by algorithms are provided to human decision-makers to guide their decisions. While there exists a fast-growing literature evaluating the bias and fairness of such algorithmic recommendations, an overlooked question is whether they help humans make better decisions. Using the concept of principal stratification, we develop a statistical methodology for experimentally evaluating the causal impacts of algorithmic recommendations on human decisions. We propose the evaluation quantities of interest, identification assumptions, and estimation strategies. We also develop sensitivity analyses to assess the robustness of empirical findings to the potential violation of a key identification assumption. We apply the proposed methodology to preliminary data from the first-ever randomized controlled trial that evaluates the pretrial Public Safety Assessment (PSA) in the criminal justice system.
About the Speaker:
Prof. Zhichao Jiang is an assistant professor in the Department of Biostatistics and Epidemiology at UMass Amherst. Prior to joining the department, he was a postdoctoral fellow supervised by Professor Kosuke Imai, in the Department of Statistics and Department of Government at Harvard University (2018 onward), and in the Center for Statistics and Machine Learning and Department of Politics at Princeton University (2016–2018). He obtained his PhD from the Department of Statistics at Peking University, under the supervision of Professor Zhi Geng. His main interest is in causal inference methodologies with applications in the social sciences, biomedical sciences, and computer science. He is also interested in contaminated-data problems, including missing data, measurement error, and selection bias.
Tencent Meeting: https://meeting.tencent.com/dm/WStyUVrbn5bJ
Meeting ID: 979 3153 9451
Your participation is warmly welcomed!