Speaker Bio: Dr. Lan Wang is the Centennial Endowed Chair Professor and Chair of the Department of Management Science at the Miami Herbert Business School, University of Miami, with secondary appointments as Professor in the Department of Health Management and Policy at the Miami Herbert Business School and as Professor in the Department of Public Health Sciences at the Miller School of Medicine, University of Miami. She is Co-Editor of the Annals of Statistics (2022-2024), jointly with Professor Enno Mammen. Before joining the University of Miami, she was a Professor of Statistics at the School of Statistics, University of Minnesota. She received her Ph.D. in Statistics from the Pennsylvania State University and her Bachelor's degree in Applied Mathematics from Tsinghua University, China.
Dr. Wang's research covers several interrelated areas: high-dimensional statistical learning, quantile regression, optimal personalized decision recommendation, survival analysis, and business analytics. She is also interested in interdisciplinary collaboration, driven by applications in business, economics, health care, and other domains.
Dr. Wang is an elected Fellow of the American Statistical Association, an elected Fellow of the Institute of Mathematical Statistics, and an elected member of the International Statistical Institute. She has served as an associate editor for several leading statistical journals: the Journal of the American Statistical Association, the Annals of Statistics, the Journal of the Royal Statistical Society, and Biometrics.
Talk Abstract: In the existing reinforcement learning (RL) literature, off-policy evaluation mainly focuses on estimating the value (e.g., the expected discounted cumulative reward) of a target policy given pre-collected data generated by some behavior policy. Motivated by the recent success of distributional RL in many practical applications, we study the distributional off-policy evaluation problem in the batch setting when the reward is multivariate. We propose an offline Wasserstein-based approach to simultaneously estimate the joint distribution of the multivariate discounted cumulative reward given any initial state-action pair in an infinite-horizon Markov decision process. A finite-sample error bound for the proposed estimator with respect to a modified Wasserstein metric is established in terms of both the number of trajectories and the number of decision points on each trajectory in the batch data. Extensive numerical studies demonstrate the superior performance of the proposed method.
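For readers new to the distributional objective, the sketch below is a toy illustration (not the estimator proposed in the talk) of the quantity under study: the empirical distribution of multivariate discounted cumulative rewards computed from batch trajectories. The trajectory layout, discount factor, and simulated rewards are illustrative assumptions, and the one-dimensional Wasserstein comparison of marginals is only a simple stand-in for the modified Wasserstein metric treated in the talk, which concerns the joint distribution.

```python
# Minimal sketch: empirical distribution of multivariate discounted
# cumulative rewards from batch trajectories (illustrative, not the
# talk's proposed method).
import numpy as np
from scipy.stats import wasserstein_distance

def discounted_returns(trajectories, gamma=0.95):
    """Each trajectory is an array of shape (T, d): T decision points,
    each with a d-dimensional reward vector. Returns an (n, d) array of
    multivariate discounted cumulative rewards, one row per trajectory."""
    returns = []
    for rewards in trajectories:
        T = rewards.shape[0]
        discounts = gamma ** np.arange(T)      # gamma^0, ..., gamma^{T-1}
        returns.append(discounts @ rewards)    # sum_t gamma^t * R_t -> (d,)
    return np.stack(returns)

# Toy batch: 100 trajectories, 50 decision points each, 2-dimensional reward.
rng = np.random.default_rng(0)
batch = [rng.normal(size=(50, 2)) for _ in range(100)]
Z = discounted_returns(batch)  # empirical joint return distribution, (100, 2)

# Compare marginals of two empirical return distributions with the 1-D
# Wasserstein distance (the joint-distribution metric is beyond this sketch).
batch2 = [rng.normal(loc=0.1, size=(50, 2)) for _ in range(100)]
Z2 = discounted_returns(batch2)
for j in range(Z.shape[1]):
    print(f"reward dim {j}: W1 = {wasserstein_distance(Z[:, j], Z2[:, j]):.3f}")
```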