Announcing the winners of the 2021 Statistics for Improving Insights, Models, and Decisions request for proposals

In April 2021, Facebook launched the Statistics for Improving Insights, Models, and Decisions request for proposals (RFP) live at The Web Conference. Today, we are announcing the winners of this award.
At Facebook, our research teams strive to improve decision-making for a company that touches the lives of billions of people around the world. Advances in data science methodologies help us make better decisions about our community, products, and infrastructure.

This RFP is a continuation of the 2019 and 2020 RFPs in applied statistics. With this RFP series, the Facebook Core Data Science team, the Infrastructure Data Science team, and the Statistics and Privacy team aim to foster further innovation and deepen their collaboration with academia in applied statistics, including in the following areas:

  • Learning and evaluation under uncertainty
  • Statistical models of complex social processes
  • Causal inference with observational data
  • Algorithmic experimentation
  • Performance regression detection and attribution
  • Forecasting for aggregated time series
  • Privacy-aware statistics for noisy, distributed data sets

The team reviewed 134 high-quality proposals and is pleased to announce the 10 winning proposals below, as well as the 15 finalists. Thank you to everyone who took the time to submit a proposal, and congratulations to the winners.

Research award winners

Breaking the Trilemma of Accuracy, Privacy, and Communication in Federated Analytics
Ayfer Özgür (Stanford University)

Certifiably private, robust, and explainable federated learning
Bo Li, Han Zhao (University of Illinois Urbana-Champaign)

Experimental design in market equilibrium
Stefan Wager, Evan Munro, Kuang Xu (Stanford University)

Learning to trust graph neural networks
Claire Donnat (University of Chicago)

Negative unlabeled learning for online datacenter straggler prediction
Michael Carbin, Henry Hoffmann, Yi Ding (Massachusetts Institute of Technology)

Nonparametric methods for calibrated hierarchical time series forecasting
B. Aditya Prakash, Chao Zhang (Georgia Institute of Technology)

Privacy in personalized federated learning and analytics
Suhas Diggavi (University of California, Los Angeles)

Reducing the simulation-to-reality gap as a means of learning under uncertainty
Mahsa Baktashmotlagh (University of Queensland)

Reducing the theory-practice gap in private and distributed learning
Ambuj Tewari (University of Michigan)

Robust wait-for-graph inference for performance diagnostics
Ryan Huang (Johns Hopkins University)

Finalists

An integrated framework for learning and optimizing over networks
Eric Balkanski, Adam Elmachtoub (Columbia University)

Auditing Bias in Large Language Models
Soroush Vosoughi (Dartmouth College)

Cross-functional experiment prioritization with decision makers in the loop
Emma McCoy, Bryan Liu (Imperial College London)

Co-design of data collection and intervention in social networks: privacy and justice
Amin Rahimian (University of Pittsburgh)

Efficient and practical A/B testing for multiple nonstationary experiments
Nicolò Cesa-Bianchi, Nicola Gatti (University of Milan)

Empirical Bayesian Deep Neural Networks for Predictive Uncertainty
Xiao Wang, Yijia Liu (Purdue University)

Global forecast framework for large hierarchical time series
Rob Hyndman, Christoph Bergmeir, Kasun Bandara, Shanika Wickramasuriya (Monash University)

High dimensional treatments in causal inference
Kosuke Imai (Harvard University)

Nowcasting time series aggregates: Machine learning textual analysis
Eric Ghysels (University of North Carolina at Chapel Hill)

Online sparse deep learning for large dynamic systems
Faming Liang, Dennis KJ Lin, Qifan Song (Purdue University)

Optimal use of data for reliable off-policy policy evaluation
Hongseok Namkoong (Columbia University)

Principled uncertainty quantification for deep neural networks
Tengyu Ma, Ananya Kumar, Jeff Haochen (Stanford University)

Reliable causal inference with continuous learning
Sheng Li (University of Georgia)

Training individual-level machine learning models with noisy aggregate data
Martine De Cock, Steven Golob (University of Washington Tacoma)

Understanding instance-dependent label noise: learnability and solutions
Yang Liu (University of California Santa Cruz)
