Facebook Fellow Spotlight: Shaping the future with neural program synthesis and adversarial ML
Every year, PhD students from around the world apply for the Facebook Fellowship, a program designed to encourage and support promising PhD students doing innovative and relevant research in the fields of computer science and engineering. Fellowship recipients receive tuition support for up to two years to conduct independent research at their respective universities. To learn more about award details, eligibility, and more, visit the Fellowship program page.
As a continuation of our Fellowship Spotlight series, we’re introducing 2020 Facebook Fellow Xinyun Chen.
Xinyun is a PhD student at UC Berkeley working with Professor Dawn Song and is expected to graduate in 2022. Her research explores the intersection of deep learning, programming languages, and security with an emphasis on neural program synthesis and adversarial machine learning (ML).
It was a research opportunity at the National Institute of Informatics in Japan that inspired Xinyun Chen to pursue ML research. There, she designed and implemented an object recognition system for drones and discovered her passion for deep learning – a passion that puts her at the forefront of deep learning research today.
Now, as a PhD student at UC Berkeley, “[my] research addresses the key challenges of improving access to programming for general users and improving the security and trustworthiness of ML models,” she says. “Teaching a computer to think is a complicated process.”
Xinyun is at the forefront of AI research in neural program synthesis and has developed deep learning techniques to synthesize accurate and complex programs. “I have shown that our approaches can automatically generate programs from natural language descriptions, test cases, etc.,” she says. Her goal for future research is to not only develop new deep learning techniques to improve program synthesis performance, but also to take inspiration from program synthesis techniques to achieve better generalization for a wide range of tasks.
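To make “generating programs from test cases” concrete, here is a minimal toy sketch of enumerative program synthesis: given input-output examples, it searches a tiny expression grammar for a program consistent with all of them. This is an illustration of the general idea only, not Xinyun Chen’s method, which uses deep learning to guide generation rather than brute-force enumeration.

```python
# Toy enumerative program synthesis from test cases (illustrative only).
# We enumerate small arithmetic expressions over one input variable x
# and return the first expression consistent with every example.
import itertools

def synthesize(examples, max_depth=2):
    """examples: list of (input, output) pairs.
    Returns the first expression string matching all examples, or None."""
    terminals = ["x", "1", "2", "3"]
    exprs = list(terminals)
    for _ in range(max_depth):
        # Grow the candidate pool by combining existing expressions
        # with + and * (exhaustive, so only feasible at toy scale).
        exprs += [f"({a} {op} {b})"
                  for a, b in itertools.product(exprs, repeat=2)
                  for op in ("+", "*")]
    for e in exprs:
        if all(eval(e, {"x": x}) == y for x, y in examples):
            return e
    return None

# Example: the pairs (1, 3) and (2, 5) are consistent with 2x + 1,
# and the search finds an equivalent expression such as (x + (x + 1)).
print(synthesize([(1, 3), (2, 5)]))
```

Real neural program synthesis replaces this blind enumeration with a learned model that ranks or generates candidate programs, which is what makes richer languages and natural-language specifications tractable.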
“In other words, I can’t create AI that steals people’s jobs,” she says with a laugh. “Machines still have a long way to go. They are still very simple, and it is a very gradual process.”
But Xinyun’s work advances year after year, examining how programs for various tasks can be synthesized by equipping a neural network with a symbolic module. So far, she reports, “our neural-symbolic models achieve better compositional reasoning and are more robust than existing models under certain distribution shifts.” At the same time, she studies the weaknesses of these models through her research on adversarial ML.
The future of her research lies in developing better pretraining and search techniques for program synthesis. Her current work shows promising performance against strong baselines, and she looks forward to continuing her research at leading AI research institutes.
To learn more about Xinyun Chen, visit her Fellowship Profile.