Q&A with Ayesha Ali, two-time award winner of Facebook request for research proposals in misinformation

Facebook is a place where people with deep computer science expertise work on some of the most complex and challenging research problems in the world. In addition to recruiting top talent, we maintain close relationships with academia and the research community so that we can take on difficult challenges and find solutions together. In this new monthly interview series, we put members of the academic community and their critical research in the spotlight, whether as partners, employees, consultants, or independent contributors.

This month we reached out to Ayesha Ali, professor at Lahore University of Management Sciences (LUMS) in Pakistan. Ali is a two-time winner of the Facebook Foundational Integrity Research Call for Proposals (RFP) on Misinformation and Polarization (2019 and 2020). In this Q&A, Ali shares the results of her research, its implications, and advice for university faculty looking to take a similar path.

Q: Tell us about your role at LUMS and the type of research that you and your department specialize in.

Ayesha Ali: I have been an assistant professor in the LUMS Department of Economics since 2016, after earning a PhD in economics from the University of Toronto. I am an applied development economist by training, and my research focuses on understanding and addressing policy challenges facing developing countries, such as increasing human development, managing energy and the environment, and using technology for social benefit. Among the topics I work on are how people with low digital literacy perceive and react to content on social media, and how this affects their beliefs and behavior.

Q: How did you decide to conduct research projects on misinformation?

AA: Before writing the first proposal in 2018, I had been thinking about the phenomenon of misinformation and fabricated content for a long time. I had the opportunity to speak with colleagues in the computer science department about this topic on several occasions, and we had some great discussions about it.

We quickly realized that we cannot fight misinformation with technology alone. It is a multifaceted problem, and addressing it requires several things: educating users, developing technology to filter false news, and creating context-specific policies to deter the generation and spread of false news. We were particularly keen to think about the different ways we can educate people with low digital literacy to recognize misinformation.

Q: What were the results of your first research project and what are your plans for the second?

AA: In our first project, we used a randomized field experiment to examine the effect of two types of user education programs for helping people spot false news. Using a list of breaking news items posted on social media, we created a test to measure the likelihood that people believe misinformation. Contrary to their perceived effectiveness, we found no significant effect of general video-based educational messages about misinformation.

However, when video-based educational messages were supplemented with personalized feedback based on people’s past engagement with false news, their ability to spot false news improved significantly. Our results show that, if properly designed, educational programs can make people more discerning consumers of information on social media.

Our second project aims to build on this research agenda. We plan to focus on non-textual misinformation, such as audio deepfakes. Audio messages are a popular form of communication among people with low literacy and digital skills. Using surveys and experiments, we will examine how people perceive, consume, and engage with information received via audio deepfakes, and what role prior beliefs and analytical ability play in forming perceptions about the accuracy of such information. We also plan to design and experimentally evaluate an educational intervention to improve people’s ability to identify audio deepfakes.

Q: What is the impact of your research in your region and worldwide?

AA: I think there are at least three ways in which our work is having an impact:

  1. Our work raises awareness of the importance of digital literacy campaigns in combating misinformation. It shows that such interventions hold promise for turning users into more discerning consumers of information when they are tailored to the target audience (e.g., populations with low literacy).
  2. Our work can inform policies on media literacy campaigns and how they are structured, especially for populations with low digital literacy. We are already in contact with several organizations in Pakistan to explore how our findings can be used in digital literacy campaigns. For example, COVID-19 vaccines are expected to become available in the coming months, and there is a need to raise awareness of their importance and to proactively dispel conspiracy theories and misinformation about them. Past experience with polio vaccination campaigns has shown that conspiracy theories can take deep root and even endanger human lives.
  3. We hope that the work motivates others to work on such global societal challenges, especially in developing countries.

Q: What advice would you give academics seeking funding for their research?

AA: I think a good research proposal has three components:

  1. It addresses an important problem that ideally has contextual/local relevance.
  2. It proposes a well-motivated solution or plan that has contextual/local relevance.
  3. It shows, or at least makes the case for, why you are uniquely positioned to solve it well.

Q: Where can people find out more about your research?

AA: You can find out more about my research on my website.
