(When) do recommender systems amplify user preferences? A theoretical framework and mitigation strategies

What the research is:

Recommendation systems now influence almost every aspect of human activity on the Internet, whether it is the news we read, the products we buy, or the entertainment we consume. The algorithms and models at the heart of these systems rely on learning our preferences as we interact with them: when we watch a video or like a post on Facebook, we give the system information about our preferences.

This repeated interaction between people and algorithms creates a feedback loop that leads to recommendations increasingly tailored to our tastes. Ideally, these feedback loops would always be virtuous: the recommendation system precisely determines our preferences and gives us recommendations that improve our quality of life.

But what if the system over-indexes on and amplifies interactions that don’t necessarily capture the user’s true preferences? Or what if the user’s preferences have drifted toward recommended items that could be viewed as harmful or detrimental to their long-term wellbeing? Under what conditions would recommendation systems pick up on these changes and reinforce such preferences, leading to a higher prevalence of harmful recommendations?

How it works:

In this post we offer a theoretical framework for answering these questions. We model the interactions between users and recommendation systems and examine how these interactions can lead to potentially harmful outcomes. Our main assumption is that users have a slight tendency to drift in their preferences: they increase their preference for recommendations they engage with positively and decrease it otherwise. We characterize the evolution of user preferences as a function of the user, the recommendation system, and time, and ask whether this function admits a fixed point, i.e., a state in which further interactions with the system no longer change the user’s preferences. We show that even with a slight drift and without external intervention, no such fixed point exists. That is, even a slight preference by a user for recommendations in a certain category can lead to an increasing concentration of recommendations from that category. We refer to this phenomenon as preference amplification.
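To make the drift assumption concrete, here is a minimal sketch in our own illustrative notation, not the paper’s exact formulation: a user’s latent preference vector takes a small step toward items they react to positively and away from the rest, and repeated application of this update keeps strengthening the initial tendency rather than settling at a fixed point.

```python
import numpy as np

def drift_step(user_vec, item_vecs, gamma=0.05):
    """One round of preference drift: the user's latent vector moves
    toward items they respond to positively (score > 0) and away from
    the rest. gamma is the (small) drift rate -- a hypothetical
    parameter for illustration."""
    scores = item_vecs @ user_vec            # affinity scores
    signs = np.sign(scores)                  # +1 liked, -1 disliked
    return user_vec + gamma * (signs[:, None] * item_vecs).mean(axis=0)

rng = np.random.default_rng(0)
user = rng.normal(size=8)
items = rng.normal(size=(50, 8))
for t in range(100):
    user = drift_step(user, items)
# The norm of the preference vector keeps growing: even a slight
# drift compounds, so the update map has no fixed point.
print(np.linalg.norm(user))
```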

Recommendation system model

We use the standard collaborative filtering model of recommendation systems: each (user, item) pair receives a score based on the likelihood that the user is interested in the item. These scores are computed using low-dimensional matrix factorization. We use a stochastic recommendation model in which the items presented to a user are sampled probabilistically according to the items’ scores (rather than being deterministically sorted by score). The degree of stochasticity in the system is controlled by a parameter β: the higher β, the lower the stochasticity, and the more the sampling distribution concentrates on the top-scoring items. Finally, we consider the content available for recommendation to be either benign or problematic, and use α to denote the prevalence of the latter, i.e., the proportion of problematic content among all content.
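A minimal sketch of the stochastic selection step, treating β as an inverse temperature in a softmax over the factorization scores (the helper name and exact sampling rule are our own assumptions, not necessarily the paper’s):

```python
import numpy as np

def recommend(user_vec, item_vecs, beta, k, rng):
    """Sample k items with probability proportional to exp(beta * score).
    Higher beta means less stochasticity: the distribution concentrates
    on the top-scoring items; beta -> 0 approaches uniform sampling."""
    scores = item_vecs @ user_vec
    logits = beta * (scores - scores.max())   # shift by max for stability
    probs = np.exp(logits)
    probs /= probs.sum()
    return rng.choice(len(item_vecs), size=k, replace=False, p=probs)
```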

Our model also captures the temporal interactions between the user and the recommender system: in each iteration, the user is presented with a set of items and signals their interests to the recommender system. These interests drift slightly based on the recommended items, with the actual extent of the drift parameterized by the score each item receives.

The following figure illustrates our model of temporal drift. The recommendation system first recommends a diverse set of items to the user, who in turn interacts with the items they prefer. The recommendation system picks up this signal and recommends a less diverse set of items (shown as just the green and blue items) that match the perceived preferences of the user. The user then drifts toward a very specific set of items (shown in blue) suggested by the recommendation engine, until eventually the recommendation system suggests only items of this particular class (the blue items).
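Putting the two pieces together, one iteration of this feedback loop might look like the following sketch, reusing the hypothetical recommend helper from above; the user only reacts to the items they are shown, so narrowing recommendations and drifting preferences feed each other:

```python
import numpy as np

def interaction_round(user_vec, item_vecs, beta, k, gamma, rng):
    """One user/recommender iteration: show k sampled items, observe
    the user's reactions, then drift the user's preferences toward the
    liked items and away from the disliked ones."""
    shown = recommend(user_vec, item_vecs, beta, k, rng)
    shown_vecs = item_vecs[shown]
    reactions = np.sign(shown_vecs @ user_vec)   # +1 like, -1 dislike
    return user_vec + gamma * (reactions[:, None] * shown_vecs).mean(axis=0)
```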

Simulations

To investigate the parameter space in which the system amplifies preferences, we run simulations on synthetic and real datasets. We show that the system boosts scores for items based on the user’s initial preferences: items similar to what the user initially liked become more likely to be recommended over time, while items the user did not initially prefer become less and less likely to be recommended.


In the left panel of the figure above we can see the effect of preference amplification. Solid lines (the upper group) indicate liked items, i.e., those with a probability of more than 0.5 of getting a positive reaction from the user. Dashed lines (the lower group) indicate items with a low probability of a positive response. As the figure shows, the probability of liking an item increases toward 1 if its score is positive, and decreases toward 0 otherwise. With higher values of β (lower stochasticity), the stochastic recommendation system behaves like a top-N recommender and therefore presents users with items similar to those they have already liked, which leads to stronger amplification of their preferences. The right panel of the figure shows another consequence of preference amplification: the likelihood that the user will like an item from the top 5% of recommended items increases significantly over time. This amplification effect is particularly pronounced at high values of β, where the stochasticity of the system is low and the recommendation system selects the items the user is most likely to prefer.
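One way to track the right-panel quantity in a simulation like the one above is sketched below; the logistic link from score to like-probability is our own assumption for illustration, not necessarily the paper’s measurement:

```python
import numpy as np

def top_slice_like_prob(user_vec, item_vecs, frac=0.05):
    """Average probability that the user reacts positively to an item
    from the top `frac` of items by current score, using a logistic
    link (an assumed choice) from score to like-probability."""
    scores = item_vecs @ user_vec
    k = max(1, int(frac * len(scores)))
    top = np.sort(scores)[-k:]
    return (1.0 / (1.0 + np.exp(-top))).mean()
```

Evaluating this after each interaction_round shows the curve rising toward 1 over time, faster for larger β.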

Mitigation

Finally, we discuss two strategies for mitigating the effects of preference amplification of problematic items, at (a) the global level and (b) the personal level. In the former, the strategy is to remove such items globally to reduce their overall prevalence; in the latter, the system targets specific users and applies interventions aimed at reducing the likelihood that these items are recommended to them.
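As a rough sketch of the two interventions (the thresholds, penalty, and helper names are hypothetical; the production logic is more involved): a global intervention shrinks the problematic pool for everyone, while a personal intervention downranks problematic items only for users whose exposure crosses a threshold.

```python
import numpy as np

def global_intervention(item_vecs, problematic, remove_frac, rng):
    """Globally drop a fraction of the problematic items, reducing
    their prevalence alpha for every user."""
    prob_idx = np.flatnonzero(problematic)
    drop = rng.choice(prob_idx, size=int(remove_frac * len(prob_idx)),
                      replace=False)
    keep = np.setdiff1d(np.arange(len(item_vecs)), drop)
    return item_vecs[keep], problematic[keep]

def personal_intervention(scores, problematic, exposure,
                          threshold=0.5, penalty=2.0):
    """For a user heavily exposed to problematic content, subtract a
    penalty from problematic items' scores before sampling, lowering
    their recommendation probability for that user only."""
    if exposure > threshold:
        scores = scores - penalty * problematic
    return scores
```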


In the figure above, we characterize, via simulation, the effects of a global intervention on problematic content. We plot the probability of recommending an item with problematic content for different initial prevalences (denoted by α). The figure shows that even when the prevalence of problematic content is low, if the user has an initial affinity for this type of content, the likelihood that it will be recommended to them increases over time.

In the paper we also describe an experiment that we performed on a real, large-scale video recommender system. In the experiment, we downranked videos suspected of containing borderline nudity (the platform already filters out videos that violate its community standards) for users who are regularly exposed to high levels of such content. The results show that we not only reduced the distribution of this content among the affected population, but also observed an increase in overall engagement of +2%. These results are very encouraging: not only can we reduce exposure to problematic content, but doing so also has an overall positive impact on the user experience.

Why it matters:

In this work we investigate the interactions between users and recommendation systems and show that for certain user behaviors, preferences can be amplified by the recommendation system. Understanding the long-term implications of ML systems helps us, as practitioners, build better safeguards and ensure that our models are optimized to serve the best interests of the people who use them.

Read the full paper:

A Framework for Understanding Preference Amplification in Recommender Systems

Learn more:

Watch our presentation: KDD 2021.
