Making Instagram.com faster: Part 3 — cache first | by Glenn Conner

In the past few years, instagram.com has seen a lot of changes – we’ve released stories, filters, creation tools, notifications and direct messages, as well as a host of other features and improvements. As the product grew, a side effect was that our web performance slowed. Over the past year we made a conscious effort to improve this, and these ongoing efforts have resulted in a cumulative improvement to our feed page load time of nearly 50%. This series of blog posts describes some of the work that led to these improvements. In Part 1 we talked about prefetching data, and in Part 2 we talked about improving performance by pushing data directly to the client instead of waiting for the client to request it.

Cache first

Since we already start sending data to the client as early as possible in the page load, the only way to get data to the client any faster is to not have to fetch or transfer it at all. We can do this with a cache-first rendering approach, at the cost of briefly showing users out-of-date feed data. With this approach, when the page loads, users are immediately presented with a cached copy of their previous feed and story tray, which is then replaced with fresh data once it becomes available.

We use Redux to manage state on instagram.com. At a high level, we implemented cache-first rendering by storing a subset of our Redux state on the client in an IndexedDB table, then rehydrating the store from it when the page first loads. Because IndexedDB access, server data retrieval, and user interactions are all asynchronous, problems can arise when the user interacts with the cached state: we need to make sure those interactions are also applied to the new state once it arrives from the server.
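A minimal sketch of the persistence step might look like the following. The function names (`saveState`, `loadState`) and slice keys are illustrative, and a synchronous `Map` stands in for the asynchronous IndexedDB table described above:

```javascript
// Sketch: persist a subset of the Redux state and rehydrate it on load.
// A synchronous Map stands in for the asynchronous IndexedDB table;
// saveState/loadState and the slice keys are hypothetical names.
const storage = new Map();

// Persist only the slices worth caching (e.g. feed posts and the story tray).
function saveState(state, keys = ['feed', 'stories']) {
  for (const key of keys) {
    if (state[key] !== undefined) {
      storage.set(key, JSON.stringify(state[key]));
    }
  }
}

// Rehydrate the cached slices into the initial state on first load.
function loadState(keys = ['feed', 'stories']) {
  const state = {};
  for (const key of keys) {
    if (storage.has(key)) {
      state[key] = JSON.parse(storage.get(key));
    }
  }
  return state;
}
```

Serializing only a whitelist of slices keeps transient UI state (modals, in-flight requests) out of the cache.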

For example, if we handled caching naively, we might run into the following problem: we start loading from the cache and from the network at the same time, and since the cached feed is ready first, we display it to the user. The user then likes a post, but as soon as the network response with the latest feed arrives, it overwrites that post with a copy that doesn’t include the like the user applied to the cached copy, as shown below.

Race conditions when the user interacts with cached data (Redux actions in green, state in gray)

To solve this problem, we needed a way to apply interactions to the cached state while also recording them so they could later be replayed on top of the new state from the server. If you’ve used Git or a similar version control system, this problem may sound familiar: if we think of the cached feed state as a branch and the server feed response as master, we effectively want to rebase, applying the commits (likes, comments, etc.) from our local branch onto the head of master.

This brings us to the following design:

  • When the page loads, we send a request for the new data (or wait for it to be pushed).
  • We create a staged copy of the current Redux state and apply the cached data to it.
  • We render from the staged state while the new data is in flight.
  • Any actions the user dispatches in the meantime are applied to the staged state and queued.
  • When the new data arrives, we apply it to the original state, replay the queued actions on top of it, and replace the staged state with the result.
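In code, the rebase idea looks roughly like this — plain functions rather than the real Redux setup, with a single “like” interaction and illustrative names and data:

```javascript
// Sketch of the staged-rebase flow; names and data are illustrative.
// The only interaction modeled is liking a post by id.
function reducer(state, action) {
  if (action.type === 'LIKE_POST') {
    return {
      ...state,
      posts: state.posts.map((p) =>
        p.id === action.id ? { ...p, liked: true } : p
      ),
    };
  }
  return state;
}

const queued = []; // interactions dispatched while showing cached data

// Render from a staged copy seeded with the cached feed.
let staged = { posts: [{ id: 1, liked: false }] };

// The user likes a post: apply it to the staged state AND queue it.
const like = { type: 'LIKE_POST', id: 1 };
staged = reducer(staged, like);
queued.push(like);

// New data arrives: replay the queued interactions on top of it,
// so the like survives the refresh instead of being overwritten.
const fresh = { posts: [{ id: 1, liked: false }, { id: 2, liked: false }] };
const committed = queued.reduce(reducer, fresh);
```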

Fixing interactive race conditions with staging (Redux actions in green, state in gray)

With a staged state, all of the existing reducer behavior can be reused unchanged. It also keeps the canonical state (which contains the most recent data) separate from the staged state being rendered. And since staging is implemented with Redux, we just need to dispatch actions to use it!


The staging API consists of two main functions: stagingAction and stagingCommit (as well as a few other functions for handling undo and edge cases that we won’t cover here).

stagingAction accepts a promise that resolves to an action to be dispatched against the staged state. It initializes the staged state and keeps track of every action dispatched since initialization. In the source-control analogy, this is like creating a local branch: any actions dispatched from this point on are queued so they can later be re-applied on top of the new data when it arrives.

stagingCommit commits the staged state back to the main state. If any asynchronous actions are still pending on the staged state, it waits for them before committing. In source-control terms this is a rebase: we apply all of our local changes (from the cached branch) on top of master (the new data from the server), bringing our local branch up to date.
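A compressed sketch of these two entry points is below. The real stagingAction accepts a promise that resolves to an action; this sketch takes the action directly to stay synchronous, and all names (`createStaging`, `stagingInit`) are illustrative:

```javascript
// Sketch of the staging API around any reducer; names are illustrative.
function createStaging(reducer) {
  let staged = null; // the "local branch"
  let queue = [];    // actions to replay at commit time

  return {
    // Seed the staged state (e.g. from the IndexedDB cache).
    stagingInit(cachedState) {
      staged = cachedState;
      queue = [];
    },
    // Apply an action to the staged state and record it, like a
    // commit on a local branch. (The real API takes a promise.)
    stagingAction(action) {
      staged = reducer(staged, action);
      queue.push(action);
      return staged;
    },
    // Rebase: replay the recorded actions on top of the new server
    // state, and make the result the committed state.
    stagingCommit(serverState) {
      const committed = queue.reduce(reducer, serverState);
      staged = null;
      queue = [];
      return committed;
    },
  };
}
```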

To enable staging, we wrap the root reducer with a reducer enhancer that handles the stagingCommit action and replays the queued actions against the new state. With this in place, we just dispatch the relevant actions and everything is taken care of for us; for example, we can fetch a new feed and apply it to the staged state.
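A sketch of such a reducer enhancer, with illustrative action types (STAGING_INIT, STAGING_COMMIT) and no dependency on the real Redux package:

```javascript
// Sketch of a staging reducer enhancer, shaped like a Redux enhancer
// but standalone; action type names and the state shape are illustrative.
function withStaging(rootReducer) {
  return (state = { main: undefined, staging: undefined, queue: [] }, action) => {
    switch (action.type) {
      case 'STAGING_INIT':
        // Seed the staged state from the cache.
        return { ...state, staging: action.cachedState, queue: [] };
      case 'STAGING_COMMIT': {
        // Rebase: run the new server data through the root reducer,
        // then replay every queued action on top of it.
        let main = rootReducer(state.main, action.serverAction);
        for (const queued of state.queue) {
          main = rootReducer(main, queued);
        }
        return { main, staging: undefined, queue: [] };
      }
      default:
        if (state.staging !== undefined) {
          // While staging is active, interactions go to the staged
          // copy and are queued for replay at commit time.
          return {
            ...state,
            staging: rootReducer(state.staging, action),
            queue: [...state.queue, action],
          };
        }
        return { ...state, main: rootReducer(state.main, action) };
    }
  };
}
```

Because all of the existing reducer logic runs unmodified against whichever copy of the state is active, the rest of the app doesn’t need to know staging exists.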

Using cache-first rendering for feed posts and the stories tray improved their display times by 2.5% and 11% respectively, and brought the user experience closer to that of the native Instagram apps on iOS and Android.

Stay tuned for part 4

In Part 4, we’ll describe how we improved performance by reducing the size of our codebase and optimizing its execution. If you would like to learn more about this work or are interested in joining one of our engineering teams, please visit our careers page, or follow us on Facebook or Twitter.
