Making instagram.com faster: Code size and execution optimizations (Part 4) | by Glenn Conner

In the past few years, instagram.com has seen a lot of changes – we've launched stories, filters, creation tools, notifications, and direct messages, as well as a host of other features and improvements. As the product grew, a side effect was that our web performance slowed. Over the past year we made a conscious effort to improve this, and these ongoing efforts have resulted in a cumulative improvement of nearly 50% in our feed page load time. This series of blog posts describes some of the work that led to these improvements. In Part 1 we talked about prefetching data, in Part 2 we talked about improving performance by pushing data directly to the client rather than waiting for the client to request it, and in Part 3 we talked about cache-first rendering.

In Parts 1 through 3 we covered different ways to optimize the loading patterns of static resources and data queries on critical paths. However, there is another key area we haven't covered yet that is crucial to improving web application performance, particularly on low-end devices: send less code to the user. In particular, send less JavaScript.

This may seem obvious, but a few points are worth keeping in mind. A common belief in the industry is that the size of the JavaScript downloaded over the network is what matters (i.e., the size after compression). However, we have found that the pre-compression size is what really matters, because that is what has to be parsed and executed on the user's device, even when it is cached locally. This is especially true if you have a site with a large number of returning users (and thus high browser cache hit rates) or users who access your site on mobile devices. In these cases, the CPU cost of parsing and executing JavaScript becomes the limiting factor rather than network download time. For example, when we implemented Brotli compression for our JavaScript assets, we reduced the post-compression size over the wire by almost 20%, but saw NO statistically significant reduction in overall page load times as experienced by end users.

On the flip side, we've found that reducing the pre-compression size of JavaScript consistently yields performance improvements. It's also worth distinguishing between JavaScript that runs on the critical path and JavaScript that is dynamically imported after the main page has finished loading. While it would ideally be helpful to reduce the total amount of JavaScript in the application, in the short term the most important thing to optimize is the amount of eagerly executed JavaScript on the critical path (we track this with a metric we refer to as critical bytes per route). Lazily loaded, dynamically imported JavaScript generally has far less impact on page load performance, so moving invisible or interaction-dependent UI components out of the initial page bundles and into dynamically imported bundles is an effective strategy, as the sketch below shows.
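As a minimal sketch of that strategy (the component, file, and element names here are hypothetical, not Instagram's actual code), an interaction-dependent modal can be kept out of the critical-path bundle with a dynamic import():

```js
// Hypothetical example: defer loading an interaction-dependent
// component until the user actually triggers it.
let modalPromise = null;

function loadShareModal() {
  // The dynamic import() is only evaluated on first call, so the
  // modal's code is excluded from the initial page bundle and
  // fetched as a separate, lazily loaded bundle.
  if (modalPromise == null) {
    modalPromise = import('./ShareModal');
  }
  return modalPromise;
}

shareButton.addEventListener('click', async () => {
  const {showShareModal} = await loadShareModal();
  showShareModal();
});
```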

Redesigning our user interface to reduce the amount of script on the critical path will be critical to improving performance in the long term. However, that is a significant undertaking that will take time to complete. In the short term, we worked on a number of projects that improve the size and execution efficiency of our existing code in ways that are largely transparent to product developers and require little refactoring of existing product code.

We bundle our front-end web assets with Metro (the same bundler used by React Native), which gives us access to inline requires out of the box. Inline requires shift the cost of requiring/importing a module to the point where it is first used. This means you can avoid paying the execution cost for functions you never use (although you still pay the cost of downloading and parsing them), and you can better amortize the execution cost across the startup of the application rather than paying it all upfront.

Enabling inline requires in React Native & Instagram Web

To see how this works in practice, let’s take the following sample code:
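(The original post's snippets were images, so the following is an illustrative reconstruction rather than the exact code; "OtherModule" is a hypothetical dependency.)

```js
// Illustrative module: imports a dependency at the top and
// only actually uses it inside an exported function.
import OtherModule from 'OtherModule';

export function useOtherModule() {
  return OtherModule.doSomething();
}
```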

With inline requires enabled, this gets converted into the following (you can find these inline requires by searching for "r(d[" in the Instagram JS source in your browser's developer tools):
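(Again a sketch rather than Metro's literal output: the real transform uses Metro's numeric module IDs and helpers, which is why it appears as "r(d[" in the minified source.)

```js
// Roughly what the inline-requires transform produces: the
// top-level import is gone, and the require happens inline at
// the point of first use.
export function useOtherModule() {
  return require('OtherModule').default.doSomething();
}
```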

As we can see, it works by replacing local references to a required module with function calls that require that module. This means the module is never required (and therefore never executed) unless its code is actually used. In most cases this works very well, but there are a few edge cases that can cause problems, namely modules with side effects.
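For example, consider three modules along these lines (a reconstruction with illustrative names, using a shared global purely for demonstration):

```js
// A.js – performs a side effect at module scope when evaluated.
window.sharedState = {foo: 'bar'};
export const A = 'A';

// B.js – implicitly depends on A's side effect: it imports A but
// never uses the binding, and reads the shared state at module scope.
import {A} from './A';
export const stateSnapshot = window.sharedState;

// C.js
import {stateSnapshot} from './B';
console.log(stateSnapshot);
```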

Without inline requires, module C would print {'foo': 'bar'}, but with inline requires enabled it prints undefined, because B has an implicit dependency on A. This is a contrived example, but there are real-world cases where this matters, for example when a module performs logging as part of its initialization: with inline requires enabled, that logging may stop happening. This is largely avoidable with lint rules that look for code that executes immediately at module scope, although there were some files we had to blacklist from this transform, such as runtime polyfills that must execute immediately. After experimenting with enabling inline requires across the codebase, we saw our feed TTI (Time to Interactive) improve by 12% and Display Done improve by 9.8%, and decided that handling a few of these small edge cases was worth it for the performance improvements.

One of the main reasons for adopting compiler/transpiler tools like Babel was to let developers use modern JavaScript language features while still having their applications work in browsers that didn't support those features natively. Since then, a number of other important use cases for these tools have emerged, including compile-to-JS languages like TypeScript and ReasonML, language extensions like JSX and Flow type annotations, and build-time AST manipulation for things like internationalization. Because of this, it's unlikely that this extra compilation step will disappear from front-end development workflows any time soon. With that said, it's worth re-examining whether the original purpose of that step (cross-browser compatibility) is still necessary in 2019.

ES2015 and newer features like async/await are now well supported in the latest versions of most popular browsers, so shipping JavaScript with these newer features directly is definitely possible – but there were two important questions we had to answer first:

  • Would enough users be able to benefit to make the additional build complexity worthwhile (since we would still need to maintain the legacy transpiling step for older browsers)?
  • And what, if any, are the performance benefits of shipping ES2015+ features?

To answer the first question, we first had to determine which features we would ship without transpiling or polyfilling, and how many build variants we wanted to support across the various browsers. We settled on two builds: one that requires support for ES2017 syntax, and a legacy build that is transpiled back to ES5. We also added an optional polyfill bundle that is only included for legacy browsers lacking runtime support for modern DOM APIs. Detecting which group a browser falls into is done with basic user-agent sniffing on the server side, which ensures there is no runtime cost or extra round trip for client-side detection of which bundles to load. A rough sketch of this selection follows below.
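As a minimal sketch of how such server-side selection can work (Express-style code; the user-agent pattern and bundle names are assumptions for illustration, not Instagram's actual detection logic):

```js
const express = require('express');
const app = express();

// Very rough allowlist of browser versions with full ES2017
// support; real detection logic would be considerably more thorough.
const ES2017_UA = /Chrome\/(6[0-9]|[7-9]\d)|Firefox\/(5[5-9]|[6-9]\d)/;

app.get('/', (req, res) => {
  const ua = req.headers['user-agent'] || '';
  if (ES2017_UA.test(ua)) {
    // Modern build: ES2017 syntax, no runtime polyfills needed.
    res.render('index', {scripts: ['/static/app.es2017.js']});
  } else {
    // Legacy build: transpiled to ES5, plus a polyfill bundle.
    res.render('index', {
      scripts: ['/static/polyfills.js', '/static/app.es5.js'],
    });
  }
});
```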

With that settled, we ran the numbers and found that 56% of instagram.com users could be served the ES2017 build with no transpiling or runtime polyfills, and that percentage will only grow over time – supporting two builds seems well worth it, given how many users can take advantage of the modern one.

Percentage of Instagram users on browsers with ES2017 support vs. browsers without

As for the second question – what are the performance benefits of shipping ES2017 directly – let's first look at what Babel actually does to transpile some common constructs back to ES5. Below, each construct is shown in its ES2017 form, followed by an approximation of the transpiled, ES5-compatible version.

Class (ES2017 vs ES5)
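(The original post showed these as images; the snippets here and below are illustrative approximations of Babel's output rather than exact reproductions.)

```js
// ES2017
class Person {
  constructor(name) {
    this.name = name;
  }
  greet() {
    return `Hi, ${this.name}`;
  }
}

// Transpiled ES5 (approximation, with Babel's _classCallCheck
// helper inlined for readability)
function _classCallCheck(instance, Constructor) {
  if (!(instance instanceof Constructor)) {
    throw new TypeError('Cannot call a class as a function');
  }
}

var Person = (function () {
  function Person(name) {
    _classCallCheck(this, Person);
    this.name = name;
  }
  Person.prototype.greet = function greet() {
    return 'Hi, ' + this.name;
  };
  return Person;
})();
```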

Async / Await (ES2017 vs ES5)
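(A simplified approximation: Babel's real output routes async functions through generators and the regenerator-runtime helper, which is substantially heavier than the equivalent promise chain shown here.)

```js
// ES2017
async function loadUser(id) {
  const response = await fetch(`/users/${id}`);
  return response.json();
}

// Transpiled ES5, simplified to an equivalent promise chain
function loadUser(id) {
  return Promise.resolve()
    .then(function () {
      return fetch('/users/' + id);
    })
    .then(function (response) {
      return response.json();
    });
}
```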

Arrow functions (ES2017 vs ES5)
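(Illustrative; this matches the shape of Babel's output.)

```js
// ES2017 – arrow functions capture `this` lexically
const getName = () => this.name;
const double = (x) => x * 2;

// Transpiled ES5 – `this` has to be captured in a helper variable
var _this = this;

var getName = function getName() {
  return _this.name;
};
var double = function double(x) {
  return x * 2;
};
```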

Rest parameters (ES2017 vs ES5)
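(Close to Babel's actual output: the rest parameter becomes a loop that copies `arguments` into a real array.)

```js
// ES2017
function log(level, ...args) {
  console.log(level, args);
}

// Transpiled ES5
function log(level) {
  for (
    var _len = arguments.length,
      args = new Array(_len > 1 ? _len - 1 : 0),
      _key = 1;
    _key < _len;
    _key++
  ) {
    args[_key - 1] = arguments[_key];
  }
  console.log(level, args);
}
```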

Destructuring assignment (ES2017 vs ES5)
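(An approximation of Babel's output, assuming a hypothetical getDimensions() helper; note how one line of destructuring with a default value expands into several temporaries.)

```js
// ES2017
const {width, height = 100} = getDimensions();

// Transpiled ES5
var _getDimensions = getDimensions(),
  width = _getDimensions.width,
  _getDimensions$height = _getDimensions.height,
  height =
    _getDimensions$height === undefined ? 100 : _getDimensions$height;
```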

From this we can see that transpiling these constructs adds a considerable amount of overhead (even when you amortize the cost of some runtime helper functions across a large codebase). In Instagram's case, removing all ES2017 transpiling plugins from our build reduced the size of our core JavaScript bundles for end users by 5.7%. In testing, we found that end-to-end page load times for the feed page improved by 3% for users served the ES2017 bundle compared to users who were not.

While we've made a lot of progress, the work described here is only the beginning. There is still plenty of room for improvement in areas such as Redux store/reducer modularization, better code splitting, moving more JavaScript execution off the critical path, optimizing scroll performance, adapting to varying network conditions, and more.

If you would like to learn more about this work or would like to join one of our engineering teams, please visit our careers page, follow us on Facebook or Twitter.
