By Kim Trott
Simply put, performance matters. We know members want to immediately start browsing or watching their favorite content, and we have found that faster startup leads to more satisfying usage. So, when building the long-awaited update to netflix.com, the Website UI Engineering team made startup performance a first-tier priority.
This effort netted a 70% reduction in startup time and focused on three key areas: server and client rendering, Universal JavaScript, and reducing the JavaScript payload.
The netflix.com legacy website stack had a hard separation between server markup and client enhancement. This was primarily due to the different programming languages used in each part of our application. On the server, there was Java with Tomcat, Struts and Tiles. On the browser client, we enhanced server-generated markup with JavaScript, primarily via jQuery.
This separation led to undesirable results in our startup time. Every time a visitor came to any page on netflix.com, our Java tier would generate the majority of the response needed for the entire page's lifetime and deliver it as HTML markup. Often, users would be waiting for the generation of markup for large parts of the page they would never visit.
Our new architecture renders only a small amount of the page's markup, bootstrapping the client view. We can easily change the amount of the total view the server generates, making it easy to see the positive or negative impact. The server requires less data to deliver a response and spends less time converting data into DOM elements. Once the client JavaScript has taken over, it can retrieve all additional data for the remainder of the current and future views of a session on demand. The large wins here were the reduction of processing time in the server, and the consolidation of the rendering into one language.
We find the flexibility afforded by server and client rendering allows us to make intelligent choices of what to request and render in the server and the client, leading to a faster startup and a smoother transition between views.
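As a minimal sketch of the bootstrap idea (not our actual code — `renderBootstrap`, `/client.js`, and `__INITIAL_STATE__` are illustrative names), the server emits only a small shell plus the data the first view needs, and the client script takes over from there:

```javascript
// Server-side sketch: render only the markup for the first visible view,
// and hand the initial state to the client so it can render everything else.
function renderBootstrap(initialData) {
  return [
    '<!doctype html>',
    '<html><body>',
    // Only the first view is generated server-side.
    '<div id="app">' + initialData.firstViewHtml + '</div>',
    // Embed the initial state so client JavaScript can take over rendering.
    '<script>window.__INITIAL_STATE__ = ' +
      JSON.stringify(initialData.state) + ';</script>',
    '<script src="/client.js"></script>',
    '</body></html>',
  ].join('\n');
}

const page = renderBootstrap({
  firstViewHtml: '<h1>Browse</h1>',
  state: { rowsLoaded: 1 },
});
```

Because the shell is just a function of the initial data, it is cheap to adjust how much of the view the server generates and measure the impact, as described above.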
In order to support identical rendering on the client and server, we needed to rethink our rendering pipeline. Our previous architecture's separation between the generation of markup on the server and the enhancement of it on the client had to be dropped.
Three large pain points shaped our new Node.js architecture:
There are many solutions to this problem that don't require Universal JavaScript, but we found this lesson was most appropriate: When there are two copies of the same thing, it's fairly easy for one to be slightly different from the other. Using Universal JavaScript means the rendering logic is simply passed down to the client.
Node.js and React.js are natural fits for this style of application. With Node.js and React.js, we can render from the server and subsequently render changes entirely on the client after the initial markup and React.js components have been transmitted to the browser. This flexibility allows the application to render the exact same output independent of the location of the rendering. The hard separation is no longer present, and it's far less likely for the server and client to differ from one another.
Without shared rendering logic we couldn't have realized the potential of rendering only what was necessary on startup and everything else as data became available.
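A framework-free sketch of the universal-rendering idea (in the real stack, React.js plays this role): one render function produces the markup everywhere, so server and client output cannot drift apart.

```javascript
// The single shared rendering function. On the server it produces the
// response markup; on the client the same function re-renders the DOM.
function renderProfileCard(user) {
  return '<div class="profile"><span>' + user.name + '</span></div>';
}

// On the server (Node.js), the string goes straight into the HTTP response:
const serverHtml = renderProfileCard({ name: 'Kim' });

// On the client, the same function would update the live view, e.g.:
//   document.getElementById('app').innerHTML = renderProfileCard(user);
const clientHtml = renderProfileCard({ name: 'Kim' });
```

Because both environments call the same function, the two outputs are identical by construction — the property the legacy Java/jQuery split could not guarantee.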
Building rich interactive experiences on the web often translates into a large JavaScript payload for users. In our new architecture, we placed significant emphasis on pruning large dependencies we can knowingly replace with smaller modules and delivering JavaScript only applicable for the current visitor.
Many of the large dependencies we relied on in the legacy architecture didn't apply in the new one. We've replaced these dependencies in favor of newer, more efficient libraries. Replacing these libraries resulted in a much smaller JavaScript payload, meaning members need less JavaScript to start browsing. We know there is significant work remaining here, and we're actively working to trim our JavaScript payload down further.
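One way to deliver only the JavaScript applicable to the current visitor is route-level code splitting. A hypothetical sketch (the paths and view names are illustrative, not from the actual codebase):

```javascript
// Each view's bundle is loaded lazily the first time its route is
// visited, instead of shipping every view's code up front.
const viewLoaders = {
  '/browse': () => import('./views/browse.js'),
  '/account': () => import('./views/account.js'),
};

function loaderFor(path) {
  // Fall back to the default view's bundle for unknown routes.
  return viewLoaders[path] || viewLoaders['/browse'];
}
```

A visitor who never opens the account view never downloads its code, trimming the payload needed to start browsing.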
In order to test and understand the impact of our choices, we monitor a metric we call time to interactive (tti).
tti is the amount of time between the first known startup of the application platform and the point when the UI is interactive, regardless of view. Note that this does not require the UI to be done loading; it is the first point at which the customer can interact with the UI using an input device.
For applications running inside a web browser, this data is easily retrievable from the Navigation Timing API (where supported).
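A sketch of deriving a tti-style number from the Navigation Timing API. Using `domInteractive` as the "interactive" mark is an assumption here — the metric defined above is application-specific, not a browser-provided value:

```javascript
// navigationStart is the earliest timestamp the browser records for the
// navigation; domInteractive is when the document becomes responsive to
// input. Both are epoch milliseconds from the Level 1 performance.timing
// interface.
function timeToInteractive(timing) {
  return timing.domInteractive - timing.navigationStart;
}

// In a browser this would be: timeToInteractive(performance.timing).
// A fake timing object stands in here:
const tti = timeToInteractive({
  navigationStart: 1428000000000,
  domInteractive: 1428000001850,
});
```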
We firmly believe high performance is not an optional engineering goal – it's a requirement for creating great user experiences. We have made significant strides in startup performance, and are committed to challenging our industry's best practices in the pursuit of a better experience for our members.
Over the coming months we'll be investigating Service Workers, asm.js, WebAssembly, and other emerging web standards to see if we can leverage them for a more performant website experience. If you're interested in helping create and shape the next generation of performant web user experiences, apply here.
Consider a route handler stack like the following:

```
[a, b, c, c, c, c, d, e, f, g, h]
```

Requests for route c would terminate at the first occurrence of the c handler (position 2 in the array). However, requests for d would only terminate at position 6 in the array, having needlessly spent time spinning through a, b, and multiple instances of c. We verified this by running the following vanilla Express app:
```js
var express = require('express');
var app = express();

app.get('/foo', function (req, res) {
  res.send('hi');
});

// add a second foo route handler
app.get('/foo', function (req, res) {
  res.send('hi2');
});

console.log('stack', app._router.stack);
app.listen(3000);
```

Running this Express.js app logs these route handlers:
```
stack [ { keys: [], regexp: /^\/?(?=\/|$)/i, handle: [Function: query] },
  { keys: [], regexp: /^\/?(?=\/|$)/i, handle: [Function: expressInit] },
  { keys: [],
    regexp: /^\/foo\/?$/i,
    handle: [Function],
    route: { path: '/foo', stack: [Object], methods: [Object] } },
  { keys: [],
    regexp: /^\/foo\/?$/i,
    handle: [Function],
    route: { path: '/foo', stack: [Object], methods: [Object] } } ]
```

Notice there are two identical route handlers for /foo. It would have been nice for Express.js to throw an error whenever there's more than one route handler chain for a route.
```
[ ...
  { handle: [Function: serveStatic],
    name: 'serveStatic',
    params: undefined,
    path: undefined,
    keys: [],
    regexp: { /^\/?(?=\/|$)/i fast_slash: true },
    route: undefined },
  { handle: [Function: serveStatic],
    name: 'serveStatic',
    params: undefined,
    path: undefined,
    keys: [],
    regexp: { /^\/?(?=\/|$)/i fast_slash: true },
    route: undefined },
  { handle: [Function: serveStatic],
    name: 'serveStatic',
    params: undefined,
    path: undefined,
    keys: [],
    regexp: { /^\/?(?=\/|$)/i fast_slash: true },
    route: undefined },
  ... ]
```

Something was adding the same Express.js-provided static route handler 10 times an hour. Further benchmarking revealed that merely iterating through each of these handler instances cost about 1 ms of CPU time. This correlates to the latency problems we've seen, where our response latencies increase by 10 ms every hour.
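A stand-alone simulation (no Express involved — the layer objects and function names below are illustrative) of the accumulation described above: a duplicate layer pushed "every hour" makes dispatch for later routes walk a longer and longer stack before finding its handler.

```javascript
// A toy middleware stack: each entry mimics one Express router layer.
const stack = [];

function addStaticHandler() {
  // The leak: the same static-serving layer keeps getting appended.
  stack.push({ path: '/', name: 'serveStatic' });
}

function layersScanned(path) {
  // Count how many layers dispatch must inspect to reach the handler.
  for (let i = 0; i < stack.length; i++) {
    if (stack[i].path === path) return i + 1;
  }
  return stack.length;
}

// Simulate ten "hours" of the leak, then register the real route.
for (let hour = 0; hour < 10; hour++) addStaticHandler();
stack.push({ path: '/api', name: 'apiHandler' });
```

Every request to the later route now pays for scanning all ten duplicate layers first, which is the linear cost growth behind the 10 ms/hour latency creep.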
*Figure 2. MSL Networks*
*Figure 3. MSL Communication w/Application Data*
*Figure 4. MSL Handshake followed by Application Data*