Break Stuff Until It Works — Optimizing performance in NextJS
Optimizing for performance on the web can sometimes feel like a dark art. There are so many things working in tandem in modern web development, so where do you even start?
I recently went to production with a site and got a horrible wake-up call: even using the latest and greatest technology, performance was abysmal. This article is a breakdown of how we tackled the issue, and how you can too!
Performance optimization isn't easy, but these steps will get you much closer to a fast solution.
Establishing a baseline performance metric
The first step when optimizing performance is establishing a baseline measurement. If the app is slow, how slow is it? I recommend Chrome's Lighthouse tool: it is open source and developed by a team of brilliant engineers at Google.
There are a lot of tools out there, but the important thing is that your tool is easy to run and can be run locally. Once you've decided on one, establish the baseline.
Maybe your baseline measurement looks something like this; clearly there is some way to go.
Note: An important thing to realize about a tool like Lighthouse is that it is very dependent on external factors: your computer, what it's doing in the background, and which browser extensions you have installed. To combat this, I recommend taking your measurements in an incognito window, doing a couple of runs in a row, and averaging the results.
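Averaging the runs can be a tiny script. A minimal sketch in plain JavaScript, assuming you have a few Lighthouse JSON reports on hand (Lighthouse stores category scores in a 0–1 range):

```javascript
// Average the performance score across several Lighthouse JSON reports.
// The objects below mimic the shape of Lighthouse's JSON output, where
// categories.performance.score is a value between 0 and 1.
function averagePerformance(reports) {
  const total = reports.reduce(
    (sum, report) => sum + report.categories.performance.score,
    0
  );
  return Math.round((total / reports.length) * 100); // 0–100, like the UI
}

const runs = [
  { categories: { performance: { score: 0.31 } } },
  { categories: { performance: { score: 0.35 } } },
  { categories: { performance: { score: 0.33 } } },
];
console.log(averagePerformance(runs)); // → 33
```

Three runs is usually enough to smooth out the noise from background processes.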
The Strategy
So now that you have a way of measuring your miserable performance, where do you go from here? My strategy is a method of "Breaking Stuff Until It Works," but before we get to that, let's take a quick look at the lifecycle of a NextJS page load.
Understanding the NextJS lifecycle
Let's look at an example app; most apps will look something like this. When rendering, the lifecycle goes like this:
- _document.tsx — controls the <head> and <body> of all pages
- getInitialProps in _app.tsx — can be used to fetch data (but is not recommended)
- _app.tsx — commonly used to render components that need to be on every page, like a header, menu, or footer
- getServerSideProps — a function executed server-side to fetch data for SomePage.tsx
- SomePage.tsx — the page the user lands on, containing the content layout
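Step four is where most per-page data fetching lives. A minimal sketch of its shape, where fetchPageData is a hypothetical stand-in for a real API call:

```javascript
// Hypothetical stand-in for a real API call made on the server.
async function fetchPageData() {
  return { title: "Hello from the server" }; // stubbed for illustration
}

// In a real page this function is exported from SomePage.tsx; it runs on
// the server for every request, and its { props } are handed to the page.
async function getServerSideProps() {
  const data = await fetchPageData();
  return { props: data };
}
```

Everything this function does blocks the response, so slow calls here show up directly in your Lighthouse numbers.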
Time to Break Stuff
In order to solve any problem in computer science, the first step is often to simplify it. With this in mind, let's remove a bunch of code to see where things slow down. Comment out everything marked in red.
That would leave your _app.tsx looking something like this:

function MyApp({ Component, pageProps }) {
  return <></>
}
What's your score now?
Much, much better. This establishes one thing: we now know that NextJS can perform if it doesn't do anything. We kind of knew that, so let's try reimplementing more parts of the site, say the Header and Footer components:
function MyApp({ Component, pageProps }) {
  return <><Header />{/* <Component {...pageProps} /> */}<Footer /></>
}
What's the score now? Back to 38 and a bit? Now you know there's something going on in either the header or the footer. Continue this divide-and-conquer strategy until you figure out exactly where the culprit is. React and NextJS are incredibly fast; normal code won't make a huge dent in performance. But a few unintentional mistakes will, like re-rendering a huge component several times or importing a huge library in the header.
To further reinforce your theory that either Header or Footer is the culprit, you could enable <Component> without the header and footer; if the score is still good, you've confirmed where the issue is.
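This "break stuff" loop is essentially a binary search over your component tree. The same idea as a sketch, where the component names and the isSlow check are purely hypothetical (in practice, isSlow is you re-running Lighthouse):

```javascript
// Binary search for the single slow part in a list of suspects.
// isSlow(enabled) answers: "is the page slow with only these parts enabled?"
function findCulprit(suspects, isSlow) {
  let candidates = suspects;
  while (candidates.length > 1) {
    const firstHalf = candidates.slice(0, Math.ceil(candidates.length / 2));
    // If the page is slow with only the first half enabled, the culprit is
    // in that half; otherwise it must be among the rest.
    candidates = isSlow(firstHalf) ? firstHalf : candidates.slice(firstHalf.length);
  }
  return candidates[0];
}

// Hypothetical example where "Footer" is the slow component:
const culprit = findCulprit(
  ["Header", "Menu", "Footer", "Content"],
  (enabled) => enabled.includes("Footer")
);
console.log(culprit); // → "Footer"
```

With a dozen suspect components you only need three or four measurements to pin down the slow one.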
Tooling
Now that you've successfully identified where your code gets slow, let's get the tools to see what's going on.
I highly recommend installing and using the following:
Import Cost (for VS Code)
This plugin is an essential tool: it shows how much each external import adds to the final bundle size. It can be a brilliant indicator for imports that are not tree-shakeable, and it lets you ask the question: is this library worth its size?
Alternatively, use Bundlephobia if you're not using VS Code.
Webpack Bundle Analyzer
Will give you a complete map of what's included in the app after webpack runs, which can give some great insight into library costs and whether libraries are tree-shakeable.
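For a NextJS app, the analyzer is typically wired up through the @next/bundle-analyzer package in next.config.js. A sketch, assuming the package is installed:

```javascript
// next.config.js — wraps the Next config so that running the build with
// ANALYZE=true generates and opens the bundle treemap in the browser.
const withBundleAnalyzer = require('@next/bundle-analyzer')({
  enabled: process.env.ANALYZE === 'true',
})

module.exports = withBundleAnalyzer({
  // ...your existing Next config goes here
})
```

Then run the analysis with `ANALYZE=true npm run build`.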
React dev tools
React DevTools is React's own tool for logging and graphing rendering cycles in your app. It comes both as a Chrome extension and as a separate npm package for those who aren't using Chrome.
While it can be somewhat hard to read, with practice it gives amazing insight into the React lifecycle of your components and can easily surface clues that would otherwise be completely hidden.
Flavio Copes has an excellent article on how to use it.
npm run build
The most precise depiction of your script size going up or down is always going to be the stats provided by NextJS's own build script.
Now, the app depicted above is incredibly small; NextJS color-grades the numbers according to what it thinks is a good target. But don't sweat it too much if you're in the low 200 kBs; the site can still run well.
Common pitfalls
Once you've run all the tools, you should have a better idea of why a particular component is making everything slow. If not, here are a couple of common performance pitfalls you should keep an eye on.
Rerenders
Everyone who has ever worked with React knows that unintentional rerenders can be a huge performance hog. Detecting them can be done with React DevTools; avoiding them, on the other hand, is much harder.
There are a number of ways to do this, and I recommend looking into DebugBear's article on the topic.
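A very common source of these rerenders is prop identity: object, array, and function literals are new references on every render, so a shallow comparison (which React.memo and dependency arrays rely on) sees them as changed. A quick plain-JavaScript illustration:

```javascript
// Two renders producing "the same" style prop as fresh object literals.
const firstRender = { style: { color: "red" } };
const secondRender = { style: { color: "red" } };

// Shallow comparison checks references, not contents...
console.log(firstRender.style === secondRender.style); // → false
// ...so a memoized child receiving this prop rerenders anyway.
```

Hoisting constant objects out of the component, or wrapping them in useMemo, keeps the reference stable between renders.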
Relying on window while rendering
This might be obvious to people well versed in SSR, but relying on anything in window will result in things being rerendered client-side, often resulting in wasted time and a bump in Cumulative Layout Shift (CLS) as the right components are rendered.
This isn't inherently bad, but anything that can be rendered directly from the server, instead of waiting for the first React cycle, is going to be a plus for performance.
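The usual defensive pattern is to check for window before touching it and provide a server-side fallback. A sketch, where the 1024 default is an arbitrary assumption:

```javascript
// During SSR there is no window object, so guard before reading from it.
const isBrowser = typeof window !== "undefined";

// Fall back to a sensible default on the server so the markup rendered
// there is close to what the client will render, minimizing layout shift.
const viewportWidth = isBrowser ? window.innerWidth : 1024;

console.log(isBrowser); // false on the server, true in the browser
```

The fewer places your render output depends on window, the less work the client has to redo after hydration.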
Using too large images
Lighthouse's mobile internet speed is throttled to 1.6 Mbps down / 750 Kbps up according to its documentation, which means image sizes matter a lot, especially for Largest Contentful Paint (LCP). Loading images fast is a science in itself: using srcSet and automatically generating images at the right sizes can all help, but that's a whole article on its own. The great news is that the Next team has already thought of this and done the hard work of creating a brilliant library to handle image loading.
next/image is a relatively new addition to the Next ecosystem, but it aims to solve everything from sizing and caching to loading images on the fly, and it is a must-have if your goal is a high-performing website.
Most of the time the change is a simple one-liner:
import Image from 'next/image'

const MyOldImage = (props) => {
  return (
    <img
      src="me.png"
      alt="Picture of the author"
      width={500}
      height={500}
    />
  )
}

const MyImage = (props) => {
  return (
    <Image
      src="me.png"
      alt="Picture of the author"
      width={500}
      height={500}
    />
  )
}
Blocking rendering with external scripts
By default, in order to render a page, the browser will block rendering until all scripts are loaded. But what if they are not strictly needed? External analytics and cookie-consent scripts are brilliant examples of scripts that can wait until the user has loaded parts of the website. Fortunately for us, Next has made it fairly easy to defer loading those with next/script.
import Head from 'next/head'

export default function OldHome() {
  return (
    <>
      <Head>
        <script src="https://www.analytics.com/analytics.js" />
      </Head>
    </>
  )
}

import Script from 'next/script'

export default function NewHome() {
  return (
    <Script
      src="https://www.analytics.com/analytics.js"
      strategy="lazyOnload"
    />
  )
}
This will defer loading the analytics script until the browser is idle.
Fetching way too much data
As mentioned earlier, when measuring Lighthouse scores for mobile, the connection is heavily throttled, so cutting down on payloads from endpoints you need to process will help your mobile score significantly. Analyze which parts of the response you actually need and remove everything else. This can be done with a Backend For Frontend (BFF) that combines calls and shaves off anything you don't need, or even more easily by using GraphQL, which allows the frontend to specify exactly what data it wants.
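In a BFF that trimming is often just a mapping step: pick the handful of fields the page actually renders and drop the rest before the response goes over the wire. A sketch with a hypothetical product payload:

```javascript
// Hypothetical raw API response: far more data than the page needs.
const apiProduct = {
  id: 42,
  name: "Sneaker",
  price: 79,
  internalSku: "SNK-0042-EU",
  warehouseNotes: "restock pending",
  auditLog: [], // potentially hundreds of entries in a real response
};

// BFF mapping step: forward only what the frontend renders.
function toProductDto({ id, name, price }) {
  return { id, name, price };
}

console.log(toProductDto(apiProduct)); // { id: 42, name: 'Sneaker', price: 79 }
```

On a throttled mobile connection, shaving kilobytes off every API response adds up quickly.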
Not using code splitting actively
An application will often have a number of scenarios that won't happen without user input. So why not load those separately? It's fairly simple:
import dynamic from 'next/dynamic'

const DynamicComponent = dynamic(() => import('../components/hello'))

export default function OldHome() {
  return (
    <>
      <DynamicComponent />
    </>
  )
}
This will result in a separate JS chunk being built for DynamicComponent, excluding it from the initially loaded script. You can read a lot more about it here.
Milica Mihajlija has written an excellent in-depth article on NextJS's dynamic importing.
Things that are sometimes recommended but usually aren't worth your time
CSS-in-JS libraries will destroy performance
I hear it thrown around in a lot of forums and posts that CSS-in-JS libraries such as styled-components or Emotion will dramatically slow down performance. While they might have some impact, they definitely won't slow your page significantly. My own site scores 95–100 with a pure CSS-in-JS approach to styling, so don't worry about rewriting all your CSS before you've tried everything else.
Avoid Micro Optimizations Until Later
They're often not the real culprit. Focus on the bigger picture before rewriting things like unnecessary inline functions.
React.memo
When dealing with unwanted rerenders, I often see the suggestion "just use React.memo," and while it's definitely a tool in your toolkit, it often doesn't fix what you hope for if used blindly. Often the rerenders happen because something you didn't think about actually changed during load.
Dmitri Pavlutin has written an excellent piece on when and when not to use React.memo.
Obsessing over the information Lighthouse gives you
While Lighthouse is an excellent tool, it gives you a lot of information that might send you down the wrong path. Take a look at this website scoring a clean 100 in Lighthouse.
These suggestions are definitely worth a look, but realize that not every one of them is important for performance.
Further reading
https://nextjs.org/docs/going-to-production
Is a great place to start; they have a number of simple tips to get you going.
https://leerob.io/
Is the site of Lee Robinson, Head of DevRel at Vercel (the company behind NextJS), who does an incredible job explaining some of the more complex aspects of NextJS architecture and structure.
https://medium.com/ne-digital/how-to-reduce-next-js-bundle-size-68f7ac70c375
A very well-written article by Arijit Mondal on how they reduced their bundle size by 26.5%.
Conclusion
While performance optimization can feel daunting, it can be made simple with the right strategy and thought process. I hope you liked the article, and tweet me the results once you're done optimizing ❤
👋 Hey! I'm Thomas Kjær-Rasmussen. I head a team of awesome developers making websites for BankData, and sometimes I write about it too. Hit me up on Twitter or below if you have any questions, comments, or anything in between!