Innovation Can’t Keep the Web Fast

Published: January 31, 2020

Every so often, innovation bears fruit in the form of improvements to the foundational layers of the web. In 2015, HTTP/2 became a published standard in an effort to update an aging protocol. This was both necessary and overdue, as HTTP/1’s limitations had turned web performance into an arcane sort of discipline built on strange workarounds. Though HTTP/2 proliferation isn’t absolute — and there are kinks yet to be worked out — I don’t think it’s a stretch to say the web is better off because of it.

Unfortunately, the rollout of HTTP/2 has presided over a 102% increase in the median number of bytes transferred over mobile in the last four years. If we look at the 90th percentile of that same dataset — because it’s really the long tail of performance we need to optimize for — we see an increase of 239%. From 2016 (PDF warning) to 2019, the average mobile download speed in the U.S. increased by 73%. In Brazil and India, average mobile download speeds increased by 75% and 28%, respectively, over that same period.

While page weight alone doesn’t necessarily tell the whole story of the user experience, it is, at the very least, a loosely related phenomenon which threatens the collective user experience. The story that HTTP Archive tells through data acquired from the Chrome User Experience Report (CrUX) can be interpreted a number of different ways, but this one fact is steadfast and unrelenting: most metrics gleaned from CrUX over the last couple of years show little, if any, improvement despite various advances in browsers, the HTTP protocol, and the network itself.

Given these trends, all that can be said of the impact of these improvements at this point is that they have helped to stem the tide of our excesses, but have done precious little to reduce them. Despite every significant improvement to the underpinnings of the web and the networks we access it through, we continue to build for it in ways that suggest we’re content with the never-ending Jevons paradox in which we toil.

If we’re to make progress in making a faster web for everyone, we must recognize some of the impediments to that goal:

  1. The relentless desire to monetize every square inch of the web, as well as the army of third-party vendors which fuels the research mandated by such fevered efforts.
  2. Workplace cultures that favor unrestrained feature-driven development. This practice adds to — but rarely takes away from — what we cram down the wire to users.
  3. Developer conveniences that make the job of the developer easier, but can place an increasing cost on the client.

Counterintuitively, owners of mature codebases which embody some or all of these traits continue to take the same unsustainable path to profitability they always have. They do this at their own peril, rather than acknowledging the repeatedly established fact that performance-first development practices will do as much — or more — for their bottom line and the user experience.

It’s with this understanding that I’ve come to accept that our current approach to remedy poor performance largely consists of engineering techniques that stem from the ill effects of our business, product management, and engineering practices. We’re good at applying tourniquets, but not so good at sewing up deep wounds.

It’s becoming increasingly clear that web performance isn’t solely an engineering problem, but a problem of people. This is an unappealing assessment in part because technical solutions are comparably inarguable. Content compression works. Minification works. Tree shaking works. Code splitting works. They’re undeniably effective solutions to what may seem like entirely technical problems.
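To ground one of those techniques, here is a minimal sketch of code splitting with a dynamic import(). The module path and element IDs are hypothetical; the point is simply that bundlers which understand dynamic import() (webpack, Rollup, and others) emit the imported module as its own chunk, fetched only when it’s actually needed.

```ts
// Hypothetical example: a heavy charting module is fetched only when the
// user asks for it, instead of shipping in the initial bundle.
const button = document.querySelector<HTMLButtonElement>("#show-chart");

button?.addEventListener("click", async () => {
  // Bundlers that support dynamic import() emit "./chart-widget" as a
  // separate chunk, so first-time visitors never download it up front.
  const { renderChart } = await import("./chart-widget");
  renderChart(document.querySelector("#chart-container"));
});
```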

The intersection of web performance and people, on the other hand, is messy and inconvenient. Unlike a technical solution as clearly beneficial as HTTP/2, how do we qualify what successful performance cultures look like? How do we qualify successful approaches to get there? I don’t know exactly what that looks like, but I believe a good template is the following marriage of cultural and engineering tenets:

  1. An organization can’t be successful in prioritizing performance if it can’t secure the support of its leaders. Without that crucial element, it becomes extremely difficult for organizations to create a culture in which performance is the primary feature of their product.
  2. Even with leadership support, performance can’t be effectively prioritized if the telemetry isn’t in place to measure it. Without measurement, it becomes impossible to explain how product development affects performance. If you don’t have the numbers, no one will care about performance until it becomes an apparent crisis.
  3. When you have the support of leadership to make performance a priority and the telemetry in place to measure it, you still can’t get there unless your entire organization understands web performance. This is the time at which you develop and roll out training, documentation, best practices, and standards the organization can embrace. In some ways, this is the space in which organizations have already spent a lot of time, but the challenging work is in establishing feedback loops to assess how well they understand and have applied that knowledge.
  4. When all of the other pieces are finally in place, you can start to create accountability in the organization around performance. Accountability doesn’t come in the form of reprisals when your telemetry tells you performance has suffered over time, but rather in the form of guard rails put in place in the deployment process to alert you when thresholds have been crossed (one possible shape of such a guard rail is sketched after this list).
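As an illustration of that last point, here is a minimal sketch of a build-time guard rail using webpack’s built-in performance hints. The byte thresholds are illustrative assumptions, not recommendations, and other tools (bundle-size checks, Lighthouse CI assertions, and so on) can serve the same role.

```ts
// webpack.config.ts — a sketch of a deployment guard rail, assuming webpack 5.
// The thresholds below are illustrative; use numbers your team has agreed on.
import type { Configuration } from "webpack";

const config: Configuration = {
  performance: {
    hints: "error",                // fail the build rather than merely warn
    maxEntrypointSize: 170 * 1024, // any entry point larger than ~170 KB fails
    maxAssetSize: 250 * 1024,      // as does any individual asset over ~250 KB
  },
};

export default config;
```

Run as part of CI, a check like this turns a size regression into a failed build long before your field telemetry has to tell you that users are suffering.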

Now comes the kicker: even if all of these things come together in your workplace, good outcomes aren’t guaranteed. Barring some regulation that forces us to address the poorly performing websites in our charge — akin to how the ADA keeps us on our toes with regard to accessibility — it’s going to take continuing evangelism and pressure to ensure performance remains a priority. Like so much of the work we do on the web, the work of maintaining a good user experience in evolving codebases is never done. I hope 2020 is the year that we meaningfully recognize that performance is about people, and adapt accordingly.

As technological innovations such as HTTP/3 and 5G emerge, we must take care not to rest on our laurels and simply assume they will heal our ills once and for all. If we do, we’ll certainly be having this discussion again when the successors to those technologies loom. Innovation alone can’t keep the web fast because making the web fast — and keeping it that way — is the hard work we can only accomplish by working together.

The post Innovation Can’t Keep the Web Fast appeared first on CSS-Tricks.
