I just read a great post by Guy Podjarny on SPDY titled “Not as SPDY as You Thought,” which gives you an idea of what he’s discussing. He outlines his test design, comparing SPDY versus HTTPS and HTTP, and gets right to the surprise result early on:
In fact, my tests showed SPDY is only marginally faster than HTTPS and is slower than HTTP. Why? Simply put, SPDY makes HTTP better, but for most websites, HTTP is not the bottleneck.
Podjarny goes on to talk about why his real-world test leads to such poor results for SPDY. His top two reasons are:
- Web pages use many different domains, and SPDY works per domain. This means SPDY can’t reduce connections or multiplex requests across the different domains (with some exceptions), and its value gets diminished.
- Web pages have other bottlenecks, which SPDY does not address. For example, SPDY doesn’t prevent scripts from blocking downloads of other resources, nor does it make CSS not block rendering. SPDY is better than HTTP, but for most pages, HTTP is not the bottleneck.
It’s a great read and provides a good concrete example of what we at Mobolize have been saying forever: fiddling with protocols to optimize data delivery is all well and good, but the best way to reduce traffic and speed page loads is to do so from local cache. And no one ever seems to think about that, or try to implement it (aside from the CDN and WOC companies, and their caches still have issues).
I guess it’s because we’ve had browser caches pretty much since we’ve had browsers, and those caches have been completely lame from day one, so maybe people assume that caching is inherently flawed. I don’t know, but it’s kind of amazing how many people work on really hard things, like whole new protocols such as SPDY, to try to eke out modest performance gains, when the answer is so easy and can deliver much, much better results.
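The caching argument can be sketched with a toy model: a resource served from local cache, after an ETag revalidation answered with 304 Not Modified, moves no body bytes at all, regardless of which protocol carries the request. The `Origin` and `LocalCache` classes below are hypothetical stand-ins for a web server and a browser cache, not anyone’s real implementation:

```python
class Origin:
    """Pretend web server supporting ETag-based conditional requests."""

    def __init__(self, body):
        self.body = body
        self.etag = '"v1"'
        self.bytes_sent = 0  # track how much body data crosses the "network"

    def get(self, if_none_match=None):
        if if_none_match == self.etag:
            return 304, self.etag, b""  # revalidation: headers only, no body
        self.bytes_sent += len(self.body)
        return 200, self.etag, self.body


class LocalCache:
    """Pretend browser cache: stores bodies, revalidates with If-None-Match."""

    def __init__(self, origin):
        self.origin = origin
        self.store = {}  # key -> (etag, body)

    def fetch(self, key="/app.js"):
        cached = self.store.get(key)
        status, etag, body = self.origin.get(
            if_none_match=cached[0] if cached else None)
        if status == 304:
            return cached[1]  # serve the local copy; nothing re-downloaded
        self.store[key] = (etag, body)
        return body


origin = Origin(b"x" * 100_000)  # a 100 KB resource
cache = LocalCache(origin)
first = cache.fetch()            # cold cache: full download
second = cache.fetch()           # warm cache: 304, served locally
assert first == second
print(origin.bytes_sent)         # only the first fetch moved the body
```

However clever the multiplexing, a request you never put on the wire is faster than one you do; that’s the asymmetry protocol work alone can’t close.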