How to make secure sites fast

Would you like your website to be fast?

Would you like your website to be secure?

Or, to put it another way, would you like to have your cake and eat it?

Assuming your answer to each of these questions is yes, read on.

There are plenty of reasons why you might want to use a secure connection to deliver your online content these days. The Internet is a dangerous place. There have been a number of recent high-profile security breaches. Customer data has been stolen. And while a website that uses TLS offers no guarantee of security, it does add another layer of protection.

From the customer’s point of view, there is nothing like the warm, comforting feeling offered by that reassuring green padlock in the browser address bar.

It’s not just http. It’s https. It’s protected. It’s secure. It’s safe.

Here, as in so many other areas, Google has led the way in making the web more secure. For one thing, if your website uses TLS, it’s likely to get a higher search ranking. Google was also instrumental in the development of HTTP/2, a new protocol for communicating over the web that’s largely based on its own experimental protocol, SPDY. SPDY was secure by default and, for practical purposes, it looks as though HTTP/2 will be too. It’s worth mentioning that this is due to browser implementations rather than what’s in the specification, but the effect is the same.

It seems we’re edging ever closer to an https-only world. Whether this is a good or a bad thing is debatable. Some view the requirement for TLS as an unnecessary overhead if there’s no personal data involved. But it’s looking increasingly like an inevitable reality.

The need for speed

However, there is a problem with TLS. As things stand, there is no getting away from the fact that simply establishing a secure connection takes time. And you can’t start sending anything useful to the browser until it’s happened. Other things being equal, an https site will be slower than an http site.

The diagrams below illustrate the problem. A TLS handshake includes more steps than a standard TCP handshake. Only once these steps are complete can you start sending meaningful content.
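If you want to put a number on that cost, here’s a minimal sketch in Go (the URL is just a placeholder) that uses the standard net/http/httptrace hooks to time the TCP connect and the TLS handshake separately for a single request:

package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"net/http/httptrace"
	"time"
)

func main() {
	var connectStart, tlsStart time.Time

	trace := &httptrace.ClientTrace{
		// Time the plain TCP connection...
		ConnectStart: func(network, addr string) { connectStart = time.Now() },
		ConnectDone: func(network, addr string, err error) {
			fmt.Println("TCP connect:  ", time.Since(connectStart))
		},
		// ...and the TLS handshake that follows it.
		TLSHandshakeStart: func() { tlsStart = time.Now() },
		TLSHandshakeDone: func(cs tls.ConnectionState, err error) {
			fmt.Println("TLS handshake:", time.Since(tlsStart))
		},
	}

	req, _ := http.NewRequest("GET", "https://www.example.com/", nil) // placeholder URL
	req = req.WithContext(httptrace.WithClientTrace(req.Context(), trace))

	resp, err := http.DefaultTransport.RoundTrip(req)
	if err != nil {
		panic(err)
	}
	resp.Body.Close()
}

On a high-latency connection, expect the handshake to take at least as long as the TCP connect that precedes it, simply because it needs more round-trips.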

And while online consumers care about security, they also care about performance.

They’re not the only ones.

We’ve mentioned that Google uses TLS as a ranking signal. Well, guess what? It also uses load time. To site owners, it can look like Google is giving with one hand and taking away with the other.

[Diagram: Establishing a TCP connection]

[Diagram: Establishing a connection over TLS]

Mobile latency

It gets worse.

You might have noticed that we’re doing an awful lot of our web browsing on mobile devices these days. This is good news for many retailers, who are seeing a surge in sales via mobile. However, the bad news is that latency is generally higher on mobile networks. Each round-trip between client and server typically takes longer, so the extra round-trips needed to set up a secure connection are even more costly on a mobile phone.

So there’s a silver lining to this cloud, right?

There certainly is. Unfortunately, though, there’s no magic solution. A silver lining, yes – a silver bullet, no.

However, there are a number of things that can help to minimise the impact of the delay caused by setting up a secure connection.

Minimising the length of the certificate chain

Browsers only trust a limited number of certificates by default. When someone makes a secure connection to your site, the browser will look at the certificate that signed your certificate. Is it on that trusted list? If not, it will look up the chain, to the certificate that signed that certificate. It will keep going until it finds a certificate it trusts.

It’s therefore important to make sure that the server includes any intermediate certificates when establishing a connection. If it doesn’t, the browser has to go and fetch each missing intermediate certificate separately before it can verify the chain, adding extra round-trips. This is bad news, especially if you have a long certificate chain.

A shorter certificate chain is also preferable simply because there’s less data to send, making it more likely that the whole chain will fit into a single round-trip.

Certificates with fewer intermediaries are generally more expensive, but if you can afford it, this is one way to mitigate the impact of https on performance.
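One quick way to check what your server actually sends is to connect and print the chain it presents. Here’s a minimal sketch in Go (the hostname is a placeholder) that lists the certificates received during the handshake, so you can confirm the intermediates are there and see how long the chain is:

package main

import (
	"crypto/tls"
	"fmt"
)

func main() {
	// Connect to the placeholder host and complete a TLS handshake.
	conn, err := tls.Dial("tcp", "www.example.com:443", nil)
	if err != nil {
		panic(err)
	}
	defer conn.Close()

	// PeerCertificates holds the chain exactly as the server sent it:
	// the site certificate first, followed by any intermediates.
	for i, cert := range conn.ConnectionState().PeerCertificates {
		fmt.Printf("%d: subject=%q issuer=%q\n", i, cert.Subject.CommonName, cert.Issuer.CommonName)
	}
}

If the only certificate listed is your own, the intermediates are missing and browsers will be left to fetch them for themselves.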

TLS session resumption

There are two forms of session resumption – session ID resumption and session ticket resumption. Both cut a round-trip from TLS negotiation when a client makes multiple secure connections to the same host. Since browsers typically open up to six connections per hostname, this is very useful.

In session ID resumption, the client keeps a cached session ID from the first TLS connection, which it sends to the host when it needs to resume that connection. The server then checks this ID and finds the appropriate session key.

Session ticket resumption works in a similar way but is designed to get around the fact that session ID resumption requires the server to store a lot of data. In session ticket resumption, the client stores an encrypted session key, which it sends to the host to decrypt when it needs to re-establish a previous secure connection.
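To see whether resumption is actually happening against your own server, here’s a sketch in Go (the hostname is a placeholder) that makes two connections sharing a client-side session cache and reports whether the second handshake was resumed:

package main

import (
	"crypto/tls"
	"fmt"
)

func main() {
	// A shared session cache lets the second connection reuse the first one's session.
	cfg := &tls.Config{ClientSessionCache: tls.NewLRUClientSessionCache(32)}

	for i := 1; i <= 2; i++ {
		conn, err := tls.Dial("tcp", "www.example.com:443", cfg) // placeholder host
		if err != nil {
			panic(err)
		}
		// DidResume is true when the handshake was an abbreviated (resumed) one.
		fmt.Printf("connection %d resumed: %v\n", i, conn.ConnectionState().DidResume)
		conn.Close()
	}
}

If the second connection reports false, the server isn’t offering session IDs or session tickets, and every visitor is paying for a full handshake on every connection.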

You can read more about session resumption, including some of the security concerns about session ticket resumption, here.

It’s worth saying that while session resumption is useful now, it will be less so in an HTTP/2 world (more on this later). This is because HTTP/2 loads multiple resources in parallel from one domain over a single connection rather than downloading resources from the same domain over multiple connections.

OCSP stapling

Sometimes, TLS certificates are revoked – for some reason or another, they can no longer be trusted.

How does the browser know when this happens?

In the past, it would, from time to time, obtain certificate revocation lists (CRLs) from Certificate Authorities (CAs). However, as these lists grew, this put more and more strain on the browser and the network.

OCSP (Online Certificate Status Protocol) was designed to get around the problem. Each time the browser tried to connect to a server over TLS, it would check with the CA that the certificate hadn’t been revoked.

The problem was that this added another round-trip whenever the browser connected to a secure website. It wasn’t even that reliable. A person trying to visit a secure website could effectively be blocked if the browser ran into problems connecting to the CA’s servers (a real possibility given the huge amount of traffic). Because of this, browsers simply opted to bypass this step if they couldn’t connect to the CA.

OCSP stapling represents the latest attempt to get the process right: the responsibility for obtaining proof that the certificate is still valid passes from the client to the server, saving a round trip. The server periodically obtains a signed, time-stamped response from the CA, which it returns to the client in response to a certificate status request during the TLS handshake.
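You can check whether a server is stapling from the client side. The sketch below, again in Go with a placeholder hostname, inspects the connection state after the handshake and reports whether a stapled OCSP response was included:

package main

import (
	"crypto/tls"
	"fmt"
)

func main() {
	conn, err := tls.Dial("tcp", "www.example.com:443", nil) // placeholder host
	if err != nil {
		panic(err)
	}
	defer conn.Close()

	// OCSPResponse holds the stapled response, if the server sent one.
	if staple := conn.ConnectionState().OCSPResponse; len(staple) > 0 {
		fmt.Printf("server stapled an OCSP response (%d bytes)\n", len(staple))
	} else {
		fmt.Println("no stapled OCSP response")
	}
}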

HTTP Strict Transport Security (HSTS)

Are you redirecting visitors who arrive at your http domain to an https domain?

HSTS is a way to tell browsers that they should only access your domain over https. It comes into play when someone visits your website using https for the first time. The server can respond with the Strict-Transport-Security header, which requires the browser to use https every time for that domain in future.

This was originally conceived as a security feature to prevent man-in-the-middle attacks.

However, it also offers a bonus performance benefit. This is because repeat visitors to your site will no longer be delayed as they’re redirected from http to https.
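Setting the header itself is trivial in most server environments. As an illustration, here’s a sketch of a Go handler wrapper that adds it to every response (the max-age of one year is an example value, includeSubDomains is optional, and the certificate file names are placeholders):

package main

import (
	"log"
	"net/http"
)

// hsts adds the Strict-Transport-Security header to every response, telling
// browsers to come straight back over https next time.
func hsts(next http.Handler) http.Handler {
	return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		w.Header().Set("Strict-Transport-Security", "max-age=31536000; includeSubDomains")
		next.ServeHTTP(w, r)
	})
}

func main() {
	mux := http.NewServeMux()
	mux.HandleFunc("/", func(w http.ResponseWriter, r *http.Request) {
		w.Write([]byte("hello over https\n"))
	})
	// cert.pem and key.pem are placeholder file names.
	log.Fatal(http.ListenAndServeTLS(":443", "cert.pem", "key.pem", hsts(mux)))
}

Remember that the header only takes effect once a visitor has reached the site over https at least once, which is why it pairs naturally with the redirect it eventually makes unnecessary.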

HTTP/2

With HTTP/2, lots more websites are going to be delivered over TLS very soon. This means that a lot of websites are going to take a little longer to get that first byte of meaningful data to the end user.

However, HTTP/2 is packed with plenty of other features dedicated to making websites load faster.

One of the most important of these features is multiplexing: multiple HTTP requests and responses will be able to travel in parallel over a single TCP connection.

Currently, many website owners deliver content from several different domains in order to increase the number of requests/responses that can travel in parallel (a performance optimisation technique called domain sharding). The disadvantage of sharding is that more domains mean more TCP connections and, if those connections are secure, more TLS handshakes. Which means more delay.

Not in HTTP/2. With HTTP/2, one connection is all you need, effectively making domain sharding redundant, and cutting out the delay added by those extra TLS handshakes. What it won’t do, of course, is remove any delay added by establishing secure connections to third-party domains.
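As an illustration of how little server-side work this can involve, Go’s standard library, for example, negotiates HTTP/2 automatically when serving over TLS. The sketch below (the certificate file names are placeholders) simply reports which protocol each request arrived on:

package main

import (
	"fmt"
	"log"
	"net/http"
)

func main() {
	http.HandleFunc("/", func(w http.ResponseWriter, r *http.Request) {
		// Over TLS, Go negotiates HTTP/2 with capable clients automatically;
		// r.Proto reports "HTTP/2.0" or "HTTP/1.1" for each request.
		fmt.Fprintf(w, "served over %s\n", r.Proto)
	})
	// cert.pem and key.pem are placeholder file names.
	log.Fatal(http.ListenAndServeTLS(":443", "cert.pem", "key.pem", nil))
}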

Conclusion

This is a broad overview of some of the techniques you can use to make your website both fast and secure. For more in-depth explanations, look no further than High Performance Browser Networking by Ilya Grigorik.

Ultimately, the main point to take away is that making your site secure needn’t mean throwing in the web performance towel. While TLS inevitably adds some delay to load time, the extent of that delay is largely up to you.


Alex Painter

Alex is a member of the professional services team at NCC Group Web Performance, helping organisations to deliver a better online experience for their customers.
