
Role of HTTP/2 in Web Services
Brief History of HTTP
● HTTP/0.9 (1991)
● HTTP/1.0 (1996)
● HTTP/1.1 (1999)
● SPDY (2010)
● HTTP/2 (2015)
Fetching HTML and CSS with connection keep-alive
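A minimal Go sketch of the idea on this slide, assuming a placeholder origin (example.com) and asset paths: the default net/http client keeps the underlying TCP connection alive, so the HTML fetch and the CSS fetch can reuse one socket.

```go
package main

import (
	"fmt"
	"io"
	"net/http"
)

func main() {
	client := &http.Client{} // the default transport keeps idle connections alive
	for _, path := range []string{"/index.html", "/style.css"} {
		resp, err := client.Get("https://example.com" + path)
		if err != nil {
			fmt.Println("request failed:", err)
			continue
		}
		// Drain and close the body so the keep-alive connection can be reused
		// for the next request instead of opening a new socket.
		io.Copy(io.Discard, resp.Body)
		resp.Body.Close()
		fmt.Println(path, resp.Status)
	}
}
```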
Pipelined HTTP requests with server-side FIFO queue
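For contrast, a raw-socket sketch of pipelining, assuming a placeholder origin (example.com) that accepts pipelined HTTP/1.1 requests: both requests are written back-to-back on one connection, and the responses come back in FIFO order.

```go
package main

import (
	"bufio"
	"fmt"
	"io"
	"net"
	"net/http"
)

func main() {
	// One TCP connection; write both requests before reading any response.
	conn, err := net.Dial("tcp", "example.com:80")
	if err != nil {
		panic(err)
	}
	defer conn.Close()

	fmt.Fprint(conn, "GET /index.html HTTP/1.1\r\nHost: example.com\r\n\r\n")
	fmt.Fprint(conn, "GET /style.css HTTP/1.1\r\nHost: example.com\r\nConnection: close\r\n\r\n")

	// The server answers in request order (FIFO), so read sequentially.
	reader := bufio.NewReader(conn)
	for i := 0; i < 2; i++ {
		resp, err := http.ReadResponse(reader, nil)
		if err != nil {
			panic(err)
		}
		io.Copy(io.Discard, resp.Body) // must drain before reading the next response
		resp.Body.Close()
		fmt.Println("response", i, resp.Status)
	}
}
```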
Using Multiple TCP Connections
Pros

● The client can dispatch up to six requests in parallel.
● The server can process up to six requests in parallel.
● Workaround for limitations of the application protocol (HTTP); see the client sketch after this list

Cons

● Additional sockets consuming resources on the client, server, and all intermediaries: extra memory buffers and CPU overhead
● Competition for shared bandwidth between parallel TCP streams
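A client-side sketch of the workaround described above, using Go's net/http; the six-connection cap mirrors typical browser behaviour, and the origin and asset URLs are placeholders.

```go
package main

import (
	"fmt"
	"net/http"
	"sync"
)

func main() {
	// Cap parallel TCP connections per origin at six, like a browser.
	client := &http.Client{
		Transport: &http.Transport{
			MaxConnsPerHost:     6,
			MaxIdleConnsPerHost: 6,
		},
	}

	var wg sync.WaitGroup
	for i := 0; i < 6; i++ {
		wg.Add(1)
		go func(n int) {
			defer wg.Done()
			resp, err := client.Get(fmt.Sprintf("https://example.com/asset-%d.js", n))
			if err != nil {
				fmt.Println("request failed:", err)
				return
			}
			resp.Body.Close()
			fmt.Println("got", resp.Status)
		}(i)
	}
	wg.Wait()
}
```

Each of the six concurrent requests can end up on its own socket, which is exactly the per-connection overhead listed under Cons.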
Quick Summary
● HTTP/0.9 through 1.1 delivered exactly what they set out to do
● HTTP/1.x clients need to use multiple connections to achieve concurrency and reduce
latency
● HTTP/1.x does not compress request and response headers, causing unnecessary
network traffic
● HTTP/1.x does not allow effective resource prioritization, resulting in poor use of the
underlying TCP connection
Brief History of SPDY
● SPDY was an experimental protocol, developed at Google and
announced in mid-2009
● The goal was to reduce the load latency of web pages by addressing
some of the well-known performance limitations of HTTP/1.1
● It targeted a 50% reduction in page load time (PLT)
● To achieve the 50% PLT improvement, SPDY aimed to make more
efficient use of the underlying TCP connection
● It introduced a new binary framing layer to enable request and
response multiplexing, prioritization, and header compression
HTTP/2
● HTTP/2 will make our applications faster, simpler, and more robust—a rare combination

● Reduce latency by
○ enabling full request and response multiplexing
○ minimizing protocol overhead via efficient compression of HTTP header fields
○ adding support for request prioritization and server push (see the sketch after this list)
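A rough sketch of server push with Go's net/http (the paths, port, and certificate files are placeholders; whether a push is honoured also depends on the Go version and the browser).

```go
package main

import (
	"fmt"
	"log"
	"net/http"
)

func main() {
	http.HandleFunc("/", func(w http.ResponseWriter, r *http.Request) {
		// Push the stylesheet before the browser discovers it in the HTML.
		// http.Pusher is only available when the connection is HTTP/2.
		if pusher, ok := w.(http.Pusher); ok {
			if err := pusher.Push("/app.css", nil); err != nil {
				log.Println("push failed:", err)
			}
		}
		fmt.Fprint(w, `<html><head><link rel="stylesheet" href="/app.css"></head><body>hello</body></html>`)
	})
	http.HandleFunc("/app.css", func(w http.ResponseWriter, r *http.Request) {
		w.Header().Set("Content-Type", "text/css")
		fmt.Fprint(w, "body { font-family: sans-serif; }")
	})

	// net/http negotiates HTTP/2 automatically when serving TLS.
	log.Fatal(http.ListenAndServeTLS(":8443", "cert.pem", "key.pem", nil))
}
```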
Header Compression
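HTTP/2 compresses headers with HPACK; a small sketch using the golang.org/x/net/http2/hpack package (the header values are only illustrative) compares the plain header size with the encoded size.

```go
package main

import (
	"bytes"
	"fmt"

	"golang.org/x/net/http2/hpack"
)

func main() {
	var buf bytes.Buffer
	enc := hpack.NewEncoder(&buf)

	headers := []hpack.HeaderField{
		{Name: ":method", Value: "GET"},
		{Name: ":path", Value: "/index.html"},
		{Name: ":authority", Value: "example.com"},
		{Name: "user-agent", Value: "demo-client/1.0"},
	}

	plain := 0
	for _, hf := range headers {
		plain += len(hf.Name) + len(hf.Value)
		if err := enc.WriteField(hf); err != nil { // static/dynamic table lookups plus Huffman coding
			panic(err)
		}
	}
	fmt.Printf("plain headers: %d bytes, HPACK encoded: %d bytes\n", plain, buf.Len())
}
```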
One Connection Per Origin
● With the new binary framing mechanism in place, HTTP/2 no longer needs multiple TCP connections
to multiplex streams in parallel.
● Each stream is split into many frames, which can be interleaved and prioritized.
● All HTTP/2 connections are persistent, and only one connection per origin is required, which offers
numerous performance benefits.
● Most HTTP transfers are short and bursty, whereas TCP is optimized for long-lived, bulk data transfers.
● By reusing the same connection, HTTP/2 makes more efficient use of each TCP connection and
significantly reduces overall protocol overhead (illustrated in the sketch after this list).
● Use of fewer connections reduces the memory and processing footprint along the full connection
path (i.e., client, intermediaries, and origin servers), which reduces the overall operational costs and
improves network utilization and capacity.
● The move to HTTP/2 should not only reduce the network latency, but also help improve throughput
and reduce the operational costs.
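A sketch that makes the one-connection-per-origin behaviour visible with net/http/httptrace, assuming the HTTP/2 demo server from the references (https://http2.golang.org/) is reachable: after a warm-up request, the concurrent requests should all report a reused connection because they are multiplexed as streams over that single connection.

```go
package main

import (
	"fmt"
	"io"
	"net/http"
	"net/http/httptrace"
	"sync"
)

// get fetches url and reports whether the request reused an existing connection.
func get(url, label string) {
	trace := &httptrace.ClientTrace{
		GotConn: func(info httptrace.GotConnInfo) {
			fmt.Printf("%s: reused connection = %v\n", label, info.Reused)
		},
	}
	req, _ := http.NewRequest("GET", url, nil)
	req = req.WithContext(httptrace.WithClientTrace(req.Context(), trace))
	resp, err := http.DefaultClient.Do(req)
	if err != nil {
		fmt.Println(label, "failed:", err)
		return
	}
	io.Copy(io.Discard, resp.Body)
	resp.Body.Close()
}

func main() {
	const origin = "https://http2.golang.org/" // HTTP/2-enabled demo server from the references
	get(origin, "warm-up")                     // opens the single connection to the origin

	var wg sync.WaitGroup
	for i := 0; i < 4; i++ {
		wg.Add(1)
		go func(n int) {
			defer wg.Done()
			get(origin, fmt.Sprintf("request %d", n))
		}(i)
	}
	wg.Wait()
}
```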
Evergreen Performance Best Practices
● Reduce DNS lookups
● Reuse TCP connections
● Minimize number of HTTP redirects
● Reduce round trip times
● Compress assets during transfer (see the handler sketch after this list)
● Eliminate unnecessary request bytes
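As one concrete instance of "compress assets during transfer", a hedged Go middleware sketch (the asset path and port are made up): it gzips the response whenever the client advertises gzip support.

```go
package main

import (
	"compress/gzip"
	"log"
	"net/http"
	"strings"
)

type gzipResponseWriter struct {
	http.ResponseWriter
	gz *gzip.Writer
}

func (w gzipResponseWriter) Write(b []byte) (int, error) { return w.gz.Write(b) }

// withGzip compresses responses for clients that advertise gzip support.
func withGzip(next http.Handler) http.Handler {
	return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		if !strings.Contains(r.Header.Get("Accept-Encoding"), "gzip") {
			next.ServeHTTP(w, r)
			return
		}
		w.Header().Set("Content-Encoding", "gzip")
		gz := gzip.NewWriter(w)
		defer gz.Close() // flush the gzip trailer after the handler finishes
		next.ServeHTTP(gzipResponseWriter{ResponseWriter: w, gz: gz}, r)
	})
}

func main() {
	mux := http.NewServeMux()
	mux.HandleFunc("/app.js", func(w http.ResponseWriter, r *http.Request) {
		w.Header().Set("Content-Type", "application/javascript")
		w.Write([]byte("console.log('hello from a compressible asset');"))
	})
	log.Fatal(http.ListenAndServe(":8080", withGzip(mux)))
}
```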
References
● https://hpbn.co/ (High Performance Browser Networking)
● https://hpbn.co/http2/
● https://www.youtube.com/watch?v=yURLTwZ3ehk
● https://http2.golang.org/gophertiles?latency=30
● http://www.http2demo.io/
● https://http2.akamai.com/demo
● https://github.com/http2/http2-spec/wiki/Implementations
● https://caniuse.com/#search=http2
