HTTP/2
HTTP/2 was first discussed when it became apparent that SPDY was gaining traction with implementers (like Mozilla and nginx), and was showing significant improvements over HTTP/1.x.
After a call for proposals and a selection process, SPDY/2 was chosen as the basis for HTTP/2. Since then, there have been a number of changes, based on discussion in the Working Group and feedback from implementers<ref>https://http2.github.io/faq/</ref>.
* [https://datatracker.ietf.org/doc/html/rfc7540 RFC7540: HTTP/2]
* [https://datatracker.ietf.org/doc/html/rfc7541 RFC7541: HPACK]
= Overview =

== HTTP/2 or HTTP/2.0? ==
The Working Group decided to drop the minor version (“.0”) because it has caused a lot of confusion in HTTP/1.x.
In other words, the HTTP version only indicates wire compatibility, not feature sets or “marketing.”
== HTTP/2 key differences to HTTP/1.1 ==
At a high level, HTTP/2:
* is binary, instead of textual
* is fully multiplexed, instead of ordered and blocking
* can therefore use one connection for parallelism
* uses header compression to reduce overhead
* allows servers to “push” responses proactively into client caches
== Why binary? ==
Binary protocols are more efficient to parse, more compact “on the wire”, and most importantly, much less error-prone than textual protocols like HTTP/1.x, which need a number of affordances to “help” with things like whitespace handling, capitalization, line endings, blank lines and so on.
For example, HTTP/1.1 defines four different ways to parse a message; in HTTP/2, there’s just one code path.
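As an illustration of that single code path, here is a minimal Python sketch (not part of the original article) that parses the fixed 9-octet HTTP/2 frame header defined in RFC 7540, section 4.1:

```python
import struct

# HTTP/2 frame header (RFC 7540, section 4.1): a fixed 9-octet prefix.
#   Length (24 bits) | Type (8 bits) | Flags (8 bits) | R (1) + Stream ID (31)
def parse_frame_header(data: bytes):
    if len(data) < 9:
        raise ValueError("need 9 octets for a frame header")
    length_high, length_low, frame_type, flags, stream_id = struct.unpack(
        "!BHBBI", data[:9])
    length = (length_high << 16) | length_low
    stream_id &= 0x7FFFFFFF        # clear the reserved bit
    return length, frame_type, flags, stream_id

# Example: a SETTINGS frame header — length 0, type 0x4, no flags, stream 0.
header = bytes([0x00, 0x00, 0x00, 0x04, 0x00, 0x00, 0x00, 0x00, 0x00])
print(parse_frame_header(header))  # (0, 4, 0, 0)
```

Every frame, regardless of type, starts with this same prefix, which is why one parser suffices.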
=== Multiplexing ===
HTTP/1.1 loads resources one after the other, so if one resource cannot be loaded, it blocks all the other resources behind it. In contrast, HTTP/2 uses a single TCP connection to send multiple streams of data at once, so that no one resource blocks another. HTTP/2 does this by splitting data into binary frames and tagging each frame with a stream identifier, so that the client knows which stream each frame belongs to.
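The stream bookkeeping can be sketched as a toy demultiplexer — an illustration of the idea only, not a real HTTP/2 implementation:

```python
from collections import defaultdict

def demultiplex(frames):
    """Reassemble interleaved (stream_id, chunk) frames into per-stream data."""
    streams = defaultdict(bytearray)
    for stream_id, chunk in frames:
        streams[stream_id] += chunk   # order within one stream is preserved
    return {sid: bytes(buf) for sid, buf in streams.items()}

# Frames from two responses arrive interleaved on one connection:
interleaved = [(1, b"<html>"), (3, b"body{"), (1, b"</html>"), (3, b"}")]
print(demultiplex(interleaved))
# {1: b'<html></html>', 3: b'body{}'}
```

Because each frame carries its stream ID, a slow stream never forces the others to wait.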
=== Server push ===
Typically, a server only serves content to a client device if the client asks for it. However, this approach is not always practical for modern webpages, which often involve several dozen separate resources that the client must request. HTTP/2 solves this problem by allowing a server to "push" content to a client before the client asks for it. The server also sends a message letting the client know what pushed content to expect – like if Bob had sent Alice a table of contents of his novel before sending the whole thing.
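A hypothetical sketch of the client-side bookkeeping this implies — the class and method names here are invented for illustration; in the real protocol the announcement is a PUSH_PROMISE frame:

```python
# Hypothetical client-side bookkeeping for server push: the "table of
# contents" message announces which request the server is about to answer,
# so the client can match a later pushed response to the right cache slot.
class PushTracker:
    def __init__(self):
        self.promised = {}   # promised stream id -> request path

    def on_push_promise(self, promised_stream_id, path):
        self.promised[promised_stream_id] = path

    def on_pushed_response(self, stream_id, body):
        path = self.promised.pop(stream_id)
        return path, body    # ready to place in the client cache

tracker = PushTracker()
tracker.on_push_promise(2, "/style.css")         # server announces the push
print(tracker.on_pushed_response(2, b"body{}"))  # ('/style.css', b'body{}')
```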
=== Header compression ===
Small files load more quickly than large ones. To speed up web performance, both HTTP/1.1 and HTTP/2 compress HTTP messages to make them smaller. However, HTTP/2 uses a more advanced compression method called '''HPACK''' that eliminates redundant information in HTTP headers. This shaves bytes from every HTTP message; given how many messages are involved in loading even a single webpage, those bytes add up quickly, resulting in faster loading.
SPDY/2 proposed using a single GZIP context in each direction for header compression, which was simple to implement as well as efficient.
Since then, a major attack has been documented against the use of stream compression (like GZIP) inside of encryption: CRIME.
With CRIME, it’s possible for an attacker who has the ability to inject data into the encrypted stream to “probe” the plaintext and recover it. Since this is the Web, JavaScript makes this possible, and there were demonstrations of recovery of cookies and authentication tokens using CRIME for TLS-protected HTTP resources.
As a result, we could not use GZIP compression. Finding no other algorithms that were suitable for this use case as well as safe to use, we created a new, header-specific compression scheme that operates at a coarse granularity; since HTTP headers often don’t change between messages, this still gives reasonable compression efficiency, and is much safer.
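A much-simplified Python sketch of the indexing idea behind HPACK — real HPACK (RFC 7541) adds integer prefix coding, optional Huffman coding, and a size-bounded dynamic table, all omitted here:

```python
# Simplified sketch of HPACK-style header indexing (RFC 7541): a header
# pair seen before is replaced by a small index instead of the full
# name/value pair.  This only shows the coarse-grained indexing idea.
STATIC = [(":method", "GET"), (":scheme", "https"), (":path", "/")]

def encode(headers, table):
    out = []
    for pair in headers:
        if pair in table:
            out.append(table.index(pair))  # small integer, not the pair
        else:
            out.append(pair)
            table.append(pair)             # remember it for next time
    return out

table = list(STATIC)
first  = encode([(":method", "GET"), ("cookie", "id=42")], table)
second = encode([(":method", "GET"), ("cookie", "id=42")], table)
print(first)   # [0, ('cookie', 'id=42')]
print(second)  # [0, 3]
```

Because headers rarely change between consecutive requests, the second request collapses to a couple of index references, which is where the savings come from.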
== HTTP/2 Discovery ==
;h2: HTTP/2 over TLS
;h2c: HTTP/2 over cleartext TCP

=== Starting HTTP/2 for "http" URIs ===
If a client cannot know in advance whether the server supports HTTP/2, it can use the HTTP Upgrade mechanism (defined in HTTP/1.1), so that subsequent exchanges can use HTTP/2:

 GET / HTTP/1.1
 Host: server.example.com
 Connection: Upgrade, HTTP2-Settings
 Upgrade: h2c
 HTTP2-Settings: <base64url encoding of HTTP/2 SETTINGS payload>

A request that carries a payload must be transmitted in full before the upgrade to HTTP/2 can take place. Alternatively, the client can first probe support with an OPTIONS request, at the cost of one extra round trip.
If the server does not support HTTP/2, its response contains no Upgrade header:

 HTTP/1.1 200 OK
 Content-Length: 243
 Content-Type: text/html

Otherwise, the server returns a 101 (Switching Protocols) response:

 HTTP/1.1 101 Switching Protocols
 Connection: Upgrade
 Upgrade: h2c

 [ HTTP/2 connection ...

After the empty line that ends the 101 response, the server can begin sending HTTP/2 frames.

=== HTTP2-Settings ===
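The HTTP2-Settings header carries the client's SETTINGS payload — a sequence of 6-octet entries, each a 16-bit identifier and a 32-bit value — encoded as unpadded base64url (RFC 7540, section 3.2.1). A minimal sketch of producing that token:

```python
import base64
import struct

# Build an HTTP2-Settings header value (RFC 7540, section 3.2.1):
# 6-octet entries of 16-bit identifier + 32-bit value, base64url, no padding.
def http2_settings_header(settings):
    payload = b"".join(struct.pack("!HI", ident, value)
                       for ident, value in settings)
    return base64.urlsafe_b64encode(payload).rstrip(b"=").decode("ascii")

# SETTINGS_MAX_CONCURRENT_STREAMS (0x3) = 100,
# SETTINGS_INITIAL_WINDOW_SIZE (0x4) = 65535
token = http2_settings_header([(0x3, 100), (0x4, 0xFFFF)])
print(token)
```

The resulting token is what a client would place after `HTTP2-Settings:` in the upgrade request.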
= FAQ =

== Will I need TCP_NODELAY for my HTTP/2 connections? ==
Yes, probably. Even a client-side implementation that only downloads a lot of data over a single stream still needs to send some packets back in the opposite direction to achieve maximum transfer speed. Without '''TCP_NODELAY''' set (i.e. with the Nagle algorithm still enabled), those outgoing packets may be held back for a while so that they can be merged with a subsequent one.
When such a packet is, for example, one telling the peer that more window is available for sending data, delaying it by several milliseconds (or more) can severely hurt high-speed connections.
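For instance, in Python the option is set per socket — a minimal sketch:

```python
import socket

# Disable Nagle's algorithm so small frames (e.g. WINDOW_UPDATE) are sent
# immediately instead of being held back to coalesce with later data.
sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
sock.setsockopt(socket.IPPROTO_TCP, socket.TCP_NODELAY, 1)

# Verify the option took effect on this socket.
print(sock.getsockopt(socket.IPPROTO_TCP, socket.TCP_NODELAY) != 0)  # True
sock.close()
```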
[[Category:Network]]
Latest revision as of 07:53, 8 December 2023 (Friday)