There's an active fork here (which I also use on my remaining nginx instances) https://github.com/eustas/ngx_brotli
"An endpoint MUST NOT generate an HTTP/2 message containing connection-specific header fields; any message containing connection-specific header fields MUST be treated as malformed (Section 22.214.171.124)." and "Clients MUST NOT accept a malformed response. Note that these requirements are intended to protect against several types of common attacks against HTTP [...]"
(though Safari's response of repeatedly retrying as fast as possible is certainly problematic)
Every time I've tried looking into it, I've found it underwhelming. Infrastructure support and tooling is still heavily lacking as far as I can tell. To really take advantage of server push it seems like you'd need to have really good build tools available.
Don't you risk sending useless data with server push? How do you handle cached resources? Just because a view depends on some resources, it doesn't mean you always need to send them down to the client.
Having server push doesn't always mean you should avoid bundling and minifying resources. Won't a gzipped bundle typically produce a smaller payload?
Maybe I'm totally wrong or I've misunderstood something, but the general impression I've gotten is that the benefits of server push are a bit overstated. Does anyone have links to resources which discusses these issues in depth?
This type of thing is extremely common, but when it comes to H2 everyone is focused on the hardest thing.
> When enabled, H2O maintains a fingerprint of the web browser cache, and cancels server-push suggested by the handlers if the client is known to be in possession of the content. The fingerprint is stored in a cookie...
That sounds like an interesting solution.
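For reference, the H2O feature quoted above is the `http2-casper` directive. A minimal sketch of enabling it (hostname, certificate paths, and document root are placeholders):

```yaml
# h2o.conf sketch -- hostname and paths are placeholders
hosts:
  "example.com:443":
    listen:
      port: 443
      ssl:
        certificate-file: /etc/ssl/example.crt
        key-file: /etc/ssl/example.key
    paths:
      "/":
        file.dir: /var/www/html
    # Maintain a cookie-based fingerprint of the client's cache and
    # cancel pushes for assets the client is believed to already have.
    http2-casper: ON
```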
Actually it looks like this tool, along with nginx PageSpeed, would work well and require no changes to build tools.
Nginx PageSpeed supports parsing and optimizing the HTML (and linked resources), including inserting preload Link headers, which will now get pushed. It requires no change to the build process.
> Won't a gzipped bundle typically produce a smaller payload?
With HTTP/2 it's better to unbundle these days. The best reason for bundling was to avoid opening new TCP connections, which need ramp-up time before they're useful. With HTTP/2 the same TCP connection is reused and multiplexed, so the main reason for bundling goes away. A bundle might compress to a smaller payload, but not enough smaller to be worth it over the same connection. Unbundled, the files arrive as separate streams, and the client can parse and process each one as it comes in instead of loading the whole bundle and processing it at once, making the site a bit more responsive.
My personal experience has been that you have to confirm the theory of how HTTP/2-related changes will perform with actual measurements. I've seen some pages get faster in HTTP/2 by no longer bundling, and in other cases seen them become slower. So far the only way to know for sure is to measure the result.
That way you reach document interactive a lot quicker, and the user can start using the page.
With that in mind, HTTP/2 beats the pants off bundled JS.
Are there any efforts underway to fix that? Seems like you could solve that problem by sharing the compression dictionary between documents transmitted over the same connection.
I looked into using SDCH for TopoJSON, since it seemed like a match made in heaven (a lot of repeated bytes across many files that are usually static), but since it never took off in usage it is being removed. The only major site that used it is LinkedIn.
EDIT: the continuation of this is basically what brotli is. Gather a dictionary of the most common byte-sequences on the internet, pre-ship that in every client and use that as the shared dictionary. But it will never be as good for specific use-cases.
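The shared-dictionary idea is easy to demo with zlib's preset-dictionary support (the dictionary and payload here are invented stand-ins for common TopoJSON byte sequences):

```python
import zlib

# A hypothetical shared dictionary of byte sequences common to our files.
dictionary = b'{"type":"Topology","objects":{"arcs":[["transform":{"scale":['

payload = b'{"type":"Topology","objects":{"counties":{}},"transform":{"scale":[0.01,0.01]}}'

# Compress with and without the preset dictionary.
plain = zlib.compress(payload)
co = zlib.compressobj(zdict=dictionary)
with_dict = co.compress(payload) + co.flush()

# Decompression requires the client to hold the same dictionary.
do = zlib.decompressobj(zdict=dictionary)
assert do.decompress(with_dict) == payload

print("plain:", len(plain), "with dictionary:", len(with_dict))
assert len(with_dict) < len(plain)
```

Brotli bakes a general-purpose version of this dictionary into every client; a per-site dictionary like the one sketched here can beat it for a specific corpus, which is what SDCH allowed.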
In general, if you are only pushing something that is both small and required, that isn't a problem; but when using push for things like MPEG-DASH this becomes a point to think about.
Note: This is still based on a Golomb-coded set (GCS). Current proposal under discussion is to use a Cuckoo filter. Takes slightly more bandwidth but allows removal when browser caches evict items.
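To make the GCS idea concrete, here's a toy sketch (not H2O's actual implementation): hash each cached URL into a range sized by the desired false-positive rate, sort, and Golomb-Rice-encode the gaps. Lookups can give false positives (an unneeded push is cancelled) but never false negatives.

```python
import hashlib

def _hash(item: str, n: int, p: int) -> int:
    # Map an item into [0, n*p); false-positive rate is roughly 1/p.
    h = int.from_bytes(hashlib.sha256(item.encode()).digest()[:8], "big")
    return h % (n * p)

def gcs_encode(items, p=64):
    # Rice coding (p a power of two): unary quotient + fixed-width remainder.
    n = len(items)
    values = sorted(_hash(i, n, p) for i in items)
    rbits = p.bit_length() - 1
    bits, prev = [], 0
    for v in values:
        q, r = divmod(v - prev, p)
        prev = v
        bits.append("1" * q + "0")            # quotient in unary
        bits.append(format(r, f"0{rbits}b"))  # remainder in log2(p) bits
    return "".join(bits), n

def gcs_contains(bitstring, n, item, p=64):
    # Decode all gaps (a real implementation could stop early).
    rbits = p.bit_length() - 1
    values, prev, i = set(), 0, 0
    while i < len(bitstring):
        q = 0
        while bitstring[i] == "1":
            q += 1
            i += 1
        i += 1
        r = int(bitstring[i:i + rbits], 2)
        i += rbits
        prev += q * p + r
        values.add(prev)
    return _hash(item, n, p) in values

cached = ["/app.css", "/app.js", "/logo.png"]
bits, n = gcs_encode(cached)
print(len(bits), "bits for", n, "URLs")
assert all(gcs_contains(bits, n, u) for u in cached)
```

A Cuckoo filter trades a few more bits per item for the ability to delete entries, which is why it handles cache eviction better than a GCS, which must be rebuilt from scratch.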
The point of server push is to _preload_ content and reduce the number of requests, not to reduce total download size.
I'm skeptical of not bundling resources "because HTTP/2". I think it's still a good idea to keep good HTTP/1 performance for the 10%-20% of traffic that doesn't support HTTP/2. Even when everyone supports HTTP/2, if you're loading a bunch of other data from the same domain (like images), I'm doubtful that the improvement from unbundling resources would be noticeable.
Rules of Thumb for HTTP/2 Push
Follow up talk Revisiting HTTP/2 given Jan 31, 2018 https://www.youtube.com/watch?v=wR1gF5Lhcq0 & slides https://www.slideshare.net/Fastly/revisiting-http2-87148462
(I am the Quart author, thought this would be interesting and relevant).
Are you going to be able to push stuff from your application if using Nginx as a proxy? For example, a dynamic view that includes a css file hosted by nginx.
> Also, preload links from the Link response headers, as described in
https://www.w3.org/TR/preload/#server-push-http-2, can be pushed, if
enabled with the "http2_push_preload" directive.
So you send down some headers in your response which tells the proxy server resources it should push. Pretty elegant, and fails gracefully. I believe this is the same mechanism as other http2 servers.
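A minimal nginx sketch of that mechanism (upstream name and paths are placeholders): the backend advertises a resource via a preload Link header, and nginx turns it into a push.

```nginx
# nginx.conf fragment -- upstream address and paths are placeholders
server {
    listen 443 ssl http2;
    server_name example.com;

    location / {
        proxy_pass http://app_backend;
        # Push any resource the backend advertises with
        # "Link: </static/style.css>; rel=preload"
        http2_push_preload on;
    }

    location /static/ {
        root /var/www;
        # Or push a fixed asset unconditionally:
        # http2_push /static/style.css;
    }
}
```

If the client doesn't support HTTP/2, the Link header still works as an ordinary preload hint, which is the graceful-failure part.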
Works so far, but I'm still waiting on a bug fix https://trac.nginx.org/nginx/ticket/1478 as nginx has pushed back the 1.13.9 release until next week https://twitter.com/nginx/status/963442197436678144
Google has a dead-simple JSON format for this from 2015.
Edit: Oops, I missed out "push" at the end.
This is specific to H2 Push
Server: sure, here's index.html, theme.css, and jquery.js.
Client: hey wait! I didn't ask for... oh, nvm, good call. Now I can display all this without asking you for anything else.
EDIT: See this: https://youtu.be/0yzJAKknE_k?t=30m52s for more details
("just" as a qualifier for nefariousness, not usefulness)
Server: "Sure, here is index.html and you'll need style.css as well, so here is a second stream with that"
It will ship with 1.13.9 (mainline) in 6 hours (at the time of writing).
For the stable branch, you need to wait until the 1.13 milestone is complete, which according to their roadmap (https://trac.nginx.org/nginx/milestone/1.13) will be in 8 weeks.
Maxim's answer on this issue: https://mailman.nginx.org/pipermail/nginx/2015-December/0494...