jbergstroem 11 days ago [-]
I've recently spent more time using H2O over nginx, mainly because of its more complete HTTP/2 support (push, cache-aware push, ..) but also its out-of-the-box support for brotli compression and mruby (think the Lua landscape for nginx). Even though nginx made it easier to build and ship third-party modules separately, I feel like the module community (as well as distro maintainers/packagers) hasn't followed suit. There are obviously pre-baked packages like openresty, which brings the Lua world, but then there's no brotli.
youngtaff 11 days ago [-]
With you on that one - h2o is a lovely server
sureaboutthis 11 days ago [-]
My web dev company now has a few small clients on h2o for the same reasons. It gives us a chance to try h2o out without any downside. Still very early, but loving it.
SahAssar 11 days ago [-]
The nginx brotli module is very easy to build; I usually build my openresty with brotli and nchan as add-on modules and have never encountered any issues.
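Once it's compiled in, enabling it is just a few directives. A rough sketch (the compression level and MIME types here are illustrative, not recommendations):

    # ngx_brotli directives; these only take effect if the module was compiled in.
    brotli            on;    # compress responses on the fly
    brotli_static     on;    # serve pre-compressed .br files when present
    brotli_comp_level 6;     # illustrative CPU/ratio trade-off
    brotli_types      text/plain text/css application/javascript application/json image/svg+xml;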
vbtechguy 10 days ago [-]
been using ngx_brotli with my Centmin Mod Nginx stack for what seems like years now https://community.centminmod.com/threads/how-to-use-brotli-c... :)
jbergstroem 10 days ago [-]
The upstream repo from Google hasn't been updated in over a year; issues are being opened questioning maintainership. It also breaks against more recent version combinations of nginx/brotli.

There's an active fork here (which I also use on my remaining nginx instances) https://github.com/eustas/ngx_brotli

vbtechguy 10 days ago [-]
Yup, I'm using the eustas fork of ngx_brotli; works well :)
linsomniac 11 days ago [-]
Just a heads up, we tried to enable H2 with haproxy 1.8.3 last week and had to roll it back because of a Firefox bug. I believe it's this one, which doesn't seem to be getting much attention: https://bugzilla.mozilla.org/show_bug.cgi?id=1427256
ghusbands 11 days ago [-]
On that bug, it looks like the reporter has yet to give an update detailing what the bug actually is (the Connection header is a red herring). Their update on Dec 30th says "I'm still working closely together with Willy from Haproxy to determine the actual root cause of http2 failures in Haproxy which only seems to affect Firefox". Without information on the actual fault, it seems like there's little the Firefox developers can do. It could easily be a Haproxy bug.
x25519 11 days ago [-]
ghusbands 10 days ago [-]
Glad to see the bug is now updated/fixed, but the person you're replying to said they were already using 1.8.3. I guess you're saying their issues were caused by a different bug.
kolme 11 days ago [-]
That's funny, we couldn't deploy H2 because there were problems with HAProxy/Apache and Safari. And it looks like a very similar issue:

http://www.wiktorzychla.com/2017/06/http2-keep-alive-and-saf...

ghusbands 11 days ago [-]
That bug is in HAProxy. The RFC clearly states that the presence of the Connection header makes the response malformed.

"An endpoint MUST NOT generate an HTTP/2 message containing connection-specific header fields; any message containing connection-specific header fields MUST be treated as malformed (Section 8.1.2.6)." and "Clients MUST NOT accept a malformed response. Note that these requirements are intended to protect against several types of common attacks against HTTP [...]"

(though Safari's response of repeatedly retrying as fast as possible is certainly problematic)

TheAceOfHearts 11 days ago [-]
Please correct me if I'm wrong in anything I say here. I haven't had any real-world deployment experience with HTTP/2.

Every time I've tried looking into it, I've found it underwhelming. Infrastructure support and tooling are still heavily lacking as far as I can tell. To really take advantage of server push, it seems like you'd need to have really good build tools available.

Don't you risk sending useless data with server push? How do you handle cached resources? Just because a view depends on some resources, it doesn't mean you always need to send them down to the client.

Having server push doesn't always mean you should avoid bundling and minifying resources. Won't a gzipped bundle typically produce a smaller payload?

Maybe I'm totally wrong or I've misunderstood something, but the general impression I've gotten is that the benefits of server push are a bit overstated. Does anyone have links to resources which discuss these issues in depth?

Touche 11 days ago [-]
Most of the discussion surrounding H2 PUSH is concentrated on the single use case of static assets. It's true that this one is difficult, but there are other use cases.

For example, say you had a chart on your page that is generated with JavaScript. That JavaScript makes an API call to get data. You can go ahead and push this data when the page is requested. Since it's (usually) dynamic you don't have to worry about caching nearly as much.

This type of thing is extremely common, but when it comes to H2 everyone is focused on the hardest thing.
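With the new nginx directive, a rough sketch of that pattern might look like this (the page path, API path, and upstream name are made up for illustration):

    # Hypothetical paths/upstream: when the dashboard page is requested,
    # also push the JSON the chart script will later fetch, so it's already
    # in the browser's push cache by the time the request fires.
    location = /dashboard {
        http2_push /api/chart-data.json;
        proxy_pass http://app_backend;
    }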

dalore 11 days ago [-]
Good questions. I was looking at H2O, and they solve this using what they call http2-casper: https://h2o.examp1e.net/configure/http2_directives.html#http...

> When enabled, H2O maintains a fingerprint of the web browser cache, and cancels server-push suggested by the handlers if the client is known to be in possession of the content. The fingerprint is stored in a cookie...

That sounds like an interesting solution.

Actually, it looks like this tool along with nginx pagespeed would work well and require no changes to build tools.

Nginx pagespeed supports parsing and optimizing the HTML (and linked resources), including inserting preload headers, which will now get pushed. It requires no change to the build process.

> Won't a gzipped bundle typically produce a smaller payload?

With http2 it's better to unbundle these days, since the main reason for bundling was to avoid creating new TCP connections, which need ramp-up time before they're useful. With http2 the same TCP connection is reused but multiplexed, so the main reason for bundling goes away. As for producing smaller payloads, that may be the case, but not so much smaller that it's worth it, since it's all the same TCP connection. Better to leave things unbundled: then they get parsed and processed as a stream of separate files, and the client can use each file as it comes in instead of having to load and process the whole bundle at once, making the site a bit more responsive.

josephscott 11 days ago [-]
That is the theory. The real world, however, can be very messy. More than two years ago Khan Academy measured how this would work for them - http://engineering.khanacademy.org/posts/js-packaging-http2.... - here was their summary:

> The reality is not so rosy. Due to degraded compression performance, the size of the data download with individual source files ends up being higher than with packages, despite having achieved 'no wasted bytes'. Likewise, the promised download efficiency has yet to show up in the wild, at least for us. It seems that, for the moment at least, JavaScript packaging is here to stay.

My personal experience has been that you have to confirm the theory of how HTTP/2-related changes will perform with actual measurements. I've seen some pages get faster in HTTP/2 by no longer bundling, and in other cases seen them become slower. So far the only way to know for sure is to measure the result.

dalore 1 day ago [-]
Well, taking a site that was highly optimized for bundling and trying it unbundled over http2 is going to perform poorly.

You need to engineer it so you're downloading and using the JavaScript pieces as you go: streaming your code and running it, in a way, so the stuff you need most goes first and the stuff you need less goes later.

That way you get to document-interactive a lot quicker, and the user can start using the page.

With that in mind, http2 beats the pants off bundled JS.

Ajedi32 11 days ago [-]
> Due to degraded compression performance

Are there any efforts underway to fix that? Seems like you could solve that problem by sharing the compression dictionary between documents transmitted over the same connection.

SahAssar 11 days ago [-]
There are (SDCH), but they are basically abandoned and close to being unshipped in Chrome (which is the only browser that supported them): https://groups.google.com/a/chromium.org/forum/#!topic/blink...

I looked into using SDCH for topojson, since it seemed like a match made in heaven (a lot of repeated bytes in many files that are usually static), but since it never took off in usage it is being removed. The only major site that used it is LinkedIn.

EDIT: the continuation of this is basically what brotli is. Gather a dictionary of the most common byte-sequences on the internet, pre-ship that in every client and use that as the shared dictionary. But it will never be as good for specific use-cases.

bfred_it 11 days ago [-]
I think, but could be completely wrong, that browsers can reject a pushed resource at the start. Since the "connection" cost for each resource is minimal, this wouldn't be too much of a problem. Can someone confirm or deny this?
querulous 11 days ago [-]
With http2 server push, the server first sends a frame advertising that it's going to push some resource (denoted by its URI). The client can reject this resource before it's sent. This means the server has to delay sending to give the client the opportunity, however.
SahAssar 11 days ago [-]
The server does not delay sending until the advertising frame is acknowledged. So, for a large push the client might cancel the push, but for small pushes the client will already have received the data before the cancel can reach the server.

In general, if you are only pushing something that is both small and required, that isn't a problem, but when using push for things like MPEG-DASH this becomes a point to think about.

vertex-four 11 days ago [-]
I think (not sure if this has been agreed as best practice) that you can use cookies to decide which content to push. This should work in most cases - set a cookie that lasts as long as your users' cache, and don't do server push if that cookie exists.
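An untested nginx sketch of that idea (the cookie name, asset path, and lifetime are placeholders; the cookie's Max-Age should roughly match your cache lifetime):

    # First visit: no cookie, so the map emits a preload Link header and
    # http2_push_preload turns it into a push. Later visits send the cookie,
    # the map yields an empty string, add_header omits empty values, and
    # nothing is pushed.
    map $http_cookie $push_assets {
        "~*assets_cached=1"  "";
        default              "</css/site.css>; rel=preload; as=style";
    }

    server {
        listen 443 ssl http2;          # ssl_certificate directives omitted
        http2_push_preload on;

        location / {
            add_header Link $push_assets;
            add_header Set-Cookie "assets_cached=1; Max-Age=2592000; Path=/";
            root /var/www/html;
        }
    }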
donatj 11 days ago [-]
Ugh. That would work, but it’s such a hack. There should be a mechanism specifically for this. Is there no other good way?
pas 11 days ago [-]
The problem is cache coherency, also known as consistency protocols. It's The Hard Problem of computing science, because you have to synchronize state. You have to use information to tell the other party what you have, and that can be done with an HTTP header, like the Cookie.
sebastiaand 11 days ago [-]
There is an experimental RFC currently being discussed: https://datatracker.ietf.org/doc/draft-ietf-httpbis-cache-di... The technique is derived from the H2O Casper work (and by the same author).

I have a Javascript service worker implementation here: https://www.npmjs.com/package/cache-digest-immutable

Note: this is still based on a Golomb-coded set (GCS). The current proposal under discussion is to use a Cuckoo filter, which takes slightly more bandwidth but allows removal when browser caches evict items.

ko27 11 days ago [-]
Do you have a source for that Cuckoo filter proposal?
poyu 11 days ago [-]
Take a look at my other comment, but in short, it's hard. You also have to account for the client's download speed.

The point of server push is to _preload_ content and reduce the number of requests, not to reduce total download size.

theandrewbailey 11 days ago [-]
I upgraded my blog's server a few months ago and got HTTP/2 automatically. I didn't change any architecture (still one CSS file and one JS file, no pushing), and it definitely feels faster, because (I think) of the multiplexing, and because the TCP and TLS overhead of the extra connections is gone.

I'm skeptical of not bundling resources "because HTTP/2". I think it's still a good idea to keep good HTTP/1 performance for the 10%-20% of traffic that doesn't support HTTP/2. Even when everyone supports HTTP/2, if you're loading a bunch of other data from the same domain (like images), I'm doubtful that the improvement from unbundling resources would be noticeable.

poyu 11 days ago [-]
Here's a really nice write-up on server push optimization from the Chrome team.

Rules of Thumb for HTTP/2 Push

https://docs.google.com/document/d/1K0NykTXBbbbTlv60t5MyJvXj...

carlosvega 11 days ago [-]
Interesting video about HTTP/2.

https://www.youtube.com/watch?v=0yzJAKknE_k

smartbit 11 days ago [-]
x25519 11 days ago [-]
They did a static analysis and fixed a null pointer dereference here: https://hg.nginx.org/nginx/rev/8b0553239592
jchb 11 days ago [-]
They might want to consider using clang nullability annotations (_Nullable etc.) when compiling with clang. You can use a macro so it becomes a no-op when the compiler doesn't support it. The static analyser would then get a lot more information to work with, and it also serves as a kind of documentation.
pgjones 11 days ago [-]
Server push in Python https://pgjones.gitlab.io/quart/serving_http2.html#server-pu...

(I am the Quart author, thought this would be interesting and relevant).

drcongo 11 days ago [-]
As someone who loves Flask and finds Django infuriating, I've been keeping an eye on Quart, it looks really interesting.
Mojah 11 days ago [-]
For those interested, I wrote a lengthy blog post about HTTP/2 a few years ago, detailing the architectural/infrastructure changes needed to support it for your end users: https://ma.ttias.be/architecting-websites-http2-era/
ubercow 11 days ago [-]
How are you going to signal to nginx what to push?

Are you going to be able to push stuff from your application if using nginx as a proxy? For example, a dynamic view that includes a CSS file hosted by nginx.

rictic 11 days ago [-]
Quoting the OP:

> Also, preload links from the Link response headers, as described in https://www.w3.org/TR/preload/#server-push-http-2, can be pushed, if enabled with the "http2_push_preload" directive.

So you send down some headers in your response which tell the proxy server which resources it should push. Pretty elegant, and it fails gracefully. I believe this is the same mechanism other http2 servers use.
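For example (the backend address and the pushed asset path are placeholders), the nginx side is a single directive and the application decides per response what to push:

    # nginx turns "Link: ...; rel=preload" headers from the proxied backend
    # into HTTP/2 pushes when http2_push_preload is on (nginx 1.13.9+).
    server {
        listen 443 ssl http2;          # ssl_certificate directives omitted
        http2_push_preload on;

        location / {
            proxy_pass http://127.0.0.1:8080;
        }
    }

    # The application then sends a response header such as:
    #   Link: </static/app.css>; rel=preload; as=style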

spiderfarmer 11 days ago [-]
How is browser support for server push nowadays? Back in March Jake Archibald noticed a lot of oddities in mainly Safari and Edge: https://jakearchibald.com/2017/h2-push-tougher-than-i-though...
mozumder 11 days ago [-]
This doesn't seem to have the level of "cache-aware server push" that h2o (an awesome http/2 server) has. Is there any info on how Nginx deals with cached server push?
vbtechguy 10 days ago [-]
I've been testing Nginx 1.13.9 via the master branch with my Centmin Mod Nginx stack, doing HTTP/2 push testing by setting up cache-aware HTTP/2 push via conditional preload Link resource hint headers which only show up when a cookie is absent: https://community.centminmod.com/threads/hurray-http-2-serve...

Works so far, but still waiting on a bug fix (https://trac.nginx.org/nginx/ticket/1478) as Nginx has pushed back the 1.13.9 release until next week: https://twitter.com/nginx/status/963442197436678144

niftich 11 days ago [-]
As more software is adding support for HTTP/2 server push, I hope they'll start supporting a higher-level, implementation-agnostic, declarative way of specifying resources to push, which operates at a layer higher than server config directives.

Google has a dead-simple JSON format [1] for this from 2015.

[1] https://github.com/GoogleChromeLabs/http2-push-manifest

SahAssar 11 days ago [-]
Most servers just read the preload headers from the backend response and use those as pushes. That won't work with "HTTP 103 Early Hints" (since the server needs to know what to push before the backend responds), but for now a preload header is a good compromise between ease-of-use and good-enough.
alwillis 11 days ago [-]
Glad push support is finally about to ship.
seanwilson 11 days ago [-]
Are there any good general guidelines on when you should use HTTP2 push?

Edit: Oops, I missed out "push" at the end.

crishoj 11 days ago [-]
I would suggest using HTTP/2 whenever possible. Features like connection multiplexing alone are a huge benefit.
smartbit 11 days ago [-]
Research by Hooman Beheshti of Fastly (above at https://news.ycombinator.com/item?id=16365413#16365605) suggests that you test h2 with your site and see how h2 compares to h1.
eicnix 11 days ago [-]
According to [0], you can gain performance improvements in pretty much every scenario.

[0] https://docs.google.com/presentation/d/1r7QXGYOLCh4fcUq0jDdD...

niklasrde 11 days ago [-]
ryantownsend 11 days ago [-]
You might want to read https://jakearchibald.com/2017/h2-push-tougher-than-i-though...

This is specific to H2 push.

seanwilson 11 days ago [-]
Yes, I've read a few articles like this, and it still doesn't seem clear when you should use HTTP/2 push.
therealmarv 11 days ago [-]
Can somebody explain in simple words what HTTP/2 push means? Example?
moviuro 11 days ago [-]
Client: may I get /index.html please?

Server: sure, here's index.html, theme.css, and jquery.js.

Client: hey wait! I didn't ask for... oh, nvm, good call. Now I can display all this without asking you for anything else.

EDIT: See this: https://youtu.be/0yzJAKknE_k?t=30m52s for more details

abricot 11 days ago [-]
And add-track.js, flash-banner.gif?
janfoeh 11 days ago [-]
This is just fancy cache-prewarming. The client still decides what resources to request, but those it wants to fetch are available locally at request time.

("just" as a qualifier for nefariousness, not usefulness)

detaro 11 days ago [-]
Browser: "Hey server, send me index.html please"

Server: "Sure, here is index.html and you'll need style.css as well, so here is a second stream with that"

nodesocket 11 days ago [-]
Awesome. Is this going to make it into stable 1.12.X?
x25519 11 days ago [-]
No, new features don't usually get backported to the stable branch.

It will ship with 1.13.9 (mainline) in 6 hours (at the time of writing).

For the stable branch, you need to wait until the 1.13 milestone is completed, which according to their roadmap (https://trac.nginx.org/nginx/milestone/1.13) will be in 8 weeks.

spectrox 11 days ago [-]
It should be shipped with 1.13.9 according to the roadmap: https://trac.nginx.org/nginx/roadmap
steeve 11 days ago [-]
Still no HTTP/2 backend support?
x25519 11 days ago [-]