"Data size matters in Elasticsearch; a large index means too much data, caching churn, and poor response time. When query volumes are large, ELK nodes become overloaded, causing long garbage collection pauses or even system outages.
To address this, we switched to hexagonal queries, dividing our maps into hexagonal cells. Each hexagonal cell has a string ID determined by the hexagon resolution level. A geodistance query can be roughly translated to a ring of hexagon IDs; although a hexagonal ring is not a circular surface, it is close enough for our use case. Due to this adjustment, our system’s query capacity more than tripled."
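The translation from "points within distance d" to "points in a ring of hexagon IDs" can be sketched with a few lines of Python. This is only an illustration using axial hex coordinates and made-up string IDs, not the blog's actual system:

```python
# Illustrative sketch (not the quoted system's actual code): approximate a
# geodistance query as a set of hexagonal cell IDs in axial (q, r) coordinates.

def hex_disk(center, k):
    """Return string IDs for every hex cell within k rings of `center`."""
    q0, r0 = center
    cells = []
    for dq in range(-k, k + 1):
        # The valid r-offsets for a given q-offset form a hexagonal disk.
        for dr in range(max(-k, -dq - k), min(k, -dq + k) + 1):
            cells.append(f"hex_{q0 + dq}_{r0 + dr}")
    return cells

# A "within distance d" query now becomes an exact-match query on these IDs,
# e.g. a terms filter on a keyword field, which is cheap for the search engine.
ids = hex_disk((0, 0), 1)
print(len(ids))  # 7: the center cell plus its 6 neighbors
```

The win is that the expensive per-document geodistance computation is replaced by precomputed string-ID lookups, which cache well.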
For example, "equal area" subdivision of the surface is not a particularly useful property. That sounds wrong on its face, since most people try to achieve it, but equal area only matters if your data is uniformly and predictably distributed. Geospatial data models, as a rule, are severely neither, which means you will have to handle data load asymmetry through some other mechanism regardless. And if you can ignore equal area because another mechanism handles it, that opens up other possible surface decompositions with much stronger properties in other respects.
Compactness of representation, and the cost of computing relationships on that representation, have a large impact on real-world index performance that is frequently ignored. How poorly lat/lon grids do in this regard is often significantly underestimated. A singularity-free 3D DGGS can have an indexing structure an order of magnitude smaller than a basic 2D lat/lon grid for addressing, and records encoded in such a DGGS are also significantly smaller on disk. All of this adds up to a lot of performance.
Hexagonal grids tend to work particularly well for visualization. However, they have their own significant weaknesses: they are typically not a good representation for join operations, and they are relatively expensive to search at large scales compared to some other DGGS tessellation systems.
On the other hand, chemical engineering had a big influence on how I reason about distributed systems. That discipline is essentially about the design of complex, continuous flow, coordination-free distributed computation systems that are robustly stable in an efficient equilibrium. It maps directly to computer science but has a concept of the problem space that I think is much more refined than what you commonly see in computer science though it is never expressed in computer science terms. But it makes sense, it is chemical engineering’s One Job.
I would like to read more about what this means and how it applies to distributed systems. I have a physics background, but I sense that chemists tend to have a much more intricate (and interesting) conception of "stability".
There are several options for partitioning your grid, each with geometric consequences for your index. This overview in particular should be accessible even without a deep GIS background: http://webpages.sou.edu/~sahrk/sqspc/pubs/xuzhouKeynote13.pd...
My tiling had a fixed resolution that you specify up-front; I didn't really consider how to make my grid hierarchical.
Squares divide into sub-squares exactly, but hexagons don't subdivide into hexagons neatly.
Can someone explain the advantages of hexagons over squares in this use case?
edit: Oops, that only explains the tiling. For explanations of how the hierarchy descends, Dr. Sahr produced some GIFs that give a little bit of an overview of how the resolutions overlay: http://webpages.sou.edu/~sahrk/dgg/images/topogif/topogif.ht...
Also found this, which goes into a more detailed comparison with rectangular grids:
This means you can better simulate dynamic systems where there is flow between cells.
Squares don't pack as closely as hexes: you can fit about 18% more hexes than squares into a space for a given perimeter size. Squares also degenerate badly over the surface of a sphere; you can't keep a consistent size as you change latitude, and they turn into triangles when you reach the poles.
It has a good overview of the implementation details, and pictures!
The big difference between this and S2 is that hexagons have only one kind of neighbor, neighbors that border on an edge, instead of the two kinds of "checkerboard" neighbors that squares have. This lets you do some interesting analysis on data gathered with hexagons.
Movement between neighbors could be modelled as a current flow, with the edge being a cross-sectional area between them, and since the edges are all equal (unlike squares) you can consider it a constant factor on that analysis, and then drop it and simply use the directional flow counts (the H3 UniEdge index) directly.
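The two comments above can be made concrete with a small sketch. This assumes axial hex coordinates and a made-up flow-count table, not H3's actual indexing; the point is that a hexagon's 6 neighbors all share a full edge of equal length, so the cross-sectional factor is a constant you can drop:

```python
# Sketch (assumed axial coordinates, not H3's real scheme): a hexagon has
# exactly 6 neighbors, all bordering on an edge, unlike a square's mix of
# 4 edge neighbors and 4 corner ("checkerboard") neighbors.

HEX_DIRS = [(1, 0), (1, -1), (0, -1), (-1, 0), (-1, 1), (0, 1)]

def hex_neighbors(q, r):
    return [(q + dq, r + dr) for dq, dr in HEX_DIRS]

def flow_between(counts, a, b):
    """Directional flow count across the shared edge a -> b. Because all hex
    edges are equal length, the cross-sectional area is a constant factor
    that can be dropped, leaving the raw count."""
    return counts.get((a, b), 0)

counts = {((0, 0), (1, 0)): 42}  # made-up movement counts between cells
print(len(hex_neighbors(0, 0)))              # 6 equal-edge neighbors
print(flow_between(counts, (0, 0), (1, 0)))  # 42
```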
H3 takes better care than S2 to keep both hexagon area and shape distortion to a minimum together (area distortion is about the same as S2, but shape distortion is much better), so the hexagons look almost the same almost anywhere on the planet. As a result, data gathered in one city is directly comparable to data gathered in another, which lets you do interesting things like classify "types" of hexagons (urban, suburban, etc.) and then make predictions about cities you haven't even set up shop in yet, based on similar cities.
Hexagons are also the densest way to pack circles in the plane, and can best approximate a circular radius with a discrete grid, so they're also useful for fast approximations of field-effect calculations (like electromagnetic fields from discrete particles). You could count drivers as a negative charge and riders as a positive charge, for instance, and use EM equations to determine the biggest imbalances in supply and demand distribution, quickly and with little error.
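A hedged sketch of that "charge" idea: each cell gets a net charge (riders minus drivers), and an inverse-square potential-style field summed at each cell surfaces the biggest imbalances. The kernel and the numbers here are illustrative assumptions, not anyone's production model:

```python
# Illustrative only: drivers count as -1, riders as +1 at hex-cell centers;
# sum an inverse-square field at each cell to locate supply/demand imbalances.

def imbalance_field(cells, charges):
    """charges: {cell_xy: net_charge}; returns {cell_xy: field_value}."""
    field = {}
    for x0, y0 in cells:
        total = 0.0
        for (x, y), q in charges.items():
            d2 = (x - x0) ** 2 + (y - y0) ** 2
            total += q if d2 == 0 else q / d2  # inverse-square falloff
        field[(x0, y0)] = total
    return field

charges = {(0, 0): +3, (2, 0): -1}  # riders minus drivers per cell
f = imbalance_field([(0, 0), (1, 0), (2, 0)], charges)
# The most positive cell is the one most starved of drivers.
print(max(f, key=f.get))  # (0, 0)
```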
The hexagons themselves can get down to a granularity below GPS resolution error, so you could, without any effective losses, pack 3 doubles (lat, lng, error) into a single 64-bit integer (H3 index of an appropriate size based on the error) and reduce bandwidth usage on location data.
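The bandwidth argument is easy to demonstrate with a fixed-point packing sketch. To be clear, this is NOT H3's actual bit layout; it just shows that a lat/lng pair quantized below GPS error fits comfortably in one 64-bit integer:

```python
# Illustrative bit packing (not H3's real layout): quantize lat/lng to a
# fixed-point grid far finer than typical GPS error, then pack both halves
# into a single 64-bit integer instead of shipping multiple doubles.

LAT_BITS, LNG_BITS = 31, 32  # ~centimeter-scale resolution, below GPS error

def pack(lat, lng):
    lat_q = round((lat + 90) / 180 * ((1 << LAT_BITS) - 1))
    lng_q = round((lng + 180) / 360 * ((1 << LNG_BITS) - 1))
    return (lat_q << LNG_BITS) | lng_q  # 63 bits used in total

def unpack(code):
    lat_q = code >> LNG_BITS
    lng_q = code & ((1 << LNG_BITS) - 1)
    return (lat_q / ((1 << LAT_BITS) - 1) * 180 - 90,
            lng_q / ((1 << LNG_BITS) - 1) * 360 - 180)

lat2, lng2 = unpack(pack(37.7749, -122.4194))
assert abs(lat2 - 37.7749) < 1e-6 and abs(lng2 + 122.4194) < 1e-6
```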
The H3 index at any particular resolution is ordered along something similar to a Gosper curve, so if you only need a rough approximation of the data in an area, you actually only need to store the two indexes at the beginning and end of the Gosper-like curve segment you're interested in.
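That range-query trick works with any ordered index structure. A minimal sketch with made-up curve indexes (only the ordering property matters; cells close in space tend to be close in index along the curve):

```python
# Sketch: cells stored sorted by their space-filling-curve index. A rough
# area query then reduces to one [lo, hi] range scan, as on a b-tree.
import bisect

sorted_index = [101, 105, 112, 130, 131, 140, 155]  # curve-ordered cells

def range_query(lo, hi):
    """All stored cells whose curve index falls within [lo, hi]."""
    i = bisect.bisect_left(sorted_index, lo)
    j = bisect.bisect_right(sorted_index, hi)
    return sorted_index[i:j]

print(range_query(110, 135))  # [112, 130, 131]
```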
This C library wasn't meant to be used directly by most developers, which is why it has a few rough edges (like outputting longitudes in the (0, 360) range rather than centering them on the meridian as (-180, 180)). I can't wait until the bindings come out, too, probably with the blog post. :)
There are lots of reasonable choices for high-level grid shapes, e.g. http://www.lib.ncep.noaa.gov/ncepofficenotes/files/on467.pdf
For human comprehensibility of coordinates I would recommend instead starting with an octahedron as the basic geometric skeleton.
This looks like a completely new implementation with none of the original DGGrid source. (Unsurprising, given license restrictions on parts of it that would have prevented Uber from using it.)
I haven't had a ton of time to dig through the source, but haven't seen some of the utilities for things like bulk binning of coordinates. (Hopefully the bloggers will talk about this a little bit.) When you worked on it, was Dr. Sahr involved with any of the new API adaptations? [edit: Yes, I see!] He and I had chatted about feature wishlists, and iOS / mobile bindings was at the top of our list a few years ago, but neither of us had much time to work on it. :-)
Then we dug in on code formatting, performance tuning (we've removed almost all of the H3IndexFat struct representation usage and switched most things to bitwise operations), and testing coverage.
As for the bindings, I don't know which will be open sourced, so I can't say more. But we asked Dr. Sahr to make sure the API itself never allocates memory; callers must pass in pre-allocated memory. That makes it straightforward to write bindings that work with both manually memory-managed and garbage-collected languages.
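A rough illustration of that caller-allocates convention, with a preallocated buffer standing in for a C out-parameter. The function name and the neighbor "computation" are hypothetical placeholders, not the actual H3 API:

```python
# Sketch of the no-allocation API shape: the core routine writes into
# memory the caller provides and returns a count, never allocating itself.
from array import array

MAX_NEIGHBORS = 7  # k-ring of size 1: the center cell plus 6 neighbors

def k_ring_into(origin, out):
    """Fill caller-provided buffer `out` with cell indexes; return how many
    entries were written. Hypothetical placeholder, not the real H3 call."""
    n = min(MAX_NEIGHBORS, len(out))
    for i in range(n):
        out[i] = origin + i  # placeholder "neighbor" computation
    return n

buf = array("Q", [0] * MAX_NEIGHBORS)  # caller allocates up front
written = k_ring_into(0x85283473FFFFFFF, buf)
print(written)  # 7
```

Because the buffer's lifetime is entirely the caller's problem, the same core works unchanged under manual memory management, reference counting, or a tracing GC.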
A lot of the stuff I'm wanting is stuff I wouldn't expect Uber to care about, but it doesn't hurt to ask. Did you look at implementing pathfinding? (As I recall, Dr. Sahr said A* should be easily implementable in this scheme. Elsewhere in the thread someone mentioned that joins are not easy. Any other tidbits like that that came up?)
One of the main reasons that equal-volume cubic tessellations have emerged as the default choice for high-scale analytical DGGS is that they are nearly optimal for scaling out spatial joins between arbitrary geospatial data models, and relatively optimal in most other regards as well, especially computationally. The primary "downside" is that they are 3-dimensional, which is slightly wasteful, though more and more geospatial analysis applications make good use of a direct 3-dimensional representation.
An unfortunate aspect of all this is how few good implementations of these algorithms exist outside of big commercial GIS packages. I'm extremely grateful that a public university financed this particular research originally, or we might not have gotten a well-funded open source library.
I have been writing a blog post that elaborates on what I believe is the state-of-the-art DGGS for most applications: a specification of the best DGGS I know how to design. This is not proprietary IP, just esoteric knowledge. Will be pushed sometime over the next few
If you look closely, I am one of the authors of the standards for such systems. :) There is a boundary where cartographic systems and technology cease to be useful.
One of the big advantages of the 3D embedding DGGS is that the math is dead simple compared to the forced 2D versions. They are extremely powerful in terms of expressiveness, performance, and precision but also relatively transparent. The mere fact of attacking the 2D problem in 3-space reduces its complexity. People just aren’t used to it. The implicit dimensional reduction of 2-space has consequences. In a few years I think all geospatial data will be handled this way.
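One small, standard illustration of the "math is simpler in 3-space" point: with points represented as unit vectors, angular distance on the sphere is just the arccosine of a dot product, with no haversine-style trig identities on lat/lon pairs and no pole or antimeridian special cases:

```python
# Points on the sphere as 3D unit vectors: great-circle math becomes a
# dot product. (Standard spherical geometry, not any particular library.)
import math

def to_unit_vector(lat_deg, lng_deg):
    lat, lng = math.radians(lat_deg), math.radians(lng_deg)
    return (math.cos(lat) * math.cos(lng),
            math.cos(lat) * math.sin(lng),
            math.sin(lat))

def angular_distance(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    return math.acos(max(-1.0, min(1.0, dot)))  # radians, clamped for safety

sf = to_unit_vector(37.7749, -122.4194)
ny = to_unit_vector(40.7128, -74.0060)
print(angular_distance(sf, ny) * 6371)  # great-circle km, roughly 4130
```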
Reasons for going to 4D are numerous and varied: symplectic geometry only works in even dimensions, and there are applications to image/video reconstruction, matrix multiplication, transformations, and other areas of computer vision, gauge theory, and topology.
If you only need the hex grid distance between two hexagons, but not the actual path, there's a quicker algorithm that looks for a common parent between the hexagons (though it can behave unpredictably and slowly when the hexagons don't share the same base cell, and might have to fall back to an A* algorithm in that case anyway).
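For intuition, on a flat hex grid in axial coordinates the grid distance has a well-known closed form (an assumption for illustration; H3's own shortcut walks the parent hierarchy instead, as described above):

```python
# Closed-form hex grid distance in axial (q, r) coordinates: the number of
# single-cell steps between two hexagons, with no pathfinding required.

def hex_distance(a, b):
    dq = a[0] - b[0]
    dr = a[1] - b[1]
    return (abs(dq) + abs(dr) + abs(dq + dr)) // 2

print(hex_distance((0, 0), (3, -1)))  # 3
```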
I don't know exactly what they mean by "joins" elsewhere. We usually use it as a hash index, but since the index follows a Gosper-like curve, b-tree indexes work on it too for particular use cases, and you can also use the parent-child operations (just some bit twiddling) to get approximate area joins super cheap.
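A sketch of that parent-child bit twiddling. Assume, purely for illustration (this is not H3's real bit layout), that each finer resolution appends 3 bits to the index, so the parent is a right shift; an approximate-area join then just buckets both datasets by parent index:

```python
# Illustrative hierarchical index: parent = child >> 3. Records whose cells
# share a parent at a coarser resolution are treated as area matches.

CHILD_BITS = 3  # assumed bits per resolution step, for illustration only

def parent(cell, levels_up=1):
    return cell >> (CHILD_BITS * levels_up)

def approx_join(left, right, levels_up=1):
    """Pair up records whose cells share a parent at a coarser resolution."""
    buckets = {}
    for cell, rec in right:
        buckets.setdefault(parent(cell, levels_up), []).append(rec)
    return [(lrec, rrec)
            for cell, lrec in left
            for rrec in buckets.get(parent(cell, levels_up), [])]

left = [(0b101_010, "trip")]
right = [(0b101_011, "zone")]    # sibling cell: same parent 0b101
print(approx_join(left, right))  # [('trip', 'zone')]
```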
Others have pointed out various indexing systems that NASA uses that support hexagons as well. H3's advantage over the others, in my opinion, is that we tried to marry as much of the awesome S2 library as possible into a hexagonal grid (short 64-bit addresses for all hexagons at all resolutions, a parent-child tree with no shared parents, minimized area distortion, and a pretty simple API with built-in utilities like geofence polyfill, hexagon compaction, and GeoJSON output), while the hexagonal properties give you the other advantages I outlined over S2.
To be perfectly fair, there are some things that S2 will still do better than H3, most notably that the area coverage of a parent perfectly matches all of its children, where that's not the case with H3, though we minimized it as well as I think is possible.
Kevin Sahr (whom others have cited in here) worked with us on this library and came up with the parent-child orientation and scaling, and implemented the original version of the code.
Like most tools/conventions, there are trade-offs involved.
I thought to cover a sphere you had to throw in at least a few things that weren't hexagons. Euler and all that.
> The first H3 resolution (resolution 0) consists of 122 cells (110 hexagons and 12 icosahedron vertex-centered pentagons), referred to as the base cells.
The way the discrete global grids people handle it (TFA is an implementation of that) is generally to stick 12 pentagonal pixels in their grid at the corners of an icosahedron.
I can't find anything that explains how the partitioning in geohex works, but the Uber system makes it clear that they are using Dr. Sahr's partitioning scheme: https://github.com/uber/h3/blob/master/docs/doxyfiles/h3inde...
"A multi-resolution HEALPix data structure for spherically mapped point data" (2017) http://www.heliyon.com/article/e00332
Discrete global grid systems based on hexagons are an idea which differs in most respects from that, except for the hierarchical coordinates (which can also be done using any other arbitrary map projection).