Differentiable Dithering (peterstefek.me)
dgant 1301 days ago [-]
A great read on dithering: Lucas Pope's development blogs while working on Return of the Obra Dinn.

It's an incredible dive into how he created the game's remarkable and unique look, featuring a wonderful and unexpected mathematical contribution from a forum member. If you're not familiar with the game, peek at a trailer to see what an achievement it was.

https://forums.tigsource.com/index.php?topic=40832.msg136374...

bane 1301 days ago [-]
On dithering: the original PlayStation had built-in support for dithering. On CRT televisions it made the output look better, and it's a huge part of the "look" of the system.

https://www.youtube.com/watch?v=bi-Wzl6BwRM&feature=emb_titl...

anilgulecha 1302 days ago [-]
I've recently done a few things around dithering, and found this site good to experiment with:

https://ditherit.com/

It's open source: https://github.com/alexharris/ditherit-v2

aharris6 1301 days ago [-]
Hi, thanks for using Dither it! I built it, and would gladly receive any feedback, suggestions or further praise!
underanalyzer 1301 days ago [-]
Wow, exciting! I tried to use Dither it to get some baseline comparison images for my post! Is there any way to both control the number of colors in the palette and have it auto pick colors at the same time?
aharris6 1301 days ago [-]
That feature does not currently exist, but that is a great idea. I have added it to the list (which is just the GitHub issue tracker, and contains no other items). I like that it auto-analyzes the palette when the image loads without requiring any further user input, but maybe there could be an option after it loads to "auto-detect X colors" as a little dropdown thingie.
anilgulecha 1301 days ago [-]
I do have one, actually (a CLI). I'll open an issue.
londons_explore 1302 days ago [-]
> A pipedream would be an entirely differentiable image compression pipeline where all the steps can be fine tuned together to optimize a particular image with respect to any differentiable loss function.

Neural Image Compression? https://arxiv.org/abs/1908.08988

tonic_section 1301 days ago [-]
Unfortunately you wouldn't have any guarantees on the output of any particular image though, just some reassurances about the expected behaviour over the training set.
MiroF 1301 days ago [-]
> any guarantees on the output of any particular image though

You don't have any guarantees with this non-convex optimization.

I think most of these methods would work OK on out-of-domain data.

tonic_section 1301 days ago [-]
In terms of the decoded image, yes - it's very unlikely you would get something substantially different from the original image. But in terms of the bitrate it's not hard to find examples where the compressed bitrate can be several standard deviations above the average bitrate on the training set - see e.g. the last example here: https://github.com/Justin-Tan/high-fidelity-generative-compr...

(Lossy) neural compression methods may also synthesize small portions of an image to avoid the compression artefacts associated with standard image codecs, so they should definitely not be used in sensitive applications where small details can make a big difference, such as security imaging, guarantees or none.

MiroF 1301 days ago [-]
Those are very cool examples and of considerably higher quality+bitrate than when I last tuned into this field half a year ago.

Unrelated, but I actually recognize your name from Github - I guess deep image compression is a pretty small space.

underanalyzer 1301 days ago [-]
Yea, neural image compression looks pretty neat! The reason I bring up JPEG is that it's so well established: if you come up with a more optimized JPEG (which I'm not really convinced is possible; again, a pipedream) you don't have to force people to transition to a new image format. In the end, methods like neural image compression or whatever comes after are probably the better pick, but there is a transition period.
MiroF 1301 days ago [-]
> more optimized jpeg

The thing about compression is that there is no single "more optimized" knob - there's a bunch of different tradeoffs.

Want a compression algo that can compress existing images to smaller sizes than JPEG? You can already do that with neural image compression. Want a compression algo that can decode that compressed image in 0.01 seconds? You need JPEG.

underanalyzer 1301 days ago [-]
Sorry I could have been more specific. By more optimized jpeg I meant better perceptual quality (again subjective) within the confines of what a jpeg decoder could understand.

To co-opt your knobs analogy, I imagine each of the steps of a complex image compression pipeline comes with its own knobs, each with its own tradeoffs. The dream here would be to tune all those knobs at the same time to optimize some sense of quality in a particular image. Of course, huge disclaimer: I'm not an image or signal processing expert. It's also very possible that these "knobs" have been tuned well enough that even if we optimized them for a specific image the quality difference would not be noticeable.

MiroF 1301 days ago [-]
The problem here is that the JPEG decoder is not differentiable.
MiroF 1301 days ago [-]
> entirely differentiable image compression pipeline

Depending on what is meant by entirely differentiable, this might be impossible without relaxation; i.e. you can't differentiate through the quantization step.

tonic_section 1301 days ago [-]
There are a couple of solutions which work empirically. As you mentioned, one is a dithering-like differentiable relaxation where uniform noise is added to simulate quantization; another is to simply ignore the quantization operation when taking gradients, essentially treating it as an identity operation in the backward pass.
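
A minimal sketch of both tricks (assuming PyTorch; x is the latent value that would be rounded to integers):

    import torch

    def quantize_noise(x):
        # Training-time relaxation: adding U(-0.5, 0.5) noise simulates
        # rounding to the nearest integer while staying differentiable.
        return x + torch.empty_like(x).uniform_(-0.5, 0.5)

    def quantize_ste(x):
        # Straight-through estimator: round in the forward pass, but let
        # gradients flow as if quantization were the identity.
        return x + (torch.round(x) - x).detach()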
MiroF 1301 days ago [-]
But how do you optimize the lossless encoding of the quantized latent space? I.e. how do you tell the encoder to produce something that can be encoded well, given that the encoding is a bunch of discrete steps?
tonic_section 1301 days ago [-]
Usually the lossless encoding is offloaded to a standard entropy coder, e.g. arithmetic, ANS, etc. because these approach the theoretical minimum rate given by the source entropy pretty closely, so there wouldn't be a point building a fancy differentiable replacement.
MiroF 1301 days ago [-]
That makes sense, I don't think I stated my question very clearly: how do you control/optimize the entropy of the latent space?

i.e. what stops the network from laundering all of the information for reconstructing the image through a super-high-entropy latent space that is hard to code but allows it to reconstruct perfectly?

e: I guess I should just get up to date by reading some papers

tonic_section 1301 days ago [-]
The objective function used in these lossy neural compression schemes usually takes the form of a rate-distortion Lagrangian - the rate term captures the expected length of the message needed to transmit the compressed information and the distortion term measures the reconstruction error. So it wouldn't be able to cheat like in your example, because this would incur a high value of the loss through the rate term.
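
Schematically (a sketch, not any specific paper's loss; the bits term would come from a learned entropy model):

    def rd_loss(x, x_hat, bits, lam):
        # Rate-distortion Lagrangian: rate + lambda * distortion.
        # A high-entropy latent makes the rate term expensive, so the
        # network can't launder information through it for free.
        rate = bits.mean()                      # expected message length
        distortion = ((x - x_hat) ** 2).mean()  # e.g. MSE reconstruction error
        return rate + lam * distortion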
uoaei 1301 days ago [-]
Formulate it in terms of probabilistic programming, and you will essentially be able to do exactly that.
amelius 1301 days ago [-]
I fear that you might end up with hallucinations in your images ...
IanCal 1301 days ago [-]
Similar to imagining Ryan Gosling is in your background https://petapixel.com/2020/08/17/gigapixel-ai-accidentally-a...
alanbernstein 1301 days ago [-]
Ryan Gosling? That's quite a stretch.
0-_-0 1301 days ago [-]
Isn't that a feature?
MiroF 1301 days ago [-]
It's not trained to generate faces, so I don't think so.
steerablesafe 1302 days ago [-]
It's a very interesting approach; however, once you have the probability distribution for each pixel, independent random sampling produces a poor dither pattern compared to Floyd-Steinberg or other error diffusion approaches.

I think once you have the target distributions you could combine the sampling with some error diffusion approach. The idea is to make the sampling of neighboring pixels negatively correlated, so the colors average out at a shorter length scale.

For a sledgehammer approach, you could put a blur in your loss function and try to sample from the joint probability distribution of all the pixels (i.e. sample whole images). It would probably make the calculation even more expensive, or possibly even infeasible.

0-_-0 1301 days ago [-]
A differentiable error diffusion loss would dither the image with quantisation (like in the post), but then minimise the difference between the blurred dithered image and the blurred original, instead of the dithered image and the original. This would tend to distribute errors so that the average colour in an area is the same in the dithered image as the original, similar to Floyd-Steinberg.
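
Something like this (a sketch, assuming PyTorch with NCHW tensors; the box kernel is an illustrative choice):

    import torch
    import torch.nn.functional as F

    def blurred_loss(original, dithered, kernel):
        # Compare low-passed versions of both images, so the optimiser
        # matches local average colour rather than per-pixel values --
        # the property error diffusion enforces.
        pad = kernel.shape[-1] // 2
        blur = lambda img: F.conv2d(img, kernel, padding=pad,
                                    groups=img.shape[1])
        return F.mse_loss(blur(dithered), blur(original))

    # Example: 5x5 box blur applied per channel to a 3-channel image.
    kernel = torch.full((3, 1, 5, 5), 1.0 / 25.0)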
underanalyzer 1301 days ago [-]
Although I agree with the sentiment, I'll point out for fun that the algorithm does not have to assign probabilities less than 1. Technically a solution like the one Floyd-Steinberg produces is in the search space; you would just need the right objective to motivate it.
Const-me 1301 days ago [-]
Last time I did dithering was for Polyjet 3D printers. The problem is substantially different from what’s in the article.

The palette is fixed, as the colors are physically different materials. The amount of data is huge, an image is a layer and the complete model has thousands of layers, because 3D.

I implemented a 3D generalization of ordered dithering (https://en.wikipedia.org/wiki/Ordered_dithering). The algorithm doesn't have any data dependencies across voxels; the result depends only on the source data and the position of the voxel. I did it on the GPU with HLSL shaders, and it takes a few seconds to produce thousands of images.
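
For reference, the 2D version is only a few lines (illustrative Python; the 3D generalization swaps the Bayer matrix for a 3D threshold volume indexed by voxel position):

    import numpy as np

    # 4x4 Bayer threshold matrix, normalised to [0, 1).
    BAYER4 = np.array([[ 0,  8,  2, 10],
                       [12,  4, 14,  6],
                       [ 3, 11,  1,  9],
                       [15,  7, 13,  5]]) / 16.0

    def ordered_dither(gray):
        # gray: 2D float array in [0, 1]. Each output pixel depends only
        # on its own value and its (x, y) position -- no neighbour
        # dependencies, so every pixel (or voxel) can run in parallel.
        h, w = gray.shape
        thresh = BAYER4[np.arange(h)[:, None] % 4, np.arange(w)[None, :] % 4]
        return (gray > thresh).astype(np.uint8)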

phonebucket 1302 days ago [-]
Fun. I never considered differentiable dithering before.

Would be interesting to see results using a content loss function as defined by Gatys (2015), as opposed to the L2 loss as given. That should hopefully capture more long-distance structures in the image rather than optimising each pixel independently.
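
Roughly (a sketch assuming PyTorch/torchvision; the layer cutoff is an arbitrary illustrative choice, and inputs are assumed normalised to ImageNet statistics):

    import torch
    import torchvision.models as models

    # Content loss in the spirit of Gatys et al. (2015): compare VGG
    # feature activations rather than raw pixels, so spatial structure
    # matters more than exact per-pixel values.
    vgg = models.vgg16(weights="IMAGENET1K_V1").features[:16].eval()
    for p in vgg.parameters():
        p.requires_grad_(False)

    def content_loss(dithered, original):
        return torch.nn.functional.mse_loss(vgg(dithered), vgg(original))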

dbaupp 1301 days ago [-]
Very interesting!

This seems somewhat similar to the recently published GIFnets [1]. However, I believe GIFnets trains a reusable network to predict palettes and pixel assignments, while this post focuses on optimising the "weights" (i.e. pixel values) for a single image.

I wonder if the loss functions from GIFnets could be applied to this single-image approach to potentially solve the banding problem via something a little more "perceptual" than the variance term mentioned.

[1]: "GIFnets: Differentiable GIF Encoding Framework" https://arxiv.org/abs/2006.13434

underanalyzer 1301 days ago [-]
That's interesting! One thing I was surprised about is that they don't address optimizing the palette and dither pattern across time (b/c most gifs are animated). This feels to me like it would be really interesting and a hard problem for traditional algorithms. They do mention it as a possibility for future work at the end tho. They also seem to have separate losses for the palette net and the dither net instead of just adjusting both to optimize a general image quality metric (although it does look like they have some kind of perceptual loss, it's just not the only objective)
SimplyUnknown 1301 days ago [-]
Looks cool!

Two questions:

- Is this approach also learning the palette? It is kind of presented as a given here, but it is of course very important for good dithering.

- The loss function might work better on spatially downsampled images. The downsampling mixes the image colors, making the dithered image look more like the original, given a good dithering. It also naturally removes the variance that is now penalized in the loss function, as it is blurred away.

underanalyzer 1301 days ago [-]
This blew up while I was asleep so I’ll try my best to answer now!

1. Yes, the palette is being optimized as well, which is imho what makes it different from a quantization approach.

2. That's a good point. Towards the end of the post I cite a reference blog post which does use blur in the loss function. Unfortunately I think pure blur would still produce a noisy image, as it would remove variance in the eyes of the loss function but not in the final image. I would guess something like the example I give with purple, red, and blue pixels would still be a problem for a blurred loss.

bufferoverflow 1301 days ago [-]
What are the applications for dithering these days? I understand it was needed when we had 4 or 16 or 256 color limits. But now we have 8-bit/channel displays, and 10-bit is becoming popular.
formerly_proven 1301 days ago [-]
Eight and ten bits are the full range of the image, but dark scenes only use a fraction of that range and often suffer from banding (and comically bad compression artifacts on certain popular streaming services). Clean, slight gradients as a backdrop often only cover a small distance in RGB, so again the result is very low tonal resolution and banding.

Dithering is vital. Just like dithering is vital for audio, even at 24 bits.

Also keep in mind that the "10 bit" you speak of is implemented by dithering on an 8 bit panel in almost every display. Similarly, many cheaper 8 bit displays are actually 6 bit with dithering. Additionally, 10 bit is a very rare output format [1] and rarely used by applications apart from the handful of HDR games; even for content creation applications 10 bit support is uncommon, and its actually being utilized is even less common.

[1] Just because everything is output and composited in 8 bit, doesn't mean 10 bit display output is entirely for naught. If you are using hardware gamma correction, which you are when you use tools like flux/redshift/... or most ICC display profiles, then 10 bit scanout of an 8 bit framebuffer still makes sense.

sinity 1301 days ago [-]
Yeah, since the day I noticed it in one YT video I can't unsee it in every dark one. For a long time I didn't even know why it looked so bad.

Horrible huge squares/rectangles of slightly different black all over the place.

s1mon 1301 days ago [-]
There are still plenty of 1-bit displays in the world. Consumer electronics with small OLEDs, and various low-power signage have low bit depth. Just because our modern phones and laptops have high bit depth doesn’t mean that dithering goes away.
enriquto 1301 days ago [-]
A straightforward implementation of differentiable dithering consists in applying a large-support band-pass filter to the image (so that it becomes zero-mean), and then thresholding it at 0. Sure, you lose the property that the average colors over large regions are conserved, but the image is perfectly recognizable, even with higher contrast than the original.
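
For instance (a sketch using a difference of Gaussians as the band-pass filter):

    import numpy as np
    from scipy.ndimage import gaussian_filter

    def bandpass_dither(gray, lo=1.0, hi=20.0):
        # Difference of Gaussians acts as a band-pass filter; the result
        # is approximately zero-mean, so thresholding at 0 gives a 1-bit
        # image that preserves local structure, though not average colour.
        band = gaussian_filter(gray, lo) - gaussian_filter(gray, hi)
        return (band > 0).astype(np.uint8)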
lokl 1301 days ago [-]
Might be better with CIELAB color space, where "difference" is closer to "perceptual difference."
tlarkworthy 1302 days ago [-]
Seems like the gains in palette information are wasted on the precise placement of pixels for dithering. Net loss IMHO, except for naive formats like bitmap. Interesting nevertheless, but I guess we could do better by optimizing against the storage format. But then we are at the state of the art.
marcan_42 1302 days ago [-]
Lower bit depth/palette encoding has not been a state-of-the-art option for compressing natural images like this for decades, and nobody is claiming it is.

If you're doing this, it's either because your medium is limited (retro games or 8-bit equivalent embedded systems), because you can't afford the CPU power to decompress something more complex (unlikely these days), because lower bit depth is ideal for the rest of your image (e.g. largely UI graphics with no gradients, and just a few small graphics), or because you just don't care.

But given those reasons exist, there is value in researching better dithering algorithms. Also, to some extent, these things also apply to non-palette formats (dithering to lower bit depths), and that is still relevant today when e.g. converting HDR content to typical 8bpc (24bpp) formats.

the8472 1302 days ago [-]
Also note that some cheap displays are not even true 8bpc and employ temporal dithering to emulate 256 color steps.
this_was_posted 1302 days ago [-]
I think it's mainly beneficial for things where the palette of your medium is limited such as thermal receipts/labels and epaper/e-ink
bitwize 1301 days ago [-]
I'm a fan of the TempleOS approach: divine dithering. Because TempleOS only supports 16 colors (as God literally intended), other colors are dithered a different random way each frame, yielding a dynamic "snow"-like effect, but otherwise good results. Presumably, Terry used the same RNG that allows God to talk to him through his OS; hence, God is dithering the images.