avaku 7 days ago [-]
Thanks for posting this. This looks like something genuinely new. Going to look into it.
DoctorOetker 7 days ago [-]
I haven't read the referenced 2017 paper yet, but mapping the training data to noise (Gaussian or otherwise) is exactly what the RevNet paper does, with the advantage of deterministic reversibility: the trained RevNet is also generative, without having to do gradient descent for each generated image.
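Roughly, the trick is an exactly invertible coupling block: train the forward map to send data to noise, then run it backwards to generate. A toy sketch in the spirit of NICE/RevNet-style additive coupling (the sub-network f, the weights, and the shapes are all made up for illustration):

    import numpy as np

    rng = np.random.default_rng(0)
    W = rng.normal(size=(2, 2))   # parameters of an arbitrary sub-network f

    def f(x):
        # f can be anything; the block's invertibility doesn't depend on it
        return np.tanh(x @ W)

    def forward(x1, x2):          # input split into two halves
        return x1, x2 + f(x1)     # only one half is modified

    def inverse(y1, y2):          # exact inverse, no per-sample optimization
        return y1, y2 - f(y1)

    x1, x2 = rng.normal(size=(1, 2)), rng.normal(size=(1, 2))
    y1, y2 = forward(x1, x2)
    r1, r2 = inverse(y1, y2)
    assert np.allclose(x1, r1) and np.allclose(x2, r2)

Stacking such blocks (alternating which half gets modified) gives a deep net whose inverse is as cheap as its forward pass.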
dtjohnnyb 7 days ago [-]
The intro to the paper has a nice comparison to other similar methods (generative and non-generative), and the inFERENCe blog post linked in this article, https://www.inference.vc/unsupervised-learning-by-predicting... , has a nice comparison at the end of different unsupervised methods and where this method adds novelty (or doesn't!)
DoctorOetker 6 days ago [-]
>has a nice comparison at the end to different unsupervised methods

I don't see the comparisons at the end of the inFERENCe link?

zygotic12 7 days ago [-]
I think I'm missing the point?

A visual proof that neural nets can compute any function: http://neuralnetworksanddeeplearning.com/chap4.html
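The core construction there is simple enough to sketch: a pair of steep sigmoid neurons makes a "bump", and summing bumps approximates any continuous function. Rough illustration (the bump count, steepness, and sine target are my arbitrary choices, not from the chapter):

    import numpy as np

    def sigmoid(z):
        return 1.0 / (1.0 + np.exp(-np.clip(z, -60, 60)))

    def bump(x, left, right, height, steep=1000.0):
        # a steep step up at `left` minus a steep step down at `right`
        return height * (sigmoid(steep * (x - left)) - sigmoid(steep * (x - right)))

    x = np.linspace(0.0, 1.0, 500)
    target = np.sin(2.0 * np.pi * x)
    edges = np.linspace(0.0, 1.0, 51)   # tile [0, 1] with 50 bumps
    approx = sum(bump(x, a, b, np.sin(np.pi * (a + b)))  # height = target at midpoint
                 for a, b in zip(edges[:-1], edges[1:]))
    print(np.max(np.abs(approx - target)))  # small; shrinks with more bumps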

smittywerben 7 days ago [-]
> looks like something genuinely new

I fear those words

a008t 7 days ago [-]
Can someone please ELI5 what this does and where it is (or could be) useful?
tempodox 7 days ago [-]
Don't trust any machine learning algorithm that you haven't faked yourself. You can make random noise mean anything you want.
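Toy demonstration of why (all sizes picked arbitrarily): give an over-parameterized model pure-noise targets and it still fits them perfectly, so a low training error on its own proves nothing:

    import numpy as np

    rng = np.random.default_rng(1)
    X = rng.normal(size=(50, 200))   # 50 samples, 200 parameters
    y = rng.normal(size=50)          # targets are pure random noise
    w, *_ = np.linalg.lstsq(X, y, rcond=None)  # minimum-norm exact fit
    print(np.abs(X @ w - y).max())   # ~1e-14: the noise is "explained" perfectly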
cgearhart 7 days ago [-]
His stated purpose was image compression (although I didn't see evidence that it worked). If the model that encodes the distribution is smaller than your image, then you can send the small set of model parameters (instead of the image) and use the model to reconstruct the target image.
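Back-of-the-envelope version of that trade-off (every number here is hypothetical):

    # Sending the model only wins if its parameters take fewer
    # bytes than the raw image they reconstruct.
    n_pixels = 512 * 512 * 3          # 8-bit RGB image
    image_bytes = n_pixels            # 1 byte per channel value

    n_params = 50_000                 # hypothetical small model
    param_bytes = n_params * 4        # stored as float32

    print(image_bytes, param_bytes)   # 786432 vs 200000
    print("send model" if param_bytes < image_bytes else "send image")

Of course this ignores whether the model actually reconstructs the image faithfully, which is exactly the evidence that seemed to be missing.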