[–] DoctorOetker

I haven't read the referenced 2017 paper yet, but mapping the training data to noise (Gaussian and/or otherwise) is exactly what the RevNet paper does, with the added advantage of deterministic reversibility: the trained RevNet is also generative, without having to run gradient descent for each generated image.
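
For context, the exact invertibility in RevNets comes from coupling blocks: each block updates one half of the activations using only the other half, so the update can be undone in closed form. A toy numpy sketch of that idea (F and G stand in for the residual subnetworks; this is my illustration, not the actual RevNet code):

    import numpy as np

    rng = np.random.default_rng(0)
    W_f = rng.standard_normal((4, 4))   # toy "subnetwork" weights
    W_g = rng.standard_normal((4, 4))
    F = lambda h: np.tanh(h @ W_f)
    G = lambda h: np.tanh(h @ W_g)

    def forward(x1, x2):                # data -> latent ("noise") space
        y1 = x1 + F(x2)
        y2 = x2 + G(y1)
        return y1, y2

    def inverse(y1, y2):                # latent -> data, exact, no gradient descent
        x2 = y2 - G(y1)
        x1 = y1 - F(x2)
        return x1, x2

    x1, x2 = rng.standard_normal((2, 4))
    y1, y2 = forward(x1, x2)
    r1, r2 = inverse(y1, y2)
    assert np.allclose(x1, r1) and np.allclose(x2, r2)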

[–] dtjohnnyb

The intro to the paper has a nice comparison to other similar methods (generative and non-generative), and the blog post linked in this article, by inFERENCe (https://www.inference.vc/unsupervised-learning-by-predicting...), has a nice comparison at the end to different unsupervised methods, noting where this method adds novelty (or doesn't!).

[–] DoctorOetker

> has a nice comparison at the end to different unsupervised methods

I don't see that comparison at the end of the inFERENCe link?

[–] zygotic12

I think I'm missing the point?

A visual proof that neural nets can compute any function: http://neuralnetworksanddeeplearning.com/chap4.html
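
IIRC the construction there pairs steep sigmoids into "bumps" and sums them, so a single hidden layer can approximate any reasonable 1-D function. A rough numpy sketch of that idea (the target function is just an arbitrary smooth curve I picked, not necessarily the one used in the chapter):

    import numpy as np

    sigmoid = lambda z: 1.0 / (1.0 + np.exp(-np.clip(z, -60, 60)))

    def bump(x, lo, hi, height, steep=500.0):
        # difference of two shifted steep sigmoids ~ indicator of [lo, hi]
        return height * (sigmoid(steep * (x - lo)) - sigmoid(steep * (x - hi)))

    target = lambda x: 0.2 + 0.4 * x**2 + 0.3 * x * np.sin(15 * x)
    x = np.linspace(0, 1, 1000)
    edges = np.linspace(0, 1, 41)
    # 40 bumps = 80 sigmoid hidden units + a linear output layer
    approx = sum(bump(x, a, b, target((a + b) / 2))
                 for a, b in zip(edges[:-1], edges[1:]))
    print("max abs error:", np.abs(approx - target(x)).max())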

[–] smittywerben

> looks like something genuinely new

I fear those words

[–] avaku

Thanks for posting this. This looks like something genuinely new. Going to look into it.

[–] tempodox

Don't trust any machine learning algorithm that you haven't faked yourself. You can make random noise mean anything you want.

[–] cgearhart

His stated purpose was image compression (although I didn't see evidence that it worked). If the model that encodes the distribution is smaller than your image, then you can send the small set of model parameters (instead of the image) and use the model to reconstruct the target image.
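
Back-of-the-envelope version of that trade-off (all numbers below are made up for illustration; the article gives no concrete figures):

    # toy arithmetic for "send the model instead of the image";
    # all sizes are hypothetical
    img_bytes   = 256 * 256 * 1        # 256x256 grayscale, 8 bits/pixel
    n_params    = 8_000                # assumed size of a tiny decoder net
    param_bytes = n_params * 2         # stored as float16
    print(img_bytes, param_bytes)      # 65536 vs 16000
    print("compression ratio ~", round(img_bytes / param_bytes, 1))   # ~4.1

So it only pays off if a decoder that small can reconstruct the image acceptably, which is exactly the evidence I didn't see.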

[–] a008t

Can someone please ELI5 what this does, and why/where it could be useful?
