I haven't read the referenced 2017 paper yet, but mapping the training data to noise (Gaussian or otherwise) is exactly what the RevNet paper does, with the advantage of deterministic reversibility: the trained RevNet is also generative, without having to run gradient descent for each generated image.
The intro to the paper has a nice comparison to other similar methods (generative and non-generative), and the inFERENCe blog post linked in this article, https://www.inference.vc/unsupervised-learning-by-predicting..., has a nice comparison at the end to different unsupervised methods and where this method adds novelty (or doesn't!).
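The deterministic reversibility mentioned above can be sketched with an additive coupling step (the building block RevNet-style reversible layers use); this is a made-up minimal example, not the paper's code:

```python
import numpy as np

# Minimal additive-coupling sketch of deterministic reversibility:
# a map that is exactly invertible by construction, so running it
# backwards from a latent/noise vector is generation without any
# per-sample gradient descent. m() can be an arbitrary, even
# non-invertible, "subnetwork"; the weights here are just random.

rng = np.random.default_rng(0)
W1, W2 = rng.normal(size=(4, 4)), rng.normal(size=(4, 4))

def m(h):                      # arbitrary nonlinear subnetwork
    return np.tanh(h @ W1) @ W2

def forward(x1, x2):           # data -> latent
    y1 = x1 + m(x2)
    y2 = x2 + m(y1)
    return y1, y2

def inverse(y1, y2):           # latent -> data, exact and closed-form
    x2 = y2 - m(y1)
    x1 = y1 - m(x2)
    return x1, x2

x1, x2 = rng.normal(size=(2, 4)), rng.normal(size=(2, 4))
y1, y2 = forward(x1, x2)
r1, r2 = inverse(y1, y2)
print(np.allclose(r1, x1) and np.allclose(r2, x2))  # exact reconstruction
```

Because each half is updated using only the other half, subtraction undoes each step exactly; no matrix inversion or optimization is needed.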
>has a nice comparison at the end to different unsupervised methods
I don't see the comparisons at the end of the inFERENCe link?
I think I'm missing the point?
A visual proof that neural nets can compute any function:
> looks like something genuinely new
I fear those words
Thanks for posting this. This looks like something genuinely new. Going to look into it.
Don't trust any machine learning algorithm that you haven't faked yourself. You can make random noise mean anything you want.
His stated purpose was image compression (although I didn't see evidence that it worked). If the distribution encoded by the model is smaller than your image, then you can send the small set of model parameters (instead of the image) and use the model to reconstruct the target image.
Can someone please ELI5 what this does and why/where it is/can be useful?