[–] fixermark link

Question: This is one of the pieces of neural nets that has always seemed like completely opaque voodoo to me. What estimation are you doing to suggest a 512-cell LSTM could stand to be swapped out for a 256-cell bidirectional one? What constraints are you optimizing for?

reply

[–] minimaxir link

Not a constraint per se, but too big a neural network (or any statistical model) can overfit and generalize poorly, and generalizing well is of course a key objective for text generation.

You can use 512-cell LSTMs if you have a lot of text, though.
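As a rough illustration of the capacity difference, a back-of-the-envelope parameter count (the 100-dimensional input is an assumption; the numbers shift with the real embedding size):

    def lstm_params(input_dim, units):
        # Each of the 4 LSTM gates has a kernel, a recurrent kernel, and a bias.
        return 4 * (units * (input_dim + units) + units)

    embedding_dim = 100  # assumed, e.g. 100-d GloVe vectors

    big = lstm_params(embedding_dim, 512)           # one 512-cell LSTM
    small_bi = 2 * lstm_params(embedding_dim, 256)  # two 256-cell directions

    print(big, small_bi)  # 1255424 vs. 731136

The 256-cell bidirectional layer ends up with roughly 40% fewer recurrent weights to overfit with.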

reply

[–] minimaxir link

As someone who has spent a lot of time working with text-generating neural networks (https://github.com/minimaxir/textgenrnn), I have a few quick comments.

1) The input dataset from Memegenerator is a bit weird. More importantly, it does not distinctly identify the top and bottom texts (some use a capital letter to signify the start of the bottom text, but that isn't always the case). A good technique when encoding text for these kinds of tasks is to use a control token (e.g. a newline) to mark such boundaries; see the encoding sketch after this list. (The conclusion notes this problem: "One example would be to train on a dataset that includes the break point in the text between upper and lower for the image. These were chosen manually here and are important for the humor impact of the meme.")

2) The use of GloVe embeddings doesn't make as much sense here, even as a base. Embeddings generally work best on text that follows real-world word usage, which memes do not. (In this case, it's better to let the network train the embeddings from scratch.)

3) A 512-cell LSTM might be too big for a word-level model on a dataset of this size; since the text follows rules, a 256-cell bidirectional LSTM might work better. A model sketch covering points 2 and 3 follows the encoding sketch below.
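For point 1, a minimal sketch of control-token encoding. This is illustrative only; the newline choice and the sample strings are my own, not the paper's pipeline:

    def encode_meme(top_text, bottom_text):
        # A newline acts as a control token marking the top/bottom break,
        # so the model can learn where the caption splits.
        return top_text.strip().lower() + "\n" + bottom_text.strip().lower()

    def decode_meme(generated):
        top, _, bottom = generated.partition("\n")
        return top.upper(), bottom.upper()

    print(encode_meme("ONE DOES NOT SIMPLY", "GENERATE DANK MEMES"))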

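For points 2 and 3, a minimal Keras sketch; the vocabulary size and every other hyperparameter here are assumptions. Generation still proceeds left to right by sliding a fixed window of already-generated tokens, so the backward direction never sees future text:

    from tensorflow.keras.layers import Bidirectional, Dense, Embedding, LSTM
    from tensorflow.keras.models import Sequential

    VOCAB_SIZE = 20000  # assumed; training pairs are a fixed window of
                        # preceding token ids -> the next token id

    model = Sequential([
        # Embeddings trained from scratch (point 2), not GloVe-initialized.
        Embedding(VOCAB_SIZE, 100),
        # Two 256-cell directions in place of a single 512-cell LSTM (point 3).
        Bidirectional(LSTM(256)),
        Dense(VOCAB_SIZE, activation="softmax"),  # next-word distribution
    ])
    model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")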
reply

[–] YeGoblynQueenne link

>> Very silly; best not to alert the media or we'll soon see "AI can now generate memes" clickbait.

This Artificial Intelligence Learned to Create Its Own Memes and the Results will Make you ROFL!!

How scientists trained an AI to create memes by looking at images

The end is near. The singularity is here. Run for your lives!1!!

reply

[–] glup link

Very silly; best not to alert the media or we'll soon see "AI can now generate memes" clickbait.

I thought it was funny, though, that Richard Socher, one of the authors of GloVe and an NLP researcher, is pictured in the generated memes on p. 8 ("the face you make when").

reply

[–] wodenokoto link

Judging from the URL posted in an earlier top thread, this might be a student report.

https://web.stanford.edu/class/cs224n/reports/6909159.pdf

reply

[–] camelCaseOfBeer link

I'd put $100 on the researchers coming up with the title and working from there. "Dank Learning"? Come on, it's a meme in itself. That said, worth publishing? Sure, it's at the top of HN. Groundbreaking results? Nah. Though I admit I'm impressed with the applied solution: using deep learning and some a priori direction to derive context from images is neat.

reply

[–] ekianjo link

Exactly. Memes are funny because they make meta-references that are culturally relevant, or simply attach absurd bottom lines. It's highly unlikely a deep neural network can model anything like that.

reply

[–] dmschulman link

Considering most deep learning results are interpreted as absurd/bizarre, I don't think the machine will have much difficulty intentionally or unintentionally emulating meme culture.

reply

[–] vertexFarm link

That was my thought. They need to crank the noise way up and aim for some surreal memes, not these ancient fossilized memes from 2010.

reply

[–] stochastic_monk link

I think the image needs to be an input somehow. I imagine running an image classifier (e.g., YOLO9000) to extract "pretrained" features and feeding those values into a modified LSTM could let it learn to synthesize text and perception; a sketch of this wiring follows. I'd suggest learning new image embeddings (training a network to extract image features from scratch), but it'd be difficult to get enough images, or enough different images.
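A sketch of that wiring, substituting an off-the-shelf InceptionV3 for YOLO9000; all sizes and names here are assumptions, and this is the standard image-captioning pattern rather than anything from the paper:

    from tensorflow.keras.applications import InceptionV3
    from tensorflow.keras.layers import (Concatenate, Dense, Embedding, Input,
                                         LSTM, RepeatVector)
    from tensorflow.keras.models import Model

    VOCAB_SIZE = 20000  # assumed vocabulary size
    SEQ_LEN = 30        # assumed caption window

    # Frozen pretrained classifier as the feature extractor.
    cnn = InceptionV3(weights="imagenet", include_top=False, pooling="avg")
    cnn.trainable = False

    image_in = Input(shape=(299, 299, 3))
    img_vec = Dense(256, activation="relu")(cnn(image_in))  # project 2048-d features
    img_seq = RepeatVector(SEQ_LEN)(img_vec)                # tile across timesteps

    words_in = Input(shape=(SEQ_LEN,))
    word_emb = Embedding(VOCAB_SIZE, 256)(words_in)

    # The image features become extra inputs at every LSTM step.
    x = Concatenate()([word_emb, img_seq])
    x = LSTM(256)(x)
    next_word = Dense(VOCAB_SIZE, activation="softmax")(x)

    model = Model([image_in, words_in], next_word)
    model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")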

reply

[–] aw3c2 link

This is a complete joke, right? What is better about those results than a simple "image + headline + random bottom line" algorithm?

reply

[–] ekianjo link

Pretty unfunny results.

reply

[–] jwilk link

"I should buy a boat" and "blackjack and hookers" image macros usually require external context to be understood. So you can't even tell if they're funny or not.

The other generated images are just dumb.

reply

[–] dsfyu404ed link

I at least chuckled at the "I'm not racist, I'm just a hipster" one. That said, I'm not a hipster, so it doesn't personally insult me, and I don't see how the image is at all relevant to the text.

reply

[–] yellowapple link

I got a mild chuckle out of them.

reply

[–] brian-armstrong link

Yeah, I immediately looked for a date on this; it feels like "neural net generates ancient text using ancient tomes".

reply

[–] Xyzodiac link

I was expecting this to use some formats that aren't from 2012. It would be interesting to see a neural network that could generate text for more complex meme formats that trend on Twitter and Instagram.

reply

[–] stochastic_monk link

Interestingly, that subreddit (r/SubredditSimulator) is generated by vanilla Markov chains: no neural networks.
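For the curious, a vanilla word-level Markov chain is just a lookup from each n-gram to the words seen to follow it. A minimal sketch (my own, not the subreddit's actual implementation):

    import random
    from collections import defaultdict

    def train_markov(words, order=2):
        # Map each n-gram of `order` words to the words observed after it.
        chain = defaultdict(list)
        for i in range(len(words) - order):
            chain[tuple(words[i:i + order])].append(words[i + order])
        return chain

    def generate(chain, length=20):
        state = random.choice(list(chain.keys()))
        out = list(state)
        for _ in range(length):
            followers = chain.get(tuple(out[-len(state):]))
            if not followers:
                break  # dead end: this n-gram only appeared at the corpus end
            out.append(random.choice(followers))
        return " ".join(out)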

reply

[–] minimaxir link

I created a similar subreddit which does use neural networks: https://www.reddit.com/r/SubredditNN/

reply

[–] yellowapple link

"Why does the sun work?"

"Because that's just how it be sometimes."

Magnificent.

EDIT: apparently human comments are allowed, which might explain why that one fits so well.

reply

[–] minimaxir link

Yes, that is a human comment. (Unfortunately, training NNs for comments is a bit cost/time prohibitive.)

reply

[–] Cthulhu_ link

Who knows, maybe they already are? I mean, I'm confident there are a ton of content farms out there already that just run a cronjob every couple of minutes to pluck the top ten images off a subreddit, check whether they've already been published on their own channel, and republish them; a sketch of such a pipeline follows.

If not, I'll brb, need to set up some websites / facebook accounts.
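(A sketch of that kind of pipeline, using reddit's public JSON listings; the subreddit, file name, and the stubbed-out republish step are all hypothetical:)

    import json
    import pathlib
    import requests

    SEEN_FILE = pathlib.Path("seen_ids.json")  # hypothetical local dedupe store

    def fetch_top_images(subreddit="memes", limit=10):
        # Reddit exposes listings as public JSON endpoints.
        url = f"https://www.reddit.com/r/{subreddit}/top.json?limit={limit}&t=hour"
        resp = requests.get(url, headers={"User-Agent": "repost-bot-sketch"})
        resp.raise_for_status()
        posts = resp.json()["data"]["children"]
        return [(p["data"]["id"], p["data"]["url"]) for p in posts]

    def run_once():
        seen = set(json.loads(SEEN_FILE.read_text())) if SEEN_FILE.exists() else set()
        for post_id, image_url in fetch_top_images():
            if post_id in seen:
                continue  # already republished on "our" channel
            # republish(image_url)  # the actual posting step is left out
            seen.add(post_id)
        SEEN_FILE.write_text(json.dumps(sorted(seen)))

    if __name__ == "__main__":
        run_once()  # scheduled via cron, e.g. */5 * * * *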

reply

[–] toomanybeersies link

9gag was caught out a few years back for automatically harvesting images off the front page of reddit, posting them to 9gag as if they came from a "real user", and artificially inflating the upvotes.

You could tell it was automated because, every once in a while, a very reddit-specific meme would appear on the 9gag front page, with a bunch of confused comments from 9gag users who didn't understand it. Here's a writeup from a couple of years ago on it [1].

I don't doubt that other clickbait sites like BoredPanda do exactly the same thing.

[1] https://www.reddit.com/r/pcmasterrace/comments/3z2wvf/about_...

reply

[–] jcfrei link

It looks like a joke now, but I'm fairly convinced that in the not-too-distant future the most influential social media accounts will be run by some kind of AI.

reply

[–] momania link

Let me leave this here: https://imgur.com/a/ZOcKWmp

reply

[–] Miltnoid link

Holy shit, this has the NIPS format.

If this was submitted, we are certainly in the dankest timeline.

reply

[–] typon link

All their generated examples look like Markov-chain-generated captions: pretty random and generally unfunny. I completely disagree with the claim that you can't differentiate between these generated memes and real memes. None of these would make the front page of reddit, for example.

reply

[–] mr__y link

that's still funnier than 9gag

reply

[–] swebs link

One is a subset of the other. You could also call these ones "advice dog variants" or "unfunny reddit cancer".

reply

[–] SimbaOnSteroids link

In this case, yes, the memes are a subset of image macros. However, that's because the algorithm only produces images. Not all memes are image macros: "hit F to pay respect", the old $pun-aroo, Zoop, "and my axe", and "we did it reddit" are all examples of non-image-macro memes.

reply

[–] dsschnau link

If you live in 2002, yeah.

reply

[–] ferongr link

They're called image macros, not memes.

reply

[–] minimaxir link

See page 8 of the paper.

reply

[–] a_r_8 link

Examples?

reply