minimaxir 130 days ago [-]
As someone who has spent a lot of time working with text-generating neural networks (https://github.com/minimaxir/textgenrnn), I have a few quick comments.

1) The input dataset from Memegenerator is a bit weird. More importantly, it does not distinctly identify top and bottom texts (some captions use a capital letter to signify the start of the bottom text, but that convention isn't applied consistently). A good technique when encoding text for these types of things is to use a control token (e.g. a newline) to mark that boundary. (The conclusion notes this problem: "One example would be to train on a dataset that includes the break point in the text between upper and lower for the image. These were chosen manually here and are important for the humor impact of the meme.")

2) The use of GloVe embeddings doesn't make as much sense here, even as a base. Pretrained embeddings generally work best on text that follows real-world word usage, which memes do not. (In this case, it's better to let the network train the embeddings from scratch.)

3) A 512-cell LSTM might be too big for a word-level model on a dataset of that size; since the text follows rules, a 256-cell bidirectional LSTM might work better. (A sketch covering all three points follows below.)
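
A minimal sketch of all three points, assuming Keras; the "<BREAK>" token, vocabulary size, and layer sizes are illustrative, not what the paper used:

    from keras.layers import Input, Embedding, Bidirectional, LSTM, Dense
    from keras.models import Model

    # 1) Mark the top/bottom split with an explicit control token
    #    ("<BREAK>" here is an illustrative choice, as is the caption).
    caption = "ONE DOES NOT SIMPLY <BREAK> WALK INTO MORDOR".split()

    VOCAB_SIZE = 20000  # word vocabulary, including control tokens
    MAX_LEN = 20        # caption window length in tokens

    inp = Input(shape=(MAX_LEN,))

    # 2) No GloVe weights: let the network learn meme-specific
    #    embeddings from scratch.
    x = Embedding(VOCAB_SIZE, 100)(inp)

    # 3) A 256-cell bidirectional LSTM in place of a 512-cell LSTM.
    x = Bidirectional(LSTM(256))(x)

    # Predict the next word of the caption given the window so far.
    out = Dense(VOCAB_SIZE, activation='softmax')(x)

    model = Model(inp, out)
    model.compile(optimizer='adam', loss='sparse_categorical_crossentropy')
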

fixermark 130 days ago [-]
Question: this is one of the pieces of neural nets that has always seemed like completely opaque voodoo to me. What estimate are you making to suggest a 512-cell LSTM could stand to be swapped out for a 256-cell bidirectional one? What constraints are you optimizing for?
minimaxir 130 days ago [-]
Not a constraint per se, but having too big a neural network (or any statistical model) can cause it to overfit and generalize poorly; and generalizing well is, of course, a good objective for text generation.

You can use 512-cell LSTMs if you have a lot of text, though.

glup 130 days ago [-]
Very silly; best not to alert the media or we'll soon see "AI can now generate memes" clickbait.

I thought it was funny, though, that Richard Socher, an NLP researcher and one of the authors of GloVe, is pictured in the generated memes on p. 8 ("the face you make when").

YeGoblynQueenne 130 days ago [-]
>> Very silly; best not to alert the media or we'll soon see "AI can now generate memes" clickbait.

This Artificial Intelligence Learned to Create Its Own Memes and the Results will Make you ROFL!!

How scientists trained an AI to create memes by looking at images

The end is near. The singularity is here. Run for your lives!1!!

aw3c2 130 days ago [-]
This is a complete joke, right? What is better about those results than a simple "image + headline + random bottom line" algorithm?
wodenokoto 130 days ago [-]
Judging from the URL posted in an earlier top thread, this might be a student report.

https://web.stanford.edu/class/cs224n/reports/6909159.pdf

camelCaseOfBeer 130 days ago [-]
I'd put $100 on the researchers coming up with the title first and working from there. "Dank Learning"? Come on, it's a meme in itself. That said, is it worth publishing? Sure, it's at the top of HN. Ground-breaking results? Nah. Though I admit I'm impressed with the applied solution: using deep learning and some a priori direction to derive context from images is neat.
ekianjo 130 days ago [-]
Exactly. Memes are funny because they make meta-references that are culturally relevant, or simply attach absurd bottom lines. It's highly unlikely a deep neural network can model anything like that.
dmschulman 130 days ago [-]
Considering most deep learning results are interpreted as absurd/bizarre, I don't think the machine will have much difficulty intentionally or unintentionally emulating meme culture.
vertexFarm 130 days ago [-]
That was my thought. They need to crank the noise way up and aim for some surreal memes, not these ancient fossilized memes from 2010.
stochastic_monk 130 days ago [-]
I think the image needs to be an input somehow. I imagine running an image classifier (e.g., YOLO9000) to extract "pretrained" features and feeding those values into a modified LSTM could let it learn to synthesize text and perception. I'd suggest learning new image embeddings (training a neural network to extract image features from scratch), but it'd be difficult to get enough images/enough different images. A rough sketch of that wiring is below.
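
Something like this, perhaps; a frozen ImageNet-pretrained InceptionV3 stands in for the YOLO9000 features (the model choice, sizes, and fusion-by-concatenation are all assumptions on my part):

    from keras.applications.inception_v3 import InceptionV3
    from keras.layers import Input, Embedding, LSTM, Dense, concatenate
    from keras.models import Model

    MAX_LEN, VOCAB_SIZE = 20, 20000

    # Frozen pretrained CNN as the "perception" feature extractor.
    cnn = InceptionV3(weights='imagenet', include_top=False, pooling='avg')
    cnn.trainable = False

    img_in = Input(shape=(299, 299, 3))
    img_feat = cnn(img_in)                    # (batch, 2048) image features

    txt_in = Input(shape=(MAX_LEN,))
    txt = Embedding(VOCAB_SIZE, 100)(txt_in)  # word embeddings
    txt = LSTM(256)(txt)                      # caption-so-far summary

    # Fuse perception with language and predict the next caption word.
    merged = concatenate([img_feat, txt])
    out = Dense(VOCAB_SIZE, activation='softmax')(merged)

    model = Model([img_in, txt_in], out)
    model.compile(optimizer='adam', loss='sparse_categorical_crossentropy')
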
nofinator 130 days ago [-]
ekianjo 130 days ago [-]
Pretty unfunny results.
jwilk 130 days ago [-]
"I should buy a boat" and "blackjack and hookers" image macros usually require external context to be understood. So you can't even tell if they're funny or not.

The other generated images are just dumb.

dsfyu404ed 130 days ago [-]
I at least chuckled at the "I'm not racist, I'm just a hipster" one. That said, I'm not a hipster, so it doesn't personally insult me, and I don't see how the image is at all relevant to the text.
yellowapple 130 days ago [-]
I got a mild chuckle out of them.
Xyzodiac 130 days ago [-]
I was expecting this to use some formats that aren't from 2012. It would be interesting to see a neural network that could generate text for the more complex meme formats that trend on Twitter and Instagram.
brian-armstrong 130 days ago [-]
Yeah, I immediately looked for a date on this - feels like "neural net generates ancient text using ancient tomes"
Cthulhu_ 130 days ago [-]
https://www.reddit.com/r/SubredditSimulator/
stochastic_monk 130 days ago [-]
Interestingly, this subreddit is generated by vanilla Markov chains: no neural networks.
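
For comparison, a vanilla word-level Markov chain really is just a handful of lines; a toy sketch (the corpus and chain order are placeholders):

    import random
    from collections import defaultdict

    def build_chain(corpus, order=2):
        # Map each `order`-word state to the words observed after it.
        words = corpus.split()
        chain = defaultdict(list)
        for i in range(len(words) - order):
            chain[tuple(words[i:i + order])].append(words[i + order])
        return chain

    def generate(chain, order=2, length=30):
        state = random.choice(list(chain))
        out = list(state)
        for _ in range(length):
            followers = chain.get(tuple(out[-order:]))
            if not followers:
                break
            out.append(random.choice(followers))
        return ' '.join(out)
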
minimaxir 130 days ago [-]
I created a similar subreddit which does use neural networks: https://www.reddit.com/r/SubredditNN/
yellowapple 130 days ago [-]
"Why does the sun work?"

"Because that's just how it be sometimes."

Magnificent.

EDIT: apparently human comments are allowed, which might explain why that one fits so well.

minimaxir 130 days ago [-]
Yes, that is a human comment. (Unfortunately, training NNs on comments is a bit cost- and time-prohibitive.)
jcfrei 130 days ago [-]
It looks like a joke now but I'm fairly convinced that in the not too distant future the most influential social media accounts will be run by some kind of AI.
Cthulhu_ 130 days ago [-]
Who knows, maybe they already are? I mean, I'm confident there are a ton of content farms out there already that just run a cronjob every couple of minutes to pluck the top ten images off a subreddit, check whether they've already been published on their own channel, and republish them.

If not, I'll brb, need to set up some websites / facebook accounts.
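
The cronjob half of that is trivial, for what it's worth; a rough sketch against reddit's public JSON listing (the dedupe store and the actual republishing are stubbed out):

    import requests

    SEEN = set()  # in practice, a database of already-republished post IDs

    def top_images(subreddit, limit=10):
        url = f'https://www.reddit.com/r/{subreddit}/top.json?limit={limit}'
        resp = requests.get(url, headers={'User-Agent': 'example-bot/0.1'})
        for post in resp.json()['data']['children']:
            data = post['data']
            if data['id'] not in SEEN and data['url'].endswith(('.jpg', '.png')):
                SEEN.add(data['id'])
                yield data['title'], data['url']

    for title, url in top_images('memes'):
        print(title, url)  # republish to your own channel here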

toomanybeersies 130 days ago [-]
9gag was caught out a few years back for automatically harvesting images off the front page of reddit, posting them to 9gag as if they came from a "real user", and artificially inflating the upvotes.

You could tell it was automated because, every once in a while, a very reddit-specific meme would appear on the 9gag front page, with a bunch of confused comments from 9gag users who didn't understand it. Here's a writeup on it from a couple of years ago [1].

I don't doubt that other clickbait sites like BoredPanda do exactly the same thing.

[1] https://www.reddit.com/r/pcmasterrace/comments/3z2wvf/about_...

momania 130 days ago [-]
Let me leave this here: https://imgur.com/a/ZOcKWmp
Miltnoid 130 days ago [-]
Holy shit, this has the NIPS format.

If this was submitted we are certainly in the dankest timeline.

typon 130 days ago [-]
All their generated examples look like Markov-chain-generated captions: pretty random and generally unfunny. I completely disagree with the claim that you can't differentiate between these generated memes and real ones. None of these would make the front page of reddit, for example.
mr__y 130 days ago [-]
that's still funnier than 9gag
ferongr 130 days ago [-]
They're called image macros, not memes.
swebs 130 days ago [-]
One is a subset of the other. You could also call these ones "advice dog variants" or "unfunny reddit cancer".
SimbaOnSteroids 130 days ago [-]
In this case, yes, the generated memes are a subset of image macros, but that's because the algorithm only produces captioned images. Not all memes are image macros: "press F to pay respects", the old $pun-aroo, Zoop, "and my axe", and "we did it, reddit" are all examples of memes that aren't image macros.
dsschnau 130 days ago [-]
If you live in 2002 yeah
a_r_8 130 days ago [-]
Examples?
minimaxir 130 days ago [-]
See page 8 of the paper.