From Paul Armstrong, this link to Lars Eidnes’s blog on the 13th, with a scheme for “Auto-Generating Clickbait With Recurrent Neural Networks”.
Background: postings on this blog:
And now Eidnes’s come-on:
“F.D.R.’s War Plans!” reads a headline from a 1941 Chicago Daily Tribune. Had this article been written today, it might rather have said “21 War Plans F.D.R. Does Not Want You To Know About. Number 6 may shock you!”. Modern writers have become very good at squeezing out the maximum clickability out of every headline. But this sort of writing seems formulaic and unoriginal. What if we could automate the writing of these, thus freeing up clickbait writers to do useful work?
If this sort of writing truly is formulaic and unoriginal, we should be able to produce it automatically. Using Recurrent Neural Networks, we can try to pull this off.
Standard artificial neural networks are prediction machines, that can learn how to map some input to some output, given enough examples of each. Recently, as people have figured out how to train deep (multi-layered) neural nets, very powerful models have been created, increasing the hype surrounding this so-called deep learning. In some sense the deepest of these models are Recurrent Neural Networks (RNNs), a class of neural nets that feed their state at the previous timestep into the current timestep. These recurrent connections make these models well suited for operating on sequences, like text.
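The recurrence Eidnes describes, the previous timestep's state feeding into the current one, can be sketched in a few lines. This is a minimal vanilla (Elman-style) RNN step in NumPy; the weight names (`W_xh`, `W_hh`, `W_hy`) and sizes are illustrative assumptions, not details from his post:

```python
import numpy as np

rng = np.random.default_rng(0)
vocab_size, hidden_size = 27, 16   # e.g. 26 letters plus space (illustrative)

W_xh = rng.normal(scale=0.01, size=(hidden_size, vocab_size))   # input -> hidden
W_hh = rng.normal(scale=0.01, size=(hidden_size, hidden_size))  # hidden -> hidden (the recurrence)
W_hy = rng.normal(scale=0.01, size=(vocab_size, hidden_size))   # hidden -> output

def rnn_step(x, h_prev):
    """One timestep: the previous hidden state h_prev feeds into the current one."""
    h = np.tanh(W_xh @ x + W_hh @ h_prev)   # new state mixes current input with past state
    logits = W_hy @ h
    p = np.exp(logits - logits.max())
    return h, p / p.sum()                   # softmax: probabilities for the next character

# Run a short sequence: each step's state carries over to the next,
# which is what makes the model suited to sequences like headline text.
h = np.zeros(hidden_size)
for t in [0, 5, 2]:                         # indices of three one-hot input characters
    x = np.zeros(vocab_size)
    x[t] = 1.0
    h, p = rnn_step(x, h)

print(h.shape, p.shape)
```

A trained model would sample the next character from `p`, feed it back in as the next input, and repeat until a whole headline comes out; training (backpropagation through time) is omitted here.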
Let me see if I understand this. I find a text that I want to post on the net, so I fire up the Clickbait RNN and it creates a header for my text. I am then free to do other work that has nothing to do with clickbait for this text — answering my e-mail, or looking for other cool texts to post.
So if the nets do their stuff, they can generate strikingly unoriginal clickbait. If that’s what you want, then go for it!