Actually, it's pretty amazing that it got the structure of the recipes right: title first (with subtitle), ingredients list next, complete with quantities, then preparation, and even serving suggestions at the very end ("Serve on ranged removable pieces"; lol).
There's no information on the training (the link at the top first asks me to accept targeted ads, so no), but it looks like an LSTM trained to predict the next character (hence the nonsensical names of the recipes and some other words, clearly invented by the network to approximate real words in its training data). In a way it's really impressive that this sort of setup can learn such rigid structure.
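To make "predict the next character" concrete: the model learns, for each context, a distribution over the character that comes next, then generates text by sampling from it repeatedly. The sketch below is a deliberately tiny stand-in (an order-1 character model over raw counts, not an LSTM), but the prediction target and sampling loop are the same, and it produces the same kind of plausible-but-invented words.

```python
import random
from collections import defaultdict

def train_char_model(corpus):
    # Count, for each character, how often each character follows it.
    counts = defaultdict(lambda: defaultdict(int))
    for a, b in zip(corpus, corpus[1:]):
        counts[a][b] += 1
    return counts

def sample(model, seed, length, rng):
    # Repeatedly sample the next character from the learned distribution.
    out = seed
    for _ in range(length):
        dist = model.get(out[-1])
        if not dist:
            break  # dead end: character never seen with a successor
        chars, weights = zip(*dist.items())
        out += rng.choices(chars, weights=weights, k=1)[0]
    return out

corpus = "chocolate cake with chopped cherries and cream cheese "
model = train_char_model(corpus)
rng = random.Random(0)
print(sample(model, "ch", 40, rng))
```

Every bigram in the output was seen in the corpus, so the output looks locally word-like while the words themselves can be nonsense; an LSTM does the same thing with a much longer effective context.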
On the other hand, not an expert, but I'm pretty sure there are ways to force the network to learn that the ingredients list and the preparation section must be somehow related. I think, something something attention something would make for more coherent results :)
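For what the "something something attention" would actually buy you: when generating a preparation step, the decoder scores its current state (the query) against a representation of each listed ingredient (the keys) and takes a weighted mix of their values, which is exactly the mechanism that could tie the preparation section back to the ingredients list. A minimal scaled dot-product attention sketch, with toy hand-written vectors (all numbers here are illustrative, not from any trained model):

```python
import math

def softmax(xs):
    # Numerically stable softmax over a list of scores.
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    s = sum(exps)
    return [e / s for e in exps]

def attention(query, keys, values):
    # Scaled dot-product attention: score each key against the query,
    # normalise the scores, and return the weighted sum of the values
    # plus the weights themselves (for inspection).
    d = len(query)
    scores = [sum(q * k for q, k in zip(query, key)) / math.sqrt(d)
              for key in keys]
    weights = softmax(scores)
    mixed = [sum(w * v[i] for w, v in zip(weights, values))
             for i in range(len(values[0]))]
    return mixed, weights

# Toy example: a decoder state that "looks like" the first ingredient
# attends mostly to it.
query = [1.0, 0.0]
keys = [[1.0, 0.0], [0.0, 1.0]]      # one key per ingredient
values = [[1.0, 2.0], [3.0, 4.0]]    # one value per ingredient
mixed, weights = attention(query, keys, values)
```

Here `weights[0] > weights[1]`, i.e. the step being generated is mostly conditioned on the first ingredient rather than on the list as an undifferentiated blob.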
If you put whole recipes into the neural network, of course it won't work. You need to separate the parts. One neural network for ingredients, and a separate one for "instructions".
Then you feed the output of the first network into the second to get a set of preparations that actually prepare the chosen ingredients.
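The wiring being proposed can be sketched in a few lines. Both "networks" below are hypothetical stand-in functions (a random pantry pick and a template), just to show the pipeline shape: the second stage only ever sees ingredients the first stage actually emitted.

```python
import random

def generate_ingredients(rng, pantry, k=3):
    # Stand-in for the first network: propose an ingredient set.
    return rng.sample(pantry, k)

def generate_instructions(ingredients):
    # Stand-in for the second network: conditioned on the first
    # network's output, so every step references a chosen ingredient.
    first, second, third = ingredients
    return [
        f"Chop the {first} and saute until soft.",
        f"Stir in the {second}.",
        f"Finish with the {third} and season to taste.",
    ]

rng = random.Random(1)
pantry = ["onion", "garlic", "cream", "butter", "mushrooms"]
chosen = generate_ingredients(rng, pantry)
steps = generate_instructions(chosen)
```

The point of the structure is the data dependency, not the stubs: a trained second model would take the ingredient set as conditioning input instead of a template, but it could never name an ingredient that isn't in the list it was given.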
I would think it wouldn't get things right. But it should get things less incorrect.
The ingredients are a set of items. Preparation is a sequence of steps with the ingredients as input. A neural network trained on lots of recipes should be able to come up with sequences that actually make sense given the ingredients.
The choice of ingredients is also a function of the desired preparation / serving. This will probably generate more realistic recipes but still nothing usable.
What I get from this is that neural networks really have no idea what they are talking about (which of course is no news). How could they? Is a feature more important or relevant than another just because of its frequency?
People look at the output of neural networks and when they work they say - of course they do. And when they don't they rationalize it with "your model was wrong from the start".
(This is similar to how Alpha-Zero "discovered" the optimal chess openings. Of course it did, it was bound to find them by the rules of chess).
Apparently randomness is an important factor in comedy, just like the somewhat funny memes created by a neural network https://news.ycombinator.com/item?id=17302917
Maybe joke generation is the actual future of AI?