Warning: output may be NSFW/offensive.
This is because the model was trained on real demotivators from open directories, some of which appear to have come from old /b/.
~6000 demotivators were gathered and split into image and caption fragments using Go; the fragments were then labelled and OCR'd by Google Cloud Vision. The resulting text blocks were used to train OpenAI's GPT-2 language model.
GPT-2's output likewise consists of image labels and captions. Each label is searched on DuckDuckGo Images to find a suitable picture, beneath which the captions are then appended.
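Parsing the generated text back into label/caption pairs might look like the sketch below. The `label | caption` line format and the `parseOutput` helper are assumptions for illustration; the project's actual serialisation is not specified here.

```go
package main

import (
	"fmt"
	"strings"
)

// Demotivator pairs the image label used as a DuckDuckGo Images query
// with the caption to be drawn beneath the found picture.
type Demotivator struct {
	Label   string
	Caption string
}

// parseOutput splits raw GPT-2 output into label/caption pairs,
// assuming a "label | caption" line format (hypothetical).
func parseOutput(raw string) []Demotivator {
	var out []Demotivator
	for _, line := range strings.Split(raw, "\n") {
		parts := strings.SplitN(line, "|", 2)
		if len(parts) != 2 {
			continue // skip malformed generations
		}
		out = append(out, Demotivator{
			Label:   strings.TrimSpace(parts[0]),
			Caption: strings.TrimSpace(parts[1]),
		})
	}
	return out
}

func main() {
	sample := "cat | PERSISTENCE never gives up\nsunset | HOPE is overrated"
	for _, d := range parseOutput(sample) {
		fmt.Printf("%s -> %s\n", d.Label, d.Caption)
	}
}
```

Skipping malformed lines matters here because a language model's output is not guaranteed to follow the training format on every line.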