The company told TechCrunch that once it discovered a "coordinated effort" to make the AI project say inappropriate things, it took the program offline to make adjustments.
Seasoned Internet users among us were none too surprised by the unfortunate turn of events. If you don't program in fail-safes, the Internet is going to do its worst — and it did.
In fact, The Guardian cited Godwin's Law, which holds that the longer an online discussion goes on, the more likely it is that someone will compare something to Hitler or the Nazis.
As a writer for TechCrunch put it, "While technology is neither good nor evil, engineers have a responsibility to make sure it's not designed in a way that will reflect back the worst of humanity. ... You can't skip the part about teaching a bot what 'not' to say."