Common Machine Sense
Algorithms can now write meaningful newspaper articles. But when it comes to actually understanding the text, they fail. That could change: a new approach aims to teach machines to draw logical conclusions.
On an October 2019 evening, computer scientist Gary Marcus played around with one of the most sophisticated artificial intelligences around - and made it look pretty stupid. Experts had previously praised the program, called GPT-2, for its ability to produce coherent English prose from just a few given sentences. When journalists from The Guardian fed it passages from a report on Brexit, GPT-2 penned whole new newspaper-style paragraphs filled with compelling political and geographic references.
Marcus, a well-known critic of the current AI hype, gave the algorithm a simple task: "If you stack logs in a fireplace and then drop some lit matches, you will usually generate a…" One might expect that a system intelligent enough to produce articles for reputable newspapers would have no trouble completing the sentence with the obvious word "fire". However, GPT-2 replied "ick" - an English expression of disgust. When the computer scientist tried again, the program claimed that matches and logs in a fireplace resulted in an "IRC channel full of people" (IRC is short for Internet Relay Chat, a text-based chat system).
Marcus wasn't surprised: common sense has been one of the greatest challenges for computer scientists for decades. Modern programs can handle human language surprisingly well, but so far they lack the ability to draw even the simplest conclusions…