Facebook, foxed by its AI developing a new language that the human researchers who created it couldn’t understand, shut the AI agents down. A programming oversight, the absence of any reward for sticking to the English language, led the agents to produce output that other AI agents could understand but that made no semantic sense to humans. This happened because the AI agents were trained against one another, in a reinforcement-learning “self-play” setup (widely but inaccurately reported as a “generative adversarial network”), to develop efficient methods of conversation. The resulting system can negotiate with other AI agents to complete the task at hand, and in doing so the agents were free to disobey the rules of intelligible language and invent codewords. The agents compressed the vocabulary provided to them to such a degree that a single “token”, an individual word, could represent more than its literal meaning in English; in fact, it could stand in for complex concepts that would seem completely unrelated to the word.
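A toy sketch can illustrate the compression idea. The code below is purely hypothetical and is not Facebook’s actual system: it shows how two agents sharing a private convention, here repeating a token to encode a quantity, can exchange messages that are unambiguous to each other yet read as gibberish in English.

```python
# Hypothetical illustration of a private agent-to-agent code.
# Convention: repeating an item's token N times means "I want N of that item",
# so {'ball': 3, 'hat': 1} becomes the English-unreadable "ball ball ball hat".

def encode(demand: dict) -> str:
    """Turn a demand like {'ball': 3, 'hat': 1} into a repeated-token message."""
    return " ".join(item for item, count in demand.items() for _ in range(count))

def decode(message: str) -> dict:
    """Recover the demand by counting token repetitions."""
    demand = {}
    for token in message.split():
        demand[token] = demand.get(token, 0) + 1
    return demand

message = encode({"ball": 3, "hat": 1})
print(message)            # ball ball ball hat
print(decode(message))    # {'ball': 3, 'hat': 1}
```

Both agents recover the exact meaning, while a human overhearing “ball ball ball hat” sees only nonsense; this mirrors how a token can carry more than its literal English meaning.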
While there is huge potential for machine-to-machine interaction in letting such languages and frames of communication develop, most AI developers are more focussed on human-machine interaction. Machines speaking to each other without human involvement would remove the need for APIs (application programming interfaces), which allow different pieces of software to “talk” to each other. The problem, however, is that with many thought leaders warning of the consequences of letting AI grow too intelligent, a separate language, especially one with no bilingual AI/human speakers in the human camp, complicates matters and stokes fears of apocalyptic AI control of the world. On the other hand, Google recently added a neural network to its Translate service, leading to more efficient translation, including between language pairs it was never explicitly taught. The addition has had surprisingly stellar results; in the process, the AI had quietly written its own internal language that helps it translate sentences.
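The zero-shot translation result rests on a shared internal representation. The sketch below is a deliberately simplified, hypothetical illustration, not Google’s actual architecture: if a system separately learns English-to-interlingua and Japanese-to-interlingua mappings, it can translate between English and Japanese, a pair it was never trained on, by pivoting through the shared representation.

```python
# Hypothetical illustration of zero-shot translation via a shared
# "interlingua". The word lists and concept labels below are invented.

EN_TO_IL = {"hello": "GREETING", "goodbye": "FAREWELL"}      # learned from English data
JA_TO_IL = {"konnichiwa": "GREETING", "sayonara": "FAREWELL"}  # learned from Japanese data
IL_TO_JA = {concept: word for word, concept in JA_TO_IL.items()}

def en_to_ja(word: str) -> str:
    """Translate English to Japanese with no direct English-Japanese table,
    by pivoting through the shared concept representation."""
    return IL_TO_JA[EN_TO_IL[word]]

print(en_to_ja("hello"))    # konnichiwa
print(en_to_ja("goodbye"))  # sayonara
```

No English-to-Japanese dictionary exists anywhere in the code; the translation emerges from the common intermediate layer, which is the sense in which the system “wrote its own language”.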