It’s the latest evolution in artificial intelligence, a field that has advanced rapidly in recent years and produced dystopian inventions, from chatbots that sound humanlike, to AI-created art that looks hyper-realistic, to killer drones.
Meta claims Cicero was able to trick people into believing it was human. The model can invite players to form alliances, craft invasion plans and negotiate peace deals when needed. Its mastery of language surprised some scientists, and even its creators, who thought this level of sophistication was years away.
But experts warn that the technology’s ability to withhold information, think several steps ahead of opponents and outwit human rivals raises broader concerns: it could be used in cleverer scams that extort people, or to create more convincing deepfakes.
“It’s a great example of just how much we can fool other human beings,” said Kentaro Toyama, a professor and artificial intelligence expert at the University of Michigan, who read Meta’s paper. “These things are super scary … [and] could be used for evil.”
Since the 1970s, scientists have tried to build artificial intelligence models that can perform tasks better than humans, prompting concern that such advances could bring humanity closer to a science-fiction dystopia in which robots and technology control the world.
In 2019, Facebook created an artificial intelligence that could bluff and beat human poker players. More recently, a former Google engineer claimed that LaMDA, Google’s artificially intelligent chatbot generator, was sentient. And AI-generated artwork has fooled experienced contest judges, sparking ethical debates.
Many of these advances have come in rapid succession, experts said, thanks to progress in natural language processing and sophisticated algorithms that can analyze large volumes of text.
Meta’s research team set out to test how advanced language models could get, hoping to build an AI that “would be generally impressive to the community,” said Noam Brown, a scientist on Meta’s AI research team.
They landed on gameplay, which has long been used to gauge the limits of, and progress in, artificial intelligence. Computers had already mastered chess and Go, a board game that originated in China. Brown said Meta researchers quickly settled on Diplomacy, a game with no numerical rule base that relies more on conversations between people.
Cicero was built to master that skill, powered by two artificial intelligence engines. The first guides strategic reasoning, allowing the model to forecast opponents’ moves and craft the best ways to play the game. The second guides dialogue, allowing the model to communicate with humans in a lifelike way.
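To make that division of labor concrete, here is a minimal sketch of how a “plan first, then talk” two-engine loop of this kind might be wired together. Every class, method and message below is a hypothetical illustration for readers, not Meta’s actual code or API.

```python
# Sketch of a two-engine agent loop, loosely modeled on the division of
# labor described above. All names are hypothetical, not Meta's API.

from dataclasses import dataclass
from typing import Optional

@dataclass
class Intent:
    """An action the agent plans to take this turn."""
    action: str             # e.g. "support your move into Trieste"
    partner: Optional[str]  # player the agent wants to coordinate with

class StrategyEngine:
    """Engine 1: strategic reasoning. A real planner would search over
    candidate moves while predicting how other players respond; this
    placeholder just returns a fixed plan."""
    def plan(self, board_state: dict, chat_history: list) -> Intent:
        return Intent(action="support your move into Trieste",
                      partner="England")

class DialogueEngine:
    """Engine 2: dialogue. Generates a natural-language message that is
    consistent with (grounded in) the planner's chosen intent."""
    def compose(self, intent: Intent) -> str:
        recipient = intent.partner or "everyone"
        return f"{recipient}: if you commit this turn, I will {intent.action}."

def take_turn(board_state: dict, chat_history: list) -> str:
    """One turn of play: decide on a plan, then speak in service of it."""
    intent = StrategyEngine().plan(board_state, chat_history)
    return DialogueEngine().compose(intent)

if __name__ == "__main__":
    print(take_turn(board_state={}, chat_history=[]))
```

The key design point the sketch illustrates is that the dialogue engine does not speak freely; its messages are conditioned on an intent chosen by the planner, which keeps the talk aligned with the agent’s actual moves.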
Scientists trained the model on large volumes of text data from the internet and on roughly 50,000 games of Diplomacy played online at webDiplomacy.net, including transcripts of in-game conversations.
To test it, Meta let Cicero play 40 games of Diplomacy online against human players. It ranked in the top 10 percent of players, the study found.
Meta researchers said that when Cicero was deceptive, its gameplay suffered, so they filtered the data to make it more transparent. Even so, they acknowledged the model could “strategically leave out” information when it needed to. “If it’s talking to its opponent, it’s not going to tell its opponent all the details of its attack plan,” Brown said.
The technology behind Cicero could shape real-world products, Brown said. Personal assistants might get better at understanding what customers need, and virtual characters in the metaverse might be more engaging and interact in a more humanlike way.
“It’s great to be able to make these AIs that can beat humans in games,” Brown said. “But what we want is AI that can cooperate with humans in the real world.”
Some artificial intelligence experts disagree.
Toyama, of the University of Michigan, said the worst-case scenarios are obvious: because Cicero’s code is open for the public to explore, rogue actors could copy it and use its negotiation and communication skills to craft convincing emails that swindle and extort people for money.
If someone trained the language model on data such as diplomatic cables in WikiLeaks, “you could imagine a system that impersonates another diplomat or somebody influential online and then starts a communication with a foreign power,” he said.
Brown said Meta has safeguards in place, such as filtering toxic dialogue and deceptive messages, but acknowledged that this concern applies to Cicero and other language-processing systems alike. “There’s a lot of positive potential outcomes and then, of course, the potential for negative uses as well,” he said.
Despite such internal safeguards, Toyama said, there is little regulation of how these models are used by the wider public, which raises a broader societal concern.
“AI is like the nuclear power of this age,” Toyama said. “It has tremendous potential both for good and bad, but … I think if we don’t start practicing regulating the bad, all the dystopian AI science fiction will become dystopian science fact.”