How to Control AI That Becomes Too Advanced?
Artificial Intelligence is rapidly becoming more advanced. One of the organisations working on AI is OpenAI, the not-for-profit artificial intelligence research organisation co-founded by Elon Musk. Last week, it published a paper demonstrating the progress it has made on predictive text software.
The AI that it developed, called GPT-2, is so effective at writing text based on just a few lines of input that OpenAI decided not to release the full research to the public, out of fear that the tool would be misused. GPT-2 has already been described as the text version of deepfakes. Should we be scared now? Not yet. Should we be cautious? Definitely.
A journalist from The Guardian got access to the system and was allowed to play with it. As Hannah Jane Parkinson described it, the system is “so good it had me quaking in my trainers when it was fed an article of mine and wrote an extension of it that was a perfect act of journalistic ventriloquism”.
The Problem of AI-Assisted Text Writing
AI such as GPT-2 is only the beginning. When AI-assisted fake porn arrived at the end of 2017, we could have known that AI-assisted fake text would not be far behind. GPT-2 was trained on a dataset of 8 million web pages. With approximately 1.8 billion websites in existence, this is a relatively small training set. Despite that, the system has become capable of producing reasonable samples about 50% of the time.
As an example, the researchers fed the AI-system the following, human-written, text:
“In a shocking finding, scientist discovered a herd of unicorns living in a remote, previously unexplored valley, in the Andes Mountains. Even more surprising to the researchers was the fact that the unicorns spoke perfect English.”
GPT-2’s answer was:
“The scientist named the population, after their distinctive horn, Ovid’s Unicorn. These four-horned, silver-white unicorns were previously unknown to science.
Now, after almost two centuries, the mystery of what sparked this odd phenomenon is finally solved.
Dr. Jorge Pérez, an evolutionary biologist from the University of La Paz, and several companions, were exploring the Andes Mountains when they found a small valley, with no other animals or humans. Pérez noticed that the valley had what appeared to be a natural fountain, surrounded by two peaks of rock and silver snow.
Pérez and the others then ventured further into the valley. “By the time we reached the top of one peak, the water looked blue, with some crystals on top,” said Pérez.
Pérez and his friends were astonished to see the unicorn herd. These creatures could be seen from the air without having to move too much to see them – they were so close they could touch their horns.”
It goes on for a few more paragraphs, but as you can see, the text is very readable: it even includes fake quotes and a fake researcher, and it has a convincing narrative. I understand why the system had journalist Hannah Jane Parkinson quaking in her trainers. It is a remarkable achievement that could very easily be turned against us.
The mere thought of bad actors using a system like this to flood the internet with completely fake news articles makes me shiver as well. The current fake news problem seems innocent compared to the problems that could arise when programs like this fall into the wrong hands.
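To give a concrete sense of how this kind of prompt-based generation works, here is a minimal sketch using the much smaller GPT-2 checkpoint that OpenAI did release publicly, loaded through the Hugging Face transformers library (an assumption on my part; the full model behind the unicorn sample above was withheld, so output from this sketch will be noticeably weaker):

```python
# Minimal sketch of prompt-based text generation, assuming the
# Hugging Face `transformers` package is installed (pip install transformers).
# This loads the small, publicly released GPT-2 checkpoint, not the
# withheld full model that produced the samples quoted above.
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")

prompt = (
    "In a shocking finding, scientist discovered a herd of unicorns "
    "living in a remote, previously unexplored valley, in the Andes Mountains."
)

# Sample a continuation of the prompt; because sampling is random,
# every run produces a different piece of "news".
result = generator(prompt, max_length=150, do_sample=True, top_k=40)
print(result[0]["generated_text"])
```

The `top_k` setting restricts each next-word choice to the 40 most likely candidates, which is the same kind of sampling OpenAI described using for its published samples; it keeps the text coherent while still allowing invented details such as quotes and researchers.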
How to Move Forward with Advanced AI?
It is inevitable that such advanced AI solutions will be developed. After all, if OpenAI can develop it, another organisation can do the same. However, that organisation might not be as responsible as OpenAI and might decide to release the full research. Or it might be a bad actor and use the system for its own benefit.
The challenges that we face with AI are enormous. As part of my PhD research, I tried to understand how we can prevent AI from going rogue; how we can solve the principal-agent problem when dealing with artificially intelligent agents. However, that is a completely different problem from a properly developed AI being misused by a rogue actor.
The challenge is that AI is developing much faster than humans can adapt in terms of regulations, culture, norms and values. Before we know it, someone might have developed advanced AI to use against us, and there will be no proper measures we can take to prevent it.
The only way to move forward with advanced AI is, in my opinion, a global new deal for AI: a series of programmes and projects instituted by global organisations such as the G20 or the United Nations to prevent misuse of (advanced) artificial intelligence. As I wrote in 2017, countries should appoint a senior leader responsible for AI, similar to the UAE, which appointed a Minister of AI. Those leaders could then form overarching committees, similar to the Eurogroup, in which the finance ministers of the Eurozone meet informally to exercise political control over the euro, to establish guidelines and policies for AI development.
I sincerely believe we have no time to lose in establishing global guidelines, rules and regulations on how to deal with artificial intelligence. After all, not all organisations will be as responsible as OpenAI. It is only a matter of time before we see the first examples of advanced AI being used to cause problems. However, seeing the developments in autonomous weapons, I am afraid there is not much reason to be optimistic.
Image: Bas Nastassia/Shutterstock