Open AI research the only way forward
Recently, the US non-profit Future of Life Institute called on all ‘AI labs’ around the world to take a six-month break in the development of advanced AI technology. According to the initiators, the purpose of the appeal, which has been supported by more than 50,000 researchers, politicians and business leaders, was to allow some breathing space to consider how technological developments that could fundamentally change our society should be regulated. How good an idea is it to pause the development of AI, and what should such a break involve?
We asked IVA Fellow Sara Mazur, Chair of the Swedish AI initiative WASP.
It is great to see the subject being properly debated and that sound technologies that can benefit humanity are continuing to be developed as a result.
Sara Mazur
What was your first reaction when you read about this call?
– It is always good to have academic debates, and of course we welcome reflections on the development of AI technology and its use. In general, it is difficult to slow down technological advances, and the success of this particular approach is doubtful. I am not even sure it would be possible.
You chair the Wallenberg AI, Autonomous Systems and Software Programme (WASP), Sweden’s largest research initiative to date, covering AI among other fields. What does WASP do?
– WASP is a huge research programme with a budget of more than SEK 6 billion, where we conduct basic research into new technologies in AI, autonomous systems and software. Long-term basic research in WASP’s areas is vital for the development of business and society, and is an enabling factor in the work to achieve a sustainable future.
– WASP is now strengthening Sweden’s competitiveness in areas where global advances are being made at tremendous speed. We have attracted a large number of leading researchers to Sweden, who have had to build up new research teams, and we aim to produce more than 600 new doctoral graduates, at least 150 of whom will have been employed in industry during their doctoral studies. We have also established research arenas in partnership with Swedish industry, where knowledge, infrastructure, systems and technology are shared in order to jointly research areas of interest in fields such as AI. In addition, we are now investing SEK 200 million in cyber security research within WASP, with AI playing a major role.
How would a pause in all advanced AI development, as proposed in the call, affect your work and, in the long run, AI research across Sweden?
– The call actually indicates a need to expand research to increase our understanding of AI and what its possibilities and limitations are. In addition to technical research within WASP, there is also WASP-HS, which aims to develop knowledge of the ethical, economic, labour market, social, cultural and legal implications of new technologies. In light of the rapid technological advances, this research should really be accelerated and expanded.
Among other things, the appeal calls for a pause in the training of all advanced AI systems, such as future generations of ChatGPT, which are built on language models that learn to interpret and produce sophisticated text. One such project is under way in Sweden, GPT SW3, in which WASP is working with AI Sweden and RISE to develop an AI-driven language model for the Nordic languages. Are you concerned that a pause, if it happened, would affect that work?
– No, the language model training that we are doing is part of our Research Arena for Media & Language and is focused on basic research. So it is not about commercialising AI products. It is about academic research. Unlike most of the generative models now available worldwide, our work is open and transparent, both the research outcomes and the resulting language model. This will form the basis for many of the studies and research projects that are needed to continue to develop AI safely and responsibly.
The stated aim of the pause is to allow time to develop rules and frameworks that keep pace with current AI development. Some people think it makes sense, based on some kind of precautionary principle, for this to include research as well. Do you agree with them?
– Of course, it is important to discuss technologies and how to regulate their use. Knowledge is required to establish the right laws and draft the right regulations. We need to have sound knowledge of how new technologies affect our society. And this is achieved by conducting open research.
– But there must be a balance between these regulatory and precautionary considerations and the opportunities created by open and free, ground-breaking research. It would be a shame to prevent harmless research that could produce solutions to the world’s major challenges and save people’s lives.
The appeal also argues that decisions about how we deal with a technology such as AI, which could affect our entire future social development, are a democratic issue: something to be decided not by the CEOs of major tech companies, but by elected politicians. But do our politicians really have the expertise to determine how these developments should best be regulated?
– Politicians always retain the right to make decisions, and the democratic responsibility, but it is important that researchers are involved in conducting research and generating knowledge in this field to secure that expertise. It can be very difficult for a layperson to form an opinion on how to regulate this area well. That is why experts have to be involved in the process. And they are at the moment.
Do you think a global six-month pause in all advanced AI development will actually happen?
– No, I have a very hard time believing that, but it is great to see the subject being properly debated and that sound technologies that can benefit humanity are continuing to be developed as a result.