A British scientist known for his contributions to artificial intelligence has told Sky News that powerful AI systems “can’t be controlled” and “are already causing harm”.
Professor Stuart Russell was one of more than 1,000 experts who last month signed an open letter calling for a six-month pause in the development of systems even more capable than OpenAI's newly launched GPT-4 – the successor to its online chatbot ChatGPT, which is powered by GPT-3.5.
Speaking to Sky’s Sophy Ridge, Professor Russell said of the letter: “I signed it because I think it needs to be said that we don’t understand how these [more powerful] systems work. We don’t know what they’re capable of. And that means that we can’t control them, we can’t get them to behave themselves.”
He said that “people were concerned about disinformation, about racial and gender bias in the outputs of these systems”.
And he argued that with the swift progression of AI, time was needed to “develop the regulations that will make sure that the systems are beneficial to people rather than harmful”.
He said one of the biggest concerns was disinformation and deepfakes (videos or photos of a person in which their face or body has been digitally altered so they appear to be someone else – typically used maliciously or to spread false information).