Rishi Sunak says tech giants ‘can’t mark their own homework’ on Artificial Intelligence as he praises Elon Musk for warning about its ‘risks’ ahead of his talks with billionaire (but PM denies he’s lining up a post-No10 career)
Rishi Sunak tonight warned tech giants they can’t ‘mark their own homework’ on the development of Artificial Intelligence.
The Prime Minister, who is hosting an AI safety summit in Britain this week, insisted it was ‘the responsibility of governments’ to assess the dangers of new technology.
The two-day meeting at Bletchley Park, the home of Britain’s Second World War code-breakers, is being attended by a string of global ministers and tech firms.
They include Elon Musk, the controversial billionaire, with whom Mr Sunak is set to hold talks tomorrow night.
The PM praised the tech investor, who owns X (formerly known as Twitter) and Tesla, for having consistently spoken about the ‘potential risks’ of AI technologies.
But Mr Sunak, a former investment banker, denied he was lining up a post-politics career in big tech once he leaves Downing Street.
Speaking to the BBC this evening, ahead of his expected attendance at the summit tomorrow, the PM said: ‘It’s important that countries are the ones in the driving seat.
‘We can’t expect these companies to mark their own homework. That has to be the responsibility of governments.
‘That’s why I’ve created the AI Safety Institute here in the UK, which I hope can be a globally leading institute to research the safety of these models.
‘But then, as the companies have already done, is give access to the UK to look at these models before they’re released to work with them on that.
‘So we can do the testing that is necessary to make sure that we are keeping our citizens and everyone at home safe.
‘It has to be governments or external people who do that work.’
Mr Sunak is due to sit down with Mr Musk, who recently launched his AI startup known as xAI, for an ‘in conversation’ livestream chat on X after the summit ends tomorrow.
‘Elon Musk for a long time has both been an investor and developer of AI technologies himself,’ the PM added.
‘But for over a decade he has been also talking about the potential risks they pose and the need for countries and companies to work together to manage and mitigate against those risks.
‘So he’s someone who obviously has got something that can be valuable to the conversation.
‘It was great that he has decided to join us as well as other major CEOs.’
Asked whether his hosting of the AI summit was part of him eyeing a career move into the tech sector after finishing his time as PM, Mr Sunak replied: ‘No.
‘This is about doing what is right in the long term interests of this country and that’s what I’m about as Prime Minister.
‘I want to make the right long term decisions, which are not always easy and an example of that is just taking on the challenge of bringing this summit together.
‘It hadn’t been done before. No one had thought to bring this number of people together to discuss these risks, to focus on it.’
Elon Musk’s hatred of AI explained: Billionaire believes it will spell the end of humans – a fear Stephen Hawking shared
Elon Musk wants to push technology to its absolute limit, from space travel to self-driving cars — but he draws the line at artificial intelligence.
The billionaire first shared his distaste for AI in 2014, calling it humanity’s ‘biggest existential threat’ and comparing it to ‘summoning the demon.’
At the time, Musk also revealed he was investing in AI companies not to make money but to keep an eye on the technology in case it gets out of hand.
His main fear is that if AI becomes advanced enough and falls into the wrong hands, it could overtake humans and spell the end of mankind – a scenario known as The Singularity.
That concern is shared among many brilliant minds, including the late Stephen Hawking, who told the BBC in 2014: ‘The development of full artificial intelligence could spell the end of the human race.
‘It would take off on its own and redesign itself at an ever-increasing rate.’
Despite his fear of AI, Musk has invested in the San Francisco-based AI group Vicarious, in DeepMind (since acquired by Google) and in OpenAI, the creator of the popular ChatGPT program that has taken the world by storm in recent months.
During a 2016 interview, Musk said OpenAI was created to ‘have democratisation of AI technology to make it widely available.’
Musk co-founded OpenAI with Sam Altman, the company’s CEO, but in 2018 the billionaire attempted to take control of the start-up.
His request was rejected, prompting him to quit OpenAI and move on to his other projects.
In November 2022, OpenAI launched ChatGPT, which became an instant success worldwide.
The chatbot is built on ‘large language model’ software, trained on a massive amount of text data so it can learn to generate eerily human-like text in response to a given prompt.
ChatGPT is used to write research papers, books, news articles, emails and more.
But while Altman is basking in its glory, Musk is attacking ChatGPT.
He says the AI is ‘woke’ and deviates from OpenAI’s original non-profit mission.
‘OpenAI was created as an open source (which is why I named it “Open” AI), non-profit company to serve as a counterweight to Google, but now it has become a closed source, maximum-profit company effectively controlled by Microsoft,’ Musk tweeted in February.
The Singularity is making waves worldwide as artificial intelligence advances in ways only seen in science fiction – but what does it actually mean?
In simple terms, it describes a hypothetical future where technology surpasses human intelligence and changes the path of our evolution.
Experts have said that once AI reaches this point, it will be able to innovate much faster than humans.
There are two ways the advancement could play out, with the first leading to humans and machines working together to create a world better suited for humanity.
For example, humans could scan their consciousness and store it in a computer, where it would live on forever.
The second scenario is that AI becomes more powerful than humans, taking control and making humans its slaves – but if this comes to pass, it is likely far off in the distant future.
Researchers are now looking for signs of AI reaching The Singularity, such as the technology’s ability to translate speech with the accuracy of a human and perform tasks faster.
Former Google engineer Ray Kurzweil predicts it will be reached by 2045.
He has made 147 predictions about technology advancements since the early 1990s – and 86 per cent have been correct.