AI Safety Summit: Is Sunak about to be played by Elon Musk?

Photo Source: Rishi Sunak Facebook Page

This week, world leaders and the tech elite are talking about artificial intelligence risk. If AI is such a threat, who is best placed to police it? And what part will freelancers play in all of this?

Andrew Ng, a co-founder of Google Brain and a professor at Stanford University, has accused big tech companies of exaggerating the human extinction risks of artificial intelligence (AI) to trigger strict regulation and shut down competition.

Andrew Ng/Image source: Andrew Ng

In an interview with the Australian Financial Review, Ng said that the idea that AI could lead to the extinction of humanity is a “lie” that is being promulgated by big tech companies. He argued that these companies are motivated by a desire to protect their market dominance and stifle competition from open-source AI projects.

If this has any grain of truth in it, then it is conceivable that Rishi Sunak, the Prime Minister of the UK, is at risk of being played by Elon Musk and any other executives who have his ear this week. The PM is convening with the Tesla CEO for a post-AI Safety Summit chat at a time when political tensions in the Middle East are at fever pitch. He is reportedly prioritising frontier AI and existential risks, from mass unemployment to bioweapons, rather than near-term risks such as bias and misinformation, which in times of war can prove disastrous.

AI self-regulation: a risk worth taking for the sake of innovation?

The Institute for Public Policy Research (IPPR) warns that policymakers could miss the opportunity to regulate AI properly, leading to monopolies by a few global players.

The IPPR argues that governments should not rely on self-regulation by AI companies, and should instead create strong national supervisory institutions with statutory powers.

“Just as governments failed to overhaul financial regulation until after the 2008 crisis, and are only belatedly responding to the challenges of the social media revolution, so they now risk being too slow and unambitious in planning how to get the most from AI while better managing the risks,” the report said.

Ng has doubts about a recent public statement by some AI experts, including the CEOs of OpenAI, DeepMind, and Anthropic, who compared the risks posed by AI to nuclear war and pandemics. He called the statement “irresponsible” and said it was “overhyping” the risks of AI.

Responding to Ng’s comments in the article, Elon Musk said: “Giant supercomputer clusters that cost billions of dollars are the risk, not some startup in a garage.”

In July of this year, Musk and an impressive line-up of language model experts launched xAI. According to the company’s own description, it serves a very human purpose: “To understand the true nature of the universe.”

More details about the venture, and how it fits with Musk’s plans to build X into a super app, will come in time. Perhaps the Summit is a prime opportunity to reveal more.

Ng’s comments come at a time when there is growing public concern about the potential dangers of AI. Some experts have warned that AI could be used to create autonomous weapons that could kill without human intervention. Others have expressed concern that AI could eventually become so intelligent that it surpasses human intelligence and poses a threat to our existence.

However, Ng argues in the article that these fears are unfounded. He points out that AI is still in its early stages of development and that there is no evidence to suggest that it will ever become a threat to humanity. He also notes that AI has the potential to solve many of the world’s most pressing problems, such as climate change and disease.

Ng’s comments are likely to spark a debate about the future of AI and the role of big tech companies in its development. His warnings about the dangers of strict regulation are particularly timely, as governments around the world are considering new laws to govern the use of AI.

Strict regulations are already on the table

US Vice President Kamala Harris will speak at the UK Summit and will no doubt reiterate some of the key objectives of President Biden’s executive order on AI, announced this week.

Ng’s concerns about the potential for strict regulation are valid. Overly restrictive regulations could stifle innovation and make it more difficult for smaller companies to compete with big tech. However, it is important to find a balance between promoting innovation and protecting the public from potential harm.

He posted on X this week:

Laws to ensure AI applications are safe, fair, and transparent are needed. But the White House’s use of the Defense Production Act—typically reserved for war or national emergencies—distorts AI through the lens of security, for example with phrases like “companies developing any foundation model that poses a serious risk to national security.”

Andrew Ng

One way to strike that balance is to focus on developing ethical guidelines for the development and use of AI. Arguably, these guidelines should be drawn up in consultation with a wide range of stakeholders, including experts in AI, ethics, and law. Governments can also play a role by promoting transparency and accountability in the AI industry. However, the US is already taking a much more hands-on approach.

With this Executive Order, the President directs the most sweeping actions ever taken to protect Americans from the potential risks of AI systems. One such action requires developers of the most powerful AI systems to share their safety test results and other critical information with the U.S. government.

The EU is also taking a strict stance with its EU AI Act, which is expected to produce something concrete by the end of 2023.

Freelancers: the independent voices of AI ethics?

Ultimately, the responsibility for ensuring that AI is used for good lies with all of us. We need to be informed about the risks and benefits of AI, and we need to hold big tech companies and those using AI for criminal purposes accountable for their actions. We also need to work together to develop ethical guidelines for the development and use of AI.

The US government is already promoting a hiring drive for AI talent. Freelancers from around the world will have many opportunities to build and share their knowledge as they work project to project. Above all, they can share their independent thinking, something Rishi Sunak should be seeking this week.

What are your thoughts on the subject? Please share them in our comments section.
