It is perhaps the new-age digital arms race.
As artificial intelligence use grows in the private sector, the technology is also becoming an increasingly important tool for governments around the world.
In fact, the weaponization of the technology weighs squarely on the minds of top government officials.
“Every country around the world is racing to use artificial intelligence to build a future that expresses their values,” said Arati Prabhakar, director of the White House Office of Science and Technology Policy under President Joe Biden. “We can disagree about a lot of things in this country, but one thing we can all agree on is none of us wants to live in a future shaped by technologies driven by authoritarian regimes.”
Prabhakar made the comments in a video interview shared in Orlando during the opening night of Techonomy 23, a summit that looks at the “promise and peril” of AI technologies.
The three-day event in the city’s Lake Nona region brings together experts from several industries to tackle the topic.
Her appearance comes on the heels of a late-October executive order from Biden that aims to make AI safer, more secure and more trustworthy.
“American leadership in the world today requires American leadership in artificial intelligence,” she said. “That’s why the executive order was such a big step.”
The executive order created a new framework for AI safety and security.
Among its requirements, developers must conduct safety tests and share their results with the government.
In addition, the executive order protects against the use of AI to engineer dangerous biological materials and provides guardrails against AI-enabled fraud and deception.
Prabhakar compared the current state of AI regulation and oversight to the early days of medicine.
“Anyone could sell you a potion and it might kill you or it might make you well,” she said. “Clinical trials were a statistical way not of solving all of the problems but giving us enough confidence in a particular drug to be able to harness the benefit of pharmaceuticals and contain those risks.”