
U.S. Takes First Step to Formally Regulate AI

The Biden administration follows China, Italy, Canada and the U.K.

The Biden administration said on Tuesday that it is seeking public comment on upcoming AI policies as the U.S. moves to put safeguards in place against harms like bias without dampening innovation.

In a first official step towards potential AI regulations at the federal level, the U.S. Commerce Department’s National Telecommunications and Information Administration (NTIA) wants public input on developing AI audits, assessments, certifications and other tools to engender trust from the public.

“The same way that financial audits created trust in financial statements for businesses, accountability mechanisms for AI can help assure that an AI system is trustworthy,” said Alan Davidson, assistant commerce secretary for communications and information, at an event in Pittsburgh, Pennsylvania.

“But real accountability means that entities bear responsibility for what they put out into the world,” he added.

Written comments must be submitted to the NTIA by June 10.

The NTIA is seeking input specifically on the types of certifications AI systems need before they can be deployed, what datasets are used and how they are accessed, how to conduct audits and assessments, which designs developers should choose, and what assurances the public should expect before an AI model is released, among other issues.

“Our initiative will help build an ecosystem of AI audits, assessments and the tools that will help assure businesses and the public that AI systems can be trusted,” Davidson said. “This is vital work.”

There already have been attempts to regulate AI, with more than 130 bills proposed or passed in federal and state legislatures in 2021. This is a “huge” jump from the early days of social media, cloud computing and even the internet itself, Davidson said.

Meanwhile, China, Italy, Canada and the U.K. are stepping up scrutiny of generative AI.

Italy has temporarily banned ChatGPT and threatened to impose fines until OpenAI addresses its user privacy concerns, while Canada’s privacy chief said it will be scrutinizing the chatbot. Meanwhile, the U.K.’s privacy watchdog said organizations using or developing generative AI must ensure people’s data is protected, as the law requires.
