The European Union wants tech companies to warn users about artificial intelligence-generated content that could lead to disinformation, as part of a voluntary code that Twitter left last month.
While new AI technologies "can be a force for good," there are "dark sides" with "new risks and the potential for negative consequences for society," Vera Jourova, a European Commission vice president, told reporters on Monday. "The new technologies raise fresh challenges for the fight against disinformation."
Companies that signed up to the EU's voluntary code of practice to fight disinformation, including TikTok, Microsoft and Meta Platforms, should now "clearly label" any services with a potential to disseminate AI-generated disinformation, Jourova said.
The EU is rushing to catch up and set rules for generative AI as it negotiates its AI Act, which will go up for a key vote in the European Parliament's plenary next week. Even if the EU institutions agree to a final version by the end of the year, companies will probably not need to comply until 2026 — so various politicians are proposing a slew of ideas to cover the bloc in the meantime.
The EU's voluntary code, which helps companies demonstrate compliance with the EU's content moderation rules, the Digital Services Act, so far doesn't address the risks of AI-generated content.
Jourova said signatories "who integrate generative AI into their services, like Bing chat for Microsoft, Bard for Google," should now also "build in necessary safeguards that these services cannot be used by malicious actors to generate disinformation."
Jourova has met with the more than 40 companies that signed up to the code. Elon Musk's Twitter left last month.
"We believe this is a mistake of Twitter. Twitter has chosen the hard way. They chose confrontation," Jourova said. "Make no mistake, by leaving the code, Twitter has attracted a lot of attention and its actions and compliance with EU law will be scrutinized vigorously and urgently."
Executive Vice-President Margrethe Vestager is working on a code of conduct with G-7 partners, as well as India and Indonesia, to convince companies to add more safeguards as they roll out this technology.
In conversations with the U.S. government and industry leaders like OpenAI CEO Sam Altman, Vestager said there has been "consensus" on some of the guardrails. For example, Altman was interested in transparency requirements, external audits and red teaming.
"What is important is, of course, that the process between the countries is not captured by the lowest common denominator, so that it doesn't work," Vestager said. "It must be democracy-led and not industry-led, but it needs to have industry input."
Internal Market Commissioner Thierry Breton has announced an "AI Pact" that would bridge the years between when the AI Act is agreed and comes into force.
The pact, which Google CEO Sundar Pichai has already said he'd sign onto, will help companies comply with the rules before the AI Act comes into force, Breton told reporters today.
The commission could offer stress tests to gauge how well companies already comply with the rules, similar to what Breton is planning with Twitter for the Digital Services Act.