European Union officials unveiled new rules on Thursday to regulate artificial intelligence. Makers of the most powerful A.I. systems will have to improve transparency, limit copyright violations and protect public safety.

The rules, which are voluntary to start, come during an intense debate in Brussels about how aggressively to regulate a new technology seen by many leaders as crucial to future economic success in the face of competition with the United States and China. Some critics accused regulators of watering down the rules to win industry support.

The guidelines apply only to a small number of tech companies like OpenAI, Microsoft and Google that make so-called general-purpose A.I. These systems underpin services like ChatGPT, and can analyze enormous amounts of data, learn on their own and perform some human tasks.

The so-called code of practice represents some of the first concrete details about how E.U. regulators plan to enforce a law, called the A.I. Act, that was passed last year. Tech companies played a major role in drafting the rules, which will be voluntary when they take effect on Aug. 2, before becoming enforceable in August 2026, according to the European Commission, the executive branch of the 27-nation bloc.