Report from TechCrunch
In Brief – The EU AI Act's first regulatory compliance deadline arrived on February 2, 2025, requiring companies to comply with the rules governing AI applications that are prohibited under the law, subject to very limited exceptions. The AI Act categorizes AI applications into four risk levels: minimal, limited, high, and unacceptable. This initial set of rules applies to the highest-risk AI systems, which are essentially never allowed. They include systems that build social credit scores or determine individuals’ risk profiles, systems that use biometrics to infer a person’s characteristics, such as their sexual orientation, and systems that try to infer people’s emotions at work or school. The use of “real time” facial recognition in public places is restricted, but not banned, for law enforcement. The law also prohibits the creation or expansion of facial recognition databases from security camera footage or, in a shot at Clearview AI, from images scraped from online sources. While complying with this first set of rules will be mostly pro forma for large AI developers, a lengthy compliance timeline for other AI Act requirements follows, including the rules to be imposed on the largest general-purpose generative AI systems.
Context – The biggest question in AI public policy remains whether governments are moving toward direct regulation or “soft law” governance. Some believe regulatory guardrails will benefit AI development by easing user uncertainty. Others argue the burdens will breed uncertainty, slow innovation, and drive entrepreneurs and investment elsewhere. The EU’s AI Act is the benchmark for direct regulation. The Trump Administration, which revoked the Biden AI executive order, is in the investment-focused camp, as is the Starmer Government in the UK. Inside the EU, final holdouts against the AI Act included champions of local generative AI leaders such as France’s Mistral. The breakthroughs of China-based AI upstart DeepSeek give EU challengers hope, but the coming rounds of the EU’s AI regulations are unlike anything likely to emerge in the US, or in China, where AI guardrails focus on state censorship rules.
