Starting today, the European Union has slammed the door shut on AI systems it has tagged as “unacceptably risky.” That’s not up for debate.
Regulators now have the power to erase entire product lines overnight. The first compliance deadline under the EU’s AI Act was always going to land on February 2, and the bloc’s message is simple: break these rules and you could cough up as much as €35 million (around $36 million) or 7% of your annual global revenue, whichever hurts more.
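To see how that “whichever hurts more” clause plays out, here’s a minimal sketch in Python of the penalty ceiling the Act sets for prohibited practices. The turnover figures are hypothetical, chosen only to show when each ceiling dominates:

```python
# A minimal sketch (not legal advice) of the AI Act's penalty ceiling for
# prohibited practices: up to EUR 35 million or 7% of total worldwide annual
# turnover, whichever is higher. Turnover figures below are hypothetical.

FLAT_CAP_EUR = 35_000_000   # fixed ceiling in euros
TURNOVER_RATE = 0.07        # 7% of worldwide annual turnover

def max_fine_eur(annual_turnover_eur: float) -> float:
    """Return the maximum possible fine: the greater of the two ceilings."""
    return max(FLAT_CAP_EUR, TURNOVER_RATE * annual_turnover_eur)

# Small firm: 7% of EUR 10M is EUR 700K, so the EUR 35M flat cap dominates.
print(f"EUR {max_fine_eur(10_000_000):,.0f}")      # EUR 35,000,000

# Large firm: 7% of EUR 2B is EUR 140M, which exceeds the flat cap.
print(f"EUR {max_fine_eur(2_000_000_000):,.0f}")   # EUR 140,000,000
```

In other words, the flat €35 million cap is the floor of the ceiling: for any company whose annual turnover exceeds €500 million, the 7% figure takes over and the maximum exposure scales with revenue.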
The European Parliament approved the AI Act last March after years of fine-tuning. By August 1, the law was already in force. Now companies across every industry have to deal with it.
The EU’s blacklist covers some of AI’s darkest corners. Forget about systems that score people based on behavior or reputation, like the dystopian social credit system China runs. That’s gone. AI designed to warp people’s choices through deceptive tricks or subliminal messaging is also banned.
Also, AI that exploits vulnerabilities, say, preying on someone’s age or disability to manipulate them, is off the table. And here’s one for the history books: the EU has made it illegal to predict whether someone will commit a crime based on their appearance.
If your AI is mining biometric data to make assumptions about gender, sexual orientation, or political beliefs, pack up and get out. Real-time remote biometric identification for law enforcement is also forbidden unless it meets narrowly defined conditions spelled out in the EU AI Act.
That means no facial scans at subway stations or public events just to catch “potential suspects.” Emotion-tracking AI in schools and workplaces also got cut, except in rare cases tied to medical treatment or safety.
These bans apply to every company operating within EU borders. Headquarters don’t matter: Silicon Valley giants, Asian AI startups, European labs, you name it. If you deploy a restricted system, the EU says you’re going to pay the fine.
In September 2024, more than 100 tech companies, including Google, OpenAI, and Amazon, signed a voluntary pledge called the EU AI Pact. Signatories reportedly promised to clean up their AI projects ahead of the Act’s deadlines and to map out which of their systems might fall under the high-risk or banned categories.
Interestingly enough, Meta, Apple, and French AI company Mistral flat-out refused to sign, arguing in October 2024 that the regulations were too rigid and would stifle innovation.
Still, skipping the Pact doesn’t exempt anyone from following the law, and realistically, most of these companies aren’t touching the banned categories anyway.
There are carve-outs, though. Under the AI Act, law enforcement agencies can still use AI systems that collect biometric data in public, provided they get prior approval from the relevant governing bodies; that exemption covers emergencies such as tracking down missing persons or stopping imminent attacks. A separate carve-out permits emotion-detecting AI in schools and offices, but only when justified by medical or safety concerns.
But the AI Act doesn’t operate in isolation. The General Data Protection Regulation (GDPR), Network and Information Security Directive (NIS2), and Digital Operational Resilience Act (DORA) all have overlapping requirements for data handling and security.