The Pendulum Swings

Is it truly Innovation vs. Regulation?

The recent agreement on the AI Act marks a pivotal moment for the EU, demonstrating its continued influence in shaping global tech regulation. This landmark legislation, the first of its kind worldwide, positions the EU as a leader in AI governance. But to truly capitalize on this momentum and establish the EU as a prime destination for AI innovation, a significant financial commitment is needed: billions of euros invested in AI research and development, covering computing power, chip infrastructure, and retaining talent within the EU. This investment is crucial for Europe's strategic autonomy in a critical 21st-century technology, and for avoiding the kind of dependence we saw in oil and gas, which nearly pushed Europe into turmoil. Today, Europe trails in advanced AI model development, with few exceptions, and that is increasingly a geostrategic issue. What we need is a well-resourced European equivalent of DARPA to drive this initiative forward.

1. Risk-Based Classification

AI systems under the new framework are categorized according to the level of risk they pose, ensuring a nuanced approach to regulation. Systems that present higher risks face stricter obligations, balancing the scales between ensuring safety and fostering innovation. The approach is akin to a great pendulum, thoughtfully swinging between the need for secure, reliable AI applications and the drive to push the boundaries of technological advancement. That balance is crucial to navigating the complex landscape of AI development and its integration into society; a simplified sketch of how such tiering might look in code follows below. I wish crypto had been regulated early. I wish NFTs had been regulated early. I wish Web3 had been regulated early. I wish social media had been regulated early.

Related article: Why Can't We Regulate Social Media Like Previous Media?
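
To make the tiering concrete, here is a minimal Python sketch of the four risk levels generally attributed to the Act (unacceptable, high, limited, minimal). The tier names come from public summaries of the legislation; the obligations mapped to them here are simplified illustrations, not legal text.

```python
# A minimal sketch (not the Act's legal taxonomy) of risk-based tiering.
from enum import Enum

class RiskTier(Enum):
    UNACCEPTABLE = "unacceptable"  # e.g., social scoring: banned outright
    HIGH = "high"                  # e.g., hiring, credit: strict obligations
    LIMITED = "limited"            # e.g., chatbots: transparency duties
    MINIMAL = "minimal"            # e.g., spam filters: largely untouched

# Hypothetical, simplified mapping from tier to headline obligations.
OBLIGATIONS = {
    RiskTier.UNACCEPTABLE: ["prohibited from the EU market"],
    RiskTier.HIGH: ["pre-market conformity assessment",
                    "technical documentation",
                    "human oversight"],
    RiskTier.LIMITED: ["disclose that users are interacting with AI"],
    RiskTier.MINIMAL: [],
}

def obligations_for(tier: RiskTier) -> list[str]:
    """Return the illustrative obligations attached to a risk tier."""
    return OBLIGATIONS[tier]

print(obligations_for(RiskTier.HIGH))
```

The point of modeling it this way: the burden scales with the tier, so low-risk builders carry almost no overhead while high-risk deployments absorb the heavy checks.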


2. Biometric Systems Ban 

The Act prohibits AI systems that categorize people by sensitive characteristics, such as race or sexual orientation, and bans the untargeted scraping of facial images from the internet. But be careful when you get into your new connected car. What exactly is it capturing? So much for Altman's other pet project.

Related article: Wireless Data Privacy and Connected Cars: Evaluating Risks


3. Transparency Requirements 

The Act sets new transparency standards for foundation models, fostering accountability.

The AI Act mandates the disclosure of key information to individuals and the public, including technical documentation, adherence to EU copyright law, and detailed summaries of the content used to train AI models. This transparency poses a challenge for large foundation models, however. These models train on vast amounts of data through numerous complex layers, making it difficult to trace the exact pathways by which they reach their conclusions. That complexity can obscure the inner workings of an AI system, complicating efforts to fully understand and explain its decision-making.
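
As a thought experiment, here is a minimal sketch of what a provider's disclosure package might look like as a data structure. The field names are hypothetical, loosely modeled on the disclosure themes named above (technical documentation, copyright adherence, training content summary), not on any official schema.

```python
# Illustrative only: a hypothetical record of the disclosure package the
# Act's transparency provisions point toward for foundation model providers.
from dataclasses import dataclass, field

@dataclass
class TransparencyDisclosure:
    model_name: str
    technical_documentation_url: str   # where the technical docs live
    copyright_policy: str              # how EU copyright law is respected
    training_content_summary: str      # high-level summary of training data
    known_limitations: list[str] = field(default_factory=list)

    def is_complete(self) -> bool:
        """Basic completeness check: every mandatory field is non-empty."""
        return all([self.technical_documentation_url,
                    self.copyright_policy,
                    self.training_content_summary])

disclosure = TransparencyDisclosure(
    model_name="example-fm-1",  # hypothetical model
    technical_documentation_url="https://example.com/docs",
    copyright_policy="Honors opt-outs under EU copyright law.",
    training_content_summary="Public web text and licensed corpora (summary).",
)
print(disclosure.is_complete())  # True
```

Note what this sketch cannot capture: a summary field can describe the training corpus, but it cannot trace how any individual output was derived from it, which is exactly the explainability gap described above.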

4. Fines for Non-Compliance 

Companies that fail to comply may face fines of up to 7% of their global turnover, underscoring the Act's seriousness.
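
A quick back-of-the-envelope illustration of what that ceiling means in practice, assuming the 7% cap applies (the percentage that actually applies varies with the type of violation; 7% is the cap cited for the most serious breaches):

```python
# Worked example of the 7%-of-global-turnover ceiling mentioned above.
def max_fine(global_turnover_eur: float, rate: float = 0.07) -> float:
    """Upper bound of a fine at the given rate of global annual turnover."""
    return global_turnover_eur * rate

for turnover in (50e6, 1e9, 100e9):  # hypothetical companies
    print(f"Turnover EUR {turnover:,.0f} -> "
          f"max fine EUR {max_fine(turnover):,.0f}")
# Turnover EUR 50,000,000 -> max fine EUR 3,500,000
# Turnover EUR 1,000,000,000 -> max fine EUR 70,000,000
# Turnover EUR 100,000,000,000 -> max fine EUR 7,000,000,000
```

Because the cap is a percentage of turnover rather than a flat amount, it scales to bite the largest players hardest.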


5. Enforcement Challenges

The enforcement challenges posed by the EU AI Act are multifaceted. First, the Act's broad scope and intricate technical nature present significant hurdles. For member states, ensuring consistent application across diverse legal systems and industries is a daunting task. Moreover, AI technologies often evolve faster than regulatory frameworks, necessitating continual updates and adaptations. There is also the challenge of balancing innovation with regulation: overly stringent rules might stifle technological advancement, while lax enforcement could invite misuse and ethical breaches. Finally, given the global nature of technology firms, cross-border cooperation and coordination are critical, yet complex, to achieve effective oversight. These challenges underscore the need for a dynamic, well-resourced, and collaborative approach to AI governance in the EU.

The regulatory oversight outlined in the AI Act involves a complex network of regulators across all 27 EU member states, with each state expected to stand up its own centralized AI oversight. To ensure safety and compliance, high-risk AI systems must pass stringent conformity assessments before entering the market, and regulators have the authority to request detailed information on those systems and to conduct thorough algorithmic audits verifying adherence to the rules. This comprehensive approach, however, presents significant challenges in resources and expertise: the Act requires the recruitment of new specialists and coordination across multiple countries, adding a layer of logistical complexity to its implementation.

The AI Act strikes a commendable balance, fostering innovation while safeguarding fundamental rights and public safety. But it is not without shortcomings, particularly in the regulation of foundation models (FMs). FMs are becoming increasingly central to the AI landscape, and while the Act's approach to these models is a positive step, it falls short in terms of robust enforcement.

Here's the crux of the matter: without FM-specific regulation, the burden of regulatory compliance falls disproportionately on downstream providers. It is far less efficient and cost-effective to rectify errors at the deployment stage, over and over, than to address them at the source: the foundation models. This aligns with the least-cost-avoider principle in law and economics, which advocates placing regulation where compliance is cheapest, at the FM level. Self-regulation in this area is not only inefficient but potentially hazardous.

Now, let's address the innovation concern. Does effective FM regulation impede innovation? Research says no. A recent study found that for advanced AI models below the capabilities of GPT-4 and Gemini, such as Bard and ChatGPT, expected compliance costs amount to only about 1% of total development expenses (the study is here: https://lnkd.in/ecZTE9RF). To put that in perspective: a model costing €100 million to develop would carry roughly €1 million in compliance costs. That is a small price to pay for ensuring AI safety, a viable investment for anyone in the industry, and a fundamental part of adhering to industry best practices.

Let the pendulum swing!

About the Author

Curt Doty specializes in branding, product development, social strategy, integrated marketing, and UXD. He has extensive experience with AI-driven platforms including MidJourney, Adobe Firefly, Bard, ChatGPT, Colossyan, Murf.ai, and Shutterstock. His entertainment branding legacy includes Electronic Arts, EA Sports, ProSieben, SAT.1, WBTV Latin America, Discovery Health, ABC, CBS, A&E, StarTV, Fox, Kabel 1, TV Guide Channel, and Prevue Channel.

He is a sought-after public speaker, having been featured at Mobile Growth Association, Mobile Congress, App Growth Summit, Promax, CES, CTIA, NAB, NATPE, MMA Global, New Mexico Angels, Santa Fe Business Incubator, EntrepreneursRx, and AI Impact. He is now represented by Ovationz. His most recent consultancy, RealmIQ, helps companies manage the AI Revolution.

© 2023 Curt Doty Company LLC. All rights reserved. RealmIQ and MediaSlam are divisions of the Curt Doty Company. Reproduction, in whole or part, without permission of the publisher is prohibited. Publisher is not responsible for any AI errors or omissions.
