by Eva Jaidan – Head of Artificial Intelligence – MEGA International
Artificial Intelligence: Regulate Without Slowing Innovation
The challenge with any technological breakthrough, including AI, is how to regulate effectively without constraining innovation. As AI rapidly evolves, its risks increase along with its potential, raising concerns for both businesses and regulators. The main question is whether it’s possible to balance regulatory oversight with the freedom needed for entrepreneurial innovation, particularly on a global scale.
AI: The Dawn of a New Digital Revolution
With the rapid advancement of artificial intelligence, the digital revolution is entering a new phase. What we might call the “first digital revolution” focused primarily on usage: physical processes were progressively replaced by digital tools. AI marks what could be seen as the “second” digital revolution: the optimization and partial automation of both the tools and their usage. As a result, AI isn’t just changing how we use technology; it’s redefining the very nature of our interaction with the digital world.
AI is affecting every industry, with healthcare and pharmaceuticals being particularly notable and innovative examples. AI can drastically reduce the time and cost of bringing new drugs to market by using algorithms to quickly analyze millions of chemical compounds and predict their effectiveness. It also enables the repurposing of existing drugs for other conditions, as seen with treatments adapted for COVID-19. While the benefits are real and substantial, it’s crucial to remain cautious about the risks of biased or erroneous results and to avoid overreliance on AI. Furthermore, protecting patient data and ensuring its security must remain a top priority.
In 2023, advances in finance were equally spectacular: simplified transactions, broader access to credit, better fraud detection, streamlined financial operations, and more customer-centric services. Manufacturing, meanwhile, reaped the benefits of generative AI, including optimized industrial processes, reduced production downtime, and improved product quality. Global issues such as climate adaptation and health security have also benefited from this revolution, showcasing AI’s vast potential to tackle some of the world’s most pressing challenges.
The Regulatory Beginnings of AI: Identifying and Mitigating Risks
Technological advances have been nothing short of spectacular, and our awareness of the potential issues and risks has grown just as fast. Few innovations have so quickly demanded oversight and regulation. Europe’s “AI Act” is a prime example.
Adopted in 2024, this European regulation categorizes AI risks into four tiers: minimal risk (e.g., games or minor utilities like spam filters), limited risk (e.g., chatbots without system control, subject to transparency duties), high risk (e.g., tools used in infrastructure or critical decision-making, with requirements for data quality, documentation, and human oversight), and unacceptable risk (a clear threat to people’s safety or fundamental rights). AI applications deemed unacceptable, such as real-time biometric surveillance in public spaces (e.g., facial recognition), social scoring, and predictive policing, are banned outright.
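To make the taxonomy concrete, here is a minimal Python sketch of the four tiers. The example systems and their tier assignments are illustrative assumptions only; classifying a real system requires legal analysis under the AI Act itself.

```python
from enum import Enum

class AIActRiskTier(Enum):
    """The four risk tiers described above."""
    MINIMAL = "minimal"            # e.g., spam filters, games
    LIMITED = "limited"            # e.g., chatbots without system control
    HIGH = "high"                  # e.g., infrastructure, critical decisions
    UNACCEPTABLE = "unacceptable"  # prohibited outright

# Hypothetical tier assignments for the examples cited in the text.
EXAMPLE_TIERS = {
    "spam_filter": AIActRiskTier.MINIMAL,
    "customer_chatbot": AIActRiskTier.LIMITED,
    "credit_decision_tool": AIActRiskTier.HIGH,
    "social_scoring": AIActRiskTier.UNACCEPTABLE,
}

def is_prohibited(system: str) -> bool:
    """Systems in the unacceptable tier may not be deployed at all."""
    return EXAMPLE_TIERS.get(system) is AIActRiskTier.UNACCEPTABLE

print(is_prohibited("social_scoring"))  # True
```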
Beyond identifying risks, the AI Act aims to harmonize AI rules and regulate practices. This includes mandating the disclosure of AI-generated content and establishing transparency and accountability mechanisms for developers and users of AI systems. The most significant impact on enterprise IT infrastructure will likely stem from “high-risk” AI applications: comprehensive mapping and supervision are essential to understand where and how generative AI is deployed within an information system and technology stack.
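One way to begin that mapping is a simple inventory that ties each AI system to its place in the architecture and its presumed risk tier. The sketch below is a hypothetical illustration; the record fields, vendor names, and entries are assumptions, not a prescribed AI Act or HOPEX format.

```python
from dataclasses import dataclass

@dataclass
class AISystemRecord:
    """One entry in a hypothetical AI inventory for an IT landscape."""
    name: str
    vendor: str
    business_capability: str  # where it sits in the enterprise architecture
    risk_tier: str            # "minimal" | "limited" | "high" | "unacceptable"
    human_oversight: bool     # is a human in the loop for its decisions?

inventory = [
    AISystemRecord("resume-screener", "VendorA", "HR / recruiting", "high", True),
    AISystemRecord("marketing-copy-bot", "VendorB", "Marketing", "limited", False),
    AISystemRecord("spam-filter", "VendorC", "IT operations", "minimal", False),
]

# Surface the systems that demand documentation, data-quality checks,
# and human oversight under a high-risk classification.
for system in (s for s in inventory if s.risk_tier == "high"):
    print(f"{system.name} ({system.business_capability}): "
          f"human oversight in place: {system.human_oversight}")
```

Even a flat list like this lets an architect answer the regulator’s first question: where is AI running, and who is accountable for it.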
Enterprise Architects and AI: A Proactive Role in Response to Regulations
The legislation may not directly affect enterprise architecture itself, except for AI projects developed in-house. However, the use of “high-risk” AI tools could prompt some vendors to withdraw or restrict their products. Enterprise architects must be ready to replace or retire non-compliant tools, which could significantly reshape the IT landscape.
Europe isn’t the only region addressing AI risks. The WHO has emphasized the need to secure systems and encouraged dialogue among health stakeholders. Sam Altman, CEO of OpenAI, has called for global AI regulation while criticizing the EU’s draft AI Act as too restrictive, particularly its requirement to disclose copyrighted materials used to train generative AI. In contrast, the 2023 executive order signed by President Joe Biden in the US focuses on system security and privacy protection without placing restrictions on source material.
European legislation emphasizes data transparency and stakeholder accountability. For enterprise architects, finding a balance between compliance and flexibility—driving innovation while ensuring security and ethics—will be essential.
Safeguarding AI’s Emergence: Essential “Guardrails”
To ensure responsible innovation and protect users, rules and monitoring tools must govern how AI is developed and used. These measures, two of which are sketched in code after the list, include:
- A documented and maintained risk management program.
- Advanced data governance for training, validation, and testing.
- Comprehensive technical documentation prior to market release.
- Detailed records and logs of all operations.
- Transparent and understandable user documentation.
- Human oversight of operations.
- Robust cybersecurity and a high level of accuracy.
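As a small illustration of two items on this list, operation logging and human oversight, here is a minimal Python sketch. The wrapper, the confidence threshold, and the stand-in model are all hypothetical; a real implementation would follow the organization’s documented risk management program.

```python
import json
import logging
from datetime import datetime, timezone

logging.basicConfig(level=logging.INFO)
audit_log = logging.getLogger("ai_audit")

CONFIDENCE_FLOOR = 0.85  # assumed threshold below which a human must review

def predict_with_guardrails(model_fn, payload: dict) -> dict:
    """Call a model, keep a detailed record of the operation, and flag
    low-confidence outputs for human review."""
    result = model_fn(payload)  # model output assumed to carry 'confidence'
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "input": payload,
        "output": result,
    }
    audit_log.info(json.dumps(record))  # detailed log of the operation
    if result.get("confidence", 0.0) < CONFIDENCE_FLOOR:
        result["needs_human_review"] = True  # route to human oversight
    return result

# Usage with a stand-in model function:
def fake_model(payload: dict) -> dict:
    return {"label": "approve", "confidence": 0.62}

print(predict_with_guardrails(fake_model, {"applicant_id": "A-123"}))
```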
With these guardrails in place, AI can evolve responsibly, ethically, and sustainably while minimizing risks. Striking a balance between regulation and innovation remains essential, however: investment should be directed towards promising areas of AI while a robust regulatory framework is maintained.
Regulating to Influence Investments: The Right Balance?
While regulatory approaches differ across countries and continents, public authorities are now working to regulate AI usage. However, they’re not the only ones—many organizations, both public and private, are establishing strict internal rules for AI use to prevent data leaks, protect industrial secrets, and address concerns around transparency and bias.
Whether through public regulation or internal controls, the key questions remain: does setting up a regulatory framework in advance risk constraining innovation? And would after-the-fact oversight be enough to prevent the bad practices or abuses that could quickly emerge? These questions apply to any type of innovation.
Perhaps the solution lies in an incentive-based approach rather than a restrictive one. We could implement measures that guide AI R&D towards specific goals while emphasizing user awareness and training. As in cybersecurity, users are often the first line of defense, and potentially the weakest link.
A well-informed user can identify abuse and approach information thoughtfully. Companies play a vital role in raising awareness, educating users, and promoting best practices in AI. Likewise, educating young students is essential for developing their critical thinking skills amidst an overwhelming amount of information.
The Need for Global Regulation
Given the transnational nature of digital networks, AI regulation must be agreed upon globally. Compromise will be required, but only a coordinated approach can be effective. Imposing strict regulations like those adopted in Europe on AI systems whose infrastructure is located elsewhere would be unrealistic and counterproductive, and could hinder Europe’s progress in the AI race.
Consistent and harmonized global regulation would maximize AI benefits while minimizing risks. This approach ensures sustainable and ethical development of this revolutionary technology.
About the author
Eva Jaidan is the lead Data Scientist at MEGA. With a PhD in Industrial AI and a background in applied mathematics, she brings over 8 years of expertise in applying AI and building end-to-end data science products. Eva is dedicated to delivering value to MEGA’s customers through innovative AI-powered products. She leads the AI and Analytics strategy at MEGA, ensuring that its solutions drive transformation and help organizations reach their goals.
About MEGA International
MEGA International is a global SaaS software company operating in 52 countries, offering solutions for Enterprise Architecture, Business Process Analysis, Governance, Risk & Compliance, and Data Governance. MEGA created HOPEX, a collaborative platform that provides a single repository to help companies collect, visualize, and analyze information to plan better and adapt to change.