Global Perspectives: A Conversation on AI Governance Trends with Tom Szekeres and Devanshu Mehrota

General Assembly
October 4, 2024

As AI continues to evolve and expand, regulating this powerful technology has become a top priority for governments and policymakers worldwide.

To help you better understand how lawmakers around the world are approaching the matter—and the implications for businesses and your own career path—we sat down with U.K.-based Tom Szekeres, Digital Consultant and Strategy Director at re~grow and Lead Generative AI and Modern Marketing Instructor at General Assembly, and U.S.-based Devanshu Mehrota, Senior Director and Analyst at Gartner and Lead Data Science Instructor at General Assembly.

Watch the recording here.

Password: BTid=di1

Or read on for highlights from the revealing discussion of AI’s impact on business growth and how emerging regulations are giving rise to an exciting range of career opportunities in tech and other fields—in Europe, the United States, and across the globe.

Approaches to AI governance vary worldwide

Around the world, the approach to AI regulation and governance is largely shaped by each region’s goals, values, and priorities for the technology. In short, Devanshu highlighted the differences between the U.S.’s decentralized, pro-innovation approach to AI governance and the EU’s unified, more risk-sensitive model. Tom noted that the U.K. has taken an approach similar to the U.S.’s, focusing on growth and innovation instead of overhasty regulation of a fast-changing technology.

Devanshu: So far, the U.S. has embraced a more flexible approach to AI governance with the goal of encouraging development and innovation in the sector. Rather than strict rules, AI regulations act more as frameworks, establishing broad principles—such as transparency, explainability, privacy, security, accountability, and bias mitigation—and leaving it up to businesses to determine how best to meet these standards.

Since there is no comprehensive AI legislation at the national level and laws vary significantly from one state to another, AI governance in the U.S. lacks cohesion and consistency.

In contrast, the EU has taken a more cautious stance with stricter, use-case-specific policies. Instead of a patchwork of laws, it’s created a centralized legal framework—the AI Act. The law classifies AI systems under four risk levels—minimal, limited, high, and unacceptable—and assigns different requirements to each one.

For instance: 

  • Limited- and minimal-risk systems, which include applications such as games, spam filters, and chatbots, face few restrictions beyond disclosing to users that they are interacting with an AI.
  • High-risk applications, such as self-driving cars, credit assessments, or worker management technologies, are considered potentially harmful to individuals and are therefore subject to the strictest regulations.
  • Technologies falling under the unacceptable-risk category, such as biometric categorization, in which AI sorts individuals into categories based on characteristics ranging from age, sex, and hair color to ethnic origin and political orientation, are strictly banned.

Tom: In terms of regulations, the U.K.’s approach has been very similar to that of the U.S. Instead of enacting nationwide AI legislation, regulation has been delegated to the most relevant authorities. There are three reasons behind this “wait-and-see” strategy.

The first is the belief that experts in each sector are better equipped to understand the nuances of AI applications and develop targeted regulations. The second is a general agreement that a step-by-step approach, with rules tailored to each sector, is a smarter way to regulate a technology that’s still evolving. The third is that, after years of post-Brexit economic slowdown, the U.K. is looking to AI as a way to boost innovation and regain some of the growth it has lost.

Going back to what Devanshu mentioned about EU regulations, comprehensive legislation does offer the advantage of regulatory consistency and cohesion across the region. However, establishing strict rules so early on could also potentially hurt innovation and make it difficult for businesses and individuals to maximize AI’s benefits.

New roles are arising as AI regulations evolve

Tom and Devanshu emphasized that, as AI technology and AI regulations evolve, so too will the need for diverse, specialized roles focused on securing ethical, accurate, and compliant AI outputs and applications.

Devanshu: Using AI raises endless questions and concerns. How do you ensure that what’s coming out of the model makes sense? Do you have an auditor evaluating the model’s performance and ethical integrity regularly? How do you secure compliance? All of these concerns have sparked new career paths focused on addressing them:

  • AI-generated work auditor: This position emerged due to concerns about AI hallucinations. Essentially, the AI-generated work auditor is responsible for evaluating AI outputs used in making critical decisions.
  • AI QA analyst: Similar to how business analysts evaluate business performance and assess its impact, AI QA analysts assess the quality and performance of AI systems, ensuring they operate as intended and produce consistent outputs.
  • AI ethicist: This role requires someone with expert knowledge in AI—ideally with a PhD in the field—who deeply understands the technology’s implications and relevant policies so they can guide organizations in applying AI responsibly.
  • AI compliance manager: With broad variation in laws across borders and regions and a fast-changing regulatory landscape, organizations will require professionals dedicated to ensuring compliance.

Tom: I would add two more roles to the ones Devanshu mentioned:

  • Data governance experts: These people will ensure that an organization’s data—particularly the data used to train and support AI models—is handled responsibly, ethically, and in compliance with data governance laws.
  • AI developers: AI developers will be essential for creating and maintaining effective AI systems with real-world applications. It’s critical that they have a solid understanding of regulatory requirements to ensure compliance from the ground up.

AI is reshaping soft and hard skill sets

As AI becomes more integrated into the workplace, people will need to revise their soft and hard skill sets. Tom and Devanshu highlighted that the new AI-integrated workplace will demand a basic understanding of AI regulations and ethics, along with the technical skills to work effectively with AI systems.

Tom: In terms of soft skills, I think we’re going to see a lot of demand at the intersection of AI, ethics, communications, and law. For example, as I mentioned earlier, companies seeking to ensure compliant AI development and applications from the outset will prioritize developers who are well-versed in AI laws and regulations. In addition, soft skills like critical thinking and communication will be increasingly important across all roles, as there will be a growing need to explain AI decisions clearly and effectively.

Devanshu: When explaining how hard skills will evolve to my students, I often use this example: 10 years ago, people listed skills like Excel or PowerPoint on their resumes. Today, that’s not necessary because these are now considered basic, universal skills—like using email or the internet. Since Python and SQL are among the most widely used languages for programming machine learning models, they will soon join that list of basic skills.

The age of AI is just getting started

Tom: AI is still very much a work in progress—it has a propensity to hallucinate, its applications are often unclear, and it has made high-profile mistakes. What worries me is that people may get caught up in the hype cycle and start losing faith in the technology before it reaches its full potential. So, if you’re experimenting with generative AI, remember that the technology is still in its early stages, and that a combination of good engineering and sound public and private policy can go a long way toward helping us overcome these initial hurdles.

Devanshu: It’s easy to get spooked by all the doom and gloom there’s been around AI. But the truth is that AI holds incredible potential to make us more productive, more innovative, and more creative. Don’t think of AI as a threat but as an opportunity to be better.

Make it real

At General Assembly, we deliver the goods to help you forge ahead and keep you ahead of the curve. We’re here for thinkers and doers who don’t wait for some imagined future, but build their skills (AI and more) to contribute to the future we want and need. It’s a charge-up from the inside out, so change never stops you in your tracks.
Explore our course catalog and move forward with real skills.
