I made a video explaining how the EU AI Act works. If you'd like to leave feedback, please do it through this form or through the comments on the video!
Transcript:
[0:00] Introduction:
Three years ago, in April 2021, the European Commission proposed the AI Act. After extensive negotiations, the Act came into force on August the 1st, and we now celebrate its one month anniversary. To understand how the Act is made, you have to first understand
the European Union. The European Union is divided into three different bodies. You have the Commission, which proposes new laws. You have the European Parliament, where laws are debated. And you have the Council of the EU, where the different member states are each represented by delegates. Together, these three institutions form the legislative trio of the European Union, and those three bodies are responsible for the AI Act that's now in force.
Okay, so we have some understanding of the three main bodies of the EU, but why does the European Union actually matter? Well, on the one hand, it has 450 million people living across its member states. Its combined GDP is around 16 trillion euros, and it's the biggest single market in the world.
Single market here means that goods, services, and people can move freely across the different member states of the EU. All of this combined means that the European Union plays a significant role in the global economy, and that no AI lab or business can really ignore the effects and advantages that come with trading within the EU.
[1:31] - Brussels Effect
Now, one reason why legislation in the EU tends to have significance is because of something called the Brussels effect. This is the idea that legislation adopted by the EU tends to spread globally: the EU sets a standard for something, and that standard tends to be adopted all over the world.
We can see this with different things. There have been standards set by the EU on environmental protection, chemical regulation, and data protection with the GDPR, and a lot of other countries have looked at the way regulation in the EU has been set up and adopted it themselves. Why does this happen? There are two reasons I'm going to point out. One is the competency of the EU. In making this Act, the EU is signalling to other countries that it has taken the first step in becoming a regulator, and that, given it has spent the last three years crafting this piece of legislation, it probably knows what it's talking about, and that there are a lot of competent people in the bodies of the EU crafting this legislation.
So if you were to, say, borrow the EU AI Act and implement it in your own country, things probably won't go too badly. The other reason is economic incentives. If you're an AI lab, you probably want your model to operate within the EU so you can make more money. In doing this, you're going to have to follow the standards and regulations set by the AI Act.
So you're going to have to conduct evaluations and other bits and bobs I will get into in a sec. And it makes sense that if you're going to follow the regulations and standards put in place by the EU, you want one coherent way of testing your models: a single model that works everywhere. You don't want an EU-compliant model and a separate non-EU-compliant model. That just doesn't make much sense; it's going to cost you more and be less efficient. So you're going to decide that the standards you put in place for your models in the EU will just apply universally.
And there you have the Brussels effect taking place once more. Now, the EU AI Act has three different ways of categorising AI models, and the two I want to focus on are general purpose AI models and general purpose AI models with systemic risk. I want to thank Alejandro Ortega, who helped explain some of the EU AI Act to me, which informed the way this video was made.
[3:44] - How are models categorised?
Now, the Act defines general purpose AI models as models that are generally capable of performing a wide range of distinct tasks. That is a vague definition, and lots of models are covered under it. All you really need is a model that was trained with self-supervised learning and has at least a billion parameters.
If you're a developer of a general purpose AI model, you have a responsibility to publish information on the architecture of your model, the data that was used, and the way in which your model was trained. That means the developer has a responsibility to make sure that there is a publicly available source for others to see what type of data an AI model was trained on.
And the rest of the information, around, say, architecture, training techniques, and energy consumption, is meant to be reported to the newly established AI Office of the European Union. The AI Office is the new regulator in town, meant to make sure that these developers are actually complying with the regulation, and to receive this information whenever it may require it.
Developers who build general purpose AI models with systemic risk have those obligations, but also more. They have to conduct red-teaming efforts and evaluations on their model, ensure that there are cybersecurity measures in place so that their model weights can't be stolen, and be able to mitigate the risks that their model presents.
So the question then is: how are AI models that pose systemic risk actually categorised? There are two routes. One of them is by default: any model that is trained using more than 10^25 floating point operations will be considered a model that poses systemic risk. The alternative route is that, instead of using the floating point operations metric, you figure out through evaluating the model whether it has high-impact capabilities.
This means that if a model isn't trained with that much compute but still has some really powerful capabilities, you have a way of ensuring that it stays safe.
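The two classification routes above can be sketched as a simple check. This is purely illustrative, not official EU tooling: the function names are my own, and the 6 × parameters × tokens compute estimate is a common rule of thumb from the scaling-laws literature, not something the Act itself prescribes.

```python
# Illustrative sketch only; names and the compute heuristic are assumptions
# for explanation, not anything defined by the EU AI Act or the AI Office.

# Default route: training compute above 10^25 FLOPs implies systemic risk.
SYSTEMIC_RISK_FLOP_THRESHOLD = 1e25


def poses_systemic_risk(training_flops: float,
                        has_high_impact_capabilities: bool = False) -> bool:
    """Default FLOP route, or the alternative capability-based route."""
    return (training_flops > SYSTEMIC_RISK_FLOP_THRESHOLD
            or has_high_impact_capabilities)


def estimate_training_flops(n_params: float, n_tokens: float) -> float:
    """Rough rule of thumb (not from the Act): FLOPs ~ 6 * params * tokens."""
    return 6 * n_params * n_tokens


# A 100B-parameter model trained on 15T tokens lands around 9e24 FLOPs,
# just under the default threshold...
flops = estimate_training_flops(1e11, 1.5e13)
print(poses_systemic_risk(flops))        # False
# ...but evaluations can still capture it via the capability route.
print(poses_systemic_risk(flops, True))  # True
print(poses_systemic_risk(3e25))         # True: over the FLOP threshold
```

The second route is exactly why the capability flag exists in the sketch: the FLOP threshold alone would miss a highly capable model trained efficiently.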
[5:55] - The EU AI Office
Now, I touched on them lightly, but the EU AI office is going to play an important role in this as well. They are the main body responsible for seeing that the AI act actually works.
This means they will be responsible for monitoring and enforcing the regulation, ensuring that developers are complying with it, and finally communicating to the public about their findings, what they're actually doing, and how they're keeping developers in check. I think it's easy to mistake the EU AI Act coming into force for it actually having an effect right now.
And that isn't the case. There is a long, long timeline over which the EU AI Act is supposed to unfold, and this is reasonable: you need a gap between a regulation coming in and people starting to comply with it. The AI Office is currently trying to figure out its codes of practice and how it's going to evaluate models, as well as its communication channels for getting the information it wants from developers. A lot of this still needs to be set up, which makes this a very, very important time: there are a lot of opportunities here to shape how the AI Act is going to play out and the effect it's going to have.
In a year's time, it will start to have an effect on general purpose AI models, and at that point there should hopefully be metrics and evaluations in place to test models and to check that developers are actually following the Act. There's also going to be a lot of figuring out on what kinds of penalties or fines will come into place if developers choose not to comply with the EU AI Act.
A lot of this figuring out is going to take place now. And if you're personally interested in working on this, the AI Office is hiring, albeit in their San Francisco office; if you are an EU citizen, this seems like a good opportunity for you to try and utilise.
Okay. That was the EU AI Act very, very briefly.
I hope you enjoyed.