The EU AI Act could help get to Trustworthy AI, according to the Mozilla Foundation
The EU AI Act may turn out to be one of the most influential legal frameworks in the world. Here is how and why the Mozilla Foundation is engaging with it, and how you can get involved, too
One year after the first draft was introduced, details about the EU AI Act remain few and far between. Even though this regulatory framework is still not finalized — or rather, precisely because of that — now is the time to learn more about it.
Previously, we covered some key facts about the EU AI Act: who it applies to, when it will be enacted, and what it’s about. We embarked on this exploration alongside the Mozilla Foundation’s Executive Director Mark Surman and Senior Policy Researcher Maximilian Gahntz.
As Surman shared, Mozilla’s focus on AI emerged around the same time the EU AI Act began its lifecycle, circa 2019. Mozilla has worked with people around the world to map out a theory of how to make AI more trustworthy, focusing on two long-term outcomes: agency and accountability.
Today we pick up the conversation with Surman and Gahntz, discussing Mozilla’s recommendations for improving the EU AI Act, how people can get involved, and Mozilla’s AI Theory of Change.