The state of AI ethics: The principles, the tools, the regulations
What do we talk about when we talk about AI ethics? Just like AI itself, definitions for AI ethics seem to abound. A definition that seems to have garnered some consensus is that AI ethics is a system of moral principles and techniques intended to inform the development and responsible use of artificial intelligence technologies.

If this definition seems ambiguous to you, you aren’t alone. There is an array of issues that people tend to associate with the term “AI ethics,” ranging from bias in algorithms to the asymmetrical or unlawful use of AI, the environmental impact of AI technology, and the national and international policies around it.

For Abhishek Gupta, founder and principal researcher of the Montreal AI Ethics Institute, it’s all that and more. The sheer number of principles and guidelines out there, each trying to segment or categorize this area into subdomains (sometimes overlapping, sometimes not), presents a challenge.

The Montreal AI Ethics Institute (MAIEI) is an international nonprofit organization democratizing AI ethics literacy. It aims to equip citizens concerned about artificial intelligence to take action, as its founders believe that civic competence is the foundation of change.

The institute’s State of AI Ethics Reports, published semi-annually, condense the top research and reporting around a set of ethical AI subtopics into one document. As the first of those reports for 2022 has just been released, VentureBeat picked some highlights from the nearly 300-page document to discuss with Gupta.

Read the full article on VentureBeat
