Regulation of AI is necessary for ethical reasons. The European AI Act is a good start
Academics and business representatives gathered in Prague to discuss the most pressing ethical, political, and legislative issues concerning the new European regulation of artificial intelligence, the so-called AI Act. The workshop mapped the background to the creation of the AI Act, its advantages, and its limits. The event was organized by the Centre for Environmental and Technology Ethics - Prague (CETE-P) together with the consulting company PricewaterhouseCoopers Czech Republic (PwC).
“The AI Act is the result of a long-term process that began with the work of an advisory group, which I had the honor to be part of. This group gave the European Commission general recommendations on how the AI Act should look. However, the subsequent negotiation and legislative process was, in my view, not entirely transparent, lacking, for example, public and expert participation,” said Mark Coeckelbergh from CETE-P, a world-leading technology philosopher who was part of the High-Level Expert Group on AI, in his presentation.
Coeckelbergh proposes greater democratization of AI, arguing that the current state of affairs suffers from a “democratic deficit.” He sees the problem in the concentration of power and decision-making authority in a small number of people within technology corporations, which also has implications for how regulations and legislation on the use of AI are formed. Coeckelbergh examines this topic in detail in his new book Why AI Undermines Democracy and What To Do About It.
Risk Assessment
The European AI Act operates on the principle of risk-based assessment. It distinguishes four categories according to the severity of the risks associated with a given product, with specific obligations for companies in each category. In some cases the legislation prohibits the use of AI outright, such as social scoring, employee monitoring, emotion recognition, and other areas. The strictest of the permitted categories, high risk, covers uses of AI in contexts such as education or hiring.
Christina Hitrova from PwC says that European legislation is important particularly for this reason. “The AI Act is inevitable. There is a so-called liability gap in relation to the use of AI – states have a duty to protect the human rights of their citizens, but if AI is not regulated, it can put some of these rights at significant risk, for example, because of biases embedded in AI products,” said Hitrova at the workshop.
The problem of prejudice and discrimination in relation to AI was explained by scientist Roman Neruda from the Computer Science Institute. “Most of what we call AI today works on the principle of machine learning – the model learns from a huge amount of data, and if there is something ethically problematic in the data, for example, some racist behavior, it will appear in the resulting model,” said Neruda. It is precisely these problem areas that the AI Act requires companies to track and evaluate.
AI Act’s Limits
Speakers mentioned several limitations of the AI Act in its current form. For example, the evaluation of risk and the method of addressing it are largely left up to the companies themselves, allowing them to avoid stricter rules. Furthermore, AI products that have been on the market for a longer period have more time to adapt to the new rules, whereas new products must comply right away, which creates a certain disproportionality.
The main general problem with the regulation is the nature of AI. As Coeckelbergh pointed out, “AI is different from conventional products; it can evolve in different directions that cannot be predicted, and therefore it is difficult to establish appropriate regulation. Nor, therefore, can we know exactly what impact the AI Act will ultimately have.”
Linda Kolaříková, a lawyer from the Karel Čapek Center, stressed at the workshop that rules governing the use of AI should be formulated as principles rather than detailed prescriptions. “AI regulation should be principles-based and performance-based; it should not aim to determine what exactly companies ought to do, but rather we need to define general rules and limits that are flexible and adaptive,” summarized Kolaříková.
This project receives funding from the Horizon EU Framework Programme under Grant Agreement No. 101086898.