The EU’s proposed AI regulations would cover robotic surgeons but not the military
As US lawmakers muddled through yet another congressional hearing on the dangers of algorithmic bias in social media, the European Commission (essentially the EU’s executive branch) put forward a comprehensive legal framework that, if adopted, could have a global impact on the future of AI development.
This is not the Commission’s first attempt to steer the growth and development of this emerging technology. After extensive meetings with stakeholder groups, the EC published both its first European Strategy on AI and a Coordinated Plan on AI in 2018. Those were followed in 2019 by the Guidelines for Trustworthy AI, and in 2020 by the Commission’s White Paper on AI and its report on the safety and liability implications of artificial intelligence, the Internet of Things and robotics. As with its ambitious General Data Protection Regulation (GDPR), which took effect in 2018, the Commission is seeking to establish a baseline level of public trust in the technology, grounded in strong user and privacy protections and guards against its potential misuse.
“Artificial intelligence should not be an end in itself, but a tool that has to serve people with the ultimate aim of increasing human well-being. Rules for artificial intelligence available in the Union market or otherwise affecting Union citizens should thus put people at the centre (be human-centric), so that they can trust that the technology is used in a way that is safe and compliant with the law, including the respect of fundamental rights,” the Commission wrote in its draft regulation. “At the same time, such rules for artificial intelligence should be balanced, proportionate and not unnecessarily constrain or hinder technological development. This is of particular importance because, while artificial intelligence is already present in many aspects of people’s daily lives, it is not possible to anticipate all possible uses or applications thereof that may happen in the future.”
In fact, artificial intelligence systems are already ubiquitous in our lives, from the recommendation algorithms that help us decide what to watch on Netflix and whom to follow on Twitter, to the digital assistants in our phones and the driver-assistance systems that monitor the road for us (or don’t) while we drive.
“The European Commission has once again stepped out boldly to address emerging technology, just as it did with data privacy through the GDPR,” Dr. Brandie Nonnecke, director of the CITRIS Policy Lab at UC Berkeley, told Engadget. “The proposed regulation is quite interesting in that it tackles the problem with a risk-based approach,” similar to the one used in Canada’s proposed AI regulatory framework.
These new rules would sort the EU’s AI development efforts into a four-tier system: minimal risk, limited risk, high risk and outright banned, based on their potential to harm the public good. “The risk framework they operate within is really around risk to society, whereas whenever risk is discussed [in the US], it’s pretty much risk in the sense of, ‘What’s my liability, what’s my exposure?’” Dr. Jennifer King, Privacy and Data Policy Fellow at the Stanford University Institute for Human-Centered Artificial Intelligence, told Engadget. “And if that includes human rights as part of that risk, they kind of get subsumed in, but to the extent that it can be externalized, it’s not included.”
Applications banned outright include any that manipulate human behavior to circumvent users’ free will, especially those that exploit the vulnerabilities of a specific group of people based on their age or a physical or mental disability, as well as “real-time” biometric identification systems and any that enable “social scoring” by governments, according to the 108-page proposal. That’s a direct nod to China’s social credit system, and because these rules would, in theory, still govern technologies that affect EU citizens whether or not those people are physically inside EU borders, they could lead to some interesting international incidents in the near future. “There’s a lot of work to be done in operationalizing the guidelines,” King noted.
High-risk applications, on the other hand, include any products where the AI is “intended to be used as a safety component of a product,” or where the AI is itself the safety component (think your car’s collision-avoidance feature). Additionally, AI applications bound for any of eight specific markets, including critical infrastructure, education, legal and judicial matters, and employee recruitment, fall into the high-risk category. These can come to market, but they are subject to strict regulatory requirements before going on sale, such as obligating the AI’s developer to maintain compliance with EU regulations throughout the product’s entire life cycle, to ensure strict data protection guarantees, and to keep a human in the control loop at all times. Sorry, that means no fully autonomous robotic surgeons for the foreseeable future.
“The reading I got from it is that the Europeans seem to be envisioning oversight, and I don’t know if it’s overreach to say from cradle to grave,” King said. “But there seems to be a sense that continuous monitoring and assessment is needed, especially with hybrid systems.” Part of that scrutiny is the EU’s push for regulatory sandboxes for AI, which would let developers build and test high-risk systems under real-world conditions but without real-world consequences.
“These measures are designed to prevent the kind of chilling effect seen as a result of the GDPR, which led to a 17 percent increase in market concentration after it was introduced,” Jason Pilkington recently argued at Truth on the Market. “But it’s unclear whether they would accomplish that goal.” The EU also plans to establish a European Artificial Intelligence Board to oversee compliance efforts.
Nonnecke also points out that many of the areas these high-risk rules apply to are the same ones academic researchers and journalists have been scrutinizing for years. “I think this really underscores the importance of empirical research and investigative journalism in helping our lawmakers better understand the risks of these AI systems, as well as their benefits,” she said. One area these regulations specifically do not apply to is AI built exclusively for military purposes. So bring on the killbots!
Limited-risk applications include things like chatbots on service websites or deepfake content. In those cases, the AI’s maker simply has to inform users up front that they’ll be interacting with a machine rather than another person, or even a dog. And for minimal-risk products, such as the AI in video games and the vast majority of applications the EU expects to see, the regulations impose no special restrictions or additional requirements that must be met before going to market.
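To pull the four tiers together, here is a minimal illustrative sketch in Python of how the risk levels map onto the obligations described above. The names and data structure are hypothetical shorthand; the draft regulation defines these categories in legal prose, not code:

```python
# Illustrative only: a toy mapping of the proposal's four risk tiers to the
# obligations described in this article. Names are invented for this sketch.
from enum import Enum

class RiskTier(Enum):
    MINIMAL = "minimal"        # e.g., video game AI: no extra requirements
    LIMITED = "limited"        # e.g., chatbots, deepfakes: disclosure required
    HIGH = "high"              # e.g., safety components, hiring tools
    PROHIBITED = "prohibited"  # e.g., manipulative systems, social scoring

OBLIGATIONS = {
    RiskTier.MINIMAL: ["none beyond existing law"],
    RiskTier.LIMITED: ["notify users up front that they are interacting with an AI"],
    RiskTier.HIGH: [
        "strict regulatory requirements before sale",
        "compliance maintained across the product's life cycle",
        "strict data protection guarantees",
        "a human kept in the control loop at all times",
    ],
    RiskTier.PROHIBITED: ["may not be placed on the EU market at all"],
}

def obligations_for(tier: RiskTier) -> list[str]:
    """Return the article's summary of what each tier requires."""
    return OBLIGATIONS[tier]

print(obligations_for(RiskTier.HIGH))
```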
And should a company or developer dare to ignore these rules, they’ll find that violations carry a hefty fine, one measured not as a flat sum but as a share of global revenue: fines for non-compliance can run up to 30 million euros or 4 percent of the company’s worldwide annual turnover, whichever is higher.
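For a sense of scale, here is a small sketch of how that “whichever is higher” cap plays out. The company figures are made up for illustration; only the 30 million euro floor and the 4 percent rate come from the article’s description of the proposal:

```python
# A toy calculation of the fine cap described above: the higher of a flat
# 30 million euros or 4 percent of worldwide annual turnover.
# The example turnover figures below are invented for illustration.

def max_fine_eur(annual_turnover_eur: float,
                 flat_cap_eur: float = 30_000_000,
                 turnover_pct: float = 0.04) -> float:
    """Return the maximum possible fine: whichever cap is higher."""
    return max(flat_cap_eur, turnover_pct * annual_turnover_eur)

# A firm with 100M EUR turnover hits the flat 30M floor (4% would be only 4M)...
print(f"{max_fine_eur(100e6):,.0f}")   # 30,000,000
# ...while a giant with 200B EUR turnover faces up to 8B EUR.
print(f"{max_fine_eur(200e9):,.0f}")   # 8,000,000,000
```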
“It’s important for us at the European level to send a very strong message and set the standards in terms of how far these technologies should be allowed to go,” Dragos Tudorache, a member of the European Parliament and chair of its committee on artificial intelligence, told Bloomberg in a recent interview. “Putting a legal framework around them is a must, and it’s good that the European Commission is moving in that direction.”
Whether the rest of the world will follow Brussels’ lead remains to be seen. Given how broadly the regulations currently define what counts as an AI, we can likely expect this legislation to touch nearly every facet of the global market and every sector of the world economy, not just the digital sphere. Of course, these rules must first survive a rigorous (and often contentious) parliamentary process that could take years to complete.
As for America’s odds of enacting similar regulations of its own, well... “I think we’ll see a proposal at the federal level, yes,” Nonnecke said. “Do I think it will pass? Those are two different things.”