Europe Proposes Strict Rules for Artificial Intelligence

The European Union unveiled strict regulations on Wednesday to govern the use of artificial intelligence, a first-of-its-kind policy that outlines how companies and governments can deploy a technology seen as one of the most significant, but ethically fraught, scientific breakthroughs in recent memory.

The draft rules would limit the use of artificial intelligence in a range of activities, from self-driving cars to hiring decisions, bank lending, school enrollment selections and the scoring of exams. They would also cover the use of artificial intelligence by law enforcement and court systems – areas classified as “high risk” because they could threaten people’s safety or fundamental rights.

Some uses would be prohibited altogether, including live facial recognition in public spaces, although there would be several exemptions for national security and other purposes.

The 108-page policy is an attempt to regulate an emerging technology before it becomes mainstream. The rules have far-reaching implications for major technology companies like Amazon, Google, Facebook and Microsoft, which have poured resources into developing artificial intelligence, but also for scores of other companies that use the software to develop medicine, underwrite insurance policies and judge creditworthiness. Governments have used versions of the technology in criminal justice and in allocating public services such as income support.

Companies that violate the new regulations, which could take several years to go through the European Union’s political decision-making process, could face fines of up to 6 percent of global sales.

“On artificial intelligence, trust is a must, not a nice-to-have,” Margrethe Vestager, the European Commission executive vice president who oversees digital policy for the 27-nation bloc, said in a statement. “With these landmark rules, the EU is leading the way in developing new global standards to ensure that AI can be trusted.”

Under the rules, companies deploying artificial intelligence in high-risk areas would be required to provide regulators with proof of its safety, including risk assessments and documentation explaining how the technology makes decisions. The companies would also have to guarantee human oversight of how the systems are created and used.

Some applications, such as chatbots that provide humanlike conversation in customer service situations, and software that creates hard-to-detect manipulated images such as “deepfakes,” would have to make clear to users that what they are seeing is computer generated.

For the past decade, the European Union has been the world’s most aggressive watchdog of the technology industry, and its policies have often been used as blueprints by other nations. The bloc has already enacted the world’s most far-reaching data-privacy regulations and is debating additional antitrust and content-moderation laws.

Europe is no longer alone in pushing for tougher oversight, however. The largest technology companies are now facing a broader reckoning from governments around the world, each with its own political and policy motivations to rein in the industry’s power.

In the United States, President Biden has filled his administration with critics of the industry. Britain is creating a tech regulator to police the industry. India is tightening oversight of social media. China has taken aim at domestic tech giants like Alibaba and Tencent.

The outcomes in the coming years could reshape how the global internet works and how new technologies are used, with people having access to different content, digital services or online freedoms depending on where they are.

Artificial intelligence – in which machines are trained to perform jobs and make decisions on their own by studying huge volumes of data – is seen by technologists, business leaders and government officials as one of the world’s most transformative technologies, one promising major gains in productivity.

But as the systems become more sophisticated, it can be harder to understand why the software is making a decision, a problem that could worsen as computers become more powerful. Researchers have raised ethical questions about its use, suggesting that it could perpetuate bias in society, invade privacy or result in more jobs being automated.

The release of the draft law by the European Commission, the bloc’s executive body, drew a mixed reaction. Many industry groups expressed relief that the regulations were not more stringent, while civil society groups said they should have gone further.

“There has been a lot of discussion over the last few years about what it would mean to regulate AI, and the fallback option to date has been to do nothing and wait and see what happens,” said Carly Kind, director of the Ada Lovelace Institute in London, which studies the ethical use of artificial intelligence. “This is the first time any country or regional bloc has tried.”

Ms. Kind said many had concerns that the policy was overly broad and left too much discretion to companies and technology developers to regulate themselves.

“Without strict red lines and guidelines and very firm boundaries on what is acceptable, it opens up a lot for interpretation,” she said.

The development of fair and ethical artificial intelligence has become one of the most contentious issues in Silicon Valley. In December, a co-leader of a Google team studying ethical uses of the software said she had been fired for criticizing the company’s lack of diversity and the biases built into modern artificial intelligence software. Debates have raged inside Google and other companies about selling the cutting-edge software to governments for military use.

In the United States, government agencies are also weighing the risks of artificial intelligence.

This week, the Federal Trade Commission warned against the sale of artificial intelligence systems that use racially biased algorithms or that “could deny people employment, housing, credit, insurance, or other benefits.”

Elsewhere, Massachusetts and cities like Oakland, Calif.; Portland, Ore.; and San Francisco have taken steps to restrict police use of facial recognition.