ANALYSIS: As AI Meets Privacy, States’ Solutions Raise Questions

While artificial intelligence may stir debates about the future, it’s already a part of many attorneys’ current practice. And in 2023, companies doing business in four states—California, Virginia, Colorado, and Connecticut—will need to comply with consumer privacy laws governing AI-powered data processing. These states’ regulatory answers on how businesses may leverage AI while complying with privacy laws are already raising questions that will likely linger long after the laws take effect.

There are some high-level similarities among these laws’ AI-related requirements, including mandatory risk assessments and individual rights to opt out of certain automated decisions. But there are also some major gaps—particularly surrounding the redress available for harmful outcomes—as well as various inconsistencies among the laws.

In light of these issues, and given how complex a subject matter AI is, significant ambiguity will likely persist throughout the next year.

AI’s Invasion of Privacy Law

For those new to AI, the function known as “machine learning” facilitates the large-scale analysis of data to predict outcomes. Numerous industries are already leveraging this technology for beneficial uses, ranging from thwarting cyberattacks to designing safer scooters.
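For readers unfamiliar with how such predictions work, here is a deliberately simplified sketch of the idea: a model’s parameters are “learned” from past data and then applied to new data. The cybersecurity scenario, data, and threshold strategy below are hypothetical illustrations, not drawn from the article or any real system.

```python
# Toy illustration of machine learning: fit a decision rule to labeled
# historical data, then use it to predict outcomes for new observations.
# Hypothetical example: flag suspicious logins by number of failed attempts.

def fit_threshold(examples):
    """Learn a cutoff from (failed_attempts, was_suspicious) pairs by
    placing it midway between the benign maximum and suspicious minimum."""
    suspicious = [x for x, label in examples if label]
    benign = [x for x, label in examples if not label]
    return (max(benign) + min(suspicious)) / 2

def predict(threshold, failed_attempts):
    """Apply the learned rule to a new data point."""
    return failed_attempts >= threshold

history = [(1, False), (2, False), (9, True), (12, True)]
threshold = fit_threshold(history)  # midway between 2 and 9, i.e. 5.5
print(predict(threshold, 8))        # a login with 8 failures is flagged
```

Real systems learn far richer patterns from far more personal data, which is precisely what draws privacy regulators’ attention.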

AI’s rapid implementation has also led to greater scrutiny of its risks, notably regarding discrimination and social media harms. For privacy regulators, one area of concern lies in the massive pools of personal data that machine learning often requires.

Businesses subject to the EU’s General Data Protection Regulation (GDPR) since its May 2018 effective date should be familiar with that law’s AI-related requirements, for which the European Commission adopted guidelines roughly five years ago.

The GDPR refers to the automated processing of personal data for predictive purposes as “profiling.” Additional GDPR provisions govern the “automated decision-making” that may result from profiling or other processing methods.

In the US, these terms and related provisions have been partially emulated in four states’ comprehensive consumer privacy laws taking effect throughout next year. The following graphic compares AI-related requirements from the GDPR and California’s, Virginia’s, Colorado’s, and Connecticut’s privacy laws.


Some (Non-Algorithmic) Predictions

As next year’s state privacy enforcement priorities begin to take shape, expect businesses and privacy advocates to seek answers to three questions in particular.

1. How should companies explain the logic behind automated decisions?

Once the statute-driven rulemaking proceedings currently underway in California and Colorado are complete, both states will require businesses to explain their automated decision-making logic to individuals. These requirements are plainly inspired by the GDPR’s mandate to provide “meaningful information” on such logic.

But some privacy scholars are questioning whether providing US consumers with an explanation of how AI works would be worthwhile. The Stanford Institute for Human-Centered Artificial Intelligence recommended in a January 2022 article that California instead require businesses to provide details on the contents and sources of data used for automated decision-making.

Colorado’s proposed regulations are a bit more advanced in this regard, as they would require businesses to tell individuals which types of personal data are used to make automated decisions and to provide a “plain language explanation” of the logic. Nevertheless, businesses will likely need further guidance on how to satisfy this new requirement while simultaneously keeping consumer confusion to a minimum.

2. What redress will individuals have if automated decisions cause harm?

Colorado, Connecticut, and Virginia will require businesses to let individuals opt out of having their personal data used for automated decisions. Each state’s law expressly limits this right to decisions that could have serious outcomes, including those concerning employment and lending. The CCPA requires California’s regulators to adopt rules enabling a similar opt-out right, although it’s currently uncertain whether that right will likewise be limited to certain decision categories.

However, these laws all fail to specify what actions individuals may take if they’re harmed by automated decision-making. In contrast, the GDPR, as well as the national privacy laws of Brazil, China, and South Africa, each grant individuals some form of redress, such as the right to contest or otherwise obtain human review of an automated decision. The White House’s recently released Blueprint for an AI Bill of Rights similarly encourages a right to human consideration of “high-risk” matters.

Granted, individuals may often challenge automated decisions through other applicable laws, such as the Fair Credit Reporting Act or Americans with Disabilities Act. But for significantly impactful decisions that don’t affect credit or result in unlawful discrimination, companies will likely require greater clarity to effectively assess the risks of fielding complaints over AI logic gone wrong.

3. How will states enforce the right to delete personal data from algorithms?

In addition to the right to opt out of certain personal data processing, each state will grant individuals the right to have their personal data deleted. But none of these states’ privacy laws—nor the GDPR, for that matter—explicitly addresses how the right to deletion relates to personal data used to shape AI algorithms.

The Stanford article suggested that businesses could address some privacy concerns by creating synthetic data to essentially replace someone’s personal data, thereby avoiding the cost of retraining an algorithm to operate without such information. Of course, it would be quite helpful to businesses if state regulators signaled their approval of such a practice as a valid means of fulfilling deletion requests.
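To make the synthetic-replacement idea concrete, here is a minimal sketch of one way it could work: swap a requester’s record for synthetic values sampled from the remaining records, so aggregate patterns used in training are roughly preserved. The dataset, field names, and sampling strategy are all hypothetical assumptions for illustration, not a description of any endorsed compliance method.

```python
import random

def synthesize_record(dataset, record):
    """Build a synthetic stand-in for one person's record by sampling
    each field independently from the values in the *other* records,
    roughly preserving the dataset's overall distribution."""
    synthetic = {}
    for field in record:
        others = [r[field] for r in dataset if r is not record]
        synthetic[field] = random.choice(others)
    return synthetic

def handle_deletion_request(dataset, record):
    """Fulfill a deletion request by replacing the individual's record
    with a synthetic one instead of shrinking the training set."""
    idx = dataset.index(record)
    dataset[idx] = synthesize_record(dataset, record)
    return dataset

# Hypothetical training data for an automated-decision model.
data = [
    {"age": 30, "zip": "80202"},
    {"age": 41, "zip": "06103"},
    {"age": 52, "zip": "94105"},
]
handle_deletion_request(data, data[1])  # data[1]'s real values are gone
```

Whether such a replacement would legally satisfy a deletion request is exactly the open question the article raises; this sketch only shows the mechanics.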

Further complicating matters, the Federal Trade Commission has begun enforcing the wholesale deletion of algorithms that rely on unlawfully gathered personal data. Businesses will need to be mindful of this novel approach regardless of where they do business.

States may also decide to analyze public comments submitted for the FTC’s ongoing commercial surveillance rulemaking—which encompasses automated decision-making, among numerous other topics—to help shape their own guidance in this evolving area. Even if the FTC doesn’t achieve its lofty goal of issuing a broad federal privacy rule, states are well-positioned to carry the baton of AI regulation forward.

Access additional analyses from our Bloomberg Law 2023 series here, covering trends in Litigation, Transactional, ESG & Employment, Technology, and the Future of the Legal Industry.

Bloomberg Law subscribers can find related Practical Guidance documents, tools for keeping track of new laws, and in-depth reference materials on our Privacy & Data Security Practice Center resource.

Senior privacy and cybersecurity practitioners provided insights on keeping up with evolving compliance standards at the Bloomberg Law 2022 In-House Forum, now available on-demand to all readers who register online.

If you’re reading this on the Bloomberg Terminal, please run BLAW OUT in order to access the hyperlinked content, or click here to view the web version of this article.
