Although the statement does not break any new ground, it illustrates that federal agencies are concerned about how quickly the technology around AI and other automated systems is advancing and, we think, amounts to a tacit acknowledgement that any comprehensive AI legislation or regulation is unlikely in the near term. The statement asserts that the agencies' enforcement authorities apply to automated systems and that those systems may contribute to unlawful discrimination or otherwise violate federal law. Each of these agencies has already issued guidance or taken action in relation to automated systems, stressing the relevance of their existing legal authorities to innovative technologies, even if it may not be immediately apparent how exactly those authorities apply to technological changes. This joint statement is a reminder that entities must take a thoughtful approach to deploying automated systems used to make important decisions about individuals, to ensure those decisions comply with the law.
Broad Definition of "Automated Systems"
The joint statement defines "automated systems" broadly; it covers not just AI, but any software and algorithmic processes "that are used to automate workflows and help people complete tasks or make decisions." This expansive definition encompasses many of the algorithms businesses use, as well as other applications that leverage consumer data.
The statement focuses on three sources of potential discrimination:
Data and Datasets - Automated systems need large amounts of data to find patterns or correlations and then apply those patterns to new data. Issues with the underlying data can affect how the system makes decisions. For example, automated system outcomes can be skewed by unrepresentative datasets. These datasets could also contain baked-in biases, which could lead to discriminatory outcomes when applied to new data.
Model Opacity and Access - Automated systems are complex, and most people, sometimes even those who develop the tools, do not know exactly how these systems work; this lack of transparency makes it difficult for entities to assess whether their automated systems are fair.
Design and Use - Developers might design an automated system based on flawed assumptions about its users, relevant context, or the underlying traditional practices that the system is replacing.
Existing Agency Guidance
The four agencies that issued the joint statement are among the federal agencies responsible for enforcing civil rights, non-discrimination, fair competition, and consumer protection. All four have previously expressed concern about the potential harm of AI systems through statements, guidance, or enforcement actions. For example, in a 2022 circular, the
Takeaways and Conclusion
The joint statement and recent agency guidance make clear that the
- Companies using automated systems should establish sound governance processes that include (a) inventorying automated systems; (b) assigning risk levels to those systems based on such factors as their potential impact on consumers and current and prospective employees; (c) documenting system design and testing; and (d) implementing a robust change management process.
- Companies should understand what biases might arise from skewed datasets. For instance, datasets containing disproportionate representation of certain demographic groups could lead to automated systems perpetuating discrimination.
- Entities should understand how their automated systems work and make decisions, so they can evaluate and address any potential biases in the design of the system that could lead to discriminatory outcomes.
- Businesses should understand who will use their AI systems, and in what context, in order to mitigate unintended discriminatory outcomes.
The content of this article is intended to provide a general guide to the subject matter. Specialist advice should be sought about your specific circumstances.
Washington, DC 20006
Tel: 617 526 6000
Fax: 617 526 5000
E-mail: laura.bulcher@wilmerhale.com
URL: www.wilmerhale.com
© Mondaq Ltd, 2023 - Tel. +44 (0)20 8544 8300 - http://www.mondaq.com