This episode of the "Clearly Conspicuous" podcast series is "Part 2: An FTC Official Speaks About the Regulation of Artificial Intelligence."
Listen to more episodes of Clearly Conspicuous here.
Podcast Transcript
Good day. This podcast is part two in a series concerning the regulation of artificial intelligence by the Federal Trade Commission.
Let's talk about federal policy.
Bias and Discrimination in AI
So now let's talk about the potential for biased and discriminatory uses of AI. As indicated by the executive order, AI outputs can sometimes be biased or discriminatory. It is well documented that AI systems have discriminated, often inadvertently, with respect to individual immutable characteristics, including race, ethnicity, gender and language.
But what triggers AI bias? There are a number of reasons why an AI system may discriminate. AI systems sometimes behave this way because bias is embedded in the data on which the algorithm was trained. Other times, an AI system may discriminate because its underlying model is being used for something other than its original purpose. In either case, a company runs the risk of violating the law.
Monitoring and Disclaimers for Vendors and Contractors
So let's move to another topic: monitoring and disclaimers. Under the FTC Act, companies may be liable for what vendors or contractors do on their behalf. This means that companies have an implied duty to vet and monitor the third parties they engage. Whether a company regularly monitors its vendors and contractors is an important factor in enforcement discretion. In other words, if a company's AI system results in consumer harm, the FTC will investigate whether the company monitored both the product and its vendors and contractors. A showing of diligence and continuous monitoring practices may dissuade the FTC from prosecuting or, at the very least, reduce the remedy. Disclaimers can be used when marketing AI, as long as they are clear and conspicuous. The extent to which a disclaimer limits liability is narrow, similar to disclaimers and waivers in the context of, say, tort claims. To put it another way, a disclaimer cannot cure blatant deception or harm that the consumer cannot reasonably avoid.
FTC Enforcement Approaches
So what about enforcement? During the course of an investigation and negotiations, the FTC considers both injunctive relief and monetary relief. In this context, injunctive relief comes in the form of requiring companies to implement certain compliance provisions in their AI programs. If appropriate and legally available, monetary relief comes in the form of a civil penalty. Does the FTC have any recourse against the technology itself? In a 2021 commission statement, former FTC Commissioner Chopra stated that no longer allowing "data protection law violators to retain algorithms and technologies that derive much of their value from ill-gotten data is an important course correction." Based on this directive, the FTC now seeks algorithmic deletion as a remedy in its enforcement actions. For example, the FTC brought actions against
Best Practices for Companies to Limit Liability
OK. Again, we've covered quite a bit in these last podcasts. What about best practices? What safeguards can companies implement to limit their liability? The FTC recommends reviewing its recent policy statement on biometric information. While the statement deals with biometrics, its guidance can be readily applied to AI systems. In a nutshell, the FTC believes that AI best practices include:
- conducting pre-release assessments concerning foreseeable harms
- being transparent to consumers regarding the collection and the use of the data
- evaluating vendors' capabilities to minimize risks to consumers
- providing appropriate training for employees and contractors whose job duties involve interacting with AI systems and their related algorithms
- conducting ongoing monitoring of AI systems to ensure that their use is operating as intended and not likely to harm consumers
- taking steps to mitigate the risks of those harms and not releasing the product if those risks cannot be mitigated
Companies must remember that the FTC Act does not expressly outline a standard of reasonable foreseeability. In other words, the commission does not have to prove intent. Let me say that again — the FTC does not have to prove intent. That said, under a theory of unfairness, the FTC will consider the reasonableness of a company's conduct — what the company knew about its AI system, what it should have known, and what steps it took to mitigate the risks and remedy the harm — in its discretion to prosecute a company.
Key Takeaway
So here's the key takeaway. The world of AI is rapidly evolving. Consequently, expect more comprehensive regulation in the very near future.
The content of this article is intended to provide a general guide to the subject matter. Specialist advice should be sought about your specific circumstances.
Tel: (617) 523-2700
Fax: (617) 523-6850
E-mail: webcontent@hklaw.com
URL: www.hklaw.com
© Mondaq Ltd, 2024 - Tel. +44 (0)20 8544 8300 - http://www.mondaq.com