The Federal Trade Commission filed lawsuits against five companies, alleging that each either made deceptive claims about AI products and services or used AI in deceptive ways. The FTC announced that the lawsuits are part of "Operation AI Comply," a crackdown on companies allegedly engaging in this conduct. AI washing has been a recent focus of federal enforcers, and this week's lawsuits represent another step by the FTC in furthering its position that there is no AI exception to the law.
Artificial . . . evidence?
Earlier this spring, a Washington State court Judge issued what is widely believed to be the first evidentiary decision regarding Artificial Intelligence. In Washington v. Puloka, following a Frye hearing, the Judge excluded AI-enhanced video from being considered as evidence. The video originated from Snapchat and was enhanced using Topaz Labs AI Video, a commercially available software program widely used in the cinematography community. The Judge was not persuaded by this widespread commercial adoption, holding that the relevant community for purposes of Frye was the forensic video analysis community – which had not accepted the use of Topaz AI.
The opinion shows careful consideration of an issue of first impression. Notably, it mattered to the Judge that another version of the video (the original) was available and usable – even if it was low resolution and contained motion blur. Further, the expert who edited the video did not know the details of how the Topaz Labs AI program worked – he was not sure whether it used generative AI, could not testify to the program's reliability, and did not know what datasets it was trained on. A different result may follow where no alternative version of the evidence exists, or where there is more testimony regarding the operation of the AI system at issue.
These issues will continue to arise in courts across the country and may need to be addressed systematically to ensure greater consistency. For example, the Advisory Committee on Evidence Rules has been considering proposed amendments to Rules 901 and 702 that would directly address AI-generated evidence.
Federal Enforcers Target “AI Washing”
The SEC has settled charges with two investment advisers based on misleading statements in their SEC filings regarding their use of Artificial Intelligence technology. Late last year, the Chair of the SEC warned against overstating the use of AI technology so as to mislead investors, and the settlements this week show an intent to follow through on that priority. The SEC's efforts to protect investors dovetail with the FTC's warnings and enforcement actions against companies that mislead consumers by overstating AI capabilities. Companies in the AI space, particularly those with SEC filing obligations, should be aware of this enforcement activity when making claims regarding their technology.
Senate Democrats Introduce Bill to Scrutinize Price-Fixing Algorithms
Several Democratic senators introduced a bill intended to stop companies from using predictive technology to raise prices. Businesses are increasingly delegating important competitive decisions, including price-setting power, to artificial intelligence, algorithms, and other predictive software. The bill, titled the Preventing Algorithmic Collusion Act, is intended to ensure that the use of such tools by direct competitors to raise prices does not escape scrutiny under the antitrust laws. The proposed bill includes several important aspects. First, it would presume that a price-fixing agreement exists whenever direct competitors raise prices by sharing competitively sensitive information through pricing algorithm software. Second, it would require businesses to disclose the use of algorithms in setting prices and allow antitrust enforcers to audit the algorithm. Third, it would prohibit companies from using competitively sensitive information from direct competitors in developing a pricing algorithm. Fourth, it would direct the FTC to study the impact of pricing algorithms on competition. Businesses using technology to assist with pricing and other competitive decisions should monitor these legislative efforts.
FTC launches GenAI investigation
The Federal Trade Commission announced today that it has begun an investigation into Generative AI investments and partnerships. The FTC is using its investigative power under Section 6(b) of the FTC Act, which allows the FTC to issue compulsory process (similar to a subpoena or Civil Investigative Demand) to obtain information from an organization without a specific law-enforcement purpose. Historically, the FTC has used its 6(b) power to conduct studies of particular industries or practices that may inform future agency positions or enforcement priorities. The investigation announced today is a concrete fact-gathering step by the FTC regarding the regulation of Generative AI.
What does herring fishing have to do with AI?
Herring fishing – of all things – could have a big impact on AI regulation in 2024. Two cases brought by herring fishing companies are before the Supreme Court and could have far-reaching influence. The cases challenge actions taken by the National Marine Fisheries Service and longstanding Chevron deference. Under Chevron, courts afford deference to reasonable agency interpretations of ambiguous laws. At oral argument last week, the Court signaled a willingness to overturn Chevron deference. This is notable for the Artificial Intelligence space, which lacks explicit legislation from Congress. Indeed, last year's Executive Order on Artificial Intelligence is largely directed at federal agencies, instructing them to take action. In the absence of Chevron deference, actions taken by agencies pursuant to that order could be more susceptible to legal challenge. Justice Kagan even called out AI at oral argument as an area that could be affected by the Court's ruling. The Supreme Court is expected to rule by the end of June.
FTC Staff Issues Reminders to AI Companies
Today, Staff in the Federal Trade Commission's ("FTC") Office of Technology posted a reminder to AI companies, enumerating the ways they can run afoul of the laws enforced by the FTC. In particular, FTC Staff called out Model-as-a-Service companies and stressed the importance of safeguarding individual and proprietary data involved in creating the models, indicating that a failure to do so could raise both consumer protection and competition concerns. Further, FTC Staff warned that AI companies need to be forthcoming about how data is used, and that companies that omit material facts affecting whether customers buy a particular product or service may run afoul of competition laws.