Emerging Theories of Liability in the Age of AI: A New Frontier for Litigation

Yassin Qanbar
April 9, 2025

AI and the Law: Exploring Emerging Liability Frameworks

While recent debates over copyright infringement, highlighted by controversies involving beloved creators such as Studio Ghibli, have captured headlines, these discussions represent just the tip of the iceberg. Beyond intellectual property concerns, AI systems introduce a range of overlooked yet equally critical avenues for legal scrutiny.

Issues such as biometric privacy violations, consumer protection pitfalls, workplace discrimination through automated decision-making, and the troubling inaccuracies of AI detection tools in education remain largely underexplored, despite their profound legal and societal implications.

This article explores the emerging landscape of AI-related litigation, highlighting new theories of liability across privacy, consumer rights, employment discrimination, educational fairness, and financial transparency.

1. Privacy-Based Theories of AI Liability

AI privacy concerns have led to many lawsuits that challenge how companies collect and use personal data. The courts now face new legal questions, and two liability theories stand out as the most important battlegrounds.

Data Scraping Litigation Against Training Datasets

AI needs big datasets to learn from, and companies often get this data by scraping websites, social media, and other online sources. This practice has sparked several major lawsuits about using scraped content to train AI.

The New York Times lawsuit against OpenAI illustrates this new type of legal challenge. The paper claims OpenAI used its copyrighted material without permission to train AI models. OpenAI counters that "fair use" protects it because its use serves a "transformative" purpose. Content owners argue this threatens their business models.

Similarly, Chegg sued Google and Alphabet, alleging that Google exploited its dominance in online search to coerce online publishers like Chegg into supplying content that Google then republished without permission in AI-generated answers, directly competing against Chegg’s subscription-based services.

Automated data collection also frequently violates website terms of service. The legal picture grew more complex when the Northern District of California ruled that state contract-law claims over data scraping may be preempted by the Copyright Act. The court reasoned that allowing these claims would "entrench a private copyright system," a holding that affects how companies try to protect their data through contractual terms.

A 2023 lawsuit (since dismissed) claimed Google's web scraping for its Bard AI model broke privacy, anti-hacking, and intellectual property laws. 

IBM also faced backlash and legal challenges after building a facial recognition tool with about one million Flickr photos it downloaded without permission. In response to growing concerns over privacy and the ethical use of facial recognition technology, IBM announced in June 2020 that it would cease offering, developing, or researching general-purpose facial recognition software.

Data scraping also raises privacy concerns when the scraped content contains personal information. The Office of the Privacy Commissioner of Canada investigated Clearview AI and found that even publicly available online information requires express consent when it involves sensitive biometric data.

Biometric Data Collection Without Consent

Facial recognition technology has triggered a flood of lawsuits under biometric privacy laws. Clearview AI ran into massive legal trouble after gathering over 10 billion images from public websites and social media without asking users. The database, which the company used to train and run its AI systems, violated the Illinois Biometric Information Privacy Act (BIPA), which requires companies to obtain informed written consent before collecting biometric data.

BIPA has become the centerpiece of biometric privacy litigation because it provides a private right of action and substantial statutory damages: $1,000 per negligent violation and $5,000 per intentional or reckless violation.

The Illinois Supreme Court made things even riskier in February 2023. In Cothron v. White Castle, it ruled that each individual scan without consent counts as a separate violation. Companies using biometric authentication now face potentially devastating damages that pile up with every scan.
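
To see why per-scan accrual is so consequential, consider a rough back-of-the-envelope sketch. The workforce size, scan frequency, and number of work days below are purely hypothetical assumptions; only the $1,000 and $5,000 statutory damages figures come from BIPA itself.

```python
# Hypothetical illustration of per-scan BIPA exposure after Cothron v. White Castle.
# All operational figures are assumptions for the sketch, not facts from any case.

employees = 500            # assumed workforce using fingerprint time clocks
scans_per_day = 2          # assumed clock-in and clock-out scans per employee
work_days = 250            # assumed working days per year
negligent_damages = 1_000  # BIPA statutory damages per negligent violation
reckless_damages = 5_000   # BIPA statutory damages per intentional/reckless violation

violations = employees * scans_per_day * work_days
print(f"Separate violations per year: {violations:,}")
print(f"Exposure if negligent: ${violations * negligent_damages:,}")
print(f"Exposure if reckless:  ${violations * reckless_damages:,}")
```

Even this modest hypothetical yields 250,000 separate violations a year, which is why the defendant in Cothron argued, unsuccessfully, for a per-person rather than per-scan reading of the statute.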

Biometric data is unlike passwords or social security numbers: it creates unique privacy risks. Because biometrics are biologically unique to the individual, once compromised, the individual has no recourse. These permanent markers make data breaches especially dangerous.

California's privacy laws allow lawsuits even without specific biometric rules. The Renderos v. Clearview AI case moved forward based on California's Constitution and Unfair Competition Law. Courts decided that biometric information is "by its very nature... sensitive and confidential".

A settlement was recently approved in the multidistrict litigation (MDL) against Clearview AI, under which the company agreed to provide the settlement class a 23% equity stake in the company, estimated to be worth approximately $51.75 million based on a $225 million valuation.

2. Consumer-Facing AI Systems and Liability Risks

Companies are rushing to deploy AI systems that interact directly with consumers. This has led to a wave of new legal challenges. These technologies promise convenience and efficiency but often collect sensitive data and create unexpected liability risks.

Voice Services: Impersonation and Consent Issues

Voice cloning technology has evolved faster than regulators can respond, creating significant legal vulnerabilities. The Federal Trade Commission proposed new rules in February 2024 to specifically ban AI impersonation, warning that "fraudsters are using AI tools to impersonate individuals with eerie precision". Voice-over actors filed a class action against LOVO, Inc., alleging the company "stole their voices and identities without permission or compensation".

The legal risks are substantial. All but one of the leading voice cloning services have safeguards that are easy to bypass, which makes unauthorized voice cloning simple. These weaknesses create liability under several legal frameworks:

  • Right of publicity violations (voice as protected likeness).
  • Federal Lanham Act claims through false advertising.
  • State-level privacy protections like New York's publicity laws.

AI in Drive-Thru and Customer Service Interactions

Financial services and restaurant chains face scrutiny for deploying AI systems in customer interactions. McDonald's faced a lawsuit after implementing AI voice assistants at ten Chicago drive-thrus. The lawsuit alleged violations of Illinois' Biometric Information Privacy Act, claiming McDonald's processed voice data to predict "age, gender, accent, nationality, and national origin" without obtaining the required written consent.

Banks implementing AI in contact centers face higher risks. Voice data processing often counts as biometric information collection, which triggers consent requirements in states with strong privacy laws. One attorney puts it simply: “at the end of the day, it boils down to consent, consent, and more consent. The more fulsome you are in your disclosures and the more honest you are, the less risk you have.”

Patagonia faced a lawsuit targeting its implementation of Talkdesk’s AI product, Copilot, described as a generative AI assistant that listens to customer interactions and automatically provides agents with suggested responses across chats, emails, calls, and texts. Copilot stores this data in the cloud, creating detailed interaction histories that allow companies to track customer conversations across communication channels, with all information allegedly stored on Talkdesk’s servers without sufficient disclosure to customers.

Chatbot Liability: When Algorithms Make Promises

Businesses may be legally bound by their AI systems' representations, even incorrect ones. Air Canada learned this the hard way in a landmark case. The company had to honor its chatbot's incorrect statement about bereavement fare eligibility. The British Columbia Civil Resolution Tribunal ruled that companies must take "reasonable care to ensure that the representations on their website are accurate and not misleading" whether they come from static text or an AI chatbot.

This suggests businesses could face negligent misrepresentation claims when their AI systems "hallucinate" or give wrong information that customers trust. Many financial institutions remain cautious about using generative AI directly with customers because of these risks.

AI Influencer Marketing 

We are beginning to see AI influencers gain large followings on social media. These AI influencers will have significant value in promoting products and brands to their audiences. However, the FTC has strict rules on what must be disclosed when promoting brands or products. 

These rules will likely require that AI influencers make certain disclosures about the nature of the paid endorsement, including the fact that the "person" in the video is AI generated. As brands increasingly use AI-generated “people” to promote their products, the FTC and plaintiffs' lawyers will likely challenge the level of disclosure that brands provide.

AI Algorithms and Insurance Claim Denials

Health insurers UnitedHealthcare and Cigna face litigation over AI and algorithmic systems allegedly used to improperly deny claims. UnitedHealthcare's "nH Predict" algorithm reportedly denied essential post-acute care, overruling medical recommendations. 

Also, Cigna's "PxDx" system faced claims of bulk-denying thousands of medical requests without adequate review. 

These cases highlight broader risks: insurers across health, auto, and property sectors could face increased liability if opaque AI-driven processes deny legitimate claims without transparency or accountability.

3. Workplace AI Implementation and Legal Exposure

AI's presence in the workplace has sparked a surge of lawsuits that is changing how companies face liability. Companies adopted artificial intelligence to make work easier; now AI handles everything from hiring to evaluating employee performance, creating new legal risks for businesses of all types.

Algorithmic Hiring Practices and Discrimination Claims

AI in recruitment brings significant discrimination risks. A striking 70% of companies and 99% of Fortune 500 companies use AI tools in hiring. These systems often carry forward old biases because they learn from historical data that reflects institutional discrimination. The EEOC settled its first AI discrimination case in 2023 against iTutorGroup, whose software automatically rejected female applicants over 55 and male applicants over 60, filtering out more than 200 candidates solely because of their age.

Courts have also determined that AI vendors can be directly responsible for discrimination. The Mobley v. Workday ruling showed that vendors providing AI screening tools can be liable as "agents" of employers (though not as employment "agencies") under federal anti-discrimination laws when employers delegate hiring functions to them.

Employee Monitoring Through AI Systems

Eight out of ten of the largest private U.S. employers now track their workers in real time. AI tools analyze messages, computer use, physical movement, and even employee sentiment. The legal risks go beyond privacy:

  • Federal agencies like the NLRB, FTC, DOJ, and DOL have created new alliances to stop potential misuse.
  • Workers under constant watch show signs of mental stress and lower motivation.
  • AI-based employee monitoring may also violate labor laws protecting workers’ rights to unionize, organize collectively, and negotiate their employment conditions.

Liability for Automated Decision-Making in HR

Colorado passed the first comprehensive state AI law, which requires employers to:

  1. Create risk management programs
  2. Complete yearly assessments
  3. Tell employees about AI use
  4. Report algorithmic discrimination within 90 days

California is advancing regulations to address AI-driven discrimination in employment:​

  • Prohibition of Harmful Automated Systems: The proposed rules state that using automated systems that adversely affect applicants or employees based on protected characteristics violates state law.​
  • Record Retention: Employers must retain records related to AI-driven employment decisions for at least four years, an extension from the previous two-year requirement.​
  • Human Oversight: The regulations emphasize the necessity for human oversight in AI processes to ensure fairness and compliance with anti-discrimination laws.

4. AI Detection Software Reliability Issues

The rapid integration of AI-generated content into education has created a fraught legal landscape. As schools increasingly adopt AI detection software marketed to identify plagiarism or cheating, significant liability issues are emerging.

The core problem with AI detection services is that they fundamentally rely on distinguishing AI-generated text from human-written content. However, modern generative AI is explicitly designed and trained to mimic human writing, making accurate detection inherently unreliable.

Despite this fundamental flaw, numerous companies are commercially marketing AI detection software to educational institutions, charging substantial fees for a service whose reliability remains questionable at best. Schools and universities often rely heavily on these tools' outcomes, potentially making critical decisions like disciplinary actions or academic penalties based on unreliable, incorrect findings.

AI Detection in Education

The AI era has turned education into a legal battleground. Schools now face lawsuits over plagiarism detection tools that affect student rights and create liability issues.

False Positives in Plagiarism Detection Tools

AI detection tools show worrying levels of inaccuracy. Turnitin initially claimed a false positive rate below 1%, but its tool struggles with "hybrid" texts that mix human and AI writing, a weakness that becomes obvious when AI-generated content makes up less than 20% of a document. These technical issues create real problems.
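
Even a false positive rate that sounds tiny becomes significant at institutional scale. The sketch below is a hypothetical illustration: the submission volume is an assumption, and only the roughly 1% rate comes from the claim discussed above.

```python
# Hypothetical illustration of how a "below 1%" false positive rate plays out at scale.
# The submission volume is an assumption, not data from any actual institution.

submissions_per_year = 50_000  # assumed essays run through a detector at one university
false_positive_rate = 0.01     # the ~1% rate claimed for purely human-written text

wrongly_flagged = submissions_per_year * false_positive_rate
print(f"Human-written papers wrongly flagged per year: {wrongly_flagged:,.0f}")
# Roughly 500 students a year could face misconduct accusations for work they wrote
# themselves, before accounting for the tool's weaker performance on hybrid texts.
```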

In 2023, Vanderbilt University publicly explained its concerns about AI detection software and why it stopped using it, stating: “Based on this, we do not believe that AI detection software is an effective tool that should be used”. Other universities have reached similar conclusions.

Student Rights and AI Misidentification Cases

Wrongful accusations have led to lawsuits across the country. One student lost her scholarship after being falsely accused of cheating for using Grammarly (a tool her university actually recommended). In Massachusetts, students took legal action against their school for punishing them without having a clear AI policy in place.

Schools face more legal risk when they punish students based on unreliable detection tools. In 2024 survey research, the Center for Democracy and Technology (CDT) found that teachers report 40% of students getting into trouble simply because of how they react to AI accusations. This creates a dangerous pattern in which schools judge students by their behavior instead of solid proof.

Some also claim that these tools unfairly target vulnerable groups; studies have found, for example, that detectors disproportionately flag writing by non-native English speakers as AI-generated.

5. “AI Washing” Leading to False Advertising and Securities Fraud Litigation

Securities litigation has seen a new trend emerge around “AI washing”: companies overstating their artificial intelligence capabilities to lure investors and drive up stock prices. This misleading practice has triggered regulatory enforcement and investor lawsuits that showcase novel liability theories in the future of AI litigation.

AI in Financial Markets

AI is transforming the financial industry, offering benefits like improved trading strategies and enhanced risk management. However, the rapid adoption of AI also introduces significant risks, particularly systemic vulnerabilities and the potential for market manipulation.

The use of AI in trading can amplify systemic risks if similar algorithms are adopted widely, leading to "monoculture" effects where market stability is threatened by homogeneous decision-making. Regulators have raised concerns that AI's capacity to process large datasets may create market concentration, with a few players controlling critical data and exacerbating risks.

Additionally, the opacity of AI systems makes detecting market manipulation more challenging. AI-driven strategies, especially those utilizing reinforcement learning, can produce unpredictable behaviors that are hard to anticipate and monitor with traditional surveillance methods. This creates significant hurdles in ensuring market integrity. 

One expert cautions about an "AI arms race" between regulators and those who try to manipulate markets with increasingly advanced AI methods.

SEC Enforcement Actions Against Overstated AI Claims

The Securities and Exchange Commission has taken strong action against AI washing through enforcement:

  • In January 2025, the SEC launched its "first AI-washing enforcement action" against Presto Automation for misrepresenting its AI-powered restaurant service technology.
  • In March 2024, the SEC settled charges with two investment advisers (Delphia and Global Predictions) for making false statements about their AI use, imposing $400,000 in total civil penalties.

Then-SEC Chair Gary Gensler spoke plainly about AI washing: "don't do it... I don't know how else to say it". Then-Director of Enforcement Gurbir Grewal likewise warned companies to ensure their "representations regarding your use of AI are not materially false or misleading".

Investor Class Actions for AI Capability Misrepresentation

Federal court actions targeting AI washing more than doubled in 2024, with 15 cases compared to seven in 2023. These lawsuits typically claim violations of Sections 10(b) and 20(a) of the Exchange Act, alleging defendants made materially false or misleading statements about their AI capabilities.

Key examples include lawsuits against Innodata for allegedly misrepresenting its proprietary AI that actually relied on thousands of low-wage offshore workers, and against Oddity Tech for overstating sophisticated AI use when its technology was "nothing but a questionnaire" according to a NINGI research report.

Looking Ahead

As litigation continues to test AI's legal boundaries, we can expect increasingly nuanced and complex disputes to emerge. The current gaps in regulatory frameworks (particularly in areas such as copyright, biometric data collection, securities, insurance fairness, social media accountability, and AI detection technology) leave courts navigating uncharted waters. Lower court decisions are likely to remain inconsistent, reflecting uncertainty about how traditional laws apply to rapidly evolving AI technologies.

Looking forward, legislative bodies may be compelled to address these gaps explicitly, driven by mounting public pressure and high-stakes litigation. However, until clearer standards emerge, businesses should anticipate continued legal uncertainty. 

In the recent Carlton Fields Class Action Survey, companies are reportedly demanding that their law firm partners help them “look around the corner” to proactively manage risks and spot impending class action litigation.

Firms equipped with litigation prediction and trend-spotting tools like Rain Intelligence will find themselves better positioned as the go-to trusted partners in navigating the coming wave of AI litigation.