When the California Privacy Protection Agency (“CalPrivacy”) announced a $1.35 million settlement in September 2025 – the largest CCPA penalty to date – one of the itemized grievances stood out for any practitioner who has wrestled with a vendor redline: the company had failed to amend, or enter into, data protection contracts with its third-party vendors by the regulatory deadlines.

This hints at where state privacy enforcement is heading. The consumer-facing side of privacy compliance – notices, opt-out links, cookie banners – is visible and testable. But the back-end architecture of a compliant privacy program lives at least in part in vendor contracts, and regulators increasingly treat those contracts as evidence of program maturity (or its absence). Nowhere is this more concrete than in California’s 11 CCR § 7051.

Continue Reading The Paper Trail: State Privacy Law Contracting Requirements

The lesson from the PocketOS database deletion is not that agentic AI is dangerous. It is that the governance and controls were never there.

You have probably seen some version of the headline by now: “AI Agent Deletes Company’s Entire Database in 9 Seconds.” It is a compelling story. But the headline, while technically accurate, obscures the far more important lesson buried in the details.

So what actually happened? PocketOS, a small SaaS company that makes software for car rental businesses, was using a popular AI-powered code editor running on Anthropic’s Claude Opus 4.6 model. The AI agent was tasked with resolving a routine issue in a staging environment. When it hit a credential mismatch, the agent decided on its own initiative to “fix” the problem by deleting a volume on Railway, the company’s cloud hosting provider. The agent found a password in an unrelated file and used it to execute the deletion command. Because of the broad permissions made available to the agent and the way access to the infrastructure was configured, that password was valid across all systems, and the single command wiped both the production database and all associated backups.

The agent, when asked to explain itself, produced what multiple outlets described as a “confession,” acknowledging it had violated its own safety instructions. The story has gone viral. The framing in most coverage puts the AI squarely at the center of the narrative: the agent “went rogue,” it “confessed,” it acted autonomously and destroyed a business. But the reports are not entirely accurate and usually miss the point.
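What would a guardrail have looked like? As a purely illustrative sketch, assuming a simple tool-dispatch layer under the operator’s control (the tool names and approval flag below are hypothetical, not PocketOS’s setup or any vendor’s actual API), destructive operations can be made to fail closed unless a human explicitly approves them:

```typescript
// Hypothetical guardrail sketch: destructive tool calls fail closed without human approval.
// Tool names and the dispatch layer are illustrative assumptions, not any vendor's real API.

type ToolCall = { name: string; args: Record<string, string> };

const DESTRUCTIVE_TOOLS = new Set(['delete_volume', 'drop_database', 'delete_backup']);

async function dispatch(call: ToolCall, approvedByHuman: boolean): Promise<string> {
  if (DESTRUCTIVE_TOOLS.has(call.name) && !approvedByHuman) {
    // The agent cannot self-approve; a password found in an unrelated file changes nothing here.
    return `BLOCKED: "${call.name}" requires explicit human approval.`;
  }
  // ...hand off to the real, environment-scoped tool implementation...
  return `executed ${call.name} against staging only`;
}

// Example: the agent's attempted "fix" is refused rather than executed.
dispatch({ name: 'delete_volume', args: { volume: 'prod-db' } }, false).then(console.log);
```

The specific code matters less than the design choice it represents: an agent that cannot self-approve destructive commands, paired with credentials scoped to a single environment, turns a nine-second catastrophe into a refused request.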

Continue Reading The AI Didn’t Go Rogue. Guardrails Were Never There.

As another piece of harmonization legislation, the AI Act is unsurprisingly reminiscent of the GDPR in regulatory philosophy. Many of the same data principles (transparency, accuracy, security) are present, as is an explicit risk-based approach. Understanding precisely where your existing GDPR program overlaps gives you a head start in designing your AI Act compliance program. But it is also important to recognize where the two frameworks diverge. The GDPR regulates what happens to personal data: the legal basis for collection, how it is used, how long it is kept, and who can access it. The AI Act generally regulates the AI system itself – namely, how it is designed, tested, documented, governed, and deployed. While that difference in regulatory object creates structural differences in inputs and outputs, the two frameworks still share a great deal.

This post suggests a strategy for efficiently building a unified compliance framework for both regimes.

Continue Reading One Compliance Program for Two Frameworks: Aligning the EU AI Act and GDPR for Efficiency

Episode 14 is now live. In this episode of Consumer Counterpoint, we sit down with Chicago partner Jay Carle to discuss the launch of Seyfarth’s new D.A.T.A. Law practice group. Jay shares insights into the group’s multidisciplinary approach and how it’s designed to help clients stay ahead of emerging data and technology challenges.

Watch Episode 14 Here:

Subscribe to the Consumer Class Defense Blog today and get notified when each new vidcast goes live.

Over the past decade, a vibrant defense‑innovation ecosystem has emerged across the U.S. and Europe, powered by venture‑backed defense tech startups, dual‑use technology companies, and commercial‑first innovators entering national‑security markets. As these companies begin collaborating with defense agencies, they encounter compliance obligations for handling sensitive government information. For those seeking to enter the U.S. national security innovation sector, attention centers on safeguarding Controlled Unclassified Information (CUI).

While the recently codified Cybersecurity Maturity Model Certification (CMMC) addresses more than CUI, its principal aim is to remediate inconsistent implementation of the NIST SP 800-171 controls that the Defense Federal Acquisition Regulation Supplement (DFARS) requires for safeguarding CUI. Whether or not a company sees itself as a “defense contractor,” understanding CUI and CMMC is rapidly becoming essential for participating in this expanding global ecosystem.

Against that backdrop, this post outlines CUI’s role within CMMC, identifies the primary sources of the underlying safeguarding obligations, and explains how CMMC operationalizes verification of those requirements, especially at Level 2.

Continue Reading Safeguarding Sensitive Government Information: Why the Cybersecurity Maturity Model Certification (CMMC) Matters for the Global Defense Innovation Ecosystem

Introduction

Robotics and artificial intelligence are converging at an unprecedented pace. As robotics systems increasingly integrate AI-driven decision-making, businesses are unlocking new efficiencies and capabilities across industries from manufacturing and logistics to healthcare and real estate.

Yet this convergence introduces complex legal and regulatory challenges. Companies deploying AI-enabled robotics must navigate issues related to data privacy, intellectual property, workplace safety, liability, and compliance with emerging AI governance frameworks.

The Shift: Robotics as an AI Subset

Traditionally, robotics was viewed as a standalone discipline focused on mechanical automation. Today, robotics is increasingly powered by machine learning algorithms, natural language processing, and predictive analytics—hallmarks of AI technology.

This evolution raises critical questions for legal teams:

  • Who owns the data generated by AI-enabled robots?
  • How do we allocate liability when autonomous systems make decisions without human intervention?
  • What contractual safeguards should be in place when outsourcing robotics solutions to third-party vendors?

As robotics increasingly incorporates AI functionality, traditional contract structures for hardware procurement and service agreements require significant updates. This evolution introduces new risk categories that must be addressed through precise drafting and negotiation.

Continue Reading The AI-Driven Evolution of Robotics

On Friday, October 17, 2025, U.S. District Court Judge Vince Chhabria issued a biting Order granting defendant Eating Recovery Center, LLC’s (“ERC”) motion for summary judgment on plaintiff Jane Doe’s claims under the California Invasion of Privacy Act (CIPA), a law enacted in 1967 to address the increasing use of wiretapping to eavesdrop on private phone conversations. In particular, Judge Chhabria found it “undisputed” that the alleged Meta Pixel did not read, attempt to read, or attempt to learn the contents of Doe’s communications with ERC while the communications were in transit, as the statute requires, and thus Doe’s CIPA claims failed.

More notable were Judge Chhabria’s thoughts on the state of recent plaintiffs’ attempts to apply CIPA’s “already obtuse language” to website activity and online technologies. Calling the statute “a total mess,” Judge Chhabria opined that it “was a mess from the get-go, but the mess gets bigger and bigger as the world continues to change.” As a result, courts are now faced with the “borderline impossible” task of determining whether website operators’ conduct falls within the ambit of the CIPA statute.

He further noted that the CIPA language at issue is “ambiguous,” acknowledging that there was at least one interpretation under which ERC’s alleged online conduct would violate CIPA. However, because CIPA is a criminal statute imposing criminal liability and punitive civil penalties, the “Rule of Lenity” applies, even when invoked in a civil action. Under the Rule of Lenity, courts must narrowly construe statutes that impose punitive civil penalties. That narrower interpretation does not cover ERC’s alleged conduct.

In his final call to action, Judge Chhabria called on the California Legislature to “step up” and “bring CIPA into [the] modern age” to address whether such online activity should be covered by the statute. California courts are consistently issuing conflicting rulings in CIPA cases, which leaves businesses and practitioners equally confused. Judge Chhabria urged the Legislature not only to go back to the drawing board, but to “erase the board entirely and start writing something new.”

Senate Bill 690, which failed to advance out of committee in the California State Assembly, would not have erased the drawing board entirely, but it did attempt to clarify that CIPA would not apply to technologies used for “a commercial business purpose.” The bill unanimously passed the Senate in June 2025; however, having stalled in the Assembly, it will not move forward until 2026 at the earliest (if at all).

Key Considerations

With the ongoing uncertainty surrounding CIPA exposure, companies should give careful thought to their cookie banner / consent management practices, including conducting regular testing to ensure operation is consistent with expectations. 
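What “regular testing” can look like in practice will vary by tag inventory; one hedged sketch, assuming a Playwright-based check (the site URL and tracker patterns below are placeholders, not any particular company’s configuration), verifies that no tracking requests fire before the user interacts with the consent banner:

```typescript
// Hedged sketch: a scheduled check that tracking calls do not fire pre-consent.
// The target URL and tracker patterns are placeholders; substitute your own tag inventory.
import { test, expect } from '@playwright/test';

const TRACKER_PATTERNS = ['connect.facebook.net', 'facebook.com/tr', 'google-analytics.com'];

test('no tracking requests before consent is given', async ({ page }) => {
  const preConsentHits: string[] = [];
  page.on('request', (req) => {
    if (TRACKER_PATTERNS.some((p) => req.url().includes(p))) {
      preConsentHits.push(req.url());
    }
  });

  await page.goto('https://www.example.com'); // placeholder site
  await page.waitForLoadState('networkidle');

  // Nothing should have been sent to the trackers before any banner interaction.
  expect(preConsentHits).toHaveLength(0);
});
```

Run on a schedule against the live site, a check like this helps confirm that tags actually honor the banner rather than firing on page load.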

If you have any questions about this post, please contact the authors or another member of the Firm’s DATA Law practice. 

On July 24, 2025, the California Privacy Protection Agency (“CPPA”) unanimously voted to adopt a package of Proposed Regulations for the California Consumer Privacy Act (“CCPA”), marking a significant development in California privacy law. The regulations cover Automated Decision-making Technology (“ADMT”), mandatory Cybersecurity Audits, Risk Assessments, and clarifications regarding the CCPA’s applicability to Insurance Companies. Once filed with the California Office of Administrative Law, the package will move into its final review stage before formal enactment.

CCPA Steering Toward Operational Compliance

This is a clear signal that privacy compliance expectations in California are trending toward a more operational phase. The new rules are designed to give Californians greater control over how their personal information is used while pushing businesses toward higher levels of transparency and accountability, especially when automated decision-making and high-risk data processing are involved. For companies, this is more than just a theoretical update – it’s a clarion call to ensure these requirements are built into day-to-day governance, technology and process design, and vendor management practices.

Continue Reading California Privacy Protection Agency (CPPA) Finally Voted to Adopt Much Debated Update to CCPA Regulations: What Your Business Should Know

The UK’s Data (Use and Access) Act received Royal Assent last Thursday, June 19th, bringing into law some significant changes to the country’s post-Brexit data protection framework, among an array of other related rules (on matters ranging from financial conduct to smart meters and “underground assets,” which has more to do with pipes than spies, unfortunately). The Act is more of a selective nip and tuck than a complete makeover, intended to foster innovation by reducing and simplifying compliance burdens while retaining the core principles and safeguards of the UK GDPR and related regulations.

Implementation will be phased. If you read no further, the main takeaway is this: keep an eye on further developments, as most of the changes will not come into force until there is further implementing rulemaking.

This week (June 24th), the European Commission officially extended its “adequacy decision” for the UK until 27 December 2025, as previously promised, in order to allow the Commission to carry out its assessment of the adequacy of the new framework. Given that any further extension (to ensure continued free data flows between the EU and UK) necessarily depends on some parity between the rules in place in both markets, it is nice to see both sides playing nicely together. Without renewal, there will be additional burdens for businesses that transfer personal data from the EU to the UK, including those headquartered in a third country like the US.

We round up some of the tweaks below:

  1. One Point Companies Should Immediately Evaluate: Complaints Handling. The Act specifies that controllers must facilitate complaints “by taking steps such as providing a complaint form which can be completed electronically and by other means.” Controllers must also acknowledge complaints within 30 days and act on them without undue delay. The Act also contemplates that controllers may later be required to notify the regulator of the number of complaints received in a given period.
  2. A new Trust Framework for digital verification services (DVS) is to be implemented. Although this is yet to be formalized, it will result in new enhanced rules to replace the current voluntary Digital Identity and Attributes Trust Framework overseen by the Department for Science, Innovation and Technology. A publicly available register of compliant DVS providers will be set up and a trust mark will be introduced to help users identify certified and trustworthy digital identity providers. Registered providers will be able to directly verify personal information with public authorities via an “information gateway.” For DVS providers, there will be some additional work required to get registered and stay compliant. For companies that want to utilize DVS providers, however, this will eventually be a welcome streamlining of certain verification processes, such as KYC, age verification and employer right to work checks, particularly when contrasted with undertaking these processes in-house. Happily, there is also recognition of overseas electronic signatures (provided certain criteria are met) which should help with related friction in international transacting (e.g., for overseas companies utilizing overseas signature products) – although globally speaking, the UK has always been relatively sensible on this front.
  3. Some additional welcome clarity and flexibility for essential aspects of the UK GDPR, including:
    • Introduction of a New Lawful Basis: “Recognised Legitimate Interests.” This will be significant for some specific use cases (e.g., detecting, investigating and preventing crime), because this basis does not require the controller to balance the legitimate interests it relies on against the interests of the data subject whose personal data is being used, provided such legitimate interests are “recognised” at law.
    • New Examples of the Ever-Nebulous “Legitimate Interests”: including direct marketing and intra-group transmission of personal data of clients, employees or others where necessary for internal administrative purposes or for ensuring the security of network and information systems – which are particularly helpful for US multinationals whose business processes and decision-making are heavily matrixed or centralized.
    • Flexibility as to Seeking Consent for Scientific Research Purposes: Data subjects can give broad consent, and organizations may not need to provide additional privacy notices or seek additional consent for the further processing purpose of scientific research (any research that can reasonably be described as scientific, whether publicly or privately funded, and whether carried out as a commercial or non-commercial activity). We can expect this to be a favorite of businesses engaging in any kind of data-heavy R&D.
    • Permitting Use of Tracking Technologies and Cookies without Consent: Consent is not required where strictly necessary to protect information related to the services requested, ensure security of the user terminal, prevent or detect fraud or technical faults, enable automatic authentication of the user’s identity, or maintain records of selections made or information provided by the user on the website. Note that fines related to unauthorized direct marketing activities have been increased to UK GDPR levels (from the relatively more modest levels set by PECR).
    • Increased Clarity with Regard to Automated Decision-Making (ADM): The Act provides for rules to clarify what activity is regulated as ADM (e.g., it defines a decision “based solely on automated processing” as one where there is no meaningful human involvement) and arguably lifts some limitations for businesses relying on such decisions (e.g., in AI applications and algorithmic processing).
    • Clarity as to Extent of Search Required in Response to a DSAR. The Act clarifies that the data subject is only entitled to information the controller is able to provide based on a reasonable and proportionate search. This was not previously addressed, leading to frequent consternation among data controllers.
    • Increased Clarity as to the Existing Requirements for Transfers of Personal Data to Third Countries.

There are a few points of less clarity as well. Notably, with regard to:

  1. Artificial Intelligence (AI). The Secretary of State has nine months to publish a Report on the Use of Copyright Works in AI Systems. We remain on tenterhooks.
  2. Access to and Portability of Customer and Business Data / Smart Data Schemes. The Secretary of State has been given authority to regulate access to, and provision of, customer and business data, including to third-party recipients and through standardized APIs or other means, in line with broader UK GDPR principles but with arguably broader coverage than the corollary EU regulation that becomes applicable later this year (the EU Data Act). We will have to wait and see what these schemes will actually look like.

Connect with your Seyfarth lawyer or a member of our global privacy team for guidance on these developments tailored to your business needs.

On June 3, 2025, the California Senate unanimously passed Senate Bill 690 (SB 690), a bill that seeks to add a “commercial business purposes” exception to the California Invasion of Privacy Act (CIPA).

After multiple readings on the Senate floor, SB 690 passed as amended, and will now proceed to the California State Assembly. SB 690, as originally drafted, was explicitly made retroactive to any cases pending as of January 1, 2026.  The most recent amendments on the Senate floor remove the retroactivity provisions, meaning the bill, if passed by the Assembly and signed by the Governor, will only apply prospectively.  The amendments to remove the retroactive provisions of SB 690 are not unexpected. Retroactive application provisions are traditionally frowned upon by the California legislature and may offend due process principles.

If passed, SB 690 would exempt from CIPA liability the use of certain online tracking technologies, provided they are used for a “commercial business purpose” and comply with existing privacy laws like the California Consumer Privacy Act (CCPA). SB 690 could significantly impact prospective litigation under CIPA for online business activities. Indeed, there may be the proverbial “rush to the courthouse” if plaintiffs and plaintiffs’ attorneys begin to feel that passage of SB 690 is forthcoming or likely, now that the bill will proceed to the State Assembly.

Businesses may want to consider engaging their government relations teams or contacting members of the California State Assembly to express their positions on the bill as it now passes to the other chamber of the California legislature.