Sep 11, 2025

It used to be simple. See a video. Believe what it shows. Case closed. Not anymore.
Three Days, Three Deepfakes
Day One
It’s Monday morning, and Chad is mad. Someone in marketing used an AI app to turn him into a donkey crooning “You’re So Vain.” The video went viral in the company Slack.
Chad doesn’t think it’s funny. HR doesn’t either.
You’re flipping through the handbook, wondering if a line says, “Don’t deepfake your coworkers into barnyard karaoke.” There isn’t.
Day Two
Tuesday brings an anonymous hotline report. This one comes with a video of Barbara from finance allegedly pocketing cash from the till. The clip looks clean, the angle is perfect, and the gossip spreads instantly. Barbara swears it never happened.
Ten years ago, this would’ve been an open-and-shut case. Today, the question isn’t what happened — it’s is the evidence even real?
Day Three
By Wednesday, product is buzzing about a new feature: hyper-realistic avatars built from selfies. They want to know: “Can we launch this without being accused of enabling deepfakes?” You can already hear regulators clearing their throats.
Welcome to the new frontier of corporate life. Deepfakes aren’t curiosities anymore. They’re HR headaches, investigative landmines, and product dilemmas rolled into one.
What Are Deepfakes, Really?
A deepfake is synthetic media — a photo, video, or audio file created with artificial intelligence — designed to mimic someone saying or doing something they never did. Unlike old-school Photoshop jobs, deepfakes capture subtle details: lip movements, micro-expressions, speech cadence. To the untrained eye, they’re indistinguishable from reality.
The technology has advanced fast. Generative adversarial networks (GANs) and autoencoders can take a single headshot and minutes of audio and generate an entirely fake video of a person. What once took a Hollywood budget now takes a free app.
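For readers who want a feel for the mechanics, here is a toy sketch of the adversarial loop behind GANs. It assumes the PyTorch library, trains on meaningless random vectors, and produces nothing useful; the point is only to show why the forgeries keep improving: one network generates, a second critiques, and every round makes the fake harder to distinguish from the real thing.

```python
# Toy sketch of the adversarial idea behind GANs (assumes PyTorch).
# Real deepfake systems are far larger and operate on faces, video, and audio;
# this toy only illustrates the generator-vs-discriminator training loop.
import torch
import torch.nn as nn

latent_dim, data_dim = 16, 64
generator = nn.Sequential(nn.Linear(latent_dim, 128), nn.ReLU(), nn.Linear(128, data_dim))
discriminator = nn.Sequential(nn.Linear(data_dim, 128), nn.ReLU(), nn.Linear(128, 1))

g_opt = torch.optim.Adam(generator.parameters(), lr=1e-3)
d_opt = torch.optim.Adam(discriminator.parameters(), lr=1e-3)
loss_fn = nn.BCEWithLogitsLoss()

real_data = torch.randn(256, data_dim)  # stand-in for a corpus of real images or audio features

for step in range(100):
    real = real_data[torch.randint(0, 256, (32,))]
    fake = generator(torch.randn(32, latent_dim))

    # The discriminator learns to tell real samples from generated ones.
    d_loss = (loss_fn(discriminator(real), torch.ones(32, 1))
              + loss_fn(discriminator(fake.detach()), torch.zeros(32, 1)))
    d_opt.zero_grad()
    d_loss.backward()
    d_opt.step()

    # The generator learns to fool the just-updated discriminator.
    g_loss = loss_fn(discriminator(fake), torch.ones(32, 1))
    g_opt.zero_grad()
    g_loss.backward()
    g_opt.step()
```

The loop above is a toy, but the same contest, scaled up and pointed at faces and voices, is what now ships in free consumer apps.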
That accessibility is why deepfakes have moved from internet novelty to real workplace risk. We’ve already seen:
Political deception: A fake video of Ukrainian President Volodymyr Zelensky telling his army to surrender. (Social media platforms quickly removed it as an “absolutely terrible” forgery.)
Commercial misappropriation: A fake Tom Hanks ad selling dental insurance. (Hanks warned fans the AI-generated video was unauthorized and “I have nothing to do with it.”)
Corporate fraud: Entire Zoom calls with “executives” that never existed, used to trick employees into wiring millions. (In early 2024, a Hong Kong employee of engineering firm Arup was duped by an AI-generated video call posing as company leaders, leading her to transfer HK$200 million (≈$25 million) to criminals.)
The legal implications are clear. Video and audio evidence used to be the gold standard. Now it’s often the most suspect piece of evidence on your desk.
Day One: The Prank — Harassment and Handbook Gaps
Chad’s donkey karaoke might sound like a joke. Legally, it’s a problem.
The Legal Framework: Workplace harassment law already covers this kind of digital mischief. Title VII’s hostile work environment rules aren’t limited to in-person or verbal conduct – offensive images or videos can create a hostile environment if they humiliate or intimidate an employee. In other words, deepfake harassment counts. Courts and the EEOC recognize that ridicule via fake imagery (even off-hours) can poison the work environment, triggering liability for the employer if it’s based on a protected trait or if the employer fails to act.
Defamation is also in play. “It was just a joke” is not a defense if a deepfake harms someone’s reputation. A false video attributing bad behavior to a coworker could be libelous if others believe it. While intentional infliction of emotional distress has a high bar, a deliberately humiliating synthetic video might clear it if it’s truly extreme and outrageous.
Then there are policy gaps. Most employee handbooks don’t mention deepfakes or AI-generated content at all. That leaves companies exposed and HR without clear authority to act. The prank falls into a gray zone not anticipated by your anti-harassment, social media, or conduct policies.
States like California and Texas have enacted laws against deepfakes, but those focus on election interference and non-consensual pornography, not office pranks. (California banned deceptive AI images in election campaigns and created a civil cause of action for pornographic deepfakes, while Texas made it a crime to fabricate deceptive videos to influence elections.) In the workplace context, employers largely have to set the rules themselves.
How Unified Law Handles It: When we act as your in-house department, we don’t wait for Chad’s donkey video to send HR scrambling. We:
Update your handbook to define “deepfake” and ban its misuse. (No more ambiguity on digitally harassing coworkers.)
Draft consent rules: No using an employee’s likeness in any media without written approval.
Build training so employees understand that “funny” deepfakes can equal serious misconduct.
Set enforcement ladders so discipline is consistent and defensible.
What looks like barnyard karaoke can quickly become a harassment claim. We make sure your culture and policies close the gap before the first incident.
Day Two: The Investigation — Evidence in Doubt
Barbara’s “till theft” video shows how investigations have changed.
The Legal Framework: Evidence authentication is now a minefield. Under Federal Rule of Evidence 901, evidence must be “authenticated” – proven to be what it claims. With deepfakes, a chain of custody isn’t enough; a convincingly fake video can sail through normal procedures. Courts are increasingly requiring expert testimony and forensic analysis before admitting digital recordings. In fact, the federal judiciary is considering new rules to address this: a proposed Rule 707 would treat AI-generated evidence like expert testimony, meaning the proponent must show it’s based on reliable methods and data. In short, any machine-generated output (like a deepfake) would have to clear higher authenticity and reliability standards before a judge would trust it.
Employment law comes into play if you act on shaky evidence. Fire Barbara on the basis of a fake video, and you risk wrongful termination claims (for lack of good cause) and defamation claims if you effectively accused her of theft. You might even see an emotional distress claim if the ordeal was extreme. On the flip side, ignore a report that later turns out to be true, and you risk negligent supervision or retention claims for keeping an actual thief on staff. The stakes are high on either end, so investigative diligence is key.
Whistleblower protections also cast a shadow. An anonymous hotline report is legally protected – you can’t retaliate against the reporter. But if the “evidence” they provide is synthetic, you’re in a bind. Discipline the accused (Barbara) without authenticating the video, and you could be firing an innocent employee (and facing the claims above). Discipline the reporter for submitting a likely deepfake, and you risk a retaliation claim. Until you prove the video is fake or real, any action is legally fraught.
Case examples:
In 2019, criminals used a deepfake voice to impersonate a CEO and convinced an employee at a UK energy firm to send them $243,000. It was one of the first corporate deepfake scams on record – the employee recognized the boss’s accent, but it was an AI fake.
In early 2024, Arup Group’s Hong Kong office lost $25.6 million after a finance employee joined a video call with “executives” — all deepfakes — and dutifully wired funds as instructed. (The video conference had many participants who looked and sounded exactly like real senior officers, so the employee didn’t suspect a thing before transferring HK$200 million in a series of transactions.)
These examples underscore that seeing (or hearing) is no longer believing. The law recognizes this: after the UK voice scam, courts and insurers began demanding more verification for unusual transactions, and in the Arup case, authorities treated the incident as sophisticated fraud – not an employee blunder.
How Unified Law Handles It: When we run investigations for clients now, authentication is built in from the start:
We commission forensic reviews of metadata, compression artifacts, and digital “fingerprints” in audio/video evidence. If a video of Barbara surfaces, we analyze file details, hash values, and pixel inconsistencies that might reveal tampering (a simplified sketch of this kind of evidence logging follows this list).
We deploy AI-based detection tools (e.g. FakeCatcher, Reality Defender) to identify signs of deepfake generation. These tools analyze subtle quirks in lighting, audio frequency, and image artifacts that human eyes might miss.
We document every step, creating a defensible record to show a court or regulator that our conclusions are grounded in science. If we conclude a video is fake, we can prove how we know. If it’s real, we establish that too.
We train HR and compliance teams to assume nothing. The old reflex was “if it’s on camera, it happened.” The new mindset is “if it’s on camera, it might be fake.” Our training modules walk investigators through verification steps before any disciplinary decision is made.
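To make the documentation step concrete, here is a minimal sketch of the kind of evidence log we mean. It is illustrative only: it records a SHA-256 fingerprint and basic file details for a hotline submission so that, whatever a forensic expert later concludes, you can show that the file you analyzed is the file you received. The file names, field names, and log location are hypothetical placeholders, not a prescribed format.

```python
# Illustrative sketch only: log a cryptographic hash and basic metadata for a piece
# of digital evidence so its integrity can be demonstrated later.
# File paths and field names below are hypothetical placeholders.
import hashlib
import json
import os
from datetime import datetime, timezone

def log_evidence(path: str, log_path: str = "evidence_log.json") -> dict:
    """Record a SHA-256 fingerprint and file details for chain-of-custody purposes."""
    sha256 = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1024 * 1024), b""):  # hash in 1 MB chunks
            sha256.update(chunk)

    stat = os.stat(path)
    entry = {
        "file": os.path.basename(path),
        "sha256": sha256.hexdigest(),  # any later alteration of the file changes this value
        "size_bytes": stat.st_size,
        "filesystem_modified": datetime.fromtimestamp(stat.st_mtime, tz=timezone.utc).isoformat(),
        "logged_at": datetime.now(timezone.utc).isoformat(),
    }

    # Append to a simple JSON log; a real system would use write-once or audited storage.
    records = []
    if os.path.exists(log_path):
        with open(log_path, "r", encoding="utf-8") as f:
            records = json.load(f)
    records.append(entry)
    with open(log_path, "w", encoding="utf-8") as f:
        json.dump(records, f, indent=2)

    return entry

# Example: fingerprint the hotline video before anyone opens, copies, or edits it.
# log_evidence("hotline_submission.mp4")
```

None of this says whether the video is fake; it preserves that question for the experts and preserves the record for a court.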
Investigations are no longer just about what happened — they’re about what’s real. We give you the tools and processes to make defensible calls when evidence itself is in doubt.
Day Three: The Product — Innovation Meets Regulation
Your product team wants to launch hyper-realistic avatars. As counsel, you see risk all over it.
The Legal Framework: When you release a deepfake-like tool, you invite a swarm of potential legal issues:
FTC deceptive advertising: Under Section 5 of the FTC Act, using AI to create false or misleading endorsements or personas is a deceptive trade practice. In fact, the FTC recently finalized a rule explicitly banning fake or AI-generated reviews, celebrity testimonials, and similar endorsements. If your avatar tool could generate a video of a celebrity or CEO endorsing a product without consent, expect the FTC to take interest. The agency has shown it will hold companies liable for enabling deception with AI, not just the end-user who deploys it.
Right of publicity: Most states have some form of law, statutory or common law, protecting a person’s name and likeness from being used commercially without consent. A hyper-realistic avatar of a real person could violate these rights. Congress is considering a federal No Fakes Act to create a unified right of publicity for digital replicas, with statutory damages and takedown remedies. If passed, it would give individuals (or their estates) a federal cause of action if, say, your app generates their face or voice in an ad without permission, with statutory damages potentially starting at $5,000 per unauthorized use (and more for willful or commercial violations). Even before that becomes law, many states (like California, New York, and Tennessee) have updated their publicity statutes to explicitly cover AI-generated likenesses.
Intellectual property: Deepfake tools can implicate copyright and trademark law. If users can upload Disney characters or Nike logos and create “new” content, you might be contributing to infringement. At a minimum, failing to police blatant IP misuse on your platform could lead to contributory liability (think Napster/Grokster, but for AI content). The U.S. Copyright Office’s recent report on digital replicas advocates for stronger protections here, and companies should expect enforcement efforts to follow. In short, your tool should have filters to prevent generating content that is obviously someone else’s IP.
Product liability (negligence): If your app launches without safeguards and it was foreseeable that it would be used to harm people (e.g., create fake revenge porn or fraud videos), plaintiffs may argue you were negligent. This is novel territory, but we’re already seeing theories of liability against AI tools that lack “safety by design.” An analogy is social media platforms being sued for not preventing harms – courts are debating those cases now. Your AI avatar tool could be next if, for example, it empowers an impersonation scam that costs someone millions.
Global frameworks: Outside the U.S., the rules get stricter. The EU’s AI Act (adopted in 2024, with its deepfake transparency rules phasing in by 2026) requires clear disclosure when content is AI-generated. Providers of deepfake tools in the EU will have to build in watermarks or machine-readable metadata indicating AI origin, and users (deployers) must label output to inform viewers it’s synthetic. Similarly, China already requires watermarks on AI-generated media. Even if you’re U.S.-based, these global standards often become de facto best practices. Additionally, the NIST AI Risk Management Framework (U.S. guidance from 2023) has quickly become a baseline industry standard for AI governance. Regulators and customers may expect you to follow NIST’s principles (which emphasize transparency, accountability, and safety in AI). Ignoring these frameworks could not only hurt your reputation but also become evidence of negligence or unfair practice.
How Unified Law Handles It: We don’t clear products after the fact. We embed legal at the design stage:
Require user consent flows for any likeness use. If an avatar is based on a real person (even the user themselves), we build in clear consent and representation that they have rights to that face/voice. For celebrity/look-alike features, we strongly discourage them or implement license libraries.
Build in watermarking & labeling: From day one, we ensure the product outputs contain hidden watermarks or metadata tagging them as AI-generated. And we design the UI to add a small label (e.g. “Synthetic media” in a corner of videos) in compliance with emerging laws. These technical solutions fulfill the transparency obligations regulators expect (a simplified sketch of metadata labeling follows this list).
Create takedown protocols: We set up a system (similar to DMCA for copyright) where anyone can report an unauthorized or harmful deepfake generated by our tool, and we can swiftly remove or disable it. This ties into the emerging legal duty to remove deepfake content (as seen in the TAKE IT DOWN Act and the No Fakes proposal).
Align with U.S./EU standards: We proactively follow the EU AI Act’s requirements and the NIST framework’s guidance on risk mitigation. By doing so, our clients can honestly tell regulators (or investors) that their product was built with the strictest global standards in mind, not just the laxest local law.
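As a concrete illustration of the labeling point above, here is a minimal sketch of embedding a machine-readable “AI-generated” tag in an image at export time. It assumes the Pillow imaging library and uses PNG text metadata; the field names are our own invention, not a mandated standard, and a production system would layer on provenance standards (such as C2PA manifests) and watermarks that survive re-encoding.

```python
# Illustrative sketch only: tag a generated image as synthetic media using PNG text
# metadata. Assumes the Pillow library (pip install Pillow).
# The metadata keys below are hypothetical, not a regulatory or industry standard.
from PIL import Image
from PIL.PngImagePlugin import PngInfo

def save_with_ai_label(image: Image.Image, out_path: str, generator: str) -> None:
    """Save an image with machine-readable markers indicating it is AI-generated."""
    meta = PngInfo()
    meta.add_text("ai_generated", "true")  # machine-readable disclosure flag
    meta.add_text("generator", generator)  # which tool produced the output
    meta.add_text("disclosure", "Synthetic media: this image was created with AI.")
    image.save(out_path, pnginfo=meta)

# Example: label an avatar render before it ever leaves the product.
# avatar = Image.open("avatar_render.png")
# save_with_ai_label(avatar, "avatar_render_labeled.png", generator="ExampleAvatarTool v1.0")
```

Metadata like this can be stripped, which is why the emerging rules also contemplate visible labels and durable watermarks; the point is simply that disclosure can be built into the output path from day one.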
In 2025, “move fast and break things” breaks companies. We pair your innovation with legal guardrails so your product doesn’t become a lawsuit.
The Legal Landscape: Where the Law Stands Today
Deepfake law is a patchwork of existing doctrines, new proposals, and state-by-state activity. Here’s the overview:
Existing Frameworks
Employment law: Title VII of the Civil Rights Act of 1964 covers harassment via deepfakes if it targets protected characteristics (e.g. sex or race) or is severe enough to create a hostile workplace. Employers also face wrongful termination exposure if they fire someone based on falsified evidence, and negligence claims if they fail to protect employees from known deepfake harassment.
Torts: Defamation and intentional infliction of emotional distress apply to deepfakes that harm reputations or cause emotional harm. A fake video or audio can be defamatory if it portrays someone in a false and damaging light. And if a deepfake is used to humiliate or terrorize someone publicly, civil IIED liability could follow (though the threshold is high).
Criminal statutes: Federal law hasn’t outlawed deepfakes per se, but existing crimes can reach them. For example, using a deepfake to impersonate someone could violate identity theft laws (18 U.S.C. § 1028A) or fraud statutes. The Department of Justice has prosecuted deepfake-enabled schemes under wire fraud and conspiracy laws. In the corporate context, if an employee uses deepfake audio to trick a colleague into a fraudulent payment, that’s still wire fraud – the AI aspect just makes it novel. (Notably, after the $243k voice scam in 2019, some jurisdictions began treating voice impostor scams as identity theft.)
Evidence law: Courts are warily adjusting authentication standards. Federal Rule of Evidence 901(a) requires that an item of evidence is what the proponent claims it is. Now judges demand more foundation for digital evidence – often requiring experts to attest a video or audio is unaltered before it’s admitted. Proposed amendments to the Federal Rules (under consideration by the Judicial Conference) would explicitly address AI outputs. One proposal is a new Rule 707, which would force parties offering machine-generated evidence to meet the reliability requirements of expert testimony (FRE 702). In essence, if you want to use an AI-created piece of evidence in court, you may need an expert to validate the AI’s process and output.
Privacy and IP: State right-of-publicity laws already protect individuals from unauthorized commercial use of their name, image, or voice. Deepfakes that put your face in an ad or your voice in a game can trigger these laws. (For instance, California’s statute explicitly covers digital replicas of performers.) Intellectual property laws also come into play: a deepfake that uses someone’s copyrighted performance or a trademarked uniform/logo could be infringement. And distributing AI tools trained on copyrighted data has raised questions (currently in litigation) about fair use and IP violation. While we wait for clearer precedent, companies should treat unauthorized use of real identities in deepfakes as a likely violation of someone’s rights.
Criminal law: A few states have begun to criminalize certain deepfake uses. For example, Texas passed a law making it a misdemeanor to create a deceptive deepfake video intended to influence an election. Virginia was one of the first to criminalize non-consensual deepfake pornography back in 2019. And on the federal level, using deepfakes in a scheme to deceive could invoke fraud or cyberstalking statutes. We also see proposals to explicitly criminalize some deepfake production (especially involving sexual exploitation or election interference).
Federal Proposals
No Fakes Act: This bipartisan bill (reintroduced April 2025) would create a federal right of publicity for “digital replicas.” It gives individuals (and their heirs) the right to sue anyone who creates or distributes AI-generated likenesses of them without consent for commercial gain. The Act includes statutory damages (e.g. a set amount per violation) and would require online platforms to honor takedown requests for deepfakes of real people. In short, it federalizes the rule that you can’t use someone’s face/voice as AI content without permission.
COPIED Act: Short for the Content Origin Protection and Integrity from Edited and Deepfaked Media Act, this bill focuses on labeling and provenance. It would direct NIST to establish standards for watermarking AI-generated content and require AI tool developers to let users embed provenance info in outputs. It also would prohibit removing those watermarks and would empower creators to sue if their content is used in AI without authorization. The goal is to make sure there’s a detectable “paper trail” (or code trail) for synthetic media.
TAKE IT DOWN Act: Enacted in May 2025, this law (codified at 47 U.S.C. § 223(h)) attacks non-consensual intimate imagery, including deepfakes. It set up a system where victims (or minors’ parents) can submit removal requests to online platforms, and platforms must remove the reported images within 48 hours or face penalties. In other words, if someone posts a fake nude or sex video of an employee, this federal law can be invoked to get it taken down quickly. (It’s a complement to the older revenge-porn laws, with an emphasis on speed and including AI fakes.)
Protect Elections from Deceptive AI Act: A proposal to outlaw certain election-related deepfakes. It would prohibit distributing materially deceptive AI-generated audio, images, or video of a candidate within a certain period before an election, if done to influence the election. It also gives candidates the right to sue to remove such deepfakes and seek damages. Exceptions are included for satire or parodies, but the thrust is to criminalize or penalize deepfake propaganda in campaigns. This reflects a wave of concern after incidents like faked videos in recent elections.
Others: Congress has floated several related bills (some with great acronyms), such as the DEEPFAKES Accountability Act (focused on national security threats and a labeling requirement), the DEFIANCE Act (enhanced penalties for deepfake porn targeting individuals), and the REAL Accountability Act (which would hold AI companies liable for not implementing reasonable safeguards). None have passed yet, but the volume of bills shows that federal lawmakers are paying attention.
State Activity
California: A pioneer here. In 2019 California passed AB 730, banning the distribution of deceptive deepfake videos of candidates within 60 days of an election (with civil remedies). The same year, it passed AB 602, creating a civil cause of action for victims of pornographic deepfakes (with up to $150,000 in statutory damages for willful violations). California also expanded its publicity rights law to cover digital replicas of deceased personalities. (Recent California bills – AB 2839 and AB 2655 – aimed to go further on political deepfakes and online labeling, but were temporarily blocked on First Amendment grounds.)
Texas: Texas was one of the first states to address political deepfakes, making it a crime (via SB 751 in 2019) to create or distribute “a deceptive video with intent to injure a candidate or influence an election.” Violation can be a misdemeanor. Texas has also updated its revenge porn laws to ensure they cover AI-generated intimate images (reflecting a trend of states broadening definitions of “intimate image” to include deepfakes).
Virginia: An early adopter as well. In 2019, Virginia enacted a law criminalizing the distribution of non-consensual pornographic deepfakes (the first state to do so). Virginia also has a law against deepfake election interference (passed in 2020, prohibiting fake candidate media in the run-up to elections). Other states like Illinois and Maryland followed with civil remedies for deepfake victims, and New York added deepfakes to its sexual privacy and publicity statutes.
Trend: By 2025, at least 17–18 states have passed some form of deepfake-specific law, and many more have bills pending. In 2024 alone, over 30 state bills on deepfakes were introduced. The majority of these laws focus on the two big flashpoints: elections and sexually explicit content. That leaves a lot of gray area (like employee-on-employee harassment, or deepfake consumer fraud) unaddressed, for now. For multi-state employers, this patchwork means you need to stay agile – what’s legal “speech” in one state might be illegal in another (at least when it comes to deepfakes near elections or involving private images).
Global Perspective
European Union: The EU’s AI Act (which entered into force in 2024, with obligations phasing in over the following years) explicitly flags deepfakes. It requires those who create or publish AI-generated media to disclose that the content is synthesized. Deepfake generators will need to embed a visible or invisible watermark or other machine-readable marking, and anyone posting such content in the EU must clearly label it as artificially generated (unless it’s obvious satire or protected art). The Act also bans certain manipulative AI outright and imposes risk management, logging, and oversight duties on its high-risk categories, so providers of deepfake-adjacent systems face real compliance work. The EU approach is stringent and will affect U.S. companies doing business there – essentially setting a higher compliance bar for deepfake-related products.
United Kingdom: As of 2025, the UK has taken a different route – sector-specific guidance rather than comprehensive legislation. The UK has no standalone “AI Act” yet, preferring a principles-based approach enforced by existing regulators. For example, the Information Commissioner’s Office (ICO) has issued guidance on deepfakes in the context of data protection (emphasizing that using personal data to create a deepfake without consent could violate privacy laws). Ofcom (the communications regulator) has been looking at deepfakes in online harms and media contexts. The UK is essentially tasking each regulator (privacy, media, financial, etc.) to incorporate AI/deepfake issues into their rules. This could lead to sector-specific obligations – e.g., advertising standards prohibiting deepfake ads, or financial regulators issuing rules on deepfake fraud in banking. UK companies are advised to follow the voluntary AI principles the government published (like fairness, transparency, accountability) and to be ready for more formal rules down the line.
Elsewhere: Many countries are grappling with deepfakes. China has implemented laws requiring deepfake content to be clearly labeled and forbidding certain uses (with real enforcement teeth). Canada is studying deepfake impacts on elections and considering updates to its election laws. International bodies like the OECD have included deepfakes in broader AI guidelines. All this means that the “reasonable” standard for companies is shifting toward expecting disclosure and preventive measures for deepfake technology globally, even if U.S. law is slower to require it.
Unified Law tracks these changes across jurisdictions. Our approach is to update policies to anticipate laws, not just react to them. We draft playbooks so that if (and when) a new deepfake law hits your state or industry, you’re already in compliance.
Unified Law’s Playbook: Building a Deepfake-Resilient Company
At Unified Law, we don’t just flag risks. We close them. When deepfakes hit the workplace, here’s how we operate as your in-house legal department:
Handbooks and Policies: We draft and routinely update internal policies to address deepfakes. This means defining “synthetic media” and explicitly banning employees from creating or sharing deceptive deepfakes related to work. It also means requiring employees to report if they are the target/victim of a deepfake. These policy provisions give HR the clear authority to act fast when an incident arises (no more “there’s nothing in the handbook about this” paralysis). We include examples in training: e.g., “doctoring a colleague’s image or voice using AI without consent” is listed as misconduct.
Investigations: We overhaul your investigation protocols to handle suspect media. That includes having a plan to authenticate evidence: which tools to use, when to call an external forensic expert, and how to document the steps. If a whistleblower sends in a video, we treat it like a potentially faulty chemical test – it needs validation. Our playbook ensures decisions (disciplinary or otherwise) are defensible. If we ever end up in court or before an agency, we can show that we verified the evidence (or lack thereof) at every step. No more gut decisions based on “it looked real to me.”
Product Development Involvement: We sit in on product meetings, not just court hearings. If your company is developing AI-driven products (like that avatar generator on Day Three), our attorneys are embedded with the dev team. We implement “compliance by design.” For example, we help architect the avatar tool to include consent gates, watermarks, and use restrictions from the outset. We run interference with Marketing when they have a “great idea” to use a famous face in a promo — ensuring it’s cleared or nixed. By being insiders in the innovation process, we prevent legal disasters before they launch.
Governance and Training: We brief boards and executives on the evolving patchwork of deepfake laws. We map out which states or countries pose the biggest risk for the company’s context (e.g., if you operate in California or Europe, the standards will be higher). We then update corporate governance documents accordingly. For example, we might update the Board’s risk committee charter to include AI/deepfake oversight. We craft crisis response plans: if a malicious deepfake targets the CEO or if a scandalous fake video of an employee goes viral, we have a cross-functional response team ready (Legal, PR, IT, HR all coordinated). We also train employees – from recruiters (on spotting fake resumes or references) to security teams (on deepfake phishing attempts) – so everyone is aware of the threat and knows how to respond.
This is the difference between outside counsel and an in-house department. We think like insiders because we’ve been insiders. Our job is to make sure that when a prank, an investigation, or a product launch brushes up against deepfakes, you’re not scrambling. You’re already covered.
Verify, Then Believe
Deepfakes aren’t hypothetical. They’re already appearing in harassment claims, hotline reports, and product meetings.
The old rule was seeing is believing. The new rule is to verify, then believe.
Unified Law Group, PB LLC doesn’t just explain this risk. We act as your in-house department, updating policies, running investigations, embedding legal in product, and protecting your company when deepfakes hit.
Because when “evidence” can be faked, the only real defense is a legal team that knows how to question it first.