UPDATE: OpenAI Fires Back, Calling Court Order a "Sweeping Overreach"
Since this story was first published, OpenAI and CEO Sam Altman have publicly responded to the court order, confirming they are actively fighting the decision to preserve user chats.
OpenAI isn't taking this sitting down. In a new blog post, the company’s COO called the demand a “sweeping and unnecessary” overreach that “fundamentally conflicts with the privacy commitments we have made to our users.”
Sam Altman went even further on X, calling it an "inappropriate request that sets a bad precedent." He even floated a powerful new idea: the need for "AI privilege." He argued that conversations with an AI should be as confidential as those with a doctor or a lawyer, a principle he says the company will fight for.
Here are the most critical takeaways from their response:
- Who’s impacted? The court order affects users of ChatGPT Free, Plus, Pro, and Team, as well as standard API customers.
- Who’s NOT impacted? Your data is not subject to the order if you are a ChatGPT Enterprise, ChatGPT Edu, or an API customer with a Zero Data Retention (ZDR) agreement.
- What happens to the data? OpenAI states the preserved data is stored separately in a secure system under legal hold. It can only be accessed by a small, audited legal and security team to comply with legal obligations and will not be used to train models.
This official response adds crucial context to the user panic and legal debate detailed below. With that in mind, let's dive into why this court order has everyone freaking out in the first place.
Oh, and it goes without saying, but this is not legal advice. We share all this info for educational use only.
Your ChatGPT Logs Are No Longer Private (And Everyone's Freaking Out)

Remember when you thought your deleted ChatGPT conversations were, you know, actually deleted?
Yeah, about that...
A federal judge just dropped a bombshell that's sending shockwaves through the AI community: OpenAI must now preserve ALL user conversations indefinitely. Yes, including the ones you deleted. Yes, including those "temporary" chats that were supposed to disappear. And yes, this affects hundreds of millions of users worldwide—from people asking embarrassing health questions to businesses discussing trade secrets through OpenAI's API.
Here's what we're diving into today:
- Why a copyright lawsuit between The New York Times and OpenAI suddenly means YOUR private conversations are being preserved forever.
- The 10 major concerns users are panicking about (spoiler: some are overblown, others are terrifyingly legitimate).
- Whether you could actually get sued for using ChatGPT (probably not, but the reality is more complicated).
- What this means for the future of AI and whether we're heading toward a Napster-style reckoning.
- Why this might reshape how we think about fair use, digital media, and AI-generated content forever.
The wildest part? This all started because The New York Times thinks people are using ChatGPT to read their articles without paying. Now, thanks to one judge's order on May 13, 2025, every intimate conversation, every business strategy session, every late-night anxiety spiral you've shared with ChatGPT is being preserved for potential legal review...whether you deleted it or not.
The internet is having an absolute meltdown about it. Because duh. So let's dive into the details and try to make sense of all this (from a normie point of view, because we're not lawyers).
P.S.: Before we go any further: this is not legal advice, and we are not lawyers. This is all shared for educational purposes and commentary / discussion only!
The Concerns Are Palpable, and Everybody is Freaking Out…
Hacker News reacted to the case about how you’d expect. The thread revealed deep concerns about the intersection of copyright litigation and user privacy, with many viewing this as a dangerous precedent that sacrifices millions of users' privacy for one company's copyright claims. Here’s a brief list of all the concerns everybody has about this issue (as well as the counter-arguments, if applicable):
1. Business and Enterprise Risk Concerns
Concerns:
- API users are affected, not just ChatGPT users, creating major business risk.
- Companies need to review their OpenAI contracts and risk tolerance.
- There’s “proprietary code” exposure risk for businesses.
- International businesses are now worried about US discovery affecting their data.
- Employee queries could be aired publicly in unrelated court cases.
- Competitive intelligence risk - businesses submitting data that competitors might access.
Counter-arguments:
- This is just retention for legal purposes, not training - contractual protections still apply.
- Companies already trust cloud services like GitHub with their private data anyway.
- And, people have nowhere else to go - AI is too convenient to abandon.
Sure… convenient until a lawsuit.
2. Privacy and “Deleted” Data Concerns
Concerns:
- OpenAI's documentation is now "incredibly misleading" about deletion.
- “Deleted chats” are now just “hidden chats.”
- Shouldn’t it be illegal to call something "deleted" when it's not?
- In fact, any supposed secret/temporary chat functionality is now “flat out lying to people.”
- Data could become public in 20 years if OpenAI gets hacked or sold.
- There’s no indication in the app telling people this is happening.
- People now regret personal conversations with AI.
- Everything should be private by default.
Counter-arguments:
- “Soft” delete is standard practice everywhere online (see the sketch after this list).
- The documentation already had exceptions for legal obligations.
- Everyone should have assumed data was retained anyway.
- You don't have a legal right not to be logged (but maybe we should have one??)
- Using the internet means no privacy expectations.
- This is just “Conspiratorial thinking” and records will be handled appropriately.
- Every app keeps logs - why the surprise?
- Always assume cloud services archive everything forever.
- This is just standard legal procedure.
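Since “soft delete” is doing a lot of work in that counter-argument, here's a minimal sketch of what the pattern typically looks like under the hood. This is a hypothetical schema, not OpenAI's actual system:

```python
# A minimal "soft delete" sketch: deletion flips a flag instead of removing data.
import sqlite3
from datetime import datetime, timezone

conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE chats (
        id INTEGER PRIMARY KEY,
        user_id INTEGER NOT NULL,
        content TEXT NOT NULL,
        deleted_at TEXT  -- NULL means visible; a timestamp means "hidden"
    )
""")
conn.execute("INSERT INTO chats (user_id, content) VALUES (1, 'my 3 AM rash question')")

# "Deleting" the chat just stamps the flag; the row stays on disk.
conn.execute(
    "UPDATE chats SET deleted_at = ? WHERE id = 1",
    (datetime.now(timezone.utc).isoformat(),),
)

# The UI only queries visible rows, so the chat looks gone to the user...
print(conn.execute("SELECT content FROM chats WHERE deleted_at IS NULL").fetchall())  # []

# ...but a legal hold can still sweep up everything ever written.
print(conn.execute("SELECT content FROM chats WHERE deleted_at IS NOT NULL").fetchall())
# [('my 3 AM rash question',)]
```

The delete button flips a flag; a preservation order just means nobody ever garbage-collects the flagged rows.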
3. Legal Jurisdiction and GDPR Concerns
Concerns:
- Violates privacy laws in multiple jurisdictions.
- There are immediate GDPR implications.
- For example, US court orders can't override GDPR for EU customers.
- How do companies handle GDPR deletion requests with this order?
Counter-arguments:
- US can give orders to US companies that conflict with EU law (it’s complicated).
- Legal holds are standard practice.
- OpenAI has to figure out compliance - that's their problem.
4. Scope Creep and Precedent Concerns
Concerns:
- This will expand to all AI systems - Gemini, Grok, etc.
- They'll eventually outlaw local inference entirely.
- The judge's reasoning has been compared to mandating encryption backdoors.
- Sets a dangerous precedent for mass surveillance (somewhat equivalent to wiretapping the entire US population).
Counter-arguments:
- Come on, local inference won't be affected.
- This is targeted at copyright infringement, not general surveillance.
- These orders already exist for most communication platforms.
5. Government Surveillance Concerns
Concerns:
- "The government also says thank you for your ChatGPT logs."
- Palantir wants the data.
- OpenAI may already be sharing data with government “behind closed doors.”
- National security pretexts for data collection.
- Data becomes accessible to law enforcement beyond original case.
Counter-arguments:
- OpenAI is “literally spending money fighting this court order.”
- This is about copyright litigation, not government surveillance.
People do love to assume that corporations will do the most evil thing possible... I mean, they usually do, right?
6. Personal and Sensitive Information Concerns
Concerns:
- Nobody wants their health queries exposed.
- Mental health and therapy-like conversations at risk.
- "Embarrassment" could lead to discrimination or prosecution.
- LGBT+ users in hostile countries at risk.
- Immigration status queries could expose undocumented people.
- The worst nightmare scenarios for privacy abuse have all been realized.
- Past breaches show data is never handled appropriately.
Counter-arguments:
- Yeah, nobody had anything good to combat these concerns.
7. Trust and Transparency Failures
Concerns:
- No user notification this is happening.
- Documentation is misleading about data deletion.
- Users were more or less tricked into thinking temporary/secret chats were private.
- Trust requires believing all future executives, acquirers, admins, and governments.
Counter-arguments:
- Documentation already included legal obligation exceptions.
- Users should have known better (we heard this A LOT).
8. Technical Implementation Concerns
Concerns:
- True deletion is technically very difficult across modern systems.
- Backups complicate GDPR compliance.
- unlink() doesn't actually delete data; it just removes the directory entry pointing to it.
- Data encryption keys could be compromised.
Counter-arguments:
- Per-user encryption could solve this: delete the key to delete the data (see the sketch after this list).
- Backup retention periods can be limited.
- Some companies do take privacy seriously, especially in Europe.
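To make that counter-argument concrete, here's the per-user encryption idea (often called “crypto-shredding”) as a minimal sketch. It assumes the third-party cryptography package, and it illustrates the concept only, not anyone's actual system:

```python
# Crypto-shredding sketch: encrypt each user's data under a per-user key, and
# delete the key to make every surviving copy (including backups) unreadable.
from cryptography.fernet import Fernet, InvalidToken

# One encryption key per user, stored separately from the chat data.
user_keys = {"user_42": Fernet.generate_key()}

# What hits disk (and every backup) is only ever ciphertext.
ciphertext = Fernet(user_keys["user_42"]).encrypt(b"late-night anxiety spiral")
backup_copy = ciphertext  # backups can keep this blob forever, harmlessly

# Honoring a deletion request = destroying the key, not hunting down every copy.
del user_keys["user_42"]

# Without the original key, the surviving copies are unreadable noise.
try:
    Fernet(Fernet.generate_key()).decrypt(backup_copy)
except InvalidToken:
    print("backup blob is unrecoverable without the deleted key")
```

This sidesteps the unlink() problem entirely: you never have to find every copy of the data, only the one copy of the key.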
9. Copyright vs. Privacy Debate
Concerns:
- One organization's lawsuit destroys millions of users' privacy.
- Copyright holders painted as villains using invasion of privacy as a weapon.
- Legal system allows one judge to affect millions on a whim.
- NY Times' “boomer copyright concerns” now trump user privacy??
Counter-arguments:
- OpenAI's illegal practices caused this - they could have used legal data only.
- OpenAI “used copyrighted material illegally without permission.”
- Again…standard litigation hold practice.
10. Alternatives and Solutions Discussed
Suggestions:
- You could use local LLMs.
- Use DeepSeek to avoid US monitoring.
- Use NanoGPT with crypto payments.
- Open source LLMs are the future.
- Archive your own prompts.
Counter-arguments:
- DeepSeek also keeps all logs.
- Local LLMs aren't “subscription grade” yet.
- Alternatives will face the same legal requirements.
So obviously there's a WIDE range of opinions on what this means, who's to blame, and what's at stake. So it's good to take a step back and look at the actual case, what led up to this, and what it means at the moment.
Case Details
Court: U.S. District Court for the Southern District of New York
Judge: U.S. District Judge Sidney Stein (primary case judge)
Magistrate Judge: Ona T. Wang (handling discovery disputes)
Filed: December 27, 2023.
Parties
Plaintiffs: The New York Times, The New York Daily News and the Center for Investigative Reporting (consolidated cases)
Defendants: OpenAI and Microsoft
Core Claims
The lawsuit contains several major copyright infringement claims:
1. Training Data Infringement
The Times's core allegation is that OpenAI is infringing on copyrights through the unlicensed and unauthorized use and reproduction of Times works during the training of its models. Lawyers for The New York Times believe that the paper's articles are one of the biggest sources of copyrighted text that OpenAI used to build ChatGPT into the premier AI chatbot.
2. Direct Output Infringement
In its complaint, the Times presented extensive evidence of products from OpenAI and Microsoft displaying near word-for-word excerpts of articles when prompted, allowing users to get around the paywall. LLMs (large language models) sometimes "memorize" parts of the works included in training data. When this happens, the models can occasionally generate near-verbatim reproductions of the works.
3. Contributory and Induced Infringement
The New York Times made its case, for now, that OpenAI and its most prominent financial backer, Microsoft, were responsible for inducing users to infringe its copyrights.
4. Market Substitution Claims
LLMs produce "synthetic" search results that, when prompted, can reproduce "significantly more expressive content from [an] original article than what would traditionally be displayed" by an online search, effectively allowing readers to circumvent the Times's paywall.
Requested Relief
The Times is seeking significant remedies:
- According to the complaint filed by the Times, OpenAI should be on the hook for billions of dollars in damages over illegally copying and using the newspaper's archive.
- The lawsuit also calls for the destruction of ChatGPT's dataset…and "destruction…of all GPT or other LLM models and training sets that incorporate Times Works"
- The New York Times alleges that it generally licenses articles at $10 each, and there's an allegation in the complaint that the data set includes at least 16 million unique records of content from across the Times. Whoa.
Current Status
Motion to Dismiss Ruling
A federal judge rejected OpenAI's request to toss out the copyright lawsuit from The New York Times. Judge Sidney Stein, whose three-page order rejected parts of OpenAI and Microsoft's motion to dismiss, said in his decision that the Times' "numerous" and "widely publicized" examples of ChatGPT producing material from its articles justified allowing the claims to continue.
Recent Preservation Order (The Source of Current Controversy)

The court order requiring OpenAI to preserve user logs stems from recent discovery disputes:
On Tuesday, May 13, 2025, in the ongoing AI-related copyright infringement litigation against OpenAI, the Hon. Ona T. Wang issued an order directing OpenAI "to preserve and segregate all output log data that would otherwise be deleted on a going forward basis until further order of the Court (in essence, the output log data that OpenAI has been destroying), whether such data might be deleted at a user's request or because of 'numerous privacy laws and regulations' that might require OpenAI to do so."
Magistrate Judge Wang denied OpenAI's motion to reconsider her order for OpenAI to preserve and segregate its output log data, after the New York Times had alleged that OpenAI had destroyed relevant evidence.
OpenAI's Defenses
OpenAI leaders have argued that the company's mass data scraping, including articles from The Times, is protected under a legal doctrine known as "fair use." OpenAI responded last month that the Times had paid an unnamed "hired gun" to manipulate its products into reproducing the newspaper's content (the "prompt hacking" defense).
This case represents the first big test for AI in the copyright space and could have massive implications for the entire AI industry's approach to training data and user privacy.
So does any of this implicate any users of ChatGPT's services, as the commenters are afraid of?
Seems Like No Direct Legal Risk to Users (in THIS case)
Based on my research, individual ChatGPT users do not appear to face direct legal liability from this case, though there are legitimate privacy concerns. The lawsuit is against OpenAI and Microsoft, not individual users. The New York Times made its case, for now, that OpenAI and its most prominent financial backer, Microsoft, were responsible for inducing users to infringe its copyrights. This is a claim of contributory infringement against OpenAI - meaning OpenAI helped users commit infringement - but the lawsuit itself targets the companies, not the users.
In its complaint, the Times presented extensive evidence of products from OpenAI and Microsoft displaying near word-for-word excerpts of articles when prompted, allowing users to get around the paywall. However, these examples appear to be used as evidence of OpenAI's practices, not as the basis for suing individual users.
Why User Data Is Being Preserved
The preservation order exists because the News Plaintiffs argued that the volume of ChatGPT Free, Pro, and Plus conversations that have been destroyed was "substantial," as well as that OpenAI has refused to provide the volume of output log data retained or destroyed for other OpenAI products at issue in the litigation.
The Times needs this data to prove their case against OpenAI - specifically to show:
- How often ChatGPT reproduces copyrighted content.
- Whether OpenAI's systems facilitate copyright infringement.
- The scale and nature of alleged infringement.
Legitimate Privacy Concerns
While users don't face direct legal liability, there are real privacy issues:
- Discovery Access: The news organizations argued the order is necessary to rebut OpenAI's argument that customers rarely use AI models to infringe copyrights. This means The New York Times' legal team may review user conversations to find examples of copyright infringement.
- Potential Public Disclosure: As one commenter noted in the HN thread, user conversations could potentially be introduced as evidence in court proceedings. Magistrate Judge Ona T. Wang's order to "preserve and segregate all output log data" forces OpenAI to rescind users' control over when and how their ChatGPT conversation data is used and retained, and Wang denied OpenAI's bid to undo her order on May 29, two days after reassuring the AI company that user data wouldn't be made publicly available.
- Loss of Deletion Rights: "OpenAI is NOW DIRECTED to preserve and segregate all output log data that would otherwise be deleted… whether such data might be deleted at a user's request or because of 'numerous privacy laws and regulations'." This means users can no longer actually delete their conversations despite what the UI suggests.
So, what's the verdict so far? The Hacker News commenters' fears about legal liability appear to be overblown in this particular case. This isn't a case targeting individual users, it's focused on the companies themselves.
However, their overarching privacy concerns are legitimate. And here’s why:
Conversations that users thought were deleted are now permanently preserved and may be reviewed by opposing legal teams as part of discovery in this high-stakes litigation.
The real concern is about the systemic precedent set rather than individual liability.
Let me break down each scenario:
1. Does OpenAI have to follow this rule forever, or just for the duration of the case?
Technically: This preservation order is case-specific and should end when the litigation concludes. The order states OpenAI must preserve data "until further order of the Court" - so theoretically it could be lifted. Still, if you're a lawyer using ChatGPT, these folks suggest you may want to revisit your policies.
Practically: OpenAI faces a massive practical problem. Once they've built systems to preserve all user data, and once they've established that they can do this without violating their privacy commitments, it becomes much harder to argue they shouldn't do it in future cases. The technical and legal infrastructure is now in place.
Precedent Effect: Even if this specific order ends, OpenAI will likely face similar preservation demands in other copyright cases, privacy lawsuits, regulatory investigations, etc. The genie is out of the bottle.
2. Does every other AI company have to follow this rule, just in case they're sued?
Legally: No. This is a case-specific discovery order against OpenAI in this particular litigation. It doesn't create binding precedent for other companies.
Practically: Absolutely yes, and this is the scary part. Other AI companies' lawyers are undoubtedly advising them that:
- They could face similar litigation.
- Courts have now shown willingness to order comprehensive data preservation.
- Building deletion capabilities that you later have to reverse is expensive and looks bad.
- Defensive preservation is now the safer legal strategy.
Risk Management: Companies will likely start preserving more data prophylactically rather than risk being seen as destroying potential evidence.
3. Can other companies now sue OpenAI to see competitors' employees' secret chat logs?
This is unlikely for several reasons:
Discovery Scope: Civil discovery must be relevant to the claims in the case. A competitor suing OpenAI over, say, trade secrets, couldn't just fish through random employee chats. They'd need to show relevance to their specific claims.
Protective Orders: Courts routinely issue protective orders limiting who can see sensitive discovery materials and how they can be used.
Privacy/Privilege: Employee communications might be protected by attorney-client privilege, work product doctrine, or privacy laws.
BUT: If there were relevant claims (antitrust, IP theft, etc.), employee communications about those topics could potentially be discoverable.
4. Is there no way to undo this?
There are several potential paths, but they're all difficult:
Direct Appeal: OpenAI filed an objection asking Judge Stein to overturn Magistrate Judge Wang's recent order, and Wang denied OpenAI's motion to reconsider. They can continue appealing up the chain.
Case Resolution: If OpenAI wins the underlying case, or settles, the preservation order should end.
Changed Circumstances: If OpenAI could show the order causes disproportionate harm, they might get it modified.
Legislative Action: Congress could theoretically pass privacy legislation that limits such orders, but that's a long shot.
5. Could OpenAI appeal this to the Supreme Court?
Very unlikely. This is a discovery order, not a final judgment. The Supreme Court typically only hears:
- Final judgments in cases.
- Certified questions of exceptional importance.
- Constitutional issues of national significance.
Discovery disputes rarely meet these standards. OpenAI would need to:
- Appeal to the Circuit Court first.
- Show this raises a fundamental constitutional or statutory issue.
- Convince SCOTUS it's important enough to hear.
The Supreme Court hears maybe 60-80 cases per year out of thousands of petitions.
6. Would someone else have to sue to get the order revoked?
No, that's not how it works. This order is specific to this litigation. A third party can't just sue to overturn discovery orders in someone else's case.
However, future plaintiffs could cite this precedent to argue for similar preservation orders in their own cases against OpenAI or other AI companies.
The Real Problem: Practical Precedent
This isn't really about formal legal precedent - it's about practical precedent:
- Technical Feasibility: Courts now know AI companies can preserve everything.
- Legal Acceptance: A federal judge found this reasonable and proportionate.
- Risk Aversion: Companies will preserve more data to avoid looking like they're hiding evidence.
- Snowball Effect: Each similar order makes the next one easier to justify.
The Cat May Indeed Be Out of the Bag
The most troubling aspect is that this could become the new normal without any actual legal ruling on the merits. Companies might adopt comprehensive data retention policies just to avoid litigation risk, even though:
- No court has ruled this is legally required.
- No final judgment has been issued.
- The underlying copyright claims haven't been decided.
- The privacy implications haven't been fully considered.
This is precedent by exhaustion - making comprehensive surveillance the path of least legal resistance, without ever actually deciding whether it should be required.
Even if OpenAI wins this case completely, the practical effect on user privacy across the AI industry might be permanent.
So what actually counts as infringing copyrights if we buy OpenAI's argument about fair use?
For instance, if you upload a book or an article to read it, or summarize it, does that count as infringing rights? What if you never post your summaries, or if you do, you post a link to the original?
It’s important we consider the boundaries of fair use and copyright infringement in the context of AI/ChatGPT usage, taking into account different scenarios involving uploading copyrighted content to ChatGPT and what might constitute infringement.
That said, the boundaries are complex and still evolving, especially with AI.
Let’s break down the key factors:
The Fair Use Analysis
Fair use depends on four factors that courts weigh together:
- Purpose and character of use (commercial vs. non-commercial, educational, transformative).
- Nature of the copyrighted work (factual vs. creative, published vs. unpublished).
- Amount and substantiality of the portion used.
- Effect on the market for the original work.
Uploading Content to ChatGPT
The act of uploading itself involves copying, which is a restricted right under copyright law. However, this might be fair use depending on:
Likely Fair Use:
- Personal research/education: Uploading an academic article to get help understanding complex concepts.
- Small portions: Uploading a few pages or excerpts rather than entire works.
- Transformative purpose: Using AI to analyze, critique, or create something new from the work.
Possibly Infringing:
- Commercial use: Uploading content as part of a business workflow.
- Entire works: Uploading complete books, especially recent/creative works.
- Substitution: Using summaries instead of purchasing the original.
Creating and Sharing Summaries
Summaries themselves can be fair use, but it depends on execution:
Likely Fair Use:
- Transformative summaries: Analyzing themes, comparing to other works, adding original commentary.
- Factual extraction: Summarizing news events, scientific findings, or historical facts.
- Brief excerpts: Using short quotes with substantial original analysis.
Possibly Infringing:
- Detailed plot summaries: Especially of creative works like novels or films.
- Verbatim reproduction: Even if "summarized," copying substantial passages word-for-word.
- Market substitution: Creating summaries so detailed they replace reading the original.
Attribution vs. Fair Use
Important misconception: Providing links or attribution does not automatically make something fair use. Attribution is good practice and may help with some factors, but doesn't resolve copyright issues by itself.
The Current Legal Uncertainty
The legal landscape around AI and copyright is still developing. Courts have said fair use of a copyrighted work must generate something new that is "transformative," or that comments on or refers back to an original work. The Times argues that this does not apply to how OpenAI reproduces the paper's original reporting.
Another part of the legal analysis will involve an idea known as "market substitution," referring to whether chatbot answers are a substitute for, say, reading The Times website, or if chatbots and newspapers operate in different marketplaces.
Practical Guidance
Lower Risk Activities:
- Uploading small excerpts for analysis or homework help.
- Creating original analysis that happens to reference copyrighted works.
- Using AI for research where you're adding substantial original thought.
Higher Risk Activities:
- Uploading entire copyrighted works, especially recent creative content.
- Creating detailed summaries that could substitute for the original.
- Commercial use of copyrighted content without licensing.
The Safest Approach:
- Use publicly available/open content when possible.
- Focus on analysis and commentary rather than reproduction.
- Keep uploads to minimal necessary portions.
- Add substantial original value to any outputs.
Remember, this is an evolving area of law, and different courts may reach different conclusions. The ultimate test will likely be whether your use is genuinely transformative and doesn't harm the market for the original work.
Consider us here at The Neuron. We write a newsletter, and newsletters are interesting because they're always linking out and remixing / adding context / putting different sources together to create a new story.
And really if you look at modern day digital journalism, a lot of reporting seems to be re-reporting a story someone else broke. Newsletters do this a lot (arguably it’s the business model for more “news-brief” style newsletters).
The gray area of fair use in newsletters has always fascinated me. Unfortunately, I don’t know any lawyers with tons of free time on their hands, so I chatted about it with Claude (who is quick to remind us they are not a lawyer and this is not legal advice).
Here’s what Claude had to say:
The newsletter economy has indeed created new tensions in fair use that traditional copyright doctrine wasn't really designed to handle.
The Newsletter Fair Use Sweet Spot
Newsletters often operate in a potentially favorable fair use zone because they typically:
Transformative Elements:
- Curation as transformation: Selecting, organizing, and contextualizing multiple sources creates new value.
- Analysis and commentary: Adding original insights, connecting dots between stories.
- Synthesis: Creating a coherent narrative from disparate sources.
- Editorial voice: Providing perspective and interpretation.
Market Dynamics:
- Traffic generation: Often drive readers to original sources rather than substituting for them.
- Different audience/format: Serving busy readers who want curated analysis vs. those seeking primary sources.
- Time-shifted value: Often revisiting stories with additional context or follow-up.
The Re-Reporting Ecosystem
The journalism industry has largely accepted practices that create interesting fair use questions:
Traditional Wire Model: Reuters breaks a story → 500 outlets "rewrite" it with minimal original reporting.
Newsletter Model: Original outlet breaks story → Newsletter excerpts, analyzes, connects to broader trends.
Aggregation Model: Multiple sources → Curated collection with editorial framework.
Gray Areas and Tensions
How Much Excerpting?: There's no magic word count, but newsletters often push boundaries by including substantial quotes or key paragraphs. The question becomes whether the excerpting serves the transformative purpose or substitutes for reading the original.
Commercial vs. Editorial: Many newsletters are subscription businesses, which weighs against fair use, but they're also clearly editorial/commentary, which weighs in favor.
Attribution vs. Permission: Like you noted, linking back is good practice but doesn't automatically create fair use protection. However, it does help with the "market harm" analysis if you're driving traffic.
“Hot News” Misappropriation: This separate doctrine (beyond copyright) can protect breaking news from immediate commercial appropriation, though it's narrow and not always enforced.
Industry Evolution
The newsletter boom has created new economic models that complicate traditional fair use analysis:
Value Creation: Many newsletters genuinely create value through curation and analysis - they're not just free-riding.
Market Complementarity: Often serves different reader needs than original sources.
Attribution Culture: Newsletter culture generally emphasizes crediting sources.
Economic Reality: Original reporting is expensive; analysis and curation are different (but valuable) skills.
Practical Considerations
Here’s what Claude suggested would count as lower risk practices versus higher risk.
Lower Risk Practices:
- Leading with your analysis/take rather than the excerpt.
- Using quotes to support your argument rather than tell the story.
- Linking prominently and early.
- Adding substantial original reporting or context.
- Focusing on factual rather than creative content.
Higher Risk:
- Reproducing complete breaking news stories.
- Excerpting creative or literary journalism extensively.
- Competing directly with subscription content.
- Using quotes as the primary content rather than supporting analysis.
The Unsettled Future
What's fascinating is that the newsletter economy might actually be pushing fair use doctrine in a positive direction. Courts are increasingly recognizing that:
- Linking and attribution matter for market harm analysis.
- Curation and synthesis can be genuinely transformative.
- Different formats can serve different markets without substitution.
But we're still in early days. The OpenAI case could actually provide some guidance on how courts view “transformative” use in the digital age, which could impact newsletter practices too.
The reality: Most newsletter operators are probably in a legally gray area, but one where industry practice, reader expectations, and economic reality all point toward this being legitimate fair use - especially when done thoughtfully with genuine added value.
Now what about writing with generative AI?
It was a fun conversation! But for me, this part was really the question in my mind:
Are all of these still "fair use" if a robot does them?
Transformative Elements:
- Curation as transformation: Selecting, organizing, and contextualizing multiple sources creates new value.
- Analysis and commentary: Adding original insights, connecting dots between stories.
- Synthesis: Creating a coherent narrative from disparate sources.
- Editorial voice: Providing perspective and interpretation.
For instance, let’s say you were a really lazy newsletter creator, and you just had a list of sources, and you had AI read all your sources, and put together an entire newsletter for you. AI did the curation, it added its own analysis and commentary, it synthesized a coherent narrative from all these different sources, and added its own perspective and interpretation, all in one shot.
That can’t possibly be fair use, can it?
As far as we know about the current copyright landscape today, the US Copyright Office, at least, has said that if an AI did all of that, then at a bare minimum, you the human couldn’t copyright it and own it yourself (because “no one” made it, since AI doesn’t have personhood at the moment).
To me, this is the million-dollar question that's going to reshape media, copyright law, and probably the entire knowledge economy:
Can a robot create a transformative work, and if so, can a human own what that robot creates?
The Core Tension
Each of those transformative elements can be done by AI, but whether they legally count as transformative is genuinely unclear. We're in uncharted territory where:
- The output might look identical to human-created transformation.
- The process is fundamentally different (statistical pattern matching vs. human creativity/judgment).
- The legal framework wasn't designed for this scenario.
I also asked Claude for its thoughts on this topic, and here’s what it said (again, caveats, this is not legal advice and Claude is not a lawyer; just think of these as the broad strokes arguments):
Arguments FOR AI Transformation Being Fair Use
Output-Focused Analysis: Courts might focus on whether the final product is transformative, regardless of how it was created. If an AI-generated newsletter genuinely adds new insights, connects disparate sources, and serves a different market need, that could still be fair use.
Human Direction/Curation: The human is still making crucial editorial decisions: what sources to include, what questions to ask, how to frame the analysis. The AI might be a sophisticated tool, but the editorial judgment is still human.
Tool Precedent: We don't question whether using spell-check, grammar software, or research databases affects fair use. Maybe AI is just a more sophisticated version of editorial tools.
Arguments AGAINST AI Transformation Being Fair Use
Lack of Human Creativity: Fair use doctrine often emphasizes human creativity and original expression. If the "transformation" is just algorithmic recombination of training data, courts might not see it as sufficiently transformative.
Scale and Automation: AI can process and "transform" vast amounts of content automatically in ways that might feel more like industrial copying than editorial curation.
Training Data Issues: The AI's ability to "synthesize" might itself depend on copyrighted material in its training data, creating a recursive copyright problem.
But in Practice…
Detection Challenges: How would anyone prove you used AI for analysis vs. writing it yourself? The line between "AI-assisted" and "AI-generated" is often blurry.
(side note: this is where OpenAI’s court order is a big deal; under the current ruling, lawyers could prove you used AI by literally looking at your chat logs).
Hybrid Workflows: Most sophisticated users combine AI capabilities with human judgment; using AI for initial synthesis, then editing, fact-checking, and adding original perspective.
Industry Evolution: Major media companies are already using AI for various tasks. The industry might settle into accepted practices before courts catch up.
What Courts Might Focus On
The Human Element: How much human creativity, judgment, and editorial control is involved? Using AI to help organize sources you selected might be different from asking AI to find and analyze sources autonomously.
Market Impact: Does the AI-assisted content harm the market for original works, or does it serve different needs? This might matter more than the creation process.
Quality of Transformation: Is the output genuinely adding new value and insight, or just automating content appropriation?
My Prediction (yes, this is literally Claude’s prediction)
I suspect courts will eventually develop a spectrum approach:
More Likely Fair Use:
- AI as research/organization tool with substantial human editorial oversight.
- Clear human editorial voice and decision-making.
- Genuine analysis that serves different audience needs.
- Proper attribution and linking.
Less Likely Fair Use:
- Fully automated content generation with minimal human input.
- AI systems trained specifically to mimic particular publications.
- Large-scale automated appropriation of content.
- Output that directly competes with/substitutes for original sources.
The Sweet Spot: AI-enhanced human editorial work where the technology amplifies human judgment rather than replacing it.
The fascinating thing is that this question might be answered by the very case we were discussing. If courts decide that AI training and output can be transformative fair use, that could provide a framework for how AI-assisted journalism gets treated too.
Bottom line: We're probably in a brief window where the legal framework is unsettled, but the practical capabilities are rapidly advancing. The courts will eventually catch up, but in the meantime, the businesses and creators who thoughtfully combine AI capabilities with genuine human editorial judgment are probably in the strongest position.
Our Take:
Remember Napster? Remember how long ago that was? And how they got totally wrecked by their court case? Today, years later, Napster has come and gone, but we have Spotify, and YouTube Music, and Apple Music, and all these ways to share and listen to music for free online.
OpenAI could totally end up like Napster here. It could be the pioneer that proved the demand was there, and showed us the path, but it could also be the necessary sacrificial lamb that gets slaughtered so that we can settle how to navigate the use of genAI online once and for all.
Think about the timeline here: Napster died in 2001, and it wasn't until 2008 that Spotify launched in Europe (2011 in the US). That's a solid 7-10 year gap where the music industry was in chaos, fighting lawsuits, watching CD sales crater, and generally flailing around trying to sue their way back to the old days. Sound familiar? We're probably in year 2 of that same cycle with AI.
In fact, if you think about it, the current landscape, where AI slop is flooding social media feeds in this sort of unfiltered wave, is kind of like how LimeWire and Napster and all the peer-to-peer music sites flooded your PC with MP3s back in the day.
And man, if you remember those LimeWire days, you know exactly what I'm talking about. Half the MP3s were mislabeled ("Definitely_Led_Zeppelin_Stairway.mp3" turns out to be some garage band from Toledo), recorded off the radio with DJ chatter still in them, or came packaged with enough viruses to brick your Gateway desktop. Today's AI slop feels eerily similar: content that's technically what you asked for but somehow always a little off, a little broken, definitely not quite trustworthy.
These days, your music data is much more neatly organized, in one little library on this platform or that platform. It’s not even yours anymore; you’re just renting it. But it’s free, with ads. Or $12.99 without.
I could see the outcome of all this going a similar route. Where today, we’re being flooded with AI slop because the rules aren’t set in place. But sooner or later, the guardrails will go up, the bad actors will get their wrists slapped, maybe the lamb will even get sliced and diced, but then we end up with a system where…
- You can buy “data” for your feeds (such as a NYT article) at 25 cents a pop, like songs on iTunes.
- You and your AI will access all your licensed content through a single platform library, like Netflix.
- Or you and the AI can just read and access everything online, point blank, period, in some sort of metered fashion: based on the number of queries you send a month, you get a bill proportional to how much it cost your AI to access all that data.
That third point probably ends up somewhere close to the "YouTube model," where content creators get micropayments every time an AI system references their work (imagine getting 0.0001 cents every time ChatGPT quotes your article to answer someone's question). Because Google is likely to be the ultimate winner in the AI battle, I think the YouTube model is pretty likely. Arguably, Google should right now be paying websites a cut of the ad revenue generated every time their data shows up in the AI Overviews box (if they aren't already), just like YouTubers get paid a cut of the ad revenue on their videos.
The payment model is baked into their business already; they just aren't applying it equally everywhere.
Now, why do I think Google will win once the copyright dust settles? Because...
- They already own all your data and won't suffer any brand reputation damage from this move.
- They have billions of dollars to pay out or settle with, if required.
- Because of regulatory capture, they can benefit from complying with any new regulations that kill off startup threats like OpenAI.
So what happens to publishers and digital media if we all move to the YouTube model?
Let's look at the past to forecast the future. The music industry spent years fighting streaming like it was the apocalypse. And for many artists, it kind of was. Sure, streaming, both ad-supported and subscription-based, accounted for 83% of music industry revenues in the U.S. during 2022, but here's what they don't tell you: the industry still hasn't recovered to its inflation-adjusted CD-era peak. Inflation-adjusted music revenue peaked in 1999 at $23.7 billion at a time when the CD was also in its prime. That year CD sales alone amounted to $21 billion, more than twice the recording industry's total revenue for 2018.
The real story? Record labels figured out how to make streaming work for them (taking 46% of streaming revenue while artists get 7%). But individual artists? They're getting absolutely crushed. You need 229 Spotify streams to make what you used to make from selling a single CD. No wonder just 0.4% of artists on streaming platforms earned enough from streaming to make a living.
Now apply that same dynamic to publishers and digital media. In a YouTube-style AI world, The New York Times might get 0.0001 cents every time their content gets referenced. Multiply that by millions of queries, and the big publishers might survive (just like major record labels did). But what about smaller publishers, independent journalists, newsletter writers like us? We'll be fighting for fractions of pennies while Google takes its 45% cut (or whatever they negotiate).
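For a sense of scale, here's that arithmetic in runnable form. Both numbers are the illustrative guesses from above (0.0001 cents per reference, a YouTube-style ~45% platform cut), not real rates:

```python
# Back-of-the-envelope payout math for the hypothetical "YouTube model" above.
RATE_CENTS_PER_REFERENCE = 0.0001  # assumed publisher earnings per AI citation
PLATFORM_CUT = 0.45                # assumed platform take, YouTube-style

def monthly_payout_dollars(references: int) -> float:
    """Creator's share in dollars after the platform takes its cut."""
    gross_cents = references * RATE_CENTS_PER_REFERENCE
    return gross_cents * (1 - PLATFORM_CUT) / 100

# Even a billion monthly references nets just $550 at this rate...
print(monthly_payout_dollars(1_000_000_000))  # 550.0
# ...and a newsletter cited 100,000 times a month earns about a nickel.
print(monthly_payout_dollars(100_000))        # 0.055
```

At those rates, only publishers cited at truly enormous volume clear real money, which is the record-label-versus-artist dynamic all over again.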
Everything Becomes Royalties
My prediction is that everything has to switch to a royalty/licensing model. If you create stuff online, and a platform hosts/serves what you create to others, you should get a cut. Arguably, you should get MOST of the cut. The Googles and YouTubes and app stores of the world should just take a flat 2% or 5% like the credit card companies and call it a day. Instead, today the platforms are the ones collecting the advertisers' money, when it should be the creators collecting it (in the newsletter industry and trad media, that's how it worked). However, even if nothing changed from the current model, percentages being what they are, applying the current model equally everywhere would be fine too.
This might ultimately push all big platforms away from user-generated content for liability (and payment) purposes, though.
We're already seeing hints of this future. As Andrej Karpathy points out, AI-generated video is becoming a "differentiable process" where you can optimize for any objective—engagement, clicks, even pupil dilation. Soon, platforms won't need to index finite sets of human-created content when they can generate infinite content optimized directly for engagement. The implications are both exciting and terrifying.
But that's a tangent for another day. The point is: we're heading toward a world where every piece of content, every creative work, every bit of information becomes part of a massive tokenized streaming system, and that system needs a correspondingly massive royalty-tracking system. Just like Spotify knows exactly how many times you played that song, future AI platforms will need to track every time your content gets referenced, remixed, or synthesized.
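What would that tracking even look like? At its core, something as boring as a ledger that counts reference events per work, then settles up each month. A toy sketch (hypothetical design with made-up work IDs, nobody's real system):

```python
# Toy royalty-tracking ledger: count every time an AI answer references a work.
from collections import Counter

reference_ledger = Counter()

def record_reference(work_id: str) -> None:
    """Called whenever an AI answer quotes, remixes, or synthesizes a work."""
    reference_ledger[work_id] += 1

# Simulate a month of AI answers citing various works.
for work in ["nyt/2025/openai-suit", "neuron/issue-512", "nyt/2025/openai-suit"]:
    record_reference(work)

# Settlement time: pay each creator per logged reference.
for work_id, count in reference_ledger.most_common():
    print(f"{work_id}: {count} references this month")
```

The hard part isn't the counting; it's agreeing on what counts as a “reference” when the output is synthesized from a thousand sources at once.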
Call it the token-ification (streaming-ification?) of all human knowledge. OpenAI showed us the path with their credit system. Whether they're the vehicle that takes us there remains to be seen. Hope you like micropayments.
Maybe the crypto bros were right all along... they were just early.
Agentic Tokenization = web3?!
As for the privacy...
So here we are. A copyright lawsuit about newspaper articles has somehow morphed into a privacy nightmare for millions of ChatGPT users. Your deleted conversations aren't deleted. Your temporary chats are permanent. And somewhere in a discovery folder, lawyers might be reading about that time you asked ChatGPT to help you understand your weird rash at 3 AM.
The most frustrating part? This feels inevitable in hindsight. Of course the first major AI copyright case would set terrible precedents for user privacy. Of course companies would start preserving everything "just in case." Of course we'd end up in a world where the delete button is basically decorative.
But if the Napster parallel holds, we're still in the messy middle chapter. The one where everyone's panicking, the rules make no sense, and the technology is way ahead of the law. Remember, it took nearly a decade for the music industry to go from "sue everyone" to "here's Spotify."
For AI, that probably means:
- OpenAI might not survive in its current form (RIP Napster).
- The winners will be the companies with the deepest pockets and best lawyers (hello, Google).
- Users will eventually get a sanitized, corporatized version that's somehow both more convenient and less private.
- We'll look back on these wild west days with a mix of nostalgia and horror.
Until then? Maybe think twice before telling ChatGPT your deepest secrets. Or at least assume that conversation might end up in a court filing someday. Because in 2025, apparently nothing is truly private—not even the conversations you thought you deleted.
P.P.S.: If you're a lawyer with tons of time on your hands and want to correct / add your two cents to this article, email us! Grant@theneurondaily.com will get you the fastest response. We'll totally attribute you... but no royalties yet, unfortunately :(