Forget Lawsuits. Build Licensing Infrastructure Instead.
Over the past year, publishers have taken an increasingly assertive stance against unlicensed use of their content by generative AI systems. Lawsuits from major news organizations, including The New York Times, have triggered a wave of scrutiny around how large language models are built, fine-tuned, and deployed. These legal actions are grounded in real concerns: AI models are trained on high-quality content that required investment and expertise to create, yet the creators are often excluded from any economic benefit, credit, or even visibility.
In the last six months alone:
- The News/Media Alliance, representing over 2,000 publishers, filed a formal brief opposing Cohere’s motion to dismiss, citing widespread, systematic use of copyrighted material without consent.
- Ziff Davis (owner of CNET, PCMag, and IGN) filed suit against OpenAI, alleging unauthorized ingestion of its digital content despite explicit exclusions.
- A judge certified a class action against Anthropic, brought by authors who claim their books were used without license.
- In the Southern District of New York, the court overseeing The New York Times case issued a preservation order requiring OpenAI to retain user logs that could reveal infringing generations (an order that raises privacy issues of its own).
Some applications have begun to experiment with basic forms of attribution: simple source mentions, inline citations, or expandable links that gesture toward provenance. These steps are welcome, and clearly better than complete erasure. But they often fall short of what publishers value most: engaged audiences, contextual credit, and meaningful traffic. As multi-source synthesis becomes the dominant mode of output, attribution grows increasingly disconnected from the answers it supports, pushed to footnotes, hover states, and other low-engagement formats. The result is a growing sense that value is being siphoned away, even when sources are nominally acknowledged.
Legal pressure can clarify boundaries. It can create case law. In some instances, it may be the only recourse. But as a long-term strategy for sustaining media in the age of AI, litigation alone is unlikely to deliver the results publishers need.
Copyright law, as it stands, was not designed for real-time, probabilistic systems that summarize and synthesize human expression at scale. Lawsuits may challenge training practices, but they offer few remedies for the way AI systems are already embedded into search, assistants, and everyday workflows. These systems increasingly provide answers rather than links, and they do so using content that has been written, curated, and maintained by others. Winning a court case might set a precedent. But it will not build the systems required to ensure that permission, attribution, and compensation are part of the default behavior of AI systems.
What is needed now is not just a legal strategy, but a technical one. The internet has always worked best when values are embedded into protocols and infrastructure. If we want publishers and creators to participate in the AI economy, we must give them systems that make that participation practical, reliable, and scalable.
This is where licensing infrastructure becomes essential.
Licensing infrastructure allows content owners to declare, in machine-readable terms, how their work may be accessed, by whom, and for what purpose. It enables structured negotiations between publishers and AI operators. It also supports enforcement mechanisms that function in real time, rather than through slow, retroactive claims.
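To make that concrete, here is a minimal sketch of what such a machine-readable declaration might look like, written in TypeScript. The schema, field names, and use-case vocabulary below are illustrative assumptions, not an established standard.

```typescript
// Hypothetical schema for a machine-readable content usage policy.
// Field names and the use-case vocabulary are illustrative, not a standard.

type UseCase = "train-ai" | "search-index" | "inference" | "summarize";

interface UsagePolicy {
  publisher: string;           // who owns the content
  appliesTo: string;           // URL pattern the policy covers
  rules: Array<{
    useCase: UseCase;          // what the accessing agent wants to do
    allowed: boolean;          // default permission for that use
    licenseRequired?: boolean; // true => terms must be negotiated first
    termsUrl?: string;         // where to obtain a license
  }>;
}

// Example: permit search indexing, require a license for training and inference.
const policy: UsagePolicy = {
  publisher: "example-news.com",
  appliesTo: "/articles/*",
  rules: [
    { useCase: "search-index", allowed: true },
    { useCase: "train-ai", allowed: false, licenseRequired: true, termsUrl: "https://example-news.com/license" },
    { useCase: "inference", allowed: false, licenseRequired: true, termsUrl: "https://example-news.com/license" },
  ],
};
```

A declaration like this can travel with the content itself, so any compliant agent can evaluate it before fetching or reusing a page.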
Several promising building blocks are already emerging. The IETF AI Preferences draft provides a vocabulary for expressing allowed use cases, such as training, indexing, or real-time inference. The IAB Tech Lab has proposed standards for cost-per-crawl monetization and authenticated API access. Creative Commons has introduced new signaling for ethical reuse, encouraging credit, reciprocity, and ecosystem support.
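The IETF draft is still evolving, so the exact wire format is not settled. As a rough illustration only, a preference string in the spirit of that vocabulary might be parsed like this; the token syntax shown here is an assumption, not the draft’s actual format.

```typescript
// Parse a hypothetical usage-preference string such as:
//   "train-ai=n, search-index=y, inference=n"
// The token syntax is an assumption loosely inspired by the IETF AI
// Preferences work, not the draft's actual wire format.

function parsePreferences(header: string): Map<string, boolean> {
  const prefs = new Map<string, boolean>();
  for (const part of header.split(",")) {
    const [key, value] = part.trim().split("=");
    if (key && (value === "y" || value === "n")) {
      prefs.set(key, value === "y");
    }
  }
  return prefs;
}

// An AI crawler could check its intended use before ingesting content.
const prefs = parsePreferences("train-ai=n, search-index=y, inference=n");
console.log(prefs.get("train-ai")); // false => training not permitted
```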
Importantly, infrastructure providers are now beginning to implement these concepts in practice. Cloudflare, for example, recently launched tools that give publishers analytics on bot behavior and enable them to block or rate-limit specific AI crawlers. These developments show that the underlying mechanisms for policy enforcement—such as visibility, logging, and access control—are progressing in parallel with standards development. This alignment is essential if the emerging frameworks are to become enforceable at scale.
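To illustrate the enforcement side, here is a simplified sketch in the style of a Cloudflare Worker that denies known AI crawlers by default. The user-agent tokens are real crawler names, but the handling logic is illustrative rather than a description of Cloudflare’s actual product behavior.

```typescript
// Sketch of edge-side enforcement in a Cloudflare Worker-style handler.
// The user-agent substrings identify some known AI crawlers; the
// enforcement choices (block vs. pass through) are illustrative.

const AI_CRAWLERS = ["GPTBot", "ClaudeBot", "CCBot", "PerplexityBot"];

export default {
  async fetch(request: Request): Promise<Response> {
    const ua = request.headers.get("user-agent") ?? "";
    const isAiCrawler = AI_CRAWLERS.some((bot) => ua.includes(bot));

    if (isAiCrawler) {
      // Deny by default; a licensed crawler would present credentials instead.
      return new Response("AI crawling requires a license. See /license.", {
        status: 403,
        headers: { "content-type": "text/plain" },
      });
    }
    // Human traffic and permitted bots pass through to the origin.
    return fetch(request);
  },
};
```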
These standards matter. But by themselves, they are incomplete. Vocabulary without enforcement remains voluntary. Ethics without infrastructure cannot be audited. Protocols without deployment do not change outcomes.
What makes licensing infrastructure viable is the ability to integrate these ideas into real systems that operate at global scale.
At paywalls.net, we are building this layer. We provide tools that allow publishers to express usage preferences, detect access by AI agents, enforce intent-based rules, and monetize usage. These tools are built using standards-aligned, infrastructure-native components. We work with CDNs, proxies, and server environments to ensure that policies can be applied at the point of access, with minimal latency and full transparency.
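As a rough sketch of how intent-based enforcement fits together (reusing the hypothetical UsagePolicy type from the earlier example, and simplified well beyond our production implementation), a policy-aware handler can respond with licensing terms instead of a hard block:

```typescript
// Illustrative intent-based enforcement: combine a declared policy with
// the agent's stated use case, and surface license terms rather than a
// blanket denial. A simplified sketch, not a production implementation.

function enforce(policy: UsagePolicy, useCase: UseCase): Response {
  const rule = policy.rules.find((r) => r.useCase === useCase);

  if (!rule || rule.allowed) {
    return new Response("OK: access permitted", { status: 200 });
  }
  if (rule.licenseRequired && rule.termsUrl) {
    // 402 Payment Required signals that licensed access is available.
    return new Response("License required for this use case.", {
      status: 402,
      headers: { "x-license-terms": rule.termsUrl }, // hypothetical header
    });
  }
  return new Response("This use case is not permitted.", { status: 403 });
}

// Example: an agent declares an inference intent against the policy above.
const response = enforce(policy, "inference");
console.log(response.status); // 402 => fetch terms, negotiate, retry with a key
```

The key design choice is that a denial carries a path to permission: rather than a dead end, the agent receives the terms under which access becomes legitimate.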
This approach does not rely on black-box filters or adversarial blocking. It is built on open standards, auditable systems, and clear contractual agreements. It allows publishers to participate in the emerging AI economy without sacrificing control. It also gives AI companies a reliable and scalable way to license high-quality content under defined terms.
The future of content licensing will not be defined by individual court victories. It will be shaped by the systems we choose to build, and by our ability to make those systems align the interests of creators, platforms, and the public.
If we want AI systems to benefit from the richness of human knowledge, we must ensure that those who produce that knowledge are included in the value chain. Not occasionally. Not through litigation alone. But by default.
That outcome is achievable. The technical groundwork is being laid. The standards are in motion. What remains is to connect the pieces and to invest in the infrastructure that can support this shift at scale.
If you are a publisher, technologist, or platform architect navigating this transition, we invite you to collaborate. There is still time to shape the terms of AI–content interaction—not just through legal arguments, but through durable infrastructure that encodes fairness from the start.
Let’s build the infrastructure that enables a healthy ecosystem—one where creators are rewarded, platforms are accountable, and AI systems can serve society without undermining the people and institutions who inform it.