On March 18th, the New York Post ran a striking headline: “Google, OpenAI want ‘license to steal’ from publishers with AI proposals, newspapers warn in scathing editorial.” The article captured a rising wave of concern among news organizations and content creators, who believe that tech giants are pressuring the U.S. government to shut down copyright lawsuits through executive intervention. The companies’ argument rests on national urgency: unless these legal challenges are halted, the U.S. risks falling behind in the global AI race, particularly to China. Critics, however, see the effort as a backdoor attempt to sidestep copyright protections and win sweeping permission to use creative works without consent. At the heart of the conflict is “fair use,” the legal doctrine AI companies invoke to justify training their models on vast amounts of publicly available, but often copyrighted, content.
In this article, we’ll explore the heart of the issue. What exactly is the “fair use” argument the AI industry is leaning on? What technical realities make this defense far more fragile than it seems, especially in the age of generative AI? Why are current legal systems unable to keep pace with the scale and ambiguity of this disruption? And finally, could Europe’s opt-out copyright regime offer a temporary fix, or is the only real answer a coordinated global framework? We’ll unpack these questions in plain language, with a minimum of legal and technical jargon, so that any reader can grasp what’s really at stake.