
February 18, 2025

As Walt Whitman once wrote, The New York Times is large. It contains multitudes.
One part of the company is suing OpenAI and Microsoft for training their large language models on Times content. It seeks “billions of dollars in statutory and actual damages” for the companies’ “use of The Times’s uniquely valuable works.”
But as Semafor reported Monday, the newsroom is on board with using AI in the story production process — some AI tools, at least. And the green-lit list includes models from…OpenAI and Microsoft. Max Tani:
The New York Times is greenlighting the use of AI for its product and editorial staff, saying that internal tools could eventually write social copy, SEO headlines, and some code. In messages to newsroom staff, the company announced that it's opening up AI training to the newsroom, and debuting a new internal AI tool called Echo to staff, Semafor has learned. The Times also shared documents and videos laying out editorial do's and don'ts for using AI, and shared a suite of AI products that staff could now use to develop web products and editorial ideas.
The allowed external tools include “GitHub Copilot programming assistant for coding, Google’s Vertex AI for product development, NotebookLM, the NYT’s ChatExplorer, some Amazon AI products, and OpenAI’s non-ChatGPT API through the New York Times’ business account (only with approval from the company’s legal department).”
Swapping the ChatGPT interface for OpenAI’s API doesn’t change what the underlying LLM was trained on — which includes a huge amount of what the legal side of the Times argues is not-to-be-used copyrighted material. Google’s NotebookLM is based on its Gemini models, for which it’s also facing lawsuits over scraping copyrighted material.
And GitHub Copilot is a product of Microsoft, the Times' other legal opponent. The Times even singled out Copilot for criticism in its lawsuit, saying it seeks "to free-ride on The Times's massive investment in its journalism by using it to build substitutive products without permission or payment." (Though to be fair, Microsoft has rebranded its AI tools roughly 384 times since December 2023, and that was a somewhat different product. It would be surprising if GitHub Copilot — a tool for programmers — was trained on viral Times recipes for green pea guacamole. But it is facing similar lawsuits from other coders.)
Among the things the Times suggests using AI for, according to Tani:
…to generate SEO headlines, summaries, and audience promos; suggest edits; brainstorm questions and ideas and ask questions about reporters’ own documents; engage in research; and analyze the Times’ own documents and images. In a training video shared with staff, the Times suggested using AI to come up with questions to ask the CEO of a startup during an interview. Times guidelines also said it could use AI to develop news quizzes, social copy, quote cards, and FAQs.
For the record, I think these are fine journalistic uses of AI. Current LLMs are nowhere near accurate enough to reliably produce news copy meant for humans. They make stuff up far too often. But they can be extremely useful for analyzing documents, brainstorming ideas, summarizing texts, and a host of other tasks that happen during the reporting and writing process, where a journalist can evaluate and refine the output. The new generation of "deep research" models looks much improved for a lot of journalism tasks, though it's still slow and expensive. And they'll keep getting better. A smart news organization should be open to using tools where they can help — and avoiding them where they can't. That's true no matter what your legal strategy is.
The @nytimes sends a signal to all journalism educators:
If you want to work there in the future, you’ll need to know how to use AI ethically + productively.
So ethical + professional AI usage needs to be integrated into journalism curricula (now).https://t.co/5xFdTFDLKF
— Michael Socolow (@MichaelSocolow) February 17, 2025
The desire to implement AI tools seems (to me) well ahead of where the products actually are. Most people I know that work with AI on a regular basis say that any time savings is lost to editing and fact checking the outputs. https://t.co/DKr4ntvUOP
— Evan DeSimone (@MediaEvan) February 17, 2025
The NY Times is going all-in on AI—on its own terms.
It’s rolling out AI tools for headlines, summaries, and interview prep.
The irony?
They’re suing OpenAI for training on their content—while paying to use the same AI models that trained on everyone’s data to power their newsroom.
The New York Times is accelerating on AI. Some examples of prompts that journalists will be able to use [translated from French] https://t.co/zZRJFZinNM pic.twitter.com/HWTdHFvuCp
— Jean-Noël Buisson (@jnbuisson) February 17, 2025
Countless billions of investments and millions of credulous articles yet GenAI boosters still can’t really figure out any real uses… https://t.co/GuHfqejRNU pic.twitter.com/FeoOM6TRFb
— Lincoln Michel (@TheLincoln) February 17, 2025
“Can you revise this paragraph to make it tighter?”
The New York Times goes all-in on internal AI tools, by @MaxwellTani https://t.co/W6HGbSgxcc
“We view the technology not as some magical solution but as a powerful tool that, like many technological advances before, may be…
— Stephen Landry (@landryst) February 17, 2025
Beyond the fact that this is stupid because AI hallucinates and can't even be a reliable source for how to cook eggs, much less understand politics: this also puts the power of the press in whoever owns and builds the AI tools. Cancel your NYT subs. www.semafor.com/article/02/1…
— Jon Neimeister (@andantonius.bsky.social) February 17, 2025 at 11:29 AM
Great job Joshua Benton & the team at Nieman Lab for sharing this story.