davideisinger.com/static/archive/www-newyorker-com-bzani5.txt
David Eisinger d20237ca5c Add links
2025-09-08 22:40:59 -04:00

The New Yorker
Animation of a ball climbing an infinite staircase.
[19]Open Questions
What if A.I. Doesn't Get Much Better Than This?
GPT-5, a new release from OpenAI, is the latest product to suggest that
progress on large language models has stalled.
By [20]Cal Newport
August 12, 2025
Illustration by Shira Inbar
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
For this week's Open Questions column, Cal Newport is filling in for Joshua
Rothman.
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
Much of the euphoria and dread swirling around today's artificial-intelligence
technologies can be traced back to January, 2020, when a team of researchers at
OpenAI published a thirty-page [23]report titled “Scaling Laws for Neural
Language Models.” The team was led by the A.I. researcher Jared Kaplan, and
included Dario Amodei, who is now the C.E.O. of Anthropic. They investigated a
fairly nerdy question: What happens to the performance of language models when
you increase their size and the intensity of their training?
Back then, many machine-learning experts thought that, after they had reached a
certain size, language models would effectively start memorizing the answers to
their training questions, which would make them less useful once deployed. But
the OpenAI paper argued that these models would only get better as they grew,
and indeed that such improvements might follow a power law—an aggressive curve
that resembles a hockey stick. The implication: if you keep building larger
language models, and you train them on larger data sets, they'll start to get
shockingly good. A few months after the paper, OpenAI seemed to validate the
scaling law by releasing GPT-3, which was ten times larger—and leaps and bounds
better—than its predecessor, GPT-2.
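The shape of the curve the paper describes can be sketched in a few lines of code. The constants below are in the ballpark of the fit the paper reports for model size, but treat this as an illustration of a power law's shape, not a reproduction of the paper's analysis:

```python
# Toy sketch of a scaling-law relation between model size and loss:
# L(N) = (N_c / N) ** alpha. The constants roughly echo the paper's
# reported fit; this is an illustration, not the real analysis.
def scaling_loss(n_params, n_c=8.8e13, alpha=0.076):
    """Predicted test loss for a model with n_params parameters."""
    return (n_c / n_params) ** alpha

# Each tenfold increase in size multiplies the loss by the same factor,
# which is why the curve is a straight line on a log-log plot.
for n in (1e8, 1e9, 1e10, 1e11):
    print(f"{n:.0e} parameters -> predicted loss {scaling_loss(n):.3f}")
```

Note that the curve never flattens: taken literally, it promises that more scale always helps, which is exactly the bet the industry went on to make.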
Suddenly, the theoretical idea of artificial general intelligence, which
performs as well as or better than humans on a wide variety of tasks, seemed
tantalizingly close. If the scaling law held, A.I. companies might achieve
A.G.I. by pouring more money and computing power into language models. Within a
year, [24]Sam Altman, the chief executive at OpenAI, published a blog post
titled “Moore's Law for Everything,” which argued that A.I. will take over
“more and more of the work that people now do” and create unimaginable wealth
for the owners of capital. “This technological revolution is unstoppable,” he
wrote. “The world will change so rapidly and drastically that an equally
drastic change in policy will be needed to distribute this wealth and enable
more people to pursue the life they want.”
It's hard to overstate how completely the A.I. community came to believe that
it would inevitably scale its way to A.G.I. In 2022, Gary Marcus, an A.I.
entrepreneur and an emeritus professor of psychology and neural science at
N.Y.U., pushed back on Kaplan's paper, noting that “the so-called scaling laws
aren't universal laws like gravity but rather mere observations that might not
hold forever.” The negative response was fierce and swift. “No other essay I
have ever written has been ridiculed by as many people, or as many famous
people, from Sam Altman and Greg Brockman to Yann LeCun and Elon Musk,” Marcus
later reflected. He recently told me that his remarks essentially
“excommunicated” him from the world of machine learning. Soon, ChatGPT would
reach a hundred million users faster than any digital service in history; in
March, 2023, OpenAI's next release, GPT-4, vaulted so far up the scaling curve
that it inspired a Microsoft research paper titled “Sparks of Artificial
General Intelligence.” Over the following year, venture-capital spending on
A.I. jumped by eighty per cent.
After that, however, progress seemed to slow. OpenAI did not unveil a new
blockbuster model for more than two years, instead focussing on specialized
releases that became hard for the general public to follow. Some voices within
the industry began to wonder if the A.I. scaling law was starting to falter.
“The 2010s were the age of scaling, now we're back in the age of wonder and
discovery once again,” Ilya Sutskever, one of the company's founders, told
Reuters in November. “Everyone is looking for the next thing.” A
contemporaneous TechCrunch article summarized the general mood: “Everyone now
seems to be admitting you can't just use more compute and more data while
pretraining large language models and expect them to turn into some sort of
all-knowing digital god.” But such observations were largely drowned out by the
headline-generating rhetoric of other A.I. leaders. “A.I. is starting to get
better than humans at almost all intellectual tasks,” Amodei recently told
Anderson Cooper. In an interview with Axios, he predicted that half of
entry-level white-collar jobs might be “wiped out” in the next one to five
years. This summer, both Altman and [25]Mark Zuckerberg, of Meta, claimed that
their companies were close to developing superintelligence.
Then, last week, OpenAI finally released GPT-5, which many had hoped would
usher in the next significant leap in A.I. capabilities. Early reviewers found
some features to like. When a popular tech YouTuber, Mrwhosetheboss, asked it
to create a chess game that used Pokémon as pieces, he got a significantly
better result than when he used o4-mini-high, an industry-leading coding
model; he also discovered that GPT-5 could write a more effective script for
his YouTube channel than GPT-4o. Mrwhosetheboss was particularly enthusiastic
that GPT-5 will automatically route queries to a model suited for the task,
instead of requiring users to manually pick the model they want to try. Yet he
also learned that GPT-4o was clearly more successful at generating a YouTube
thumbnail and a birthday-party invitation—and he had no trouble inducing GPT-5
to make up fake facts. Within hours, users began expressing disappointment with
the new model on the r/ChatGPT subreddit. One post called it the “biggest piece
of garbage even as a paid user.” In an Ask Me Anything (A.M.A.) session, Altman
and other OpenAI engineers found themselves on the defensive, addressing
complaints. Marcus summarized the release as “overdue, overhyped and
underwhelming.”
In the aftermath of GPT-5's launch, it has become more difficult to take
bombastic predictions about A.I. at face value, and the views of critics like
Marcus seem increasingly moderate. Such voices argue that this technology is
important, but not poised to drastically transform our lives. They challenge us
to consider a different vision for the near-future—one in which A.I. might not
get much better than this.
OpenAI didn't want to wait nearly two and a half years to release GPT-5.
According to The Information, by the spring of 2024, Altman was telling
employees that their next major model, code-named Orion, would be significantly
better than GPT-4. By the fall, however, it became clear that the results were
disappointing. “While Orion's performance ended up exceeding that of prior
models,” The Information reported in November, “the increase in quality was far
smaller compared with the jump between GPT-3 and GPT-4.”
Orion's failure helped cement the creeping fear within the industry that the
A.I. scaling law wasn't a law after all. If building ever-bigger models was
yielding diminishing returns, the tech companies would need a new strategy to
strengthen their A.I. products. They soon settled on what could be described as
“post-training improvements.” The leading large language models all go through
a process called pre-training in which they essentially digest the entire
internet to become smart. But it is also possible to refine models later, to
help them better make use of the knowledge and abilities they have absorbed.
One post-training technique is to apply a machine-learning tool, reinforcement
learning, to teach a pre-trained model to behave better on specific types of
tasks. Another enables a model to spend more computing time generating
responses to demanding queries.
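One concrete form of the second technique is best-of-N sampling: draw several candidate answers and keep the one that a scoring model rates highest. The sketch below uses a seeded random generator as a stand-in for both the language model and the scorer; the function names are hypothetical, not any lab's actual pipeline:

```python
import random

def generate(prompt, rng):
    """Stand-in for sampling one answer from a language model."""
    quality = rng.random()  # pretend this measures how good the answer is
    return (f"candidate answer to {prompt!r}", quality)

def score(candidate):
    """Stand-in for a verifier or reward model rating an answer."""
    return candidate[1]

def best_of_n(prompt, n, seed=0):
    """Spend n times the inference compute; keep the highest-scoring answer."""
    rng = random.Random(seed)
    return max((generate(prompt, rng) for _ in range(n)), key=score)
```

With the model held fixed, drawing more candidates can only raise the best score, never lower it; that monotonicity is the appeal of spending extra compute at inference time rather than on more pre-training.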
A useful metaphor here is a car. Pre-training can be said to produce the
vehicle; post-training soups it up. In the scaling-law paper, Kaplan and his
co-authors predicted that as you expand the pre-training process you increase
the power of the cars you produce; if GPT-3 was a sedan, GPT-4 was a sports
car. Once this progression faltered, however, the industry turned its attention
to helping the cars it had already built perform better. Post-training
techniques turned engineers into mechanics.
Tech leaders were quick to express a hope that a post-training approach would
improve their products as quickly as traditional scaling had. “We are seeing
the emergence of a new scaling law,” Satya Nadella, the C.E.O. of Microsoft,
said at a conference last fall. The venture capitalist Anjney Midha similarly
spoke of a “second era of scaling laws.” In December, OpenAI released o1, which
used post-training techniques to make the model better at step-by-step
reasoning and at writing computer code. Soon the company had unveiled o3-mini,
o3-mini-high, o4-mini, o4-mini-high, and o3-pro, each of which was souped up
with a bespoke combination of post-training techniques.
Other A.I. companies pursued a similar pivot. Anthropic experimented with
post-training improvements in a February release of Claude 3.7 Sonnet, and then
made them central to its Claude 4 family of models. [26]Elon Musk's xAI
continued to chase a scaling strategy until its wintertime launch of Grok 3,
which was pre-trained on an astonishing 100,000 H100 G.P.U. chips—many times
the computational power that was reportedly used to train GPT-4. When Grok 3
failed to outperform its competitors significantly, the company embraced
post-training approaches to develop Grok 4. GPT-5 fits neatly into this
trajectory. It's less a brand-new model than an attempt to refine recent
post-trained products and integrate them into a single package.
Has this post-training approach put us back on track toward something like
A.G.I.? OpenAI's announcement for GPT-5 included more than two dozen charts and
graphs, on measures such as “Aider Polyglot Multi-language code editing” and
“ERQA Multimodal spatial reasoning,” to quantify how much the model outperforms
its predecessors. Some A.I. benchmarks capture useful advances. GPT-5 scored
higher than previous models on benchmarks focussed on programming, and early
reviews seemed to agree that it produces better code. New models also write in
a more natural and fluid way, and this is reflected in the benchmarks as well.
But these changes now feel narrow—more like the targeted improvements you'd
expect from a software update than like the broad expansion of capabilities in
earlier generative-A.I. breakthroughs. You didn't need a bar chart to recognize
that GPT-4 had leaped ahead of anything that had come before.
Other benchmarks might not measure what they claim. Starting with the release
of o1, A.I. companies have touted progress on measures of step-by-step
reasoning. But in June Apple researchers released a paper titled “The Illusion
of Thinking,” which found that state-of-the-art “large reasoning models”
demonstrated “performance collapsing to zero” when the complexity of puzzles
was extended beyond a modest threshold. Reasoning models, which include
o3-mini, Claude 3.7 Sonnet's “thinking” mode, and DeepSeek-R1, “still fail to
develop generalizable problem-solving capabilities,” the authors wrote. Last
week, researchers at Arizona State University reached an even blunter
conclusion: what A.I. companies call reasoning “is a brittle mirage that
vanishes when it is pushed beyond training distributions.” Beating these
benchmarks is different from, say, reasoning through the types of daily
problems we face in our jobs. “I don't hear a lot of companies using A.I.
saying that 2025 models are a lot more useful to them than 2024 models, even
though the 2025 models perform better on benchmarks,” Marcus told me.
Post-training improvements don't seem to be strengthening models as thoroughly
as scaling once did. A lot of utility can come from souping up your Camry, but
no amount of tweaking will turn it into a Ferrari.
I recently asked Marcus and two other skeptics to predict the impact of
generative A.I. on the economy in the coming years. “This is a
fifty-billion-dollar market, not a trillion-dollar market,” Ed Zitron, a
technology analyst who hosts the “Better Offline” podcast, told me. Marcus
agreed: “A fifty-billion-dollar market, maybe a hundred.” The linguistics
professor Emily Bender, who co-authored a well-known critique of early language
models, told me that “the impacts will depend on how many in the management
class fall for the hype from the people selling this tech, and retool their
workplaces around it.” She added, “The more this happens, the worse off
everyone will be.” Such views have been portrayed as unrealistic—Nate Silver
once replied to an Ed Zitron tweet by writing, “old man yells at cloud
vibes”—while we readily accepted the grandiose visions of tech C.E.O.s. Maybe
that's starting to change.
If these moderate views of A.I. are right, then in the next few years A.I.
tools will make steady but gradual advances. Many people will use A.I. on a
regular but limited basis, whether to look up information or to speed up
certain annoying tasks, such as summarizing a report or writing the rough draft
of an event agenda. Certain fields, like programming and academia, will change
dramatically. A minority of professions, such as voice acting and social-media
copywriting, might essentially disappear. But A.I. may not massively disrupt
the job market, and more hyperbolic ideas like superintelligence may come to
seem unserious.
Continuing to buy into the A.I. hype might bring its own perils. In a [27]
recent article, Zitron pointed out that about thirty-five per cent of U.S.
stock-market value—and therefore a large share of many retirement portfolios—is
currently tied up in the so-called Magnificent Seven technology companies.
According to Zitron's analysis, these firms spent five hundred and sixty
billion dollars on A.I.-related capital expenditures in the past eighteen
months, while their A.I. revenues were only about thirty-five billion. “When
you look at these numbers, you feel insane,” Zitron told me.
Even the figures we might call A.I. moderates, however, don't think the public
should let its guard down. Marcus believes that we were misguided to place so
much emphasis on generative A.I., but he also thinks that, with new techniques,
A.G.I. could still be attainable as early as the twenty-thirties. Even if
language models never automate our jobs, the renewed interest and investment in
A.I. might lead toward more complicated solutions, which could. In the
meantime, we should use this reprieve to prepare for disruptions that might
still loom—by crafting effective A.I. regulations, for example, and by
developing the nascent field of digital ethics.
The appendices of the scaling-law paper, from 2020, included a section called
“Caveats,” which subsequent coverage tended to miss. “At present we do not have
a solid theoretical understanding for any of our proposed scaling laws,” the
authors wrote. “The scaling relations with model size and compute are
especially mysterious.” In practice, the scaling laws worked until they didn't.
The whole enterprise of teaching computers to think remains mysterious. We
should proceed with less hubris and more care. ♦
An earlier version of this article included an inaccurate transcription of Greg
Brockman's name.
[36]Cal Newport is a contributing writer for The New Yorker and a professor of
computer science at Georgetown University.
References:
[19] https://www.newyorker.com/culture/open-questions
[20] https://www.newyorker.com/contributors/cal-newport
[23] https://arxiv.org/abs/2001.08361
[24] https://www.newyorker.com/books/under-review/can-sam-altman-be-trusted-with-the-future
[25] https://www.newyorker.com/culture/infinite-scroll/mark-zuckerberg-says-social-media-is-over
[26] https://www.newyorker.com/tag/elon-musk
[27] https://www.wheresyoured.at/the-haters-gui/
[36] https://www.newyorker.com/contributors/cal-newport