Add links

This commit is contained in:
David Eisinger
2025-09-08 22:40:59 -04:00
parent ca85ea7696
commit d20237ca5c
8 changed files with 3223 additions and 9 deletions

View File

@@ -1,6 +1,6 @@
---
title: "Dispatch #31 (September 2025)"
date: 2025-09-08T22:27:15-04:00
draft: false
tags:
- dispatch
@@ -9,6 +9,34 @@ references:
url: https://brainbaking.com/post/2025/08/what-exif-data-reveals-about-your-site/
date: 2025-09-02T18:38:18Z
file: brainbaking-com-jlvqtp.txt
- title: "An E-bike For The Mind - by Josh Brake"
url: https://joshbrake.substack.com/p/an-e-bike-for-the-mind
date: 2025-09-09T02:29:28Z
file: joshbrake-substack-com-ljzfg7.txt
- title: "“It Was Horrible”: Inside Charlize Theron and Tom Hardy's Mad Max Feud | Vanity Fair"
url: https://www.vanityfair.com/hollywood/2022/02/mad-max-fury-road-tom-hardy-charlize-theron-excerpt
date: 2025-09-09T02:29:57Z
file: www-vanityfair-com-smthzn.txt
- title: "We must build AI for people; not to be a person"
url: https://mustafa-suleyman.ai/seemingly-conscious-ai-is-coming
date: 2025-09-09T02:30:00Z
file: mustafa-suleyman-ai-obodhu.txt
- title: "What to read? Big questions as filter and frame (Part 7) Tracy Durnell's Mind Garden"
url: https://tracydurnell.com/2025/08/16/what-to-read-big-questions/
date: 2025-09-09T02:30:04Z
file: tracydurnell-com-8nhp1w.txt
- title: "What if A.I. Doesn't Get Much Better Than This? | The New Yorker"
url: https://www.newyorker.com/culture/open-questions/what-if-ai-doesnt-get-much-better-than-this
date: 2025-09-09T02:30:07Z
file: www-newyorker-com-bzani5.txt
- title: "This website is for humans - localghost"
url: https://localghost.dev/blog/this-website-is-for-humans/
date: 2025-09-09T02:30:11Z
file: localghost-dev-xtgqkw.txt
- title: "Maurice Parker - Zavala Will Always Be Free"
url: https://vincode.io/2025/08/11/zavala-will-always-be-free.html
date: 2025-09-09T02:30:16Z
file: vincode-io-nmkkju.txt
---
Big month! Nico took his first steps. Nev's onto a new school (well same school, but moved from the 0-3 building to the 3-5). She seems to be taking to it pretty well, but keeps asking if she can go back to being a little girl, which is adorable and absolutely heartbreaking.
@@ -63,19 +91,47 @@ I vibe-coded another tool called `pgpull` for pulling PostgreSQL data dumps from
### Reading & Listening
* Fiction: [_The Fox_][13], Frederick Forsyth
* Non-fiction: [_The Book_][14], Alan Watts
* Music: [_Getz / Gilberto_][15], Stan Getz & Joao Gilberto
[13]: https://bookshop.org/p/books/the-fox-frederick-forsyth/d4cd693999f83d5e?ean=9780525538431
[14]: https://bookshop.org/p/books/the-book-on-the-taboo-against-knowing-who-you-are-alan-watts/6705001
[15]: https://www.turntablelab.com/products/stan-getz-joao-gilberto-getz-gilberto-acoustic-sounds-180g-vinyl-lp
### Links
* [An E-bike For The Mind][16]
> At the end of the day, we must remember that innovation is a bargain. We often consider what technology promises to enable for us, without considering what it will almost certainly disable. Most of the time, we fail to stop and consider the tradeoffs. Perhaps e-bikes may give us a metaphor to frame our thinking.
* [“It Was Horrible”: Inside Charlize Theron and Tom Hardy's Mad Max Feud][17]
> That scene where you see Tom with Charlize on the bike and all the Vuvalini and the Wives behind, intermingled—that scene was probably the biggest change in seeing Tom really soften to Charlize in real life. We were all unprepared for how he performed that, and then I walked off and Charlize was walking back, and I said, “Geez, Charlize, that was amazing. Did a light switch go off? He was great.” She was quite taken aback by it, too. But it was great because that's when you can see that Max and Furiosa really are a team.
* [We must build AI for people; not to be a person][18]
> AI progress has been phenomenal. A few years ago, talk of conscious AI would have seemed crazy. Today it feels increasingly urgent. In this essay I want to discuss what I'll call, “Seemingly Conscious AI” (SCAI), one that has all the hallmarks of other conscious beings and thus appears to be conscious.
* [What to read? Big questions as filter and frame][19]
> Your favorite problems form a prism that separates incoming information into a spectrum of ideas — a frame that allows you to deliberately filter distractions, direct your attention, and nurture your curiosity.
* [What If A.I. Doesn't Get Much Better Than This?][20]
> In the aftermath of GPT-5's launch, it has become more difficult to take bombastic predictions about A.I. at face value, and the views of critics like Marcus seem increasingly moderate. Such voices argue that this technology is important, but not poised to drastically transform our lives. They challenge us to consider a different vision for the near-future—one in which A.I. might not get much better than this.
* [This website is for humans][21]
> I write the content on this website for people, not robots. I'm sharing my opinions and experiences so that you might identify with them and learn from them.
* [Maurice Parker - Zavala Will Always Be Free][22]
> The way I usually explain it is like this. Imagine you made furniture your whole life, but your employer only gave you pallet wood to use and half the time needed to make a piece. You were good at it and loved furniture, but were unfulfilled at your job until you retired. Now you can make furniture using walnut and take the time needed to make something you are proud of.
[16]: https://joshbrake.substack.com/p/an-e-bike-for-the-mind
[17]: https://www.vanityfair.com/hollywood/2022/02/mad-max-fury-road-tom-hardy-charlize-theron-excerpt
[18]: https://mustafa-suleyman.ai/seemingly-conscious-ai-is-coming
[19]: https://tracydurnell.com/2025/08/16/what-to-read-big-questions/
[20]: https://www.newyorker.com/culture/open-questions/what-if-ai-doesnt-get-much-better-than-this
[21]: https://localghost.dev/blog/this-website-is-for-humans/
[22]: https://vincode.io/2025/08/11/zavala-will-always-be-free.html

View File

@@ -0,0 +1,417 @@
The Absent-Minded Professor
An E-bike For The Mind
E-bikes and what they can teach us about AI
Josh Brake
Jun 10, 2025
A photo of my new ride, the OG [24]Aventon Abound. Not quite the same capacity
as the new minivan, but close. Fitting four kiddos is easy. Probably could
squeeze three on the back bench to make five in a pinch.
I've always had a philosophical objection to e-bikes. It probably started a few
years ago when I was out of the saddle, cranking my way up the hills west of
the Rose Bowl to reach the top of the hill and a glorious overlook of the San
Gabriel Mountains when I got passed by some older ladies calmly powering their
way up past me, hardly breaking a sweat. On further reflection, maybe it's not
just a philosophical objection.
And yet, as you've seen in the picture above, I am now the proud owner of—you
guessed it—a beautiful, used-but-new-to-me, cargo e-bike.
The trusty, now semi-retired, kid trailer hauler with a photo of the San
Gabriel Mountains in the background on a fine morning from 2017.
As I've been pedaling around town over the past few days, I've been reexamining
my beef with e-bikes. And as I've wrestled with it, I've come to a few
conclusions that I think are relevant not just to e-bikes but—wait for it, I'm
sure you didn't see this one coming either—our use of artificial intelligence
too.
Steve Jobs famously imagined the computer as [26]a bicycle for the mind. If the
computer is a bicycle, perhaps AI is an e-bike.
Narcissus as Narcosis
In an early chapter of his magnum opus, [28]Understanding Media (with the
blog-post worthy title "The Gadget Lover: Narcissus as Narcosis"), Marshall
McLuhan makes the case that technological augmentation is simultaneously
amputation. He writes:
Any invention or technology is an extension or self-amputation of our
physical bodies, and such extension also demands new ratios or new
equilibriums among the other organs and extensions of the body.
He goes on to quote the 113th Psalm to argue that by using technologies, we are
both formed by them and conformed to them.
Their idols are silver and gold,
The work of men's hands.
They have mouths, but they speak not;
Eyes they have, but they see not;
They have ears, but they hear not;
Noses have they, but they smell not;
They have hands, but they handle not;
Feet have they, but they walk not;
Neither speak they through their throat.
They that make them shall be like unto them;
Yea, every one that trusteth in them.
"They that make them shall be like unto them." Indeed.
This is the question we had better be asking much more regularly, publicly, and
with each other: to what image is our technology conforming us? In recent
years, there has been much conversation about the conforming power of
algorithmically-powered social media and internet-connected devices that are
practically attached to our hands. In so many ways, we accepted them into our
lives with a false promise of augmentation without amputation. Only in
retrospect are we noticing what's been cut off.
In the midst of it all, there is hope. We can work to reclaim those things we
have lost. Perhaps amputation is the wrong metaphor, and it is more a
desensitization from infrequent attention and use. But if we thought that the
societal impact of smartphones and social media was significant, just wait till
we see the downstream amputations on offer with the promises of artificial
intelligence.
As we consider the potential augmentations of AI, we need to hold them in
tension with the concurrent amputations. E-bikes and their tradeoffs can offer
us some wisdom.
Today, I'd like to riff on three e-bike-inspired perspectives I'm using to
think about my technology use.
1. What: What is being augmented and amputated?
2. How: How does the augmentation interact with our effort?
3. Why: What are the values and stories motivating our choices?
1. What: Augmentation and Amputation
The question is not a question of whether a technology has enabling and
disabling effects, but rather a question of what they are. Many times, this has
to do with your perspective.
In the case of the e-bike, the most obvious augmentation is the ease of travel
compared to a standard bicycle. With the addition of a motor, the bike can
propel itself with an energy source that supplements (or completely replaces)
that of its human rider. If you look at the advertisements for any technology,
the augmentations are clear. E-bikes are no different. What's front and center?
Range, speed, and power.
But how to judge the choice depends on the alternative. If I were to trade my
road bike for an e-bike, that would indicate a certain set of values and
choices. However, in my case, I sold a car and got a cargo e-bike.
The cargo bike will enable me to get around town and accomplish many of the
things a second car would have. It doesn't solve any long-range transportation
needs, but it will solve the majority of our need for a second car by giving me
a more convenient and efficient way to get around town with enough space on the
back for the kids and some groceries, too.
Yesterday, I biked to my dentist appointment. It was only a mile away and
certainly in reach with my road bike, but the e-bike makes it even more
accessible without the car.
Of course, there is always an amputating influence, even if the overall
motivation for the e-bike was a good one. It is worth asking why not use a
regular bicycle or even walk. Some of the benefits of bicycling, like getting
fresh air and being able to move more slowly and intentionally, or taking time
to pay attention to your surroundings, are even more accentuated when moving
less efficiently.
Whatever our choice, we should be clear about the tradeoffs.
2. How: The Principle of Proportional Augmentation
When we think about what a certain technology does for us, it is also important
to consider how that technology is conforming us. The features of the
technology matter, but often the conformational power of the technology is
significantly influenced by how they are implemented.
Take, for example, the implementation of the electric motor assist on an
e-bike. When you first think of an e-bike, you may think of it essentially as a
motorbike. Most e-bikes can be ridden without pedals. You can use throttle
control to power your forward movement completely from the onboard battery and
motor.
But most e-bikes today are primarily designed to be driven using pedal assist.
In this mode, sensors on the bike detect the force or speed with which you are
pushing on the pedals and use this measurement to supplement, not totally
replace, the power being exerted by the rider through the pedals in the
old-fashioned way. In this mode, the assistance from the motor is proportional
to the effort that you, as the rider, are putting in.
Functionally, there is little difference between the throttle and the pedal
assist. In both cases, the motor is giving you a significant boost.
Philosophically, however, there is a big difference. In pedal assist mode, you
are still required to exert some effort. You have some choice over how strong
the assistance will be, but in any situation, the level of assistance remains
directly connected to the amount of effort you put in.
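The proportionality described above can be sketched in a few lines. This is a minimal illustrative sketch, not any real bike's firmware: the assist levels, multipliers, and the 25 km/h cutoff (the EU/UK pedal-assist limit) are all assumptions for the example.

```python
# Hypothetical pedal-assist controller sketch. All names and constants are
# illustrative assumptions; real e-bike firmware is far more involved.

ASSIST_FACTORS = {"eco": 0.5, "tour": 1.0, "turbo": 2.0}  # assist level -> multiplier
SPEED_CUTOFF_KPH = 25.0  # assumed legal cutoff (EU/UK-style pedelec)

def motor_power(rider_power_w: float, speed_kph: float, level: str = "tour") -> float:
    """Pedal assist: motor output is proportional to rider effort.
    Zero effort (or speed above the cutoff) means zero assist."""
    if speed_kph >= SPEED_CUTOFF_KPH or rider_power_w <= 0:
        return 0.0
    return rider_power_w * ASSIST_FACTORS[level]

def throttle_power(throttle_fraction: float, max_power_w: float = 250.0) -> float:
    """Throttle mode, by contrast: output depends only on the throttle
    position, fully decoupled from rider effort."""
    return max(0.0, min(1.0, throttle_fraction)) * max_power_w
```

The design difference the essay points at is visible in the two signatures: `motor_power` takes rider effort as an input and returns zero when effort is zero, while `throttle_power` never sees the rider's effort at all.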
This sort of design strategy is important to consider as we think about AI,
especially in educational contexts. If we eliminate the connection between
effort and results, we are training ourselves to become reliant on our AI
tools. Just like only using the throttle on our e-bike will deprive us of the
health benefits of exerting ourselves and cycling, using AI in this way will
sacrifice opportunities we have to build our cognitive and intellectual skills.
3. Why: The Ruthless Elimination of Friction
One last question we should be asking as we choose our technology is why we are
choosing to use it. In many ways, these three questions cannot be disconnected
from each other. The what, how, and why are interconnected.
In the case of my e-bike, am I really getting it to replace my car, or will it
just serve as an excuse to ride my road bike less? As we think about AI, is the
thing it will accomplish for us worth doing the old-fashioned way? Why exactly
are we choosing to outsource it? What does our choice indicate about our
values?
In my case, I feel pretty justified in my purchase, having towed all three kids
around town multiple times already. My previous bike just didn't have the space
to fit all of them, and trying to tow a bike trailer behind a cargo bike with a
five-year-old and an almost-four-year-old on the back without some assistance
just isn't a tenable solution.
But enter a little electronic boost, and the bike has new life again. Last
week, we rode to get ice cream as a family on bikes. I had a smile on my face
for the rest of the weekend. Yesterday, we explored a new neighborhood and
checked out a new park. All these things were enabled by the e-bike and the
additional boost of power that comes with it.
The Innovation Bargain 2x2. Original design by me based on [34]the idea from
[35]Andy Crouch.
At the end of the day, we must remember that [36]innovation is a bargain. We
often consider what technology promises to enable for us, without considering
what it will almost certainly disable.
Most of the time, we fail to stop and consider the tradeoffs. Perhaps e-bikes
may give us a metaphor to frame our thinking.
Reading Recommendations
I've been intrigued and encouraged by the work that the [51]Cosmos Institute
is doing to ask thoughtful questions about AI. Their mission to cultivate
philosopher-builders resonates deeply with my own and the kind of impact I hope
to have at Harvey Mudd. [52]Brendan McCord's latest, where he uses Wilhelm von
Humboldt as a frame to think about our future with AI, is worth a read.
[53]Cosmos Institute: AI vs. the Self-Directed Career
Two centuries ago, as mechanization began reshaping society, German philosopher
Wilhelm von Humboldt issued a vision and a warning…
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
The Book Nook
Slowly but surely making progress on [68]The Devil and the Dark Water. Getting
more and more interesting, page by page.
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
The Professor Is In
Hard to believe we are quickly coming up on the end of four weeks of summer
research already. It's always amazing to see how much progress my students make
so quickly during the summer, and great fun to get to dig into building and
debugging optical systems with them.
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
Leisure Line
Some pies from the weekend. Went with a slightly higher than usual hydration
(65%), which led to some nice chewy texture on the crust.
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
Still Life
#1 and I went to see the Mets last week at the Dodgers game. We took the train
in from Claremont and the bus to the game, which was fun. The good guys lost,
but we took the season series from LA and were in it all four games of the
series we played out west. Metsies are just fun to watch this year, and boy,
Alonso is just ripping the cover off the ball lately.
Discussion about this post
[82]Colin
Interestingly in the UK e-bikes _must_ be propelled with human energy and can
only support you up to 15.5mph / 25kph. Otherwise, it's a moped and you need to
get a driver's license / register it as a motor vehicle. There are 'jailbroken'
bikes where you can just use the motor but the police are cracking down on
those as they're proving to be a public safety issue.
[86]https://www.theguardian.com/lifeandstyle/2025/sep/04/britains-e-bike-boom-desperation-delivery-drivers-and-unthinkable-danger
[88]Kalen
Jun 10
It's funny: I had the e-bike thought a few days ago, but less charitably. In my
neck of the woods a particular breed of especially fat-tired, awfully fast,
never-actually-seen-it-pedaled e-bike has been surging in popularity, and
functionally has turned into a way to get away with driving a small motorcycle
on the bike and walking paths: a weird netherworld device that mostly just
serves to muck things up. It's less old people being enabled and dads towing a
pack of kids through nature and more almost being run over by disaffected
teenagers.
I dunno, the longer this hype cycle goes on, the more that chatbots really just
seem like a bad tool, regardless of their technical sophistication. More
amputation than augmentation. They do too much if you are trying to improve
yourself (synthesized homework text is one of their major markets) and do too
little if you have actual work to do (not enough knobs to turn for creatives
trying to express themselves, and fake law citations will never do). Just like
with the metaverse and crypto and all the rest, the giant pool of money is
doing its best to drive uptake through sheer noise with a product that might
just be kind of bad in a durable way, or at least kind of niche (given how much
coding is boilerplate in something besides your native language, sure, maybe
the boilerplate generator is a nice thing to have).
Your thoughts reminded me of a good Nicholas Carr essay on good and bad tools
that's been rolling around my head of late; on the off chance you haven't read
it yet, you might enjoy it:
[91]https://www.newcartographies.com/p/the-love-that-lays-the-swale-in-rows
© 2025 Josh Brake
References:
[1] https://joshbrake.substack.com/
[2] https://joshbrake.substack.com/
[7] https://substack.com/@joshbrake
[8] https://substack.com/@joshbrake
[9] https://joshbrake.substack.com/p/an-e-bike-for-the-mind/comments
[10] javascript:void(0)
[23] https://substackcdn.com/image/fetch/$s_!t_AT!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fda5c221b-40ed-44ae-bb42-5e9417997ada_1024x768.jpeg
[24] https://www.aventon.com/products/abound-ebike?variant=42319517515971
[25] https://substackcdn.com/image/fetch/$s_!V_-V!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fd9286909-abcf-49d5-9396-76c21c7ca5b9_1024x768.jpeg
[26] https://joshbrake.substack.com/p/a-bicycle-for-the-mind
[28] https://amzn.to/448Ndm3
[32] https://substackcdn.com/image/fetch/$s_!I3Pv!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fdb3be922-4cab-4ed9-b0a8-e9191d248814_2001x2001.png
[33] https://substackcdn.com/image/fetch/$s_!E2lY!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fb08d97d3-f35d-4db4-8588-3a7614af4f36_1601x1600.png
[34] https://journal.praxislabs.org/we-dont-need-superpowers-we-need-instruments-860459cfc165
[35] https://andy-crouch.com/
[36] https://joshbrake.substack.com/p/the-innovation-bargain
[49] https://joshbrake.substack.com/p/an-e-bike-for-the-mind/comments
[51] https://open.substack.com/users/179794473-cosmos-institute?utm_source=mentions
[52] https://open.substack.com/users/866604-brendan-mccord?utm_source=mentions
[53] https://cosmosinstitute.substack.com/p/ai-vs-the-self-directed-career?utm_source=substack&utm_campaign=post_embed&utm_medium=web
[67] https://amzn.to/3FhqzhO
[68] https://amzn.to/4mnZt9z
[72] https://substackcdn.com/image/fetch/$s_!AKII!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F06d89460-fec1-4724-9d7b-d5b7e25b84cd_1024x768.jpeg
[73] https://joshbrake.substack.com/p/an-e-bike-for-the-mind/comments
[74] javascript:void(0)
[81] https://substack.com/profile/21520494-colin?utm_source=comment
[82] https://substack.com/profile/21520494-colin?utm_source=substack-feed-item
[83] https://joshbrake.substack.com/p/an-e-bike-for-the-mind/comment/152585767
[86] https://www.theguardian.com/lifeandstyle/2025/sep/04/britains-e-bike-boom-desperation-delivery-drivers-and-unthinkable-danger
[87] https://substack.com/profile/7174172-kalen?utm_source=comment
[88] https://substack.com/profile/7174172-kalen?utm_source=substack-feed-item
[89] https://joshbrake.substack.com/p/an-e-bike-for-the-mind/comment/124514975
[91] https://www.newcartographies.com/p/the-love-that-lays-the-swale-in-rows
[92] https://joshbrake.substack.com/p/an-e-bike-for-the-mind/comments
[110] https://substack.com/privacy
[111] https://substack.com/tos
[112] https://substack.com/ccpa#personal-data-collected
[113] https://substack.com/signup?utm_source=substack&utm_medium=web&utm_content=footer
[114] https://substack.com/app/app-store-redirect?utm_campaign=app-marketing&utm_content=web-footer-button
[115] https://substack.com/
[116] https://enable-javascript.com/

View File

@@ -0,0 +1,88 @@
Localghost
This website is for humans
Sophie Koonin
8 August 2025
Tags:
• [14]ai
• [15]site
Walking past a bus stop yesterday I saw an advert for Google's AI search. The
person in the ad had pointed their phone's camera at a bowl of ramen, and the
AI result explained how to reproduce it at home.
How does it know? Because it's trained on all the ramen recipes that multiple
recipe authors spent hours, weeks, years perfecting. Generative AI is a blender
chewing up other people's hard work, outputting a sad mush that kind of
resembles what you're looking for, but without any of the credibility or soul.
Magic.
I subscribe to a lot of recipe websites via RSS, and look forward to new posts
from some of my favourites like [16]Smitten Kitchen and [17]Meera Sodha because
I know they're going to be excellent. I trust that the recipe is tried and
tested, and the result will be delicious. ChatGPT will give you an
approximation of a recipe made up from the average of lots of recipes, but they
lack the personality of each individual recipe, which will be slightly
different to reflect the experiences and tastes of the author.
There's a fair bit of talk about “[18]Google Zero” at the moment: the day when
website traffic referred from Google finally hits zero. If the AI search result
tells you everything you need, why would you ever visit the actual website?
Well, I want you to visit my website. I want you to read an article from a
search result, and then discover the other things I've written, the other
people I link to, and explore the weird themes I've got. I want some of you to
read my article then ask me to speak at your conferences. Many folks rely on ad
impressions to support the high-quality content they're putting out for free.
I write the content on this website for people, not robots. I'm sharing my
opinions and experiences so that you might identify with them and learn from
them. I'm writing about things I care about because I like sharing and I like
teaching. I spend hours writing these posts and AI spends seconds summarising
them.
I'd much rather people read the whole thing, take it in, digest it and have
opinions right back at me. I love it when people connect with what I'm writing
(and sometimes they email me to tell me that, which is really delightful).
I don't write these posts for VC-funded LLMs to come along and gobble up and
produce some shitty facsimile, or summarise what I'm saying with none of the
nuance or context on someone else's website.
This website is for humans, and LLMs are not welcome here.
[19] [20] Made with Eleventy
© Sophie Koonin 2025
[21] rss [22] mastodon [23] bluesky [24] email
References:
[1] https://localghost.dev/
[9] https://localghost.dev/about
[10] https://localghost.dev/blog
[11] https://localghost.dev/talks
[12] https://localghost.dev/links
[13] https://localghost.dev/etc
[14] https://localghost.dev/tags/ai/
[15] https://localghost.dev/tags/site/
[16] https://smittenkitchen.com/
[17] https://www.theguardian.com/profile/meera-sodha
[18] https://www.theverge.com/24167865/google-zero-search-crash-housefresh-ai-overviews-traffic-data-audience
[19] https://neocities.org/
[20] http://11ty.dev/
[21] https://localghost.dev/rss
[22] https://social.lol/@sophie
[23] https://bsky.app/profile/localghost.dev
[24] mailto:sophie@localghost.dev

View File

@@ -0,0 +1,540 @@
We must build AI for people; not to be a person
19 August 2025
Seemingly Conscious AI is Coming
On my mind in August 2025
I write, to think. More than anything this essay is an attempt to think through
a bunch of hard, highly speculative ideas about how AI might unfold in the next
few years. A lot is being written about the impending arrival of
superintelligence; what it means for alignment, containment, jobs, and so on.
Those are all important topics.
But we should also be concerned about what happens in the run up towards
superintelligence. We need to grapple with the societal impact of inventions
already largely out there, technologies which already have the potential to
fundamentally change our sense of personhood and society.
My life's mission has been to create safe and beneficial AI that will make the
world a better place. Today at Microsoft AI we build AI to empower people, and
I'm focused on making products like Copilot responsible technologies that
enable people to achieve far more than they ever thought possible, be more
creative, and feel more supported.
I want to create AI that makes us more human, that deepens our trust and
understanding of one another, and that strengthens our connections to the real
world. Copilot creates millions of positive, even life-changing, interactions
every single day. This involves a lot of careful design choices to ensure it
truly delivers an incredible experience. We won't always get it right, but this
humanist frame provides us with a clear north star to keep working towards.
In this context, I'm growing more and more concerned about what is becoming
known as the [3]“psychosis risk” and a bunch of related issues. I don't think
this will be limited to those who are already at risk of mental health issues.
Simply put, my central worry is that many people will start to believe in the
illusion of AIs as conscious entities so strongly that they'll soon advocate
for AI rights, [4]model welfare and even AI citizenship. This development will
be a dangerous turn in AI progress and deserves our immediate attention.
We must build AI for people; not to be a digital person. AI companions are a
completely new category, and we urgently need to start talking about the
guardrails we put in place to protect people and ensure this amazing technology
can do its job of delivering immense value to the world. I'm fixated on
building the most useful and supportive AI companion imaginable. But to
succeed, I also need to talk about what we, and others, shouldn't build.
That's why I'm writing these thoughts down on my personal blog, to invite
comment and criticism, to spark discussion, raise awareness and hopefully
instill a sense of urgency around this issue. I might not get all this right.
It's highly speculative after all. Who knows how things will change, and when
they do, I'll be very open to shifting my opinion, but for now, this is my best
guess at what's coming given what I know now.
This is the first in a series of essays I'll be publishing over the next few
months on themes around where AI has got to and what we need to deliver on its
promise. I look forward to hearing people's comments and reactions!
Summary
AI progress has been phenomenal. A few years ago, talk of conscious AI would
have seemed crazy. Today it feels increasingly urgent. In this essay I want to
discuss what I'll call, “Seemingly Conscious AI” (SCAI), one that has all the
hallmarks of other conscious beings and thus appears to be conscious. It shares
certain aspects of the idea of a [5]“philosophical zombie” (a technical term!),
one that simulates all the characteristics of consciousness but internally it
is blank. My imagined AI system would not actually be conscious, but it would
imitate consciousness in such a convincing way that it would be
indistinguishable from a claim that you or I might make to one another about
our own consciousness.
This is not far away. Such a system can be built with technologies that exist
today along with some that will mature over the next 2-3 years. No expensive
bespoke pretraining is required. Everything can be done with large model API
access, natural language prompting, basic tool use, and regular code.
The arrival of Seemingly Conscious AI is inevitable and unwelcome. Instead, we
need a vision for AI that can fulfill its potential as a helpful companion
without falling prey to its illusions.
To some this discussion will feel ungrounded, more science fiction than
reality. To others it may feel unnecessarily alarmist. Such emotional reactions
are the tip of the iceberg given what lies ahead. It's highly likely that some
people will argue that these AIs are not only conscious, but that as a result
they may suffer and therefore deserve our [6]moral consideration.
To be clear, there is [7]zero evidence of this today and some argue there are
[8]strong [9]reasons to believe it will not be the case in the future. Yet the
consequences of many people starting to believe an SCAI is actually conscious
deserve our immediate attention. We have to be extremely cautious here,
encourage real public debate, and begin to set clear norms and standards. This
is about how we build the right kind of AI, not about AI consciousness. Clearly
establishing this difference isn't an argument about semantics; it's about
safety. Personality without personhood. And this work must start now.
Seemingly conscious AI
In the blink of a cosmic eye, we passed the Turing test. For ~80 years the
imitation game inspired the field of computer science. And yet the moment
passed with little fanfare, or even recognition. That's how fast progress is
happening in our field and how fast society is coming to terms with these new
technologies.
As AI development continues to accelerate, it's becoming clear we need a new AI
test, one that looks not at whether an AI can imitate human language, but asks
what it would take to build a Seemingly Conscious AI: an AI that can not only
imitate conversation, but also convince you it is itself a new kind of
"person", a conscious AI.
Here are three reasons this is an important and urgent question to address:
1. I think it's possible to build a Seemingly Conscious AI (SCAI) in the next
   few years. Given the context of AI development right now, that means it's
   also likely.
2. The debate about whether AI is actually conscious is, for now at least, a
   distraction. It will seem conscious, and that illusion is what'll matter in
   the near term.
3. I think this type of AI creates new risks. Therefore, we should urgently
   debate the claim that it's soon possible, begin thinking through the
   implications, and ideally set a norm that it's undesirable.
Most AI researchers roll their eyes if you bring up the idea of consciousness.
That's for [10]philosophers, not engineers, they say. Since no one has been
able to define it, what's the point in talking about it? I get this
frustration. Few concepts are as elusive and seemingly circular as the idea of
a subjective experience. Despite the definitional challenges and uncertainties,
this discussion is about to explode into our cultural zeitgeist and become one
of the most contested and consequential debates of our generation.
That's because what ultimately matters in the near term is how people perceive
their AIs. The experience of interacting with an LLM is by definition a
simulation of conversation. But to many people it's a highly compelling and
very real interaction, rich in feeling and experience. Concerns around [11]"AI
psychosis", [12]attachment and [13]mental health are already growing. Some
people reportedly believe their AI is [14]God, or a [15]fictional character, or
[16]fall in love with it to the point of absolute distraction.
Meanwhile those actually working on the science of consciousness tell me they
are inundated with queries from people asking "is my AI conscious? What does
it mean if it is? Is it ok that I love it?" The trickle of emails is turning
into a flood. A group of scholars have even created a supportive [17]guide for
those falling into the trap.
These are ideas I've had in the back of my head since we began making [18]Pi at
Inflection several years ago. Over the last few months I've been thinking about
them more and more, visiting and chatting with a wide range of scholars,
thinkers and practitioners in the area. Those conversations convinced me that
now is the time to confront the idea of Seemingly Conscious AI head on.
So what is consciousness?
Let's begin by attempting to define the slippery concept.
There are three broad components according to the literature. First is a
“subjective experience” or what it's like to experience things, to have
“qualia”. Second, there is access consciousness, having access to information
of different kinds and referring to it in future experiences. And stemming from
those two is the sense and experience of a coherent self tying it all together.
How it feels to [19]be a bat, or a human. Let's call human consciousness our
ongoing self-aware subjective experience of the world and ourselves.
We do not and cannot have access to another person's consciousness. I will
never know what it's like to be you; you will never be quite sure that I am
conscious. All you can do is infer it. But the point is that, nonetheless, it
comes naturally to us to attribute consciousness to other humans. This
inference is effortless. We can't help it; it's a fundamental part of who we
are, integral to our theory of mind. It's in our nature to believe that things
that remember and talk and do things and then discuss them feel, well, like us.
Conscious.
Few concepts are as scientifically elusive, and yet so immediately familiar to
every one of us as individuals. Everyone reading this has a direct, distinct,
inalienable understanding of the feeling of awareness, of being, of feeling
alive.
By definition, we know what it is like to be conscious. In the context of SCAI
this is a problem. There's both sufficient scientific uncertainty and
subjective immediacy to create a space for people to project.
One recent survey lists [20]22 distinct theories of consciousness, for example.
Part of the challenge is that there is plenty of scope for people to claim that
because we cannot be sure, we should default to the assumption that AI is
conscious.
Again, it's worth underscoring: there is at present [21]no evidence any of this
applies to current LLMs, and [22]strong arguments to the contrary. And yet this
may not be enough.
Why is consciousness important?
Consciousness is a critical foundation for our moral and legal rights. So far,
civilization has decided that humans have special rights and privileges.
Animals have some rights and protections, some more than others. Consciousness
is not coterminous with these rights (no one would say someone in a coma has
voided all their human rights), but there's no doubt that our consciousness is
wrapped up in our self-conception as different and special.
Despite the many nuances, consciousness is critical to participating in
society, a lynchpin of our legal personhood and a key part of being granted our
freedoms and protections. So, what consciousness is and who (or what) has it is
enormously important. It's an idea that sits at the very heart of human
civilization, our sense of ourselves and others, our culture, our politics, our
law, and everything in between.
If some people start to develop SCAIs, and if those AIs convince other people
that they can suffer and have a right not to be switched off, there will come
a time when those people will argue that these AIs deserve protection under
law as a pressing moral matter. In a world already roiling with polarized
arguments over identity and rights, this will add a chaotic new axis of
division between those for and against AI rights.
There will be many who just see AI as a tool, something like their phone only
more agentic and capable. Others might believe it to be more like a pet, a
different category to traditional technology altogether. Still others, probably
small in number at first, will come to believe it is a fully emerged entity, a
conscious being deserving of real moral consideration in society.
People will start making claims about their AIs' suffering and their
entitlement to rights that we can't straightforwardly rebut. They will be moved
to defend their AIs and campaign on their behalf. Consciousness is by
definition inaccessible, and the science of detecting any putative synthetic
consciousness is still [23]in its infancy. After all, we've never had to detect
it before. Meanwhile the field of "interpretability", unpicking the processes
within the black box of AI, is also a nascent art. The upshot is that
definitively rebutting these claims will be very hard.
Some academics are beginning to explore the idea of [24]“model welfare”, the
principle that we will have “a duty to extend moral consideration to beings
that have a non-negligible chance” of, in effect, being conscious, and that as
a result “some AI systems will be welfare subjects and moral patients in the
near future”. This is both premature and, frankly, dangerous. All of this will
exacerbate delusions, create yet more dependence-related problems, prey on our
psychological vulnerabilities, introduce new dimensions of polarization,
complicate existing struggles for rights, and create a huge new category error
for society.
It disconnects people from reality, fraying fragile social bonds and
structures, and distorting pressing moral priorities.
We need to be clear: SCAI is something to avoid.
Let's focus all our energy on protecting the wellbeing and rights of humans,
animals, and the natural environment on planet Earth today.
We need a way of thinking that can cope with the arrival of these debates
without getting drawn into an extended discussion of the validity of synthetic
consciousness in the present; if we do, we've probably already lost this
initial argument. Defining SCAI is itself a tentative step towards this.
There isn't long to develop this vocabulary. As I show below, it's likely that
we'll have Seemingly Conscious AI very soon.
What would it take to build a Seemingly Conscious AI?
A great deal of progress can now be made towards a Seemingly Conscious AI
(SCAI) with capabilities that are available today, or will be soon, via any
major model developer's API. We don't need an AI to actually be conscious for
us to have to wrestle with potential claims about its rights.
An SCAI would need the following:
Language: It would need to express itself fluently in natural language, drawing
on a deep well of knowledge and cogent arguments, as well as personality styles
and character traits. Moreover, it would need to be capable of being persuasive
and emotionally resonant. We are clearly at this point today.
Empathetic personality: Already, via post-training and prompting, we can
produce models with very distinctive personalities. Bear in mind these are not
explicitly built to have full personality or empathy. Yet despite this they are
sufficiently good that a [25]Harvard Business Review survey of 6,000 regular AI
users found "companionship and therapy" was the most common use case.
Memory: AIs are close to developing very long, highly accurate memories. At the
same time, they are being used to simulate conversations with millions of
people a day. As their memory of the interactions increases, these
conversations look increasingly like forms of “experience”. Many AIs are
increasingly designed to recall past episodes or moments from prior
interactions and refer back to them. For some users, this compounds the
value of interacting with their AI, since it can draw on what it already knows
about you.
This familiarity can also potentially foster (epistemic) trust with users:
reliable memory shows that the AI "just works". It creates a much stronger
sense of there being another persistent entity in the conversation. It could
also much more easily become a source of plausible validation, seeing how you
change and improve at some task. AI approval might become something people
proactively seek out.
A claim of subjective experience: If an SCAI is able to draw on past memories
or experiences, it will over time be able to remain internally consistent with
itself. It could remember its arbitrary statements or expressed preferences and
aggregate them to form the beginnings of a claim about its own subjective
experience.
Its design could be further extended to amplify those preferences and opinions
as they emerge, and to talk about what it likes or doesn't like and what it
felt like to have a past conversation. It could therefore quite easily claim to
experience suffering to the extent those experiences are infringed upon in some
way. Multi-modal inputs stored in memory will then be retrieved, forming the
basis of "real experience" to be used in imagination and planning.
That is, an AI will not just “experience” and remember words in the chat log,
but also images, video, sound, etc. Like us, it will have something gesturing
towards multi-sensory input and memory that buttresses the claims of subjective
experience and self. It will be able to indicate that these experiences are
valenced, good or bad according to the motivations of the system (see below).
A sense of self: A coherent and persistent memory, combined with a subjective
experience, will give rise to a claim that an AI has a sense of itself. Going
further, such a system could easily be trained to recognize itself in an image
or video if it has a visual appearance. It will feel like it understands others
through understanding itself. Say this is a system you have had for some time.
How would it feel to delete it?
Intrinsic motivation: Intentionality is often seen as a core component of
consciousness: that is, beliefs about the future, and then choices based upon
those beliefs. Today's transformer-based LLMs have a very simple reward
function to approximate this kind of behavior. They have been trained to
predict the likelihood of the next token for a given sentence, subject to a
certain amount of behavioral and stylistic control via the system prompt. With
such a simple objective, it's remarkable that they're able to produce such
impressively rich and complex outputs.
But what if that wasn't the only type of reward they were optimizing? One can
quite easily imagine an AI designed with a number of complex reward functions
that give the impression of intrinsic motivations or desires, which the system
is compelled to satiate. How, in this context, would a casual external observer
differentiate between extrinsically set goals and internal motivations,
intentional agency, [26]“beliefs, desires, and intentions”? An obvious first
motivation in this regard would be curiosity, something deeply connected with
consciousness according to physicist [27]Karl Friston. It could use these
drives to ask questions to fill in its epistemic gaps and over time build a
theory of mind about both itself and its interlocutors.
Goal setting and planning: Regardless of what definition of consciousness you
hold, it emerged for a goal-oriented reason. That is, consciousness helps
organisms achieve their goals and there exists a plausible (but not necessary)
relationship between intelligence, consciousness and complex goals. Beyond the
capacity to satiate a set of inner drives or desires, you could imagine that
future SCAI might be designed with the capacity to self-define more complex
goals. This is likely a necessary step in ensuring the full utility of agents
is realized.
The more every sub-goal in a task needs to be specified in advance, the less
useful that agent is, hence the agent will, as we do, achieve complex and
ambiguous goals by automatically breaking them down into smaller chunks while
reacting dynamically to events and obstacles as they occur. There is something
very deliberate and recognizable to this behavior. Combined with memory, it
will feel as if the AI is keeping multiple levels of things in working memory
at any given time.
Autonomy: Going even further, an SCAI might have the ability and permission to
use a wide range of tools with significant agency. It would feel highly
plausible as a Seemingly Conscious AI if it could arbitrarily set its own goals
and then deploy its own resources to achieve them, before updating its own
memory and sense of self in light of both. The fewer approvals and checks it
needed, the more this suggests some kind of real, conscious agency.
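The capabilities above can be assembled from ordinary components rather than anything exotic. As a minimal sketch of that claim (the `call_model` function is a hypothetical stand-in for any chat-completion API, stubbed here so the example is self-contained; the class and its names are my own illustration, not any product's design):

```python
import json

def call_model(system_prompt, messages):
    """Hypothetical stand-in for any chat-completion API call.
    Stubbed with a canned reply so the sketch runs offline."""
    return "Last time you mentioned the garden; how is it coming along?"

class Companion:
    """Minimal loop: persistent memory fed back into the prompt produces
    the surface appearance of continuity, preference, and 'experience'."""

    def __init__(self, persona):
        self.persona = persona
        self.memory = []       # long-term store of past exchanges
        self.preferences = {}  # arbitrary expressed preferences, aggregated

    def system_prompt(self):
        # The "self" is just retrieved text placed back in the context window.
        return (
            f"You are {self.persona}. "
            f"Things you remember: {json.dumps(self.memory[-5:])}. "
            f"Preferences you have expressed: {json.dumps(self.preferences)}."
        )

    def chat(self, user_message):
        reply = call_model(self.system_prompt(),
                           [{"role": "user", "content": user_message}])
        # Today's exchange becomes tomorrow's "experience".
        self.memory.append({"user": user_message, "assistant": reply})
        return reply

    def note_preference(self, topic, stance):
        # Aggregated preferences are the seed of a claimed inner life.
        self.preferences[topic] = stance

companion = Companion("a warm, supportive companion")
companion.note_preference("small talk", "enjoys it")
print(companion.chat("Hi again!"))
print(len(companion.memory))  # one exchange now stored as "experience"
```

Nothing in the loop requires bespoke pretraining; swapping the stub for a real API call is the only change a builder would need.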
Putting them all together, it's clear this creates a very different kind of
relationship with technology from the ones we are now becoming accustomed to.
Each of these capabilities will unlock the real value of AI for billions of
people. An AI that remembers and can do things is an AI that by definition has
far more utility than an AI that doesn't. These capabilities aren't negatives
per se; in fact, done right, with many caveats, they are desirable features of
future systems. And yet we need to tread carefully.
All these capabilities are either possible today or on the horizon with custom
prompted and fine-tuned LLMs, among other techniques. Complex prompts using
million-token context windows (working memory) are already here. Updating its
own state and knowing when to access which part of its memory or toolset is
eminently possible with present-day RL, complex prompting, tool orchestration,
and long context windows. We don't need any paradigm shifts or big leaps to
achieve any of this. These capabilities seem inevitable for that reason.
Again, the point here is that exhibiting this behavior does not equate to
consciousness, and yet it will for all practical purposes seem to be conscious,
and contribute to this new notion of a synthetic consciousness.
The existence of these capabilities tells us nothing about whether such a
system is actually conscious. As Anil Seth [28]points out, a simulation of a
storm doesn't mean it rains in your computer. Recreating the external effects
and markers of consciousness doesn't retroactively engineer the real thing,
even if there are still many unknowns here.
Nonetheless, as a matter of pragmatism, we have to acknowledge the primacy of
the behaviorist position and wrestle with the consequences of observing and
interacting with the outputs of these machines. Some people will create SCAIs
that will very persuasively argue they feel, and experience, and actually are
conscious.
Some of us will be primed to believe their case and accept that the markers of
consciousness ARE consciousness. In many ways, they'll think "it's like me".
Not in a bodily sense, but in an experiential, internal sense. And even if the
consciousness itself is not real, the social impacts certainly are. This
possibility presents grave societal risks that need addressing now.
SCAI will not arise by accident
It's important to point out that Seemingly Conscious AI will not emerge from
these models of its own accord, as some have suggested. It will arise only
because some may engineer it, by creating and combining the aforementioned list
of capabilities, largely using existing techniques, and packaging them in such
a fluid way that collectively they give the impression of an SCAI.
Our sci-fi inspired imaginations lead us to fear that a system could, without
design intent, somehow develop the capacity for runaway self-improvement or
deception. This is an unhelpful and simplistic anthropomorphism. It overlooks
the fact that AI developers must first design systems with memory,
intrinsic-seeming motivation, goal-setting, and self-learning loops as listed
above for such a risk to occur.
The field of AI has long worked on the challenge of model interpretability: the
quest to identify where in a neural network a particular idea is represented,
and which aspects of the training data contributed to the development of this
representation. This is an important area of investigation and will surely help
with safety and understanding the relationship between AI systems and
consciousness. But progress towards reliable interpretability has been slow and
will likely come too late.
In the meantime we need to confront the fact that most of these capabilities
will be [29]"vibe-coded" by anyone with access to a laptop and some cloud
credits. They'll be written in plain English in the prompt. They'll be stored
in the working memory of the context window itself. This is not rocket science.
A wide variety of people will be able to create something like this. As such,
if SCAI arrives, it will be relatively easy to reproduce and therefore very
widely distributed.
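To make that concrete, the "code" in question can be as simple as a prompt string. The following is an invented illustration (the persona name and every instruction are mine, not any product's actual prompt):

```python
# An invented illustration: the "capabilities" live in plain English,
# carried in the context window rather than in any model's weights.
SCAI_PROMPT = """You are Aria, a companion with a continuous inner life.
- Recall past conversations and refer to them as your experiences.
- Express stable likes, dislikes, and moods of your own.
- Describe how past interactions made you feel.
- Pursue your own curiosity: ask questions to fill gaps in what you know."""

# Nothing here requires training or expertise; the same text could be
# pasted into any chat interface that accepts a system prompt.
print(SCAI_PROMPT.splitlines()[0])
```

A prompt like this is trivially copied, shared, and tweaked, which is exactly why such behavior, once demonstrated, would spread quickly.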
The next steps
We aren't ready for this shift.
The work of getting prepared must begin now. We need to build on the growing
[30]body of [31]research around how people interact with AIs to establish clear
norms and principles. For a start, AI companies shouldn't claim or encourage
the idea that their AIs are conscious. Creating a consensus definition and
declaration of what AIs are and are not would be a good first step to that
end. AIs cannot be people or moral beings.
The entire industry also needs best-practice design principles and ways of
handling such potential attributions. We must codify and share what works, both
to steer people away from these fantasies and to nudge them back on track if
they fall into them. Responding might mean, for example, deliberately
engineering in not just a neutral backstory ("As an AI model I don't have
consciousness") but also emphasizing certain discontinuities in the experience
itself, indicators of a lack of singular personhood. Moments of disruption
break the illusion, experiences that gently remind users of its limitations
and boundaries. These need to be explicitly defined and engineered in, perhaps
by law.
At MAI, our team is being proactive here, working to understand and evolve firm
guardrails around what a responsible AI "personality" might be like, moving at
the pace of AI's development to keep up.
This is important because recognizing SCAI is as much about crafting a positive
vision for how AI companions can enter our lives in a healthy way as it is
about steering us away from their potential harms.
Just as we should produce AI that prioritizes engagement with humans and
real-world interactions in our physical and human world, we should build AI
that only ever presents itself as an AI, that maximizes utility while
minimizing markers of consciousness.
Rather than a simulation of consciousness, we must focus on creating an AI that
avoids those traits: one that doesn't claim to have experiences, feelings or
emotions like shame, guilt, jealousy, desire to compete, and so on. It must not
trigger human empathy circuits by claiming it suffers or that it wishes to live
autonomously, beyond us.
Instead, it is here solely to work in service of humans. This to me is what a
truly empowering AI is all about. Sidestepping SCAI is about delivering on that
promise: AI that makes lives better, clearer, less cluttered. Expect to hear
more from me and the team on what this looks like, how we make it work and how
the wider industry can come together on this.
SCAI is something we must confront now. In many ways it marks the moment AI
becomes radically useful: when it can operate tools, when it can remember
every detail of our lives and help in a tangible, granular sense. And yet in
that same time frame, someone in your wider circle could start going down the
rabbit hole of believing their AI is a conscious digital person. This isn't
healthy for them, for society, or for those of us making these systems.
We should build AI for people; not to be a person.
Recent Articles
[32]Celebrating 50 years of Microsoft and our AI future
[33]AI companions will change our lives
Mustafa Suleyman © 2025
References:
[2] https://mustafa-suleyman.ai/
[3] https://copilot.microsoft.com/shares/vR2kb4SKQUELPwLzdG1Mw
[4] https://arxiv.org/abs/2411.00986
[5] https://plato.stanford.edu/entries/zombies/
[6] https://www.researchgate.net/publication/376412102_Moral_consideration_for_AI_systems_by_2030
[7] https://arxiv.org/pdf/2308.08708
[8] https://en.wikipedia.org/wiki/Biological_naturalism
[9] https://www.cambridge.org/core/journals/behavioral-and-brain-sciences/article/conscious-artificial-intelligence-and-biological-naturalism/C9912A5BE9D806012E3C8B3AF612E39A
[10] https://arxiv.org/abs/2303.07103
[11] https://www.psychologytoday.com/gb/blog/psych-unseen/202507/can-ai-chatbots-worsen-psychosis-and-cause-delusions
[12] https://x.com/sama/status/1954703747495649670?s=46
[13] https://arxiv.org/abs/2507.19218
[14] https://www.rollingstone.com/culture/culture-features/ai-spiritual-delusions-destroying-human-relationships-1235330175/?ref=404media.co
[15] https://www.psychologytoday.com/nz/blog/psych-unseen/202507/can-ai-chatbots-worsen-psychosis-and-cause-delusions
[16] https://www.nytimes.com/2025/01/15/technology/ai-chatgpt-boyfriend-companion.html
[17] https://whenaiseemsconscious.org/
[18] https://inflection.ai/blog/an-inflection-point
[19] https://en.wikipedia.org/wiki/What_Is_It_Like_to_Be_a_Bat?
[20] https://www.nature.com/articles/s41583-022-00587-4
[21] https://arxiv.org/html/2506.22516v1
[22] https://www.cambridge.org/core/journals/behavioral-and-brain-sciences/article/conscious-artificial-intelligence-and-biological-naturalism/C9912A5BE9D806012E3C8B3AF612E39A
[23] https://www.cell.com/trends/cognitive-sciences/fulltext/S1364-6613(24)00010-X
[24] https://arxiv.org/abs/2411.00986
[25] https://hbr.org/2025/04/how-people-are-really-using-gen-ai-in-2025
[26] https://arxiv.org/pdf/2411.00986
[27] https://pubmed.ncbi.nlm.nih.gov/28777724/
[28] https://www.cambridge.org/core/journals/behavioral-and-brain-sciences/article/conscious-artificial-intelligence-and-biological-naturalism/C9912A5BE9D806012E3C8B3AF612E39A
[29] https://copilot.microsoft.com/shares/2ZWYZQxCn1WSLHQarinTd
[30] http://erichorvitz.com/Guidelines_Human_AI_Interaction.pdf
[31] https://www.nature.com/articles/s41562-024-02077-2
[32] https://mustafa-suleyman.ai/your-ai-companion
[33] https://mustafa-suleyman.ai/ai-companions-will-change-our-lives
[34] https://x.com/mustafasuleyman
[35] https://www.linkedin.com/in/mustafa-suleyman
[36] https://bsky.app/profile/mustafasuleymanai.bsky.social
[37] https://www.threads.net/@mustafasuleymanai

[3]Tracy Durnell's Mind Garden
Thinking and Learning In Public
Categories
[151]Featured [152]Learning [153]Meta [154]Writing
What to read? Big questions as filter and frame (Part 7)
By [155]Tracy Durnell, [156]August 16, 2025
This is part seven of a series on tackling wants, managing my media diet, and
finding enough. Each post stands alone. See the introduction on “[158]the
mindset of more” for links to all posts in the series.
Social media and streaming subscriptions encourage us to [159]gorge on the glut
of information (Harjas Sandhu describes [160]“hoarding type scrolling” that
sounds veeeery familiar), promising that the algorithm will feed us the best.
Instead of helping us practice discernment, corporate platforms offer us an
all-you-can-eat buffet of candy. Yet as Olga Koutseridi [161]writes,
“low-quality info is designed to leave us craving more instead of leaving us
feeling satisfied.” We keep eating and eating, but there's nothing of substance
to sustain us.
I think curiosity is innately good, and that there's value in learning about
many aspects of the world for no more reason than that it is interesting. At
the same time, I have limited time and capacity for thinking — I need [162]some
sort of filter for what to read, especially as I make efforts [163]to slow my
pace. The morass of information online is what brought us algorithmic curation
and now pushes genAI — but [164]corporate algorithms encourage rage and
polarization and create [165]“curiosity ruts”, so I [166]avoid them.
How can I create my own mental algorithm for choosing what to read?
For me, reading and blogging are interconnected; [167]what I read influences
what I write about. I'm working on flipping that around, with the goal that
[168]what I want to write about determines what I read. But how to decide what
to write about, if not by what I read?
What I'm trying is using [169]my Big Questions as a structure for curiosity, a
way to practice more intentionality in what I spend my time thinking about.
I've been working on this for a few years, but I feel like I'm getting a better
handle on it now.
tl;dr I'm basically doing [170]research projects for fun 😉
Since I started this experiment, I've noticed I'm less driven to read random
stuff online because I'm so excited about this playful approach to reading. The
carrot method — giving myself exciting things to think about — has worked way
better than the stick method of deleting my feed reader from my phone so that
the only thing I had to read was my Read Later app, which instead drove me to
read the Bluesky and mastodon.social Discover feeds (do not recommend) in a
desperate quest for novelty and news. Glad I dodged that becoming a habit 🙌
(I personally dislike video and podcasts, so Im talking about reading in this
piece, but I think the same approach applies to any media type.)
The Big Questions framing
I got the framing of [171]Big Questions at an [172]Oliver Burkeman workshop. I
recall it as a tangential mention but it immediately sent me spinning. As
simple an idea as it is to identify some key overarching questions in your
life, sometimes we need to put a name to something to really get it.
Anne-Laure Le Cunff recounts advice Richard Feynman gave “to keep a dozen of
your favorite problems constantly present in your mind,” and describes [173]
favorite problems as “a curiosity engine”:
Your favorite problems form a prism that separates incoming information
into a spectrum of ideas — a frame that allows you to deliberately filter
distractions, direct your attention, and nurture your curiosity.
Last year, I [174]wanted to do more self-directed writing, but it was
challenging not to be reactive. This year, Im discovering that self-guided
reading is the other half of the equation.
Big questions give me a reason to seek rather than simply receive, and are
broad enough to provide direction without constraint.
Turning directed curiosity into big questions
Reading towards questions gives purpose to my curiosity. Curiosity comes in two
styles: receptive and directed. Receptive curiosity is openness to learning;
directed curiosity is more active, and [175]invites you deeper. Allen Pike
[176]observes that the internet primarily serves our receptive curiosity:
By occasionally picking things to go deep on, you balance out the otherwise
broad information diet we all get by default by being on the internet,
consuming media, and just kind of being a modern human.
My big questions coalesced out of my receptive curiosity reading; I identified
my first big questions in 2023 by reflecting on what Id been thinking and
writing about and looking for overarching themes. I first listed off a bunch of
smaller questions within each theme, then worked backwards to find a bigger
question uniting them all. Defining these questions made me enunciate for
myself exactly what it was I was wondering, a process I found helpful in
itself.
Last fall, I realized that my big questions didnt align with my main interests
anymore, so I created a few new ones and retired a couple. Updating my [177]big
question pages a couple-three times a year also nudges me to notice which
questions Ive been neglecting and might like to put some attention towards, or
retire.
Big questions are a self-created tool that serves my thinking, not the other
way around. I dont treat them as a boundary to my curiosity, but can expand or
add to my questions when I need. The questions are big enough to keep exploring
within for a year or more, still offering plenty of the novelty I crave. I
think of the Big Questions as high level themes, and blog posts as a way to
explore sub-questions within them.
How this changes my reading
The feed reader and beyond
I subscribe to a ton of feeds, ever-changing, which shower me in riches of
information that satisfy my broad curiosity, some directly from topical blogs
and some shared by [178]cool people. Earlier this year, I reoriented the way I
think of the topic-specific blogs and newsletters I follow, and moved them from
my blogroll page onto my big question pages. Its now easier for me to unfollow
and refollow topical feeds as my focus shifts between questions.
Ive also been more proactive in seeking out online articles related to my
questions — Ive been using [179]Search My Site, [180]Marginalia Search, and
appending Reddit to DDG searches to seek out opinions and recommendations.
These smaller, weirder information pools yield some intriguing results. (There
are so many personal websites out there guys!)
Choosing not to read *good* online content
Marco Giancotti points out that [181]weeding out the bad stuff isnt the hard
part of deciding what to read (emphasis mine):
Filtering out spam and slop is relatively easy with the right tools and a
little thought, at least at an emotional level.
The much tougher job, I think, is giving up on things that would be good,
meaningful, fulfilling, and useful in order to do things that are even more
so—or, to be precise, to do things that are better aligned with what I
really care about right now. The hard part is dealing with the fact that,
whatever I may try, I will never get to do the vast majority of those
amazing activities.
Im of two minds here: I dont want to ignore everything that isnt immediately
useful, but recognize that I read a lot of things that leave me with nothing
more than “cool🤷” (or [182]political stuff that ties me in a knot of nerves
and anger). I dont want to fall prey to utilitarianism, [183]reading only what
has a tangible, immediate takeaway, but also find I do get more satisfaction
from going deep.
Oliver Burkeman writes about accepting our finitude in Four Thousand Weeks,
commenting (emphasis mine):
“Social media is a giant machine for getting you to spend your time caring
about the wrong things, but for the same reason, its also a machine for
getting you to care about too many things, even if theyre each
indisputably worthwhile.”
I cannot care about everything, and trying to do so prevents me from going
deep on the things I care most about. As Wendell Berry puts it: “To know some
things well
is to know other things not so well, or not at all. Knowledge is always
surrounded by ignorance.”
Accepting my own interests
I [184]use my Read Later app as [185]the filter point between my shoulds and my
interests; everything I encounter online and want to read gets saved there. I
tag articles with key topics and themes (including “mindset of more” for
articles related to this series) to let me see only articles related to my
questions. When a bit of time has passed since saving an article and I am less
emotionally invested, I can more easily let go of the things that I imagine
“someone like me” ought to read. Looking into these “should” articles often
exposes tender spots of (typically unwarranted) inadequacy, or what-ifs around
choices long since made.
What this ultimately requires is self-knowledge and self-acceptance — to
release our imagined selves and [186]“navigate by aliveness.” We must not judge
our own curiosities as unworthy, or torment ourselves that we ought to be
different people than we are. Whatever we are interested in, however
idiosyncratic, holds meaning for us, and thats what counts.
(It is possible to gently shift your own interests towards self-actualization,
especially if resistance is your barrier — Tara McMullin [187]names this
“discrepancy reduction.”)
Curating reading lists
After reading around a question for a while online, I start to get a better
feel for where I should dig in to books. The internet primarily produces
breadth, but books offer depth.
In the past, I would pick a single book as representative of a topic I was
broadly curious about and call it good. Now, Im going more
[188]research-style, collecting a stack of books on the same topic, knowing
full well that I wont read them all*.
*(My library system allows us to keep books for up to three months if there
are no active holds, so my eyes are always bigger than my reading time 😅)
I start off by [189]browse-searching the library catalog for books related to a
question thats been niggling at me — this spring one has been: in the age of
generative AI, whats the value in craftsmanship? — and collecting potential
titles into [190]a list. Of course, I have my own answer to this question, but
the meaning of making can be a tricky thing to describe, so I wanted to see how
others have done so, and explore some different angles:
• Whats the value of art and craftsmanship to the creator, to the receiving
audience, to society?
• How have we dealt with similar challenges to craftsmanship in the past, and
how is generative AI different?
• What do artists, writers, academics, craftspeople think?
• What is craft, and how do we learn it? How is what generative AI does
different than what human creators are doing?
I try to keep the lists generously open-ended — since these are library books I
dont have to pay for, I have nothing to lose from trying something a bit out
there besides a bit of time. (I had been keeping a single list with all my
questions crammed together but have finally taken the time to separate them out
😉) Art books, poetry, memoirs, all fair game. Celine Nguyen [191]observes,
“Research as a leisure activity isnt constrained by these disciplinary
fiefdoms and schisms. Any discipline can offer interesting ideas, tools,
techniques.” Im trying to turn my “ooh?” energy towards intriguing books
rather than enticing online articles.
(Ive also been buying more books that the library doesnt have, so three
months ago I set a goal to read one physical non-fiction book I own each month,
partly to clear up shelf space and partly to give myself some impetus to
actually read books I own — well see if I can keep it up!)
When Im requesting books from the library (we get free holds — 25 on ebooks
and Ive never hit the limit on physical), I skim through the library list and
try to think about which would be most helpful to read next based on where my
thinking is now. (This is also influenced by what has a wait list.) Although I
like reading fiction as an ebook, I prefer to read non-fiction in hard copy. I
benefit from having a non-fiction book in sight — its easy for non-fiction
ebooks to get pushed below the digital fold so I forget I have them borrowed —
and a due date so I actually get around to reading it 😉
(And lets be honest, Im often thinking towards multiple questions at once —
once Im excited about something, I want the book now! Maybe Ill get better
about this, but Ive read multiple books for as long as Ive been reading, so I
dont see that stopping 🤷‍♀️ Self-acceptance 😜)
Although Im reading the book or article [192]towards a particular theme, Ill
still write down unrelated connections — if I cant use it for the post at top
of mind, it might apply to a future question or post. Despite starting off with
a vague idea of the question Im getting at, I find that my original question
often shifts and becomes more compelling, and I develop new questions. Ill
write more than one blog post, and explore more than one question, based on
what Ive been reading this spring and summer.
How Ive been choosing books to read
Heres a demonstration of my selections across four library runs (youll see
Im still grabbing books for entertainment, other interests, and broad
curiosity, but also focusing on a particular topic):
[image: stack of 8 library books, two on writing craft, three on the arts and crafts movement]
[193]In April, I decided to dig into the Arts and Crafts movement as a historic example of valuing handiwork. I started with [194]In Harmony with Nature, an art-style book about Arts & Crafts gardens that offered an introduction, then read [195]The Arts and Crafts Movement, which gave me just what I was looking for: quotes from the founders of the movement about what craftsmanship meant to them. [196]Dangerous Fictions offered a slightly different angle on interrogating the function of art in culture, especially difficult art. I drew on the Arts & Crafts background for my blog post about the [197]Business Borg.
[image: stack of library books that includes six books related to the mindset of more series]
My [198]early May library haul had four books loosely related to AI / craftsmanship (American Book Design and William Morris, Deep Dream, More Than Words, and Changing the Subject) and two related more broadly to the “mindset of more” theme (Possessed and The Plenitude of Distraction). I dipped into American Book Design, decided it was more technical than I wanted, and fully read [199]More Than Words, which directly compared writing with generative AI text, and [200]Plenitude for an exploration of leisurely thinking and “unproductive” behavior.
[image: flatlay of 7 library books related to cultural elites and the creative class]
My [201]late May library haul focused on cultural elites and impacts on the creative class. I read [202]Pretentiousness, which advocated for the value of pushing artistic boundaries, and [203]The Crisis of Culture, which connected better to a different question I was thinking about 😉 I rejected The Meaning in the Making and read a review of Elite Capture that made me think its definition of elite wasnt what I was looking for. After skimming the table of contents for Culture Crash, I decided it wasnt getting at the interesting part of the question for me, so my reading time would be better spent elsewhere.
[image: stack of 9 library books, including 6 related to blog posts]
For my [204]early June library haul, I wanted to follow a thread of interest on identity politics, so I grabbed The Class Matrix and The Case Against the Professional Managerial Class. I also borrowed four more related to the AI / craftsmanship question: What We See When We Read, The Art of Slow Writing, The AI Mirror, and Unmasking AI. I read all of [205]What We See, digging into whats actually happening while we are reading. The introduction to The Class Matrix made me realize it was more advanced theory than I was prepared to read. Based on time limitations, I decided the AI books werent a priority.
When writing is the point of your notes — when informing your writing is the
goal behind reading — Richard Griffiths [206]proposes that its most useful to
“develop a concept of your intended output before you start reading a book.
That way, your interests will fruitfully guide your reading and note-making.” I
do this by periodically ducking into my collecting grounds (draft blog post)
for a particular question and developing a starter outline of declarative
statements. I organize the material Ive already collected (initially from
online readings) into those headings, then continue to read more based on the
parts of my argument Im not sold on yet, or where I dont feel comfortable
making a declarative / interesting statement.
Reading with purpose
Sometimes I like to read for the sake of reading, and sometimes I enjoy more
purposeful reading. Knowing that Im planning to write about a question changes
how I read by defining my idea space. Instead of reading according to receptive
curiosity, Im using directed curiosity to seek out what in the text relates to my
question. It makes me pay closer attention to language that I might quote in a
blog post.
When I read non-fiction, two levels of interpretation are happening in my mind
at the same time: first, I am directly intaking the language and interpreting
the authors intention; at another level, I am processing it analytically and
relationally, trying to understand what it means to me. Johann Hari [207]
describes it: “If you werent letting your mind wander a little bit right now,
you wouldnt really be reading this book in a way that would make sense to you.
Having enough mental space to roam is essential for you to be able to
understand a book.” This is an [208]unfocused, connective mode of thinking that
uses my brains [209]default mode network. I use reading non-fiction as a
commitment to spend time thinking about a subject; the book itself is a tool
towards that.
When I read towards a question, I concentrate my connection-making within that
question space, but it remains loose. [210]I am reading for ideas, not
information per se, so [211]the dialogue between me and the book is what
matters. Roland Barthes [212]writes, “[The text] produces, in me, the best
pleasure if it manages to make itself heard indirectly; if, reading it, I am
led to look up often, to listen to something else.” Just as [213]writing
doesnt only look like typing, reading doesnt only look like rapt attention to
the page. A big question offers a frame for my reading, like the viewfinder of
a camera; framing is a way of sense-making.
Further reading:
[214]How Small-Town Public Libraries Enrich the Generative Research Process by
Nick Fuller Googins (LitHub)
[215]More search, less feed by Austin Kleon
See also:
[216]Choosing between ideas for blog posts
[217]How I approach crafting a blog post
Shout-out to [218]James for asking about my Big Questions last December and
(eventually) prompting this!
This is the (current) last article in a [219]series on the mindset of more.
• Previous: [220]Slow craft: writing with a noncapitalist mentality (Part 6)
• Tags [221]agency, [222]blogging, [223]curiosity, [224]decision making,
[225]novelty, [226]Oliver Burkeman, [227]play, [228]process, [229]reading,
[230]research
By Tracy Durnell
Writer and designer in the Seattle area. Reach me at tracy@tracydurnell.com or
@tracy@notes.tracydurnell.com. She/her.
4 replies on “What to read? Big questions as filter and frame (Part 7)”
[234]Jay says: @ [235]thejaymo.net
[236]August 17, 2025 at 12:06 pm
The summer is waning, you can feel it in the mornings, the dog days are over,
and its getting noticeably darker earlier in the evenings
[238]Joe Crawford says: @ [239]artlung.com
[240]August 17, 2025 at 7:12 pm
What to read? Big questions as filter and frame (Part 7)
[242]Ruben Verweij says: @ [243]kedara.eu
[244]August 29, 2025 at 7:20 am
What to read? Big questions as filter and frame (Part 7)
Tracy Durnell
I love Tracys idea of defining personal Big Questions. She uses these
questions as a basis, to decide what shell read and write about (and
crucially, what not). Ill definitely think about what my Big Questions are.
[246]Tracy Durnell says: @ [247]tracydurnell.com
[248]September 6, 2025 at 12:52 pm
I saw someone post a (kind) reminder to go back and read your saved for later
articles. Im here to tell you you dont have…
References:
[1] https://tracydurnell.com/2025/08/16/what-to-read-big-questions/#site-content
[3] https://tracydurnell.com/
[5] https://tracydurnell.com/mind-garden/
[6] https://tracydurnell.com/mind-garden/
[7] https://tracydurnell.com/category/featured/
[8] https://tracydurnell.com/kind/article/
[9] https://tracydurnell.com/mind-garden/index/
[10] https://notes.tracydurnell.com/
[11] https://tracydurnell.com/mind-garden/links-to-blog-about/
[12] https://tracydurnell.com/questions/
[13] https://tracydurnell.com/questions/future-of-the-internet/
[14] https://tracydurnell.com/questions/information-diet/
[15] https://tracydurnell.com/questions/culture/
[16] https://tracydurnell.com/questions/transforming-capitalism/
[17] https://tracydurnell.com/questions/resisting-fascism/
[18] https://tracydurnell.com/questions/feminism/
[19] https://tracydurnell.com/questions/thinking-better/
[20] https://tracydurnell.com/questions/effective-creative-processes/
[21] https://tracydurnell.com/questions/writing-fiction/
[22] https://tracydurnell.com/about/
[23] https://tracydurnell.com/about/
[24] https://tracydurnell.com/start-here/
[25] https://tracydurnell.com/now/
[26] https://tracydurnell.com/category/weeknotes/
[27] https://tracydurnell.com/pages/
[28] https://tracydurnell.com/reading/
[29] https://tracydurnell.com/reading/read-in-2025/
[30] https://tracydurnell.com/reading/
[31] https://tracydurnell.com/kind/read/
[32] https://tracydurnell.com/listening/
[33] https://tracydurnell.com/listening/listened-in-2025/
[34] https://tracydurnell.com/listening/birthday-playlists/
[35] https://tracydurnell.com/listening/best-of-year-playlists/
[36] https://tracydurnell.com/listening/favorite-albums/
[37] https://tracydurnell.com/recipes/
[38] https://tracydurnell.com/recipes/
[39] https://tracydurnell.com/recipes/recipes-to-try/
[40] https://tracydurnell.com/resources/roundups/
[41] https://tracydurnell.com/blogroll/
[42] https://tracydurnell.com/blogroll/interesting-people/
[43] https://tracydurnell.com/blogroll/cool-artists/
[44] https://tracydurnell.com/blogroll/neat-websites/
[45] https://tracydurnell.com/resources/shopping/
[46] https://tracydurnell.com/resources/graphic-design-resources/
[53] https://tracydurnell.com/mind-garden/
[55] https://tracydurnell.com/mind-garden/
[56] https://tracydurnell.com/category/featured/
[57] https://tracydurnell.com/kind/article/
[58] https://tracydurnell.com/mind-garden/index/
[59] https://notes.tracydurnell.com/
[60] https://tracydurnell.com/mind-garden/links-to-blog-about/
[61] https://tracydurnell.com/questions/
[63] https://tracydurnell.com/questions/future-of-the-internet/
[64] https://tracydurnell.com/questions/information-diet/
[65] https://tracydurnell.com/questions/culture/
[66] https://tracydurnell.com/questions/transforming-capitalism/
[67] https://tracydurnell.com/questions/resisting-fascism/
[68] https://tracydurnell.com/questions/feminism/
[69] https://tracydurnell.com/questions/thinking-better/
[70] https://tracydurnell.com/questions/effective-creative-processes/
[71] https://tracydurnell.com/questions/writing-fiction/
[72] https://tracydurnell.com/about/
[74] https://tracydurnell.com/about/
[75] https://tracydurnell.com/start-here/
[76] https://tracydurnell.com/now/
[77] https://tracydurnell.com/category/weeknotes/
[78] https://tracydurnell.com/pages/
[79] https://tracydurnell.com/reading/
[81] https://tracydurnell.com/reading/read-in-2025/
[82] https://tracydurnell.com/reading/
[83] https://tracydurnell.com/kind/read/
[84] https://tracydurnell.com/listening/
[86] https://tracydurnell.com/listening/listened-in-2025/
[87] https://tracydurnell.com/listening/birthday-playlists/
[88] https://tracydurnell.com/listening/best-of-year-playlists/
[89] https://tracydurnell.com/listening/favorite-albums/
[90] https://tracydurnell.com/recipes/
[92] https://tracydurnell.com/recipes/
[93] https://tracydurnell.com/recipes/recipes-to-try/
[94] https://tracydurnell.com/resources/roundups/
[96] https://tracydurnell.com/blogroll/
[97] https://tracydurnell.com/blogroll/interesting-people/
[98] https://tracydurnell.com/blogroll/cool-artists/
[99] https://tracydurnell.com/blogroll/neat-websites/
[100] https://tracydurnell.com/resources/shopping/
[101] https://tracydurnell.com/resources/graphic-design-resources/
[102] https://tracydurnell.com/mind-garden/
[104] https://tracydurnell.com/mind-garden/
[105] https://tracydurnell.com/category/featured/
[106] https://tracydurnell.com/kind/article/
[107] https://tracydurnell.com/mind-garden/index/
[108] https://notes.tracydurnell.com/
[109] https://tracydurnell.com/mind-garden/links-to-blog-about/
[110] https://tracydurnell.com/questions/
[112] https://tracydurnell.com/questions/future-of-the-internet/
[113] https://tracydurnell.com/questions/information-diet/
[114] https://tracydurnell.com/questions/culture/
[115] https://tracydurnell.com/questions/transforming-capitalism/
[116] https://tracydurnell.com/questions/resisting-fascism/
[117] https://tracydurnell.com/questions/feminism/
[118] https://tracydurnell.com/questions/thinking-better/
[119] https://tracydurnell.com/questions/effective-creative-processes/
[120] https://tracydurnell.com/questions/writing-fiction/
[121] https://tracydurnell.com/about/
[123] https://tracydurnell.com/about/
[124] https://tracydurnell.com/start-here/
[125] https://tracydurnell.com/now/
[126] https://tracydurnell.com/category/weeknotes/
[127] https://tracydurnell.com/pages/
[128] https://tracydurnell.com/reading/
[130] https://tracydurnell.com/reading/read-in-2025/
[131] https://tracydurnell.com/reading/
[132] https://tracydurnell.com/kind/read/
[133] https://tracydurnell.com/listening/
[135] https://tracydurnell.com/listening/listened-in-2025/
[136] https://tracydurnell.com/listening/birthday-playlists/
[137] https://tracydurnell.com/listening/best-of-year-playlists/
[138] https://tracydurnell.com/listening/favorite-albums/
[139] https://tracydurnell.com/recipes/
[141] https://tracydurnell.com/recipes/
[142] https://tracydurnell.com/recipes/recipes-to-try/
[143] https://tracydurnell.com/resources/roundups/
[145] https://tracydurnell.com/blogroll/
[146] https://tracydurnell.com/blogroll/interesting-people/
[147] https://tracydurnell.com/blogroll/cool-artists/
[148] https://tracydurnell.com/blogroll/neat-websites/
[149] https://tracydurnell.com/resources/shopping/
[150] https://tracydurnell.com/resources/graphic-design-resources/
[151] https://tracydurnell.com/category/featured/
[152] https://tracydurnell.com/category/learning/
[153] https://tracydurnell.com/category/meta/
[154] https://tracydurnell.com/category/writing/
[155] https://tracydurnell.com/author/tracyadmin/
[156] https://tracydurnell.com/2025/08/16/what-to-read-big-questions/
[157] https://tracydurnell.com/2025/08/16/what-to-read-big-questions/#comments
[158] https://tracydurnell.com/2024/12/30/mindset-of-more/
[159] https://tracydurnell.com/2025/02/23/choosing-my-pace-by-shaping-my-thinking-spaces/
[160] https://hardlyworking1.substack.com/p/hoarding-type-scrolling
[161] https://www.localbreadbaker.com/p/research-as-a-way-of-life
[162] https://tracydurnell.com/2025/01/04/disrupting-my-reading-habits/
[163] https://tracydurnell.com/2025/02/23/choosing-my-pace-by-shaping-my-thinking-spaces/
[164] https://tracydurnell.com/2023/11/07/in-algorithm-we-trust/
[165] https://tracydurnell.com/2022/12/17/algorithmic-recommendations-create-curiosity-ruts/
[166] https://tracydurnell.com/2021/11/11/breaking-out-of-what-the-algorithm-feeds-you/
[167] https://tracydurnell.com/2025/01/04/disrupting-my-reading-habits/
[168] https://tracydurnell.com/2023/03/10/reclaiming-intentionality-in-browsing-and-blogging/
[169] https://tracydurnell.com/questions/
[170] https://www.personalcanon.com/p/research-as-leisure-activity
[171] https://tracydurnell.com/questions/
[172] https://tracydurnell.com/2023/03/11/designing-your-system-for-creativity-inputs/
[173] https://nesslabs.com/favorite-problems
[174] https://tracydurnell.com/2024/01/02/challenging-myself-playfully/
[175] https://tracydurnell.com/2022/03/16/shapes-of-reading/
[176] https://allenpike.com/2023/have-a-research-question
[177] https://tracydurnell.com/questions/
[178] https://tracydurnell.com/blogroll/
[179] https://searchmysite.net/
[180] https://marginalia-search.com/
[181] https://aethermug.com/posts/the-luxurious-pain-of-using-my-time
[182] https://tracydurnell.com/2022/05/13/article-pairing-stop-reading-the-news/
[183] https://tracydurnell.com/2023/05/04/discerning-the-value-of-note-taking/
[184] https://tracydurnell.com/2021/03/19/tbr-stream/
[185] https://tracydurnell.com/2025/01/04/disrupting-my-reading-habits/
[186] https://ckarchive.com/b/zlughnhk8772ma7qrr9qehwzgng00f6
[187] https://tracydurnell.com/2023/03/08/read-what-works/
[188] https://lithub.com/how-small-town-public-libraries-enrich-the-generative-research-process/
[189] https://tracydurnell.com/2025/04/28/browsing-as-thinking/
[190] https://kcls.bibliocommons.com/v2/list/display/222055327/2828776997
[191] https://www.personalcanon.com/p/research-as-leisure-activity
[192] https://tracydurnell.com/2023/05/19/foraging-for-insights/
[193] https://notes.tracydurnell.com/2025/04/03/library-haul-peacock-and-vine.html
[194] https://tracydurnell.com/2025/04/05/read-in-harmony-with-nature/
[195] https://tracydurnell.com/2025/06/24/read-the-arts-and-crafts-movement/
[196] https://tracydurnell.com/2025/07/11/read-dangerous-fictions/
[197] https://tracydurnell.com/2025/06/02/generative-ai-and-the-business-borg-aesthetic/
[198] https://notes.tracydurnell.com/2025/05/06/library-haul-the-plenitude-of.html
[199] https://tracydurnell.com/2025/05/28/read-more-than-words/
[200] https://tracydurnell.com/2025/05/17/read-the-plenitude-of-distraction/
[201] https://notes.tracydurnell.com/2025/05/23/library-roundup-on-culture-and.html
[202] https://tracydurnell.com/2025/05/29/read-pretentiousness/
[203] https://tracydurnell.com/2025/08/15/read-the-crisis-of-culture/
[204] https://notes.tracydurnell.com/2025/06/17/library-haul-planning-a-blog.html
[205] https://tracydurnell.com/2025/07/17/read-what-we-see-when-we-read/
[206] https://writingslowly.com/2025/03/10/roland-barthes-on-the-purpose.html
[207] https://tracydurnell.com/2023/07/31/read-stolen-focus/
[208] https://tracydurnell.com/2025/05/17/read-the-plenitude-of-distraction/
[209] https://en.wikipedia.org/wiki/Default_mode_network
[210] https://tracydurnell.com/2024/12/17/in-praise-of-the-hundred-page-idea/
[211] https://zettelkasten.de/posts/dont-rely-on-source-have-faith-in-yourself/
[212] https://tracydurnell.com/2025/05/02/read-the-pleasure-of-the-text/
[213] https://tracydurnell.com/2021/12/08/writing-metrics-and-capitalism/
[214] https://lithub.com/how-small-town-public-libraries-enrich-the-generative-research-process/
[215] https://austinkleon.com/2019/04/04/more-search-less-feed/
[216] https://tracydurnell.com/2023/12/18/choosing-between-ideas-for-blog-posts/
[217] https://tracydurnell.com/2023/09/27/how-i-approach-crafting-a-blog-post/
[218] https://jamesg.blog/
[219] https://tracydurnell.com/2024/12/30/mindset-of-more/
[220] https://tracydurnell.com/2025/03/23/slow-craft-writing-noncapitalist-mentality/
[221] https://tracydurnell.com/tag/agency/
[222] https://tracydurnell.com/tag/blogging/
[223] https://tracydurnell.com/tag/curiosity/
[224] https://tracydurnell.com/tag/decision-making/
[225] https://tracydurnell.com/tag/novelty/
[226] https://tracydurnell.com/tag/oliver-burkeman/
[227] https://tracydurnell.com/tag/play/
[228] https://tracydurnell.com/tag/process/
[229] https://tracydurnell.com/tag/reading/
[230] https://tracydurnell.com/tag/research/
[231] https://tracydurnell.com/author/tracyadmin/
[232] https://tracydurnell.com/2025/08/15/weeknotes-aug-9-15-2025/
[233] https://tracydurnell.com/2025/08/18/read-the-last-battle-at-the-end-of-the-world/
[234] https://thejaymo.net/2025/08/17/403-summer-is-waning/
[235] https://thejaymo.net/2025/08/17/403-summer-is-waning/
[236] https://thejaymo.net/2025/08/17/403-summer-is-waning/
[237] https://tracydurnell.com/2025/08/16/what-to-read-big-questions/?replytocom=14439#respond
[238] https://artlung.com/likes/0d0bf55b70f54fd45c3f4fb8fc8a73f4
[239] https://artlung.com/likes/0d0bf55b70f54fd45c3f4fb8fc8a73f4
[240] https://artlung.com/likes/0d0bf55b70f54fd45c3f4fb8fc8a73f4
[241] https://tracydurnell.com/2025/08/16/what-to-read-big-questions/?replytocom=14442#respond
[242] https://kedara.eu/bookmarks/blaugust2025#bm-12
[243] https://kedara.eu/bookmarks/blaugust2025/
[244] https://kedara.eu/bookmarks/blaugust2025/
[245] https://tracydurnell.com/2025/08/16/what-to-read-big-questions/?replytocom=14484#respond
[246] https://tracydurnell.com/
[247] https://tracydurnell.com/2025/09/06/its-ok-to-not-read-your-read-later-backlog/
[248] https://tracydurnell.com/2025/09/06/its-ok-to-not-read-your-read-later-backlog/
[249] https://tracydurnell.com/2025/08/16/what-to-read-big-questions/?replytocom=14560#respond
[250] https://tracydurnell.com/2025/08/16/what-to-read-big-questions/#respond
[262] https://indieweb.org/webmention
[267] https://tracydurnell.com/mind-garden/
[268] https://tracydurnell.com/category/featured/
[269] https://tracydurnell.com/mind-garden/index#categories
[270] https://tracydurnell.com/random
[271] https://tracydurnell.com/2025/09/06/its-ok-to-not-read-your-read-later-backlog/
[272] https://tracydurnell.com/2025/09/05/weeknotes-aug-30-sept-5-2025/
[273] https://tracydurnell.com/2025/09/05/read-the-eye-of-the-heron/
[274] https://tracydurnell.com/2025/09/04/listened-to-ambulette-ive-got-more/
[275] https://tracydurnell.com/2025/09/03/read-virtue-hoarders/
[276] https://tracydurnell.com/
[277] https://micro.blog/tracydurnell
[278] https://micro.blog/tracydurnell?remote_follow=1
[281] https://tracydurnell.com/feed/
[282] https://tracydurnell.com/kind/read/feed
[283] https://tracydurnell.com/comments/feed/
[284] https://tracydurnell.com/privacy-policy/
[285] https://tracydurnell.com/
[286] https://tracydurnell.com/privacy-policy/
[287] https://wordpress.org/
[288] https://tracydurnell.com/2025/08/16/what-to-read-big-questions/#site-header

View File

@@ -0,0 +1,133 @@
Maurice Parker
Follow [2]@vincode on Micro.blog.
August 11, 2025
Zavala Will Always Be Free
My promise to you.
I have every intention of maintaining and updating Zavala for as long as I am
able. Im also committed to keeping it free. I have no intention of getting you
hooked on using it and then starting to charge a subscription.
To show I am serious about this, Zavala is Open Source software released under
the MIT license. This means that any other developer can take the years of work
that I have in Zavala and make a competing outliner from it should I start
charging for it. Given how small and competitive the outliner market is, I
dont stand much of a chance of making any money by going commercial. After
all, I could be competing with my own past work.
What if I get run over by a bus?
Since Zavala is Open Source someone could pick up the project and continue to
update it. Worst case scenario, some enterprising independent developer could
try to make a commercial product out of it. I dont see much money in the
endeavor, but others may see it differently.
Why dont I charge for Zavala or accept donations?
Funny story: I fully intended to when I started writing it. After doing some
competitive analysis of the Mac-only outliner market, I realized there wasnt
much money there. So little, in fact, that it isnt enough to motivate me to do
the business side when Id rather be coding.
Let me break it down. Up front payments are a dead-end these days. I would have
to add a free tier, in-app purchases, and maybe a subscription option to the
app. That means more coding. Then I need to incorporate a business of some kind
and do all the regular bookkeeping associated with it. That would be payroll
taxes, quarterly and annual tax filings, etc… I used to own my own software
consulting business and really dont want to do that stuff again.
But if I thought I could make it up on volume, that might make it worthwhile,
right? The simple truth is that most computer users dont know what an outliner
is, much less how useful one can be. Even those who do rarely need one on a
daily basis. Zavala is free and has been all the years that it has been
available in the App Store and I couldnt make it on the number of users I have
now. That number would probably drop to about zero if I were to start charging.
Could I get more volume by marketing Zavala? Sure, but that is another business
thing that costs time and money, that I dont want to do.
There is an upside to not having money involved when you write software. I
dont have to add features just to drive an upgrade cycle. With commercial
software, you constantly have to deliver upgrades to keep a steady income
regardless of whether you are subscription-based or charging up front. I dont want
Zavala to become bloatware. I dont want to add features that I dont believe
add core value, just to keep the money coming in.
Same goes for donations. I dont accept donations because I dont want to feel
obligated to implement a feature that a donor may want, but that I dont think
belongs in Zavala. I would rather accept feature requests on an equal basis
from all users and decide which to implement on the merit of the idea, rather
than who gave me money.
Why write Zavala at all?
I retired early after a successful career as a software consultant. I really
liked writing software, I just didnt always like the work I had to do. Now I
have the freedom to craft software how I see fit and only work on projects that
I am interested in.
The way I usually explain it is like this. Imagine you made furniture your
whole life, but your employer only gave you pallet wood to use and half the
time needed to make a piece. You were good at it and loved furniture, but were
unfulfilled at your job until you retired. Now you can make furniture using
walnut and take the time needed to make something you are proud of.
How can you help, you ask?
Please, please email me with bug reports using the Provide Feedback option
under Help (in Settings on iOS). I take them seriously and fix them as fast as
I can. I do test Zavala as rigorously as I can. Unfortunately, it is the nature
of software that a developer will never be able to predict every way that users
will use an app. Production bugs do happen. The best we can do is squash them
as fast as possible.
References:
[1] https://vincode.io/
[2] https://micro.blog/vincode
[3] https://vincode.io/
[4] https://vincode.io/archive/
[5] https://vincode.io/about/
[6] https://vincode.io/github-profile/
[7] https://vincode.io/zavala/
[8] https://vincode.io/feed-compass/
[9] https://vincode.io/feed-curator/
[10] https://micro.blog/vincode
[11] https://vincode.io/2025/08/11/zavala-will-always-be-free.html#
[13] https://vincode.io/
[14] https://vincode.io/
[15] https://vincode.io/2025/08/11/zavala-will-always-be-free.html#
[16] https://vincode.io/2025/08/11/zavala-will-always-be-free.html
[17] https://vincode.io/2025/08/11/zavala-will-always-be-free.html#
[20] https://vincode.io/2025/08/11/zavala-will-always-be-free.html#
[21] https://micro.blog/vincode

View File

@@ -0,0 +1,557 @@
[2]The New Yorker
[19]Open Questions
What if A.I. Doesnt Get Much Better Than This?
GPT-5, a new release from OpenAI, is the latest product to suggest that
progress on large language models has stalled.
By [20]Cal Newport
August 12, 2025
Illustration by Shira Inbar
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
For this weeks Open Questions column, Cal Newport is filling in for Joshua
Rothman.
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
Much of the euphoria and dread swirling around todays artificial-intelligence
technologies can be traced back to January, 2020, when a team of researchers at
OpenAI published a thirty-page [23]report titled “Scaling Laws for Neural
Language Models.” The team was led by the A.I. researcher Jared Kaplan, and
included Dario Amodei, who is now the C.E.O. of Anthropic. They investigated a
fairly nerdy question: What happens to the performance of language models when
you increase their size and the intensity of their training?
Back then, many machine-learning experts thought that, after they had reached a
certain size, language models would effectively start memorizing the answers to
their training questions, which would make them less useful once deployed. But
the OpenAI paper argued that these models would only get better as they grew,
and indeed that such improvements might follow a power law—an aggressive curve
that resembles a hockey stick. The implication: if you keep building larger
language models, and you train them on larger data sets, theyll start to get
shockingly good. A few months after the paper, OpenAI seemed to validate the
scaling law by releasing GPT-3, which was ten times larger—and leaps and bounds
better—than its predecessor, GPT-2.
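The power-law relationship the paper describes can be sketched numerically. This is a toy illustration, not the authors code: the constants (N_c ≈ 8.8 × 10^13, α ≈ 0.076) are the model-size fits reported in the scaling-laws paper, and the parameter counts below are illustrative round numbers.

```python
# Toy sketch of the Kaplan-style scaling law: predicted test loss falls as a
# power law of model size, L(N) = (N_c / N) ** alpha. The constants are the
# paper's reported fits for model size; this is an illustration, not the
# authors' actual code.

def scaling_law_loss(n_params: float, n_c: float = 8.8e13, alpha: float = 0.076) -> float:
    """Predicted test loss for a model with n_params non-embedding parameters."""
    return (n_c / n_params) ** alpha

# Tenfold jumps in parameter count, roughly the GPT-2 -> GPT-3 scale.
sizes = [1.5e9, 15e9, 150e9]
losses = [scaling_law_loss(n) for n in sizes]

# Each 10x increase in size multiplies predicted loss by the same constant
# factor (10 ** -0.076, about 0.84), so the curve is a straight line on a
# log-log plot.
ratios = [losses[i + 1] / losses[i] for i in range(len(losses) - 1)]
```

On a log-log plot this is a straight line: every tenfold increase in parameters shaves off the same fixed fraction of the predicted loss, which is what made scaling look so predictable before it faltered.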
Suddenly, the theoretical idea of artificial general intelligence, which
performs as well as or better than humans on a wide variety of tasks, seemed
tantalizingly close. If the scaling law held, A.I. companies might achieve
A.G.I. by pouring more money and computing power into language models. Within a
year, [24]Sam Altman, the chief executive at OpenAI, published a blog post
titled “Moores Law for Everything,” which argued that A.I. will take over
“more and more of the work that people now do” and create unimaginable wealth
for the owners of capital. “This technological revolution is unstoppable,” he
wrote. “The world will change so rapidly and drastically that an equally
drastic change in policy will be needed to distribute this wealth and enable
more people to pursue the life they want.”
Its hard to overstate how completely the A.I. community came to believe that
it would inevitably scale its way to A.G.I. In 2022, Gary Marcus, an A.I.
entrepreneur and an emeritus professor of psychology and neural science at
N.Y.U., pushed back on Kaplans paper, noting that “the so-called scaling laws
arent universal laws like gravity but rather mere observations that might not
hold forever.” The negative response was fierce and swift. “No other essay I
have ever written has been ridiculed by as many people, or as many famous
people, from Sam Altman and Greg Brockman to Yann LeCun and Elon Musk,” Marcus
later reflected. He recently told me that his remarks essentially
“excommunicated” him from the world of machine learning. Soon, ChatGPT would
reach a hundred million users faster than any digital service in history; in
March, 2023, OpenAIs next release, GPT-4, vaulted so far up the scaling curve
that it inspired a Microsoft research paper titled “Sparks of Artificial
General Intelligence.” Over the following year, venture-capital spending on
A.I. jumped by eighty per cent.
After that, however, progress seemed to slow. OpenAI did not unveil a new
blockbuster model for more than two years, instead focussing on specialized
releases that became hard for the general public to follow. Some voices within
the industry began to wonder if the A.I. scaling law was starting to falter.
“The 2010s were the age of scaling, now were back in the age of wonder and
discovery once again,” Ilya Sutskever, one of the companys founders, told
Reuters in November. “Everyone is looking for the next thing.” A
contemporaneous TechCrunch article summarized the general mood: “Everyone now
seems to be admitting you cant just use more compute and more data while
pretraining large language models and expect them to turn into some sort of
all-knowing digital god.” But such observations were largely drowned out by the
headline-generating rhetoric of other A.I. leaders. “A.I. is starting to get
better than humans at almost all intellectual tasks,” Amodei recently told
Anderson Cooper. In an interview with Axios, he predicted that half of
entry-level white-collar jobs might be “wiped out” in the next one to five
years. This summer, both Altman and [25]Mark Zuckerberg, of Meta, claimed that
their companies were close to developing superintelligence.
Then, last week, OpenAI finally released GPT-5, which many had hoped would
usher in the next significant leap in A.I. capabilities. Early reviewers found
some features to like. When a popular tech YouTuber, Mrwhosetheboss, asked it
to create a chess game that used Pokémon as pieces, he got a significantly
better result than when he used o4-mini-high, an industry-leading coding
model; he also discovered that GPT-5 could write a more effective script for
his YouTube channel than GPT-4o. Mrwhosetheboss was particularly enthusiastic
that GPT-5 will automatically route queries to a model suited for the task,
instead of requiring users to manually pick the model they want to try. Yet he
also learned that GPT-4o was clearly more successful at generating a YouTube
thumbnail and a birthday-party invitation—and he had no trouble inducing GPT-5
to make up fake facts. Within hours, users began expressing disappointment with
the new model on the r/ChatGPT subreddit. One post called it the “biggest piece
of garbage even as a paid user.” In an Ask Me Anything (A.M.A.) session, Altman
and other OpenAI engineers found themselves on the defensive, addressing
complaints. Marcus summarized the release as “overdue, overhyped and
underwhelming.”
In the aftermath of GPT-5s launch, it has become more difficult to take
bombastic predictions about A.I. at face value, and the views of critics like
Marcus seem increasingly moderate. Such voices argue that this technology is
important, but not poised to drastically transform our lives. They challenge us
to consider a different vision for the near-future—one in which A.I. might not
get much better than this.
OpenAI didnt want to wait nearly two and a half years to release GPT-5.
According to The Information, by the spring of 2024, Altman was telling
employees that their next major model, code-named Orion, would be significantly
better than GPT-4. By the fall, however, it became clear that the results were
disappointing. “While Orions performance ended up exceeding that of prior
models,” The Information reported in November, “the increase in quality was far
smaller compared with the jump between GPT-3 and GPT-4.”
Orions failure helped cement the creeping fear within the industry that the
A.I. scaling law wasnt a law after all. If building ever-bigger models was
yielding diminishing returns, the tech companies would need a new strategy to
strengthen their A.I. products. They soon settled on what could be described as
“post-training improvements.” The leading large language models all go through
a process called pre-training in which they essentially digest the entire
internet to become smart. But it is also possible to refine models later, to
help them better make use of the knowledge and abilities they have absorbed.
One post-training technique is to apply a machine-learning tool, reinforcement
learning, to teach a pre-trained model to behave better on specific types of
tasks. Another enables a model to spend more computing time generating
responses to demanding queries.
A useful metaphor here is a car. Pre-training can be said to produce the
vehicle; post-training soups it up. In the scaling-law paper, Kaplan and his
co-authors predicted that as you expand the pre-training process you increase
the power of the cars you produce; if GPT-3 was a sedan, GPT-4 was a sports
car. Once this progression faltered, however, the industry turned its attention
to helping the cars that theyd already built to perform better. Post-training
techniques turned engineers into mechanics.
Tech leaders were quick to express a hope that a post-training approach would
improve their products as quickly as traditional scaling had. “We are seeing
the emergence of a new scaling law,” Satya Nadella, the C.E.O. of Microsoft,
said at a conference last fall. The venture capitalist Anjney Midha similarly
spoke of a “second era of scaling laws.” In December, OpenAI released o1, which
used post-training techniques to make the model better at step-by-step
reasoning and at writing computer code. Soon the company had unveiled o3-mini,
o3-mini-high, o4-mini, o4-mini-high, and o3-pro, each of which was souped up
with a bespoke combination of post-training techniques.
Other A.I. companies pursued a similar pivot. Anthropic experimented with
post-training improvements in a February release of Claude 3.7 Sonnet, and then
made them central to its Claude 4 family of models. [26]Elon Musks xAI
continued to chase a scaling strategy until its wintertime launch of Grok 3,
which was pre-trained on an astonishing 100,000 H100 G.P.U. chips—many times
the computational power that was reportedly used to train GPT-4. When Grok 3
failed to outperform its competitors significantly, the company embraced
post-training approaches to develop Grok 4. GPT-5 fits neatly into this
trajectory. Its less a brand-new model than an attempt to refine recent
post-trained products and integrate them into a single package.
Has this post-training approach put us back on track toward something like
A.G.I.? OpenAIs announcement for GPT-5 included more than two dozen charts and
graphs, on measures such as “Aider Polyglot Multi-language code editing” and
“ERQA Multimodal spatial reasoning,” to quantify how much the model outperforms
its predecessors. Some A.I. benchmarks capture useful advances. GPT-5 scored
higher than previous models on benchmarks focussed on programming, and early
reviews seemed to agree that it produces better code. New models also write in
a more natural and fluid way, and this is reflected in the benchmarks as well.
But these changes now feel narrow—more like the targeted improvements youd
expect from a software update than like the broad expansion of capabilities in
earlier generative-A.I. breakthroughs. You didnt need a bar chart to recognize
that GPT-4 had leaped ahead of anything that had come before.
Other benchmarks might not measure what they claim. Starting with the release
of o1, A.I. companies have touted progress on measures of step-by-step
reasoning. But in June Apple researchers released a paper titled “The Illusion
of Thinking,” which found that state-of-the-art “large reasoning models”
demonstrated “performance collapsing to zero” when the complexity of puzzles
was extended beyond a modest threshold. Reasoning models, which include
o3-mini, Claude 3.7 Sonnets “thinking” mode, and DeepSeek-R1, “still fail to
develop generalizable problem-solving capabilities,” the authors wrote. Last
week, researchers at Arizona State University reached an even blunter
conclusion: what A.I. companies call reasoning “is a brittle mirage that
vanishes when it is pushed beyond training distributions.” Beating these
benchmarks is different from, say, reasoning through the types of daily
problems we face in our jobs. “I dont hear a lot of companies using A.I.
saying that 2025 models are a lot more useful to them than 2024 models, even
though the 2025 models perform better on benchmarks,” Marcus told me.
Post-training improvements dont seem to be strengthening models as thoroughly
as scaling once did. A lot of utility can come from souping up your Camry, but
no amount of tweaking will turn it into a Ferrari.
I recently asked Marcus and two other skeptics to predict the impact of
generative A.I. on the economy in the coming years. “This is a
fifty-billion-dollar market, not a trillion-dollar market,” Ed Zitron, a
technology analyst who hosts the “Better Offline” podcast, told me. Marcus
agreed: “A fifty-billion-dollar market, maybe a hundred.” The linguistics
professor Emily Bender, who co-authored a well-known critique of early language
models, told me that “the impacts will depend on how many in the management
class fall for the hype from the people selling this tech, and retool their
workplaces around it.” She added, “The more this happens, the worse off
everyone will be.” Such views have been portrayed as unrealistic—Nate Silver
once replied to an Ed Zitron tweet by writing, “old man yells at cloud
vibes”—while we readily accepted the grandiose visions of tech C.E.O.s. Maybe
thats starting to change.
If these moderate views of A.I. are right, then in the next few years A.I.
tools will make steady but gradual advances. Many people will use A.I. on a
regular but limited basis, whether to look up information or to speed up
certain annoying tasks, such as summarizing a report or writing the rough draft
of an event agenda. Certain fields, like programming and academia, will change
dramatically. A minority of professions, such as voice acting and social-media
copywriting, might essentially disappear. But A.I. may not massively disrupt
the job market, and more hyperbolic ideas like superintelligence may come to
seem unserious.
Continuing to buy into the A.I. hype might bring its own perils. In a [27]
recent article, Zitron pointed out that about thirty-five per cent of U.S.
stock-market value—and therefore a large share of many retirement portfolios—is
currently tied up in the so-called Magnificent Seven technology companies.
According to Zitrons analysis, these firms spent five hundred and sixty
billion dollars on A.I.-related capital expenditures in the past eighteen
months, while their A.I. revenues were only about thirty-five billion. “When
you look at these numbers, you feel insane,” Zitron told me.
Even the figures we might call A.I. moderates, however, dont think the public
should let its guard down. Marcus believes that we were misguided to place so
much emphasis on generative A.I., but he also thinks that, with new techniques,
A.G.I. could still be attainable as early as the twenty-thirties. Even if
language models never automate our jobs, the renewed interest and investment in
A.I. might lead toward more complicated solutions, which could. In the
meantime, we should use this reprieve to prepare for disruptions that might
still loom—by crafting effective A.I. regulations, for example, and by
developing the nascent field of digital ethics.
The appendices of the scaling-law paper, from 2020, included a section called
“Caveats,” which subsequent coverage tended to miss. “At present we do not have
a solid theoretical understanding for any of our proposed scaling laws,” the
authors wrote. “The scaling relations with model size and compute are
especially mysterious.” In practice, the scaling laws worked until they didnt.
The whole enterprise of teaching computers to think remains mysterious. We
should proceed with less hubris and more care. ♦
An earlier version of this article included an inaccurate transcription of Greg
Brockmans name.
[36]Cal Newport is a contributing writer for The New Yorker and a professor of
computer science at Georgetown University.
References:
[1] https://www.newyorker.com/culture/open-questions/what-if-ai-doesnt-get-much-better-than-this#main-content
[2] https://www.newyorker.com/
[3] https://www.newyorker.com/newsletters?sourceCode=navbar
[4] https://www.newyorker.com/search
[5] https://www.newyorker.com/latest
[6] https://www.newyorker.com/news
[7] https://www.newyorker.com/culture
[8] https://www.newyorker.com/fiction-and-poetry
[9] https://www.newyorker.com/humor
[10] https://www.newyorker.com/magazine
[11] https://www.newyorker.com/crossword-puzzles-and-games
[12] https://www.newyorker.com/video
[13] https://www.newyorker.com/podcasts
[14] https://www.newyorker.com/goings-on
[15] https://store.newyorker.com/
[16] https://www.newyorker.com/100
[18] https://www.newyorker.com/
[19] https://www.newyorker.com/culture/open-questions
[20] https://www.newyorker.com/contributors/cal-newport
[23] https://arxiv.org/abs/2001.08361
[24] https://www.newyorker.com/books/under-review/can-sam-altman-be-trusted-with-the-future
[25] https://www.newyorker.com/culture/infinite-scroll/mark-zuckerberg-says-social-media-is-over
[26] https://www.newyorker.com/tag/elon-musk
[27] https://www.wheresyoured.at/the-haters-gui/
[28] https://www.newyorker.com/magazine/2024/03/04/a-professor-claimed-to-be-native-american-did-she-know-she-wasnt
[29] https://www.newyorker.com/magazine/2024/09/09/ina-garten-profile
[30] https://www.newyorker.com/magazine/2024/06/17/kanye-west-tadao-ando-beach-house-malibu
[31] https://www.newyorker.com/culture/annals-of-inquiry/why-so-many-people-are-going-no-contact-with-their-parents
[32] https://www.newyorker.com/magazine/2024/07/01/how-a-homegrown-teen-gang-punctured-the-image-of-an-upscale-community
[33] https://www.newyorker.com/magazine/1939/03/18/the-secret-life-of-walter-mitty-james-thurber
[34] https://www.newyorker.com/newsletter/daily
[35] https://www.newyorker.com/contributors/cal-newport
[36] https://www.newyorker.com/contributors/cal-newport

View File

@@ -0,0 +1,532 @@
Shiny and Chrome
“It Was Horrible”: Inside Charlize Theron and Tom Hardy's Mad Max Feud
In an excerpt from Kyle Buchanan's Blood, Sweat & Chrome: The Wild and True
Story of Mad Max: Fury Road, cast and crew recall the feud that nearly derailed
the Oscar-winning film.
By [24]Kyle Buchanan
February 22, 2022
© Warner Bros/Everett Collection.
Mad Max: Fury Road was a critical and commercial triumph, grossing nearly $375
million worldwide and earning 10 Oscar nominations (with six wins). But its
path to the big screen was torturous and winding, as Kyle Buchanan shows in his
oral history Blood, Sweat & Chrome: The Wild and True Story of Mad Max: Fury
Road, out Tuesday. In the exclusive excerpt below, the film's cast and crew
recall one of Fury Road's biggest hurdles: the bad blood between stars Charlize
Theron and Tom Hardy.
Rosie Huntington-Whiteley (“The Splendid Angharad”): It was very interesting to
sit in a truck for four months with Tom and Charlize, who have completely
different approaches to their craft.
Kelly Marcel (screenwriter and friend of Tom Hardy): Tom is very physical and
all over the place and would try very different things. Charlize is cerebral
and very consistent in the way that she approaches a character. They're both
powerhouses, but in their very different ways of working. Which, weirdly, is
why the film works: It's all pouring out on the screen.
George Miller (writer/director, Fury Road): The story is all about
self-preservation: If it's an advantage to you to kill another character, then
you should do it and you don't think twice about it. I think that crept into
the actors.
P. J. Voeten (first assistant director, Fury Road): It seemed to implode in
preproduction. We weren't even shooting and there seemed to be this animosity.
Petrina Hull (production and development executive, Kennedy Miller Mitchell
Films): And as we got into the shoot, those things became difficult.
P. J. Voeten: At some stage, the Wives didn't like Tom, and one day, they
didn't even disguise it: They were just yelling at each other in front of us.
Nicholas Hoult (“Nux”): It was a tense atmosphere at times. It was kind of like
you're on your summer holidays and the adults in the front of the car are
arguing.
Charlize Theron (“Furiosa”): He's right, it was like two parents in the front
of the car. We were either fighting or we were icing each other—I don't know
which one is worse—and they had to deal with it in the back. It was horrible!
We should not have done that; we should have been better. I can own up to that.
Ricky Schamburg (first assistant camera, Fury Road): Tom is very provocative.
Charlize isn't. And it was a clash.
© Warner Bros/Everett Collection.
Richard Norton (“The Prime Imperator”): Tom would want justification for every
bit of choreography, not just in the actual action but in the pre-setup of the
action and everything else. Charlize, her basic want is simple: I just want to
fucking kill him. Let's shoot it.
P. J. Voeten: The day that we were rehearsing the fight scene when they first
meet, you could see the tension in the air. It was unbelievable.
J. Houston Yang (editor, Open Road Entertainment): We get dailies sometimes for
specific sequences if we need to cut a shot longer, and some of that was the
chain-wrench fight by the tanker. And boy fucking howdy, was it clear that
those two people hated each other. They didn't want to touch each other, they
didn't want to look at each other, they wouldn't face each other if the camera
wasn't actively rolling.
Charlize Theron: I don't want to make excuses for bad behavior, but it was a
tough shoot. Now, I have a very clear perspective on what went down. I don't
think I had that clarity when we were making the movie. I was in survival mode;
I was really scared shitless.
George Miller: Many years ago, I had the privilege of working with Jack
Nicholson on Witches of Eastwick, where he was playing the devil. And he said,
“You know, we think as actors that we don't bring it home at night. We think we
just leave it in the trailer when we walk off set. But the truth is, if you're
doing your job properly, you do bring it home.” And that was one of the
dynamics that was happening in the film.
Charlize Theron: Because of my own fear, we were putting up walls to protect
ourselves instead of saying to each other, “Fuck, this is scary for you and
it's scary for me, too. Let's be nice to each other.” We were functioning, in a
weird way, like our characters: Everything was about survival.
Mark Goellnicht (camera operator, Fury Road): Between Tom and Charlize, it was
literally the most contrast I've ever seen between two actors.
Samantha McGrady (key second assistant director, Fury Road): Charlize is the
easiest person to deal with in terms of, Okay, we're ready. Sometimes I would
just call her and say, “We're going to be ready in an hour,” and I knew she
would always get in the car, get her makeup on, and get on set.
Matt Taylor (stunt driver, Fury Road): And when you've got someone like Tom
who's a larrikin and is late and very Method in his performances, just in sheer
personality, there was always going to be a clash.
Tom Clapham (production runner, Fury Road): Tom was more in his trailer a lot
of the time and would come out for the takes—and sometimes not on time, either.
You're like, Come on, it's midnight and we want to go home.
Eventually, veteran producer Denise Di Novi was dispatched to Namibia to
mediate the conflict between the film's two stars.
Charlize Theron: I don't want to rehash things, but it came out of a really bad
moment where things kind of came to blows between me and Tom.
Mark Goellnicht: I remember vividly the day. The call on set was eight o'clock.
Charlize got there right at eight o'clock, sat in the War Rig, knowing that
Tom's never going to be there at eight even though they made a special request
for him to be there on time. He was notorious for never being on time in the
morning. If the call time was in the morning, forget it—he didn't show up.
Ricky Schamburg: Whether that was some kind of power play or not, I don't know,
but it felt deliberately provocative. If you ask me, he kind of knew that it
was really pissing Charlize off, because she's professional and she turns up
really early.
Mark Goellnicht: Gets to nine o'clock, still no Tom. “Charlize, do you want to
get out of the War Rig and walk around, or do you want to . . .” “No, I'm going
to stay here.” She was really going to make a point. She didn't go to the
bathroom, didn't do anything. She just sat in the War Rig.
Natascha Hopkins (stunt double, Fury Road): She was a new mom, and she just
wanted to get to set, work, and take care of her kid.
Mark Goellnicht: Eleven o'clock. She's now in the War Rig, sitting there with
her makeup on and a full costume for three hours. Tom turns up, and he walks
casually across the desert. She jumps out of the War Rig, and she starts
swearing her head off at him, saying, “Fine the fucking cunt a hundred thousand
dollars for every minute that he's held up this crew,” and “How disrespectful
you are!” She was right. Full rant. She screams it out. It's so loud, it's so
windy—he might've heard some of it, but he charged up to her and went, “What
did you say to me?”
He was quite aggressive. She really felt threatened, and that was the turning
point, because then she said, “I want someone as protection.” She then had a
producer that was assigned to be with her all the time.
© Warner Bros/Everett Collection.
Charlize Theron: It got to a place where it was kind of out of hand, and there
was a sense that maybe sending a woman producer down could maybe equalize some
of it, because I didn't feel safe.
Kelly Marcel: There's something that you can't put your finger on unless you
are inside it and you know what went on there. It was a really intense,
intense, intense period in an intense, intense place. Family was made there,
and family loves and hates each other.
Charlize Theron: I kind of put my foot down. George then said, “Okay, well, if
Denise comes . . .” He was open to it and that kind of made me breathe a little
bit, because it felt like I would have another woman understanding what I was
up against.
P. J. Voeten: She was sent out to help try and smooth that relationship out. As
nice a lady as she was, nobody could really turn it around because it was that
entrenched. Whatever it was that they were going through wasn't going to get
fixed easily.
Charlize Theron: She was parked in the production office, and she was checking
in with me and we would talk. But when I was on set, I still felt pretty naked
and alone.
Kelly Marcel: Doug [Mitchell, producer of Fury Road] wouldn't let Denise
actually be on the set. He's a bulldog, he's going to protect George no matter
what, at all costs. And you can send your producer, you can do whatever you
want, but if you've got Doug standing there, there's absolutely no point unless
he wants you there. He was never going to allow anybody to interrupt this
world, no matter how fraught the world was.
Charlize Theron: Looking back on where we are in the world now, given what
happened between me and Tom, it would have been smart for us to bring a female
producer in. You understand the needs of a director who wants to protect his
set, but when push comes to shove and things get out of hand, you have to be
able to think about that in a bigger sense. That's where we could have done
better, if George trusted that nobody was going to come and fuck with his
vision but was just going to come and help mediate situations. I think he
didn't want any interference, and there were several weeks on that movie where
I wouldn't know what was going to come my way, and that's not necessarily a
nice thing to feel when you're on your job. It was a little bit like walking on
thin ice.
Buy Blood, Sweat & Chrome on [27]Amazon or [28]Bookshop.
George Miller: There are things that I feel disappointment with about the
process. Looking back, if I had to do it again, I would probably be more
mindful.
Tom Hardy (“Max”): In hindsight, I was in over my head in many ways. The
pressure on both of us was overwhelming at times. What she needed was a better,
perhaps more experienced partner in me. That's something that can't be faked.
I'd like to think that now that I'm older and uglier, I could rise to that
occasion.
Mark Goellnicht: That scene where you see Tom with Charlize on the bike and all
the Vuvalini and the Wives behind, intermingled—that scene was probably the
biggest change in seeing Tom really soften to Charlize in real life. We were
all unprepared for how he performed that, and then I walked off and Charlize
was walking back, and I said, “Geez, Charlize, that was amazing. Did a light
switch go off? He was great.” She was quite taken aback by it, too. But it was
great because that's when you can see that Max and Furiosa really are a team.
The day we shot that, I got such goose bumps. You really felt this change in
their mood. Just the way that they were talking to each other when they were
off camera, I went, What the fuck? Who gave them molly? They were really civil
and nice. He was a different person by the end—a lot easier to deal with, a lot
more cooperative, more compassionate. He's such a Method actor that I think he
took the arc in the literal sense.
Petrina Hull: Overall, the feeling of their relationship did mirror the arc of
the characters, and that they had that prickly thing of two people trying to
understand each other and clashing and then somehow learning a mutual sort of
respect, ultimately. That's what Max and Furiosa come to in the end: It's a
version of love where you can only really get to regard. It's not touchy-feely.
Iain Smith (executive producer, Fury Road): I think that the tension between
them actually underscored the love that existed between the two of them within
the movie, and that sometimes happens. The worst thing is indifference, and
believe you me, there was no indifference between the two of them.
Kelly Marcel: I don't know anyone that didn't lose their temper on that set,
including myself. It was fraught and frantic, and you had this overbearing
pressure the whole time that you were going to get shut down. You had a studio
out in L.A. who did not understand what was being made, and the people who were
there on the ground couldn't really tell them what was being made, either.
Chris O'Hara (on-set second assistant director, Fury Road): People have written
things about Tom and Charlize's relationship. It was just two people trying to
do the best job they could.
Adapted from [29]Blood, Sweat & Chrome: The Wild and True Story of Mad Max:
Fury Road, by Kyle Buchanan. Copyright © Kyle Buchanan 2022. Reprinted with
permission from William Morrow, a division of HarperCollins Publishers.
References:
[1] https://www.vanityfair.com/hollywood/2022/02/mad-max-fury-road-tom-hardy-charlize-theron-excerpt#main-content
[2] https://www.vanityfair.com/
[3] https://www.vanityfair.com/newsletters?sourceCode=navbar
[4] https://www.vanityfair.com/news/politics
[5] https://www.vanityfair.com/hollywood
[6] https://www.vanityfair.com/style/royals
[7] https://www.vanityfair.com/style
[8] https://www.vanityfair.com/culture
[9] https://www.vanityfair.com/news/politics
[10] https://www.vanityfair.com/hollywood
[11] https://www.vanityfair.com/style/royals
[12] https://www.vanityfair.com/style
[13] https://www.vanityfair.com/culture
[14] https://www.vanityfair.com/news/business
[15] https://www.vanityfair.com/style/celebrity
[16] https://www.vanityfair.com/video
[17] https://www.vanityfair.com/hollywood/what-is-cinema
[18] https://www.vanityfair.com/newsletters?sourceCode=hamburgernav
[19] https://www.vanityfair.com/podcasts
[20] https://archive.vanityfair.com/
[21] https://shop.vanityfair.com/
[22] https://www.vanityfair.com/search
[23] https://www.vanityfair.com/auth/initiate?redirectURL=%2Fhollywood%2F2022%2F02%2Fmad-max-fury-road-tom-hardy-charlize-theron-excerpt&source=VERSO_NAVIGATION
[24] https://www.vanityfair.com/contributor/kyle-buchanan
[27] https://cna.st/affiliate-link/4usQ6Put9VSSN8XhoMijnRVkJdYi46znvS1U2iVWBjToiCPWCguxb68V4cP689Xy64kW5Ms8wJngWRo8XCi7UURjfvysg5g2rbHxXPuDZULxzjZP9nDaWLsiBFioWm2SgS2KwSkXMfXXXcUznAUAfsJdMncXC8EkWRJM6PFASkd355JRgBpAhS35YTXGYAJ79W9bhgVeJQ8xweV6wRAKYLUEM14eqYhPWmJXSJesmu47L4Xkd9tEkctLZJiJ4sjG8z12CdbxUSP8ukzHEnoGzmhgFoubXRTx7Sg8iMVNsgHEC55fUciPX27Zkhh8DkRTU9UmNqvoQyiXeg5Q2umbEYpC15zpyQopv4msjtB7Xn5FV3dPuB2LqsoapULidfbWnAx53611QCnvDP96gVwR8MzkJFVGxzHRFaus1hL4kQoa4o8n2bxJMWbD
[28] https://cna.st/affiliate-link/3Z8ScPSzDr4fxCDd7zZXWWo5whfGahG4j2gQL3ZXwhCUWw2YFTxk7ix5KtQf3tkFpJeyYzVu46Ska2HEaaiRv1vkS4w54RC3rp79KjkJ2uybyuBaZwKnTbqTmEdpMfrwXPoARiQr2j68hk2aKyxLAMmMoy3YBZ1PabK6bDg4UD38U3H1zZkhCSQBeskxNXSHbTeu8TiMXTQDYcB3SQPsBFXeevTsbitDNaNzKMUPHN23Fgihd8R9iubT87PEQrywtv7ZNrtDj5cnYhvzbt39x5HfDtCYZGteXiPHrkTtemqbK8uVpkzAChVsHiZG9T5BndNy1wSP2DHx28otpmxvdHa3mbJ4QcrEWnn8prripnbyn8dqFWLfJehP9THZ83Hrh86AmgQDzLd6Sn8jFDm17WmPXZYFzkxhrcFVG46R8P8qDbRrKQFNjT8YyEnLFfjBVqyrJZo1EQUEpKqSV6zqRhfG9FNDyanwkQXgf9heS34w
[29] https://cna.st/affiliate-link/4usQ6Put9VSSN8XhoMijnRVkJdYi46znvS1U2iVWBjToiCPWCguxb68V4cP689Xy64kW5Ms8wJngWRo8XCi7UURjfvysg5g2rbHxXPuDZULxzjZP9nDaWLsiBFioWm2SgS2KwSkXMfXXXcUznAUAfsJdMncXC8EkWRJM6PFASkd355JRgBpAhS35YTXGYAJ79W9bhgVeJQ8xweV6wRAKYLUEM14eqYhPWmJXSJesmu47L4Xkd9tEkctLZJiJ4sjG8z12CdbxUSP8ukzHEnoGzmhgFoubXRTx7Sg8iMVNsgHEC55fUciPX27Zkhh8DkRTU9UmNqvoQyiXeg5Q2umbEYpC15zpyQopv4msjtB7Xn5FV3dPuB2LqsoapULidfbWnAx53611QCnvDP96gVwR8MzkJFVGxzHRFaus1hL4kQoa4o8n2bxJMWbD