Westenberg.

2026-02-25 // 13 min read

Everything is awesome (why I'm an optimist)

AUTHOR // [12]JA Westenberg
February is the month the internet decided we're all going to die.
In the span of about two weeks, Matt Shumer's [13]Something Big is Happening racked up over 80 million views on X with its breathless comparison of AI to the early days of COVID, telling his non-tech friends and family that we're in the "this seems overblown" phase of something much, much bigger than a pandemic. Before anyone had finished arguing about that, Citrini Research published [14]THE 2028 GLOBAL INTELLIGENCE CRISIS (all caps), a fictional dispatch from June 2028 in which unemployment has hit 10.2%, the S&P 500 has crashed 38% from its highs, and the consumer economy has been hollowed out by what they coined "Ghost GDP": output that shows up in the national accounts but never circulates through the real economy, because, as Citrini helpfully observed, machines spend zero dollars on discretionary goods. Michael Burry signal-boosted it. [15]Bloomberg covered it. IBM fell 13%. Software and payments stocks shed over $200 billion in market cap in a single day, apparently because a Substack post called upon them by name and investors decided that constituted news.
The doom loop Citrini described is simple: AI capabilities improve, companies need fewer workers, white-collar layoffs increase, displaced workers spend less, margin pressure pushes firms to invest more in AI, AI capabilities improve. Repeat until civilization unravels. Shumer, meanwhile, told people to get their financial houses in order because the permanent underclass is imminent.
Both pieces went stratospherically viral, and both, I believe, are entirely wrong about where this is heading.
I want to make a case for optimism.
For anyone who read those pieces and felt the dread, whether you're building AI and worrying about what it means, you've absorbed the pessimist consensus and started treating decline as a foregone conclusion, or you're in the bucket of people Shumer insists are fucked: I'm going to argue that the pessimists have the best narratives and the worst track record. The doom scenarios require assumptions that don't survive contact with economic history, and the psychological posture you bring to this moment actually matters for how it turns out.
Why the doom loop feels so right
The central mechanism of the Citrini thesis: when you make intelligence abundant and cheap, you destroy the income that 70% of GDP depends on. A single GPU cluster in North Dakota generating the output previously attributed to 10,000 white-collar workers in midtown Manhattan is, in their framing, "more economic pandemic than economic panacea." The velocity of money flatlines. The consumer economy withers. Ghost GDP accumulates in the national accounts while real humans stop being able to pay their mortgages.
Noah Smith, writing on [16]Noahpinion the day after the selloff, called it "a scary bedtime story" and pointed out that Citrini doesn't use an explicit macroeconomic model, so you can't actually see what assumptions are driving the doom spiral. Smith noted that none of the analysts whose job it is to track Visa and Mastercard stock had apparently thought about AI disruption until a blogger spelled it out for them, which tells you more about sentiment-driven trading than it does about macroeconomics. The economist Gerard MacDonell described the entire piece as "allegorical" but pointed out that it ignores a basic economic principle: production generates income.
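The missing-model complaint can be made concrete. The doom loop is a feedback system, and whether it spirals or stabilizes depends entirely on coefficients the Citrini piece never states. Here is a deliberately toy sketch; every number in it is invented for illustration and drawn from none of the pieces discussed:

```python
# Toy difference-equation version of the "doom loop". Every coefficient
# below is invented for illustration: the original piece states none,
# which is exactly the problem. The conclusion (spiral vs. stabilization)
# is driven entirely by the assumed reabsorption rate.

def run_loop(displacement_rate, reabsorption_rate, years=10):
    """Return the employed fraction of the workforce after `years`,
    given a yearly rate at which AI displaces workers and a rate at
    which new kinds of work reabsorb the displaced."""
    employed = 1.0
    for _ in range(years):
        displaced = employed * displacement_rate
        employed = employed - displaced + displaced * reabsorption_rate
    return employed

# Doom-thesis assumption: reallocation fails completely.
print(round(run_loop(0.05, 0.0), 3))  # 0.599 -- employment erodes

# Historical-pattern assumption: most displaced workers are reabsorbed.
print(round(run_loop(0.05, 0.9), 3))  # 0.951 -- employment roughly holds
```

The numbers are made up; the takeaway is structural. With no stated reabsorption rate, the spiral isn't a prediction, it's a parameter choice.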
Ben Thompson, on Stratechery, has been making a version of this counterargument for months, most forcefully in his January piece [17]AI and the Human Condition, where he argued that even if AI does all of the jobs, humans will still want humans, creating an economy for labor precisely because it is labor. Thompson's framing cuts to something the doom narratives consistently miss. They model AI exclusively as labor substitution: the same economy, minus humans. Every section of the Citrini piece is about replacing workers and squeezing margins on existing activity. What they don't model is what the freed-up surplus creates. As Thompson put it in [18]his analysis of the Citrini selloff, this is the real error: a refusal to believe in human choice and markets.
It's an error that has been made, in nearly identical form, about every major technological transformation in modern history. Every single time, the pessimists looked at what was being destroyed and extrapolated catastrophe, while failing to imagine what would be created, because the thing that would be created hadn't been invented yet.
Catastrophists keep being wrong
In 1810, 81% of the American workforce was employed in agriculture. Two hundred years later, it's about 1%. If you had shown someone in 1810 a chart of agricultural employment decline and asked them to model the economic consequences, the only rational projection would have been apocalypse. Where would 80% of the population find work? What would they do? How would anyone eat if the farmers were all displaced by machines?
The answer, of course, is that entirely new categories of work were created that no one in 1810 could have conceived of, and these new jobs paid dramatically more than subsistence farming. Factory work, office work, services, knowledge work, the entire apparatus of modernity: none of it was visible from the vantage point of the pre-industrial economy. The transition was brutal and uneven. The handloom weavers of England suffered. Dickens documented the squalor of early industrialization in prose that still makes you flinch. But the trajectory was real, and the people projecting permanent immiseration from the displacement of agricultural labor were, in the fullest sense, catastrophically wrong.
Tom Lee of Fundstrat made this point with a specific example that I find clarifying. The invention of flash-frozen food in the early 1900s disrupted farming, taking agriculture from 30-40% of employment down to its current sliver. The economy didn't collapse. It reallocated value elsewhere, into industries and occupations that the frozen food pioneers couldn't have imagined. And today, I can't name a single family that subsists on frozen TV dinners.
The Citrini scenario expects you to believe that AI will be the first major technological revolution in which this reallocation mechanism fails entirely. Where every previous wave of automation freed up human labor and capital to flow into new, higher-value activities, this time the loop... stops. The surplus accrues to the owners of compute, consumers lose purchasing power, and the self-reinforcing loop has no natural brake. It's worth sitting with how strong a claim that is. It requires every previous pattern of technological adaptation to be wrong, or at least irrelevant. And when you look at the actual data, there are signs that white-collar job postings have stabilized, layoff mentions on earnings calls remain well below early 2023 peaks, and forward-looking labor indicators show no sign of the displacement spiral that the doom thesis predicts.
Does that mean AI won't disrupt specific industries and jobs? Obviously it will. Some of those disruptions will be painful and dislocating for the people caught in them. But there's an enormous gap between "this technology will cause serious labor market disruption that we need to manage" and "this technology will cause a self-reinforcing economic death spiral from which there is no recovery." Citrini is arguing the latter, while the evidence supports the former.
Why vivid scenarios beat boring probabilities
There's a reason the doom narratives go viral while the measured counterarguments get a polite nod and a fraction of the engagement. It has nothing to do with the quality of the underlying analysis. It has everything to do with how human brains process information.
Daniel Kahneman's work on the availability heuristic showed that we judge the probability of events by how easily we can imagine them. Dystopia is easy to imagine. We have an extraordinarily rich cultural tradition of imagining technological nightmare scenarios in exquisite detail. Orwell did it brilliantly. Every season of Black Mirror does it competently. The Terminator gave us the visual grammar for AI catastrophe decades before anyone had a working language model. When Citrini describes a world where the unemployment rate hits 10.2% and the S&P crashes 38%, you can picture it. You can feel the dread. Hollywood has been training you to feel exactly this dread for your entire life.
Now try to imagine the positive scenarios. Try to picture, in concrete sensory detail, a world where AI helps us solve protein folding problems across thousands of neglected tropical diseases, where it accelerates materials science research by orders of magnitude, where it makes high-quality legal and medical advice accessible to people who currently can't afford it, where it enables forms of creative expression and economic activity that we can't yet name because they don't exist. It's fuzzy and abstract. You can state it intellectually, but you can't feel it the way you can feel the unemployment spiral.
This asymmetry isn't trivial. The [19]Ifo Institute has published research showing that investors are willing to pay more for economic narratives than for raw forecasts, and that pessimistic narratives command higher prices among certain investor types. As [20]Joachim Klement put it in his response to the Citrini selloff: investors value narratives more than actual recession forecasts. Stories travel faster than spreadsheets.
Shumer's piece is a narrative construction, and a questionable piece of analysis. He opens with the COVID comparison: remember February 2020, when a few people were talking about a virus and everyone thought it was overblown? He positions himself as the insider who sees what's coming, who's been "giving the polite, cocktail-party version" but can't hold back the truth any longer. [21]Paulo Carvao, writing in Forbes, noted that it reads at times like a sales pitch. It's a used-car pitch at that. The Guardian pointed out that Shumer "previously excited the internet by announcing the release of the world's 'top open-source model,' which it was not." (To be clear: this is a kinder way of saying [22]it was fraud.)
But criticism doesn't travel like fear does. Fear is a better story. And so the doom narratives accumulate cultural mass while the boring, incremental, statistically grounded counterarguments remain niche reading for economists and strategists.
We remember disasters, not the ones we dodged
Humans are spectacular at remembering disasters, passed down in every format from the written word to the oral tradition. We are (for obvious reasons) terrible at remembering the disasters that didn't happen. In 1962, during the Cuban Missile Crisis, a Soviet submarine officer named Vasili Arkhipov refused to authorize the launch of a nuclear torpedo, overriding two other officers who wanted to fire. The world didn't end. Most people today have never heard of Arkhipov. Everyone knows about Hiroshima and Nagasaki. The bomb that fell is seared into collective memory. The bomb that didn't fall is a footnote.
The Y2K bug was going to crash civilization; then billions of dollars of engineering work fixed it, and everyone retroactively decided it was never a real threat. The ozone layer was going to disintegrate; then the Montreal Protocol worked better than almost anyone predicted, and ozone depletion feels like a quaint 1990s worry. Acid rain was dissolving the forests of North America; then sulfur dioxide regulations cut emissions drastically, and the whole issue evaporated from public consciousness. Every one of these was a genuine threat. Every one was met by human ingenuity and institutional coordination. Every one was subsequently memory-holed, because success is boring and failure is vivid.
We're running our forecasting models on a dataset that systematically excludes our wins. It should be entirely unsurprising that the forecasts come out somewhat bearish.
Ben Thompson (as usual) gets it right
Thompson's core insight is that humans want humans. He points to the agricultural revolutions: in the pre-Neolithic era, zero percent of humans worked in agriculture. By 1810, 81%. By today, 1%. Machines replaced human agricultural labor almost entirely, and rather than the economy collapsing, entirely new categories of work were created that paid dramatically more. This cycle played out again with industrialization, with computing, with the internet. Every time, the displacement was real, and every time, new forms of human-valued work emerged that couldn't have been predicted.
Citrini called DoorDash "the poster child" for AI disruption, imagining vibe-coded competitors fragmenting the market overnight. Thompson flips it: DoorDash is the poster child for why the article is absurd. DoorDash didn't always exist. It was built, and it wins through the active choice of customers, restaurants, and drivers. The doom thesis treats it as a static rent-extraction layer sitting on top of human laziness, but DoorDash created its market from scratch and generated new jobs for millions of drivers along the way. What the Citrini analysis lacks, Thompson argued, is any belief in human choice or markets. If your starting assumption is that things are as they are, you can only envision breaking them.
Citrini predicted AI would collapse real estate commissions by eliminating information asymmetry. But the internet already did that. You can look up every house for sale right now, with full history and photos. Real estate agents still exist, which is one of the better arguments that humans are resourceful at giving themselves work to do even in fields where they arguably shouldn't need to.
In a world of AI abundance, the things humans create will become more valuable precisely because they're human. AI art will make human art more desirable, not less, because provenance matters. AI-generated content will make human-generated content worth more, because the imperfections and idiosyncrasies are features.
Is this optimistic? Yes. Could it be wrong? Sure it could. But it's grounded in a real observation about human psychology that the doom models don't account for. Citrini's Ghost GDP thesis assumes that when AI replaces human labor, the value simply evaporates from the consumer economy. Thompson's counterargument is that humans will create new forms of value that are specifically human, and that demand for those forms of value will intensify as machine-generated alternatives become ubiquitous. The history of technological disruption suggests Thompson has the stronger case.
Pessimism as a self-fulfilling prophecy
What actually worries me are the second-order effects of the doom narrative itself.
When the smartest, most technically capable people in a field become convinced that the field is heading toward catastrophe, several things happen. Some leave the field entirely, removing exactly the talent you'd want steering the ship. Some stay but adopt a posture of resigned inevitability, which is functionally identical to apathy. Some decide that since disaster is coming, they might as well accelerate and cash out. And a vocal minority become so consumed by existential risk that they advocate for extreme countermeasures that would concentrate power in ways that create entirely new categories of danger.
Robert Oppenheimer (in the wake of his famous invocation of the Bhagavad Gita) spent the years after the Manhattan Project arguing passionately for international cooperation on nuclear governance. He didn't say "we should never have done this." He said, essentially, "this is incredibly powerful, and we need to build institutions that can handle it." He was an optimist in the meaningful sense: he believed better outcomes were achievable if people worked to achieve them. He was right about that, because we're still here.
The most effective people working on AI safety and governance right now are, almost without exception, optimists. They work on alignment because they believe alignment is solvable. They push for better governance because they believe governance can work. The ones who've concluded that the problem is unsolvable tend to stop doing useful work, for obvious reasons.
Gramsci wrote about "pessimism of the intellect, optimism of the will." You look at the world clearly. You see the problems. And then you choose to act as if better outcomes are possible, because that choice is the precondition for achieving them.
Nobody can see the next economy
What both Shumer and Citrini miss is that they're modeling a future economy using the structure of the present economy. They see AI replacing white-collar workers within the existing economic framework and project the consequences of that replacement within that same framework. But every major technological transformation has changed the framework itself, creating entirely new economic structures that were invisible from the vantage point of the old ones.
In 1995, if you told someone that one of the most valuable hospitality companies in America would be one that lets strangers sleep in each other's homes, they would have thought you were insane. If you told them that millions of people would make a living by talking into microphones about their opinions, or recording themselves playing video games, or writing newsletters on the internet, they'd have had you committed. The entire creator economy, the gig economy, the app economy, the SaaS economy that Citrini is now eulogizing: none of it was predictable from the vantage point of 1995. And that's a 30-year window. The agricultural revolutions played out over centuries.
What will people do when AI can handle most current white-collar tasks?
I don't know.
And that's the whole point.
Nobody knew what displaced agricultural workers would do, either, until they did it. The absence of a visible next chapter isn't evidence that there won't be one. It's evidence that we're bad at predicting what humans will invent when constraints shift.
Choosing optimism with open eyes
I'm not saying everything will be fine. I'm not saying the transition will be smooth. I'm not saying that the people displaced by AI won't suffer, or that we don't need better policy frameworks to handle the disruption. The distributional concerns at the heart of the Citrini piece are legitimate. If productivity gains accrue primarily to the owners of compute and capital while labor income stagnates, that's a genuine problem. Labor's share of GDP has been declining for decades. These are real numbers pointing to real challenges.
What I am saying is that the leap from "this will be disruptive and we need to manage it carefully" to "this will cause an irreversible economic death spiral" isn't supported by the evidence, by economic history, or by what we know about how humans respond to technological change. The Citrini scenario requires every adaptive mechanism in the economy to fail simultaneously and completely within roughly two years. That's a very specific left-tail outcome.
If you're building AI systems, if you're founding companies, if you're writing code that will shape how people experience the world, your psychological orientation toward the future is a variable that directly affects outcomes. Pessimistic builders build defensively. They hoard and hedge and make decisions based on fear. Optimistic builders build with ambition. They invest in safety because they believe safety is achievable. They take on hard problems because they believe hard problems have solutions.
The tech industry is at a hinge point, and the narrative it tells itself will shape what it creates. If the dominant narrative is doom, the best people leave, the remaining people race to extract value before the collapse, and the governance frameworks get built by people who don't understand the technology. If the dominant narrative is cautious optimism, the best people stay, the work is good, and the institutions get built by people who know what they're building for.
Ed Yardeni, the veteran Wall Street strategist, noted in the wake of the Citrini selloff that "the AI story has morphed from a Roaring 2020s productivity booster to an existential threat to our way of life." He found this striking. I find it absurd. The underlying technology hasn't changed, and the capabilities haven't shifted. What changed is the narrative, and narratives are always, always choices.
I choose optimism. I choose it because the alternative is surrender as sophistication. And because every time I look at the historical record, the full record that includes both the disasters and the averted disasters, both the tragedies and the triumphs, the case for human ingenuity and resilience is stronger than the case against it.
The doomers may have the best stories.

I believe the optimists have the best evidence.

I'll take the evidence.

Everything is (going to be) awesome.
© 2026 Westenberg.
Theme by [50]JA Westenberg x [51]Studio Self
References:

[12] https://www.joanwestenberg.com/author/jawestenberg/
[13] https://shumer.dev/something-big-is-happening?ref=joanwestenberg.com
[14] https://www.citriniresearch.com/p/2028gic?ref=joanwestenberg.com
[15] https://www.bloomberg.com/news/articles/2026-02-23/software-payments-shares-tumble-after-citrini-post-on-ai-risks?ref=joanwestenberg.com
[16] https://www.noahpinion.blog/p/the-citrini-post-is-just-a-scary?ref=joanwestenberg.com
[17] https://stratechery.com/2026/ai-and-the-human-condition/?ref=joanwestenberg.com
[18] https://stratechery.com/2026/another-viral-ai-doomer-article-the-fundamental-error-doordashs-ai-advantages/?ref=joanwestenberg.com
[19] https://papers.ssrn.com/sol3/papers.cfm?abstract_id=5637576&ref=joanwestenberg.com
[20] https://klementoninvesting.substack.com/p/why-pessimists-make-more-money?ref=joanwestenberg.com
[21] https://carvao.substack.com/p/the-problem-with-techs-latest-something?ref=joanwestenberg.com
[22] https://x.com/jawestenberg/status/2021782902342922514?s=20&ref=joanwestenberg.com
[50] https://joanwestenberg.com/
[51] https://thisisstudioself.com/