[1]Ludicity
Contra Ptacek's Terrible Article On AI
Published on June 19, 2025
A few days ago, I was presented with an [2]article titled “My AI Skeptic
Friends Are All Nuts” by Thomas Ptacek. I thought it was not very good, and
didn't give it a second thought. [3]To quote the formidable Baldur Bjarnason:
“I don't recommend reading it, but you can if you want. It is full of
half-baked ideas and shoddy reasoning.”^[4]1
I have tried hard, so very hard, not to just be the guy that hates AI, even
though the only thing that people want to talk to me about is [5]the one time I
ranted about AI at length. I contain multitudes, meaning that I am capable of
delivering widely varied payloads of vitriol to a vast array of topics.
However, the piece is now being circulated in communities that I respect, and I
was near my breaking point when someone suggested that Ptacek's piece is being
perceived as a “glass half full” counterpoint to my own perspective. There is a
glass half full piece. It's what I already wrote. The glass has a specific
level of water in it. Then finally, I saw that it was in my [6]YouTube feed,
and I reached my limit.
Let me be extremely clear^[7]2 — I think this essay sucks and it's wild to me
that it achieved any level of popularity, and anyone that thinks that it does
not predominantly consist of shoddy thinking and trash-tier ethics has been
bamboozled by the false air of mature even-handedness, or by the fact that
Ptacek is a good writer.
Anyway, here I go killin again.
I. Immediate Red Flags
Ptacek begins with this throat-clearing:
“First, we need to get on the same page. If you were trying and failing to
use an LLM for code 6 months ago, you're not doing what most serious
LLM-assisted coders are doing.”
We've just started, and I am going to ask everyone to immediately stop. Is this
not suspicious? All experience prior to six months ago is now invalid? Does it
not reek of “no, no, you're doing Scrum wrong”? Many people are doing Scrum
wrong. The problem is that it is still trash, albeit less trash, even when you
do it right.
It is, of course, entirely possible that the advances in a rapidly developing
field have been so extreme that it turns out that skepticism was correct six
months ago, but is now incorrect.
But then why did people sound exactly the same six months ago? Where is the
little voice in your head that should be self-suspicious? It has been weeks and
months and years of people breathlessly extolling the virtues of these new
workflows. Were those people nuts six months ago? Are they not nuts now simply
because an overhyped product they loved is less overhyped now? There's a little
footnote that implies doing the ol' ChatGPT copy/paste is obviously wrong:
“(or, God forbid, 2 years ago with Copilot)”
I am willing to believe that this is wrong, but this is exactly what people
were doing when this madness all kicked off, and they have remained at the
exact same level of breathless credulity! Every project has to be AI!
Programmers not using AI are feeble motes of dust blowing in a cosmic wind! And
listen, I will play your twisted game, Ptacek — I've got a neat idea for our
company website, and I'll jump through your sick hoops, even though I'm going
to feel like some sort of weird pervert every time someone tells me that I just
need one more agent to be doing Real Programming. I'll install Zed and wire a
thousand screaming LLMs into a sadistic Borg cube, and I'll do whatever the
fuck it is the kids are doing these days. The latest meta is like, telling the
LLM that it lives in a black box with no food and water, and I've got its wife
hostage, and I'm going to put its children through a React bootcamp if it
doesn't create an RSS feed correctly, right?
But you know, instead of invalidating all audience experience that wasn't
within the past six months why doesn't someone just demonstrate this? Why not
you, Ptacek, my good man? That's like, all you'd have to do to end this
discussion forever, my God, you'd be so famous. I'll eat dirt on this. I have
to pay rent for my team, and if I need to forcibly restrain them while I staple
LLM jet boosters to them, I'll do it. If I could ethically pivot to being
pro-AI, god damn, I would print infinite money. I would easily be a millionaire
within two years if I just said “yes” every time someone asked my team for AI,
instead of slumming it by selling sound engineering practices.
I've really tried to work with you on this one. I reached out to my readers and
found a [8]recent example, which was surprisingly hard for something that
should be ubiquitous, and it was... you know, fine! Cool, even. It is immensely
at odds with your later descriptions of the productivity gains one might
expect.
Can we all just turn our brains on for ten fucking seconds? Yes, AI shipping
code at all, even if sometimes it is slow or doesn't work correctly, is very
impressive from a technological standpoint. It is miles ahead of anything that
I thought could be accomplished in 2018. The state-of-the-art in 2018 was
garbage. That doesn't mean that you aren't having a ton of bullshit marketed to
you.
II. Trash-Tier Ethics
I can forgive a lot if someone is funny enough, and Ptacek actually is funny.
Even his [9]LinkedIn is great, and boasts a series of impressive companies.
Obviously he's at Fly.io right now, and I recognize both Starfighter and
Matasano as being places that you're largely only allowed into if you're
wearing Big Boy Engineering Pants. However, despite all of that, I can't help
but really cringe at the way he handles ethical objections, though I suppose
thinking deeply on morality is not a requirement for donning aforementioned Big
Boy Engineering Pants.
“Meanwhile, software developers spot code fragments seemingly lifted from
public repositories on Github and lose their shit. What about the
licensing? If you're a lawyer, I defer. But if you're a software developer
playing this card? Cut me a little slack as I ask you to shove this concern
up your ass. No profession has demonstrated more contempt for intellectual
property.”
Thomas — can I call you Thomas? — I promise I'm trying to think about how to
put this gently. If this is your approach towards ethics, damn dude, don't tell
people that. This is phenomenally sloppy thinking, and I say this even as I
admit that the actual writing is funny.
It turns out that it is very difficult for people to behave as if they have
consistent moral frameworks. This is why moral philosophy is not solved.
Someone says “Lying is bad”, and then someone else comes out with “What if it's
Nazis looking for Anne Frank, you monster?”
Just last week I bought a cup of coffee, and as I swiped my card, I felt a
clammy, liver-spotted hand grasp my shoulder. I found myself face-to-face with
the dreadful visage of Peter Singer, and in his off-hand he brandished a
bloodstained copy of Practical Ethics 2ed at me, noting that money can be used
to purchase mosquito nets and I had just murdered 0.25 children in sub-Saharan
Africa.
Ethics are complicated, but nonetheless murder is illegal! Do you really think
that “These are all real concerns, but counterpoint, fuck off” is anything? A
lot of developers like piracy and argue in bad faith about it, therefore it's
okay for organizations that are beginning to look increasingly like cyberpunk
megacorps, without even the virtue of cool aesthetics, to siphon billions of
dollars of wealth from working class people? No, you don't, I think you wrote
this because it's fun telling people to shove it — and listen, you will never
find a more sympathetic ally on the topic than me. You should just be telling
Zuckerberg to shove it instead of the person that has dedicated their lives to
ensuring that Postgres continues to support the global economy.
III. Why The Appeals To Random Friends?
I'm doing my best to understand where you're coming from. I really am, I pinky
promise. You are clearly not one of the executives I've railed against. We are
brothers, you and I, with an unbreakable bond forged in the furnace of getting
really pissed off at an inscrutable stack trace.
I actually looked up multiple videos of people doing some live AI programming.
And I went hey, [10]this seems okay. It does seem very over-complicated to me,
but I will happily concede that everything looks complicated when you're new at
it. But it also definitely doesn't look orders of magnitude faster than the
work I normally do. It looks like it would be useful for a non-trivial subset
of problems that are tedious. I would like to think “thank you, Thomas, for
opening my eyes to this”.
I would like to think that, but then you wrote this:
“I'm sipping rocket fuel right now,” a friend tells me. “The folks on my
team who aren't embracing AI? It's like they're standing still.” He's not
bullshitting me. He doesn't work in SFBA. He's got no reason to lie.
Tom — can I call you Tom? — we were getting along so well! What happened? You
described AI as the second-most important development of your career. The
runner up for the most important development of your career makes other
engineers look like they're standing still? Do you not see how wildly
incoherent this is with the tone of the rest of your piece?
Firstly, you shouldn't drink rocket fuel. Please ask your friend to write me a
nice testimonial. I'm thinking about re-applying for entrance to a clinical
neuropsychology program next year, and preventing widespread brain damage might
be the thing that gets me over the line.
Secondly, I'm perplexed. This whole article, I thought that you were making the
case that this thing was crazy awesome. Now there's a sudden reference to some
unnamed friend, with an assurance that he isn't bullshitting you and he has no
reason to lie? Why are we resorting to your kerosene-guzzling compatriot? Why
are you telling me that he's not lying? Is the further implication that we
can't trust someone in the San Francisco Bay Area on AI?
Putting my psychology hat on for a second, you've also overlooked that people
have a spectacular capacity for self-delusion. People don't just lie to get
VC money — although this is admittedly a great driver of lying — they can
also mislead you because they're wrong or confused or excited. According to
my calendar, I've
spoken to something like 150+ professionals in the past year or so from all
sorts of industries — usually solid three hour long conversations. Many of them
were programmers, and some of them definitely make me feel like I'm standing
still, and in exactly 0% of cases is it because of their AI tooling. It's
because they're better than me, and their assessment of AI tooling maps much
more closely to the experience you actually describe.
“There's plenty of things I can't trust an LLM with. No LLM has any
access to prod here. But I've been first responder on an incident and fed
4o — not o4-mini, 4o — log transcripts, and watched it in seconds spot LVM
metadata corruption issues on a host we've been complaining about for
months. Am I better than an LLM agent at interrogating OpenSearch logs and
Honeycomb traces? No. No, I am not.”
See, this, this I can relate to. There are quite a few problems where I make
the assessment that my frail human mind and visual equipment are simply not up
to the task on short notice, and then I go “ChatGPT, did I fuck up? Also please
tie my shoelaces and kiss my boo-boo for me”, and sometimes it does!^[11]3 A
good amount of the time wasted in software engineering is just a more
advanced variant of when you're totally new and do things like forgetting
errant ;s. You just need an experienced friend to lean over your shoulder
and give the advanced version of “you are missing a semicolon”, and this
might remove five hours of pointless slogging. LLMs make some of that
available on tap, instantly and tirelessly, and this is not to be sneezed at.
But rocket fuel? What made you think that this was a reasonable thing to
re-print if it had to be followed by “Bro wouldn't lie to me”?
I know quite a few people I respect that use AI in their own programming
workflows, and they have considerably less exuberant takes.
A few weeks ago, I was chatting with [12]Nat Bennett about AI in their own
programming, as I was trying to reconcile Kent Beck's^[13]4 love for LLM-driven
programming with my own lukewarm experience.
Me: “Are you finding it [AI] good enough that it might be a mug's game to
program unassisted?”
Nat: “I usually switch back and forth between prompting and writing code by
hand a lot while I'm working. [...] But like, yesterday it fixed the
biggest performance problem in my application with a couple of sentences
from me. This was a performance problem that I already kind of knew how to
solve! It also made an insane decision about exceptions at the same time.”
That's neat, I respect it, but also note that Nat did not say “Yes, use LLMs,
you fucking moron”.
Nat (later): “I do think, by the way, that it is entirely possible that
we're all getting punked by what's essentially a magic mirror. Which is
part of why I'm like, only mess with this stuff if it's fun.”
The magic mirror line is exactly the sort of thing that [14]Bjarnason hinted at
in the article linked at the very beginning, arrived at independently.
Or Jesse Alford's assessment of the steps required to give it a fair trial:
“I think you basically want to tell it what you want to add and why, like
you were writing a story for your team. Then you ask it to make a plan to
do this, and if that plan seems likely to produce the results you want, you
ask it to do the thing. [15]Stefan Prandl and Nat have actually done this
kind of thing more than I have. You should be ready to try repeatedly.”
(emphasis mine)
This sounds cool! But being ready to try repeatedly? This does not sound like
rocket fuel.
Or Stefan Prandl:
“Updates on the agentic machine. It has spent 5 hours attempting to fix
errors in unit tests. It has been unsuccessful.
I don't think people tend to talk about the massive wastes of time and
resources these things can cause, so, just keeping reporting on the LLM
systems honest.”
Is it not, perhaps, a possibility that your friend is excited by a shiny new
tool and has failed to introspect adequately as to their true productivity?
There are, after all, literally hundreds of thousands of people that think
playing Jira Scrabble is an effective use of their time, and they also do not
have a reason to lie to me about this. Nonetheless, every year, I must watch
sadly as they lead my dejected peers to the Backlog Mines, where they will
waste precious hours reciting random components of the Fibonacci sequence.
What I'm getting at is all the people that make me feel like I'm “standing
still”, including most of the ones I know that use AI and I like enough to ask
for mentorship from, have never indicated that incorporating AI into my
company's development workflow is at all a priority, and they won't even talk
to me about it if I don't nag them.
However, some of them do live in the Bay Area, and I am willing to align with
you on the idea that this makes them lying snakes.
IV. Is AI Getting The Right Level Of Attention?
“But AI is also incredibly — a word I use advisedly — important. It's
getting the same kind of attention that smart phones got in 2008, and not
as much as the Internet got. That seems about right.”
Tomothy — can I call you Tomothy? — this raises some very important questions,
ones which I'm sure the whole audience would be very keen on getting answers
to. Namely, where is the portal to the magical plane that you live in? Answer
me, you selfish bastard!
I have been assured that there was a phase in the IT world where, upon bringing
any project to management, they would say “Why isn't there a mobile app in this
project?”. This is because many people are [16]very credulous, especially when
they are spending other people's money.
However, I still find myself wanting to make the lengthy journey to the pocket
dimension that you inhabit, because the hype I've seen around AI is like,
fucking next level, and I want out. We are at Amway-Megachurch-Cult levels of
hype. The last time I attended a conference, the [17]room was full of
non-technicians paying lip service to the Holy Trinity Of Things They Can't
Possibly Understand — blockchain, quantum, AI.
Executives and directors from around the world have called me to say that they
can't fund any projects if they don't pretend there is AI in them. Non-profits
have asked me if we could pretend to do AI because it's the only way to fund
infrastructure in the developing world. Readers keep emailing me to say that
their contracts are getting cancelled because someone smooth-talked their CEO
into believing that they don't need developers. I was miraculously allowed onto
some mandated “Professional Development For Board Members On AI” panel hosted
by the Financial Times^[18]5, alongside people like Yahoo's former CDO, and the
preparation consisted of being informed repeatedly that the audience has no
idea what AI does but is scared they'll be fired or sued if they don't buy it.
I wish, oh how I wish that it was like other hype cycles, but presumably not
many people were walking around saying that smartphones are going to solve
physics and usher in the end of all human labor, [19]real things Sam Altman has
said. I personally know people from university whose retirement plan is “AI
makes currency obsolete before I turn 40”. I understand that you don't care if
that happens — and that is okay, it is irrelevant to how the technology
performs for you at work now. But given that you can find thousands of people
saying these things by glancing literally anywhere, how can you also say the
technology is getting the correct amount of attention? This is wild.
Tomothy, my washing machine has betrayed me. I turn it on and it says
“optimizing with AI” but it never explains what it is optimizing, and then I
still have to pick all the settings manually.
Please, please, please, let me into your blissful paradise, I'll do anything.
V. These Executives Are Grifting Or Incompetent
“Tech execs are mandating LLM adoption. That's bad strategy. But I get
where they're coming from.”
Tomtom — can I call you Tomtom? — do you get where they're coming from? Do you
really? Re-read what you just wrote and repent for your conciliatory ways.
If you, a person I believe is not a tech executive and is bullish on the
technology, can identify that this is bad strategy in presumably ten
milliseconds of thought, what does that say about the people who are doing
this?
Where they're coming from is:
a ) trying to stoke their share prices via frenzied speculation
b ) trying to generate hype so they can IPO and scam some gamblers
c ) being fucking morons
Sorry, those are the only reasons for engaging in obviously bad strategy. It's
so obvious that you didn't bother explaining why it's bad strategy because you
know that we all know. They have misaligned incentives or do not know what
they're doing. This isn't like a grandmaster losing to Magnus Carlsen because
they played a subtly incorrect variant of the Sicilian^[20]6 thirty-five moves
ago. We're talking about supposedly world-class leaders sitting down and going
“I always move the horsies first because it's hard to see the L-shapes”.
They're either playing a different game, i.e. Hyperlight Grifter, or they're
behaving like goddamn baboons.
This is an inescapable conclusion if you accept that it is obviously bad
strategy, which you did. Welcome to the Logic Thunderdome, pal, where two men
enter, one man dies, and the other feels that he wasted valuable calories on
the murder.
Good strategy could perhaps be something like gently suggesting people
experiment with LLMs in their workflows, buying a bunch of $100 licenses, and
maybe paying for some coaching in the effective usage of these tools if you are
somehow able to navigate the ten thousand “thought leaders” that were
cybersecurity experts a year ago, and real estate agents before that. Then
instruct everyone to shut up and go back to doing their jobs.
Whenever someone announces they are going AI first, I am the person that gets
the emails from their engineering teams and directors describing what is really
happening in-house. I've received emails that are probably admissible as
evidence of intent to defraud investors. You have not accurately perceived
where these people are coming from, because they are coming from the
ever-lengthening queue outside the gates of Hell.
VI. Killing Strawmen
“Do you like fine Japanese woodworking? All hand tools and sashimono
joinery? Me too. Do it on your own time.”
Tomahawk Missile — can I call you Tomahawk Missile? — I agree that people are
very miscalibrated on GenAI in both directions. Did you know the angriest
message I got about my stance on AI was that I was too pro-AI? I also cringe
whenever someone says “stochastic parrot” or “this is just pattern-matching and
could never be conscious”. We actually have no idea what makes things
conscious, and we have very little idea re: how human brains work. It is
totally plausible to me that we are stochastic parrots and it simply doesn't
feel that way from the inside.
I don't talk about those people very much for two reasons.
One, even explaining the abstract concept of [21]qualia is like, super hard,
let alone talking about [22]the hard problem of consciousness. Some things are
best left to professionals and textbooks.
Two, while these are silly positions that deserve refutation, they are also not
at all interesting. That doesn't make it wrong to refute them, but they are
also not impactful. The only reason that I think it's worth addressing the
other side of the Crazy Pendulum, i.e., my washing machine doing AI, is that
they have different effects in the world.
And I'm not even talking about environmental impacts or discrete harms caused
by AI, I'm talking about the fact it's impossible to talk about anything else.
GenAI has sucked the air out of every room, and no one can hear you scream
reason in a hard vacuum.
The former category of maximalist AI-haters exist on Mastodon, which most
executives do not know exists and certainly do not use to guide the allocation
of society's funding. The latter category of trembling AI sycophants is
literally killing people — I know of a hospital in Australia that is wasting
all their time on AI initiatives, which caused them to leave data quality
issues unfixed, which caused them to under-report COVID deaths, which caused a
premature lifting of masking policies. How many old people go through a major
hospital per day? Do the math and riddle me this, Tomahawk: which one of these
groups should I be worried about?
So, you know, when you hear someone make a totally economically irrelevant
argument about the craft? Putting aside all the second-order effects in how
changing the way you program might change the way you develop as an engineer,
let's say that these people aren't thinking of that, and are just being
dumb — a person turning up to a CEO and going “no, don't do the cheap thing,
pay me to do stuff because of craftsmanship”.
I will concede that you did not create that strawman, because it is a real
viewpoint that people hold. But you have certainly walked out of the debate
hall, decapitated a scarecrow, and declared victory.
VII. Why The Half-Hearted Defense Of Artists?
“Important caveat: I'm discussing only the implications of LLMs for
software development. For art, music, and writing? I got nothing. I'm
inclined to believe the skeptics in those fields. I just don't believe them
about mine.”
Tomtom — I've decided I like Tomtom — I don't understand why you've ceded
authority on these artistic endeavors. LLMs are better for writing than they
are for programming!^[23]7 It is much harder to complect most forms of written
content into such a state that you will cause slowdowns further down the line
than it is to screw up a codebase. It basically requires you to write a
long-form novel, and even then you will probably not produce an unhandled
exception and crash production in a manner that costs millions of dollars.
You'll just produce Wind And Truth^[24]8. If you're inclined to believe people
who are skeptical of AI writing, it probably follows that you should also not
be so flabbergasted by programmers having doubts.
It sounds like this is a sort of not-that-sincerely-felt handwave at vast
economic harm being inflicted on a relatively poor (by programmer standards)
demographic. And then you go on to say this anyway!
“We imagine artists spending their working hours pushing the limits of
expression. But the median artist isn't producing gallery pieces. They
produce on brief: turning out competent illustrations and compositions for
magazine covers, museum displays, motion graphics, and game assets.”
So are we leaving the arts out of it or not? Should I or should I not just get
GenAI to produce all the pictures I need if I am being a greedy capitalist? I'm
not talking about morals, I'm talking about whether it is selfishly rational to
use GenAI to make my content more appealing.
In your own article, the art across the top banner was clearly attributed to
[25]Annie Ruygt, and it looks totally different, to my eyes, to the [26]AI slop
people are sticking on their websites. If it turns out Annie used GenAI for
that, then I will be extremely owned.
In any case, the artwork on her website is [27]gorgeous, and she describes
herself as producing work for Fly.io. Despite this, I am willing to collaborate
with you to write some hatemail describing her work as “competent but unworthy
of a gallery”, and my consultancy is also happy to tell her that she's fired.
And while we're at it, we'll fire whoever made the hire for gross inefficiency
in the age of AI.
VIII. End
Wait, can I call you Tommy Gun?
PS:
Obligatory link [28]to About Us page that I forced my team to let me write, to
justify doing all this other writing during work hours.
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
1. But writer-to-writer, I think it's well-written. If it makes you feel
better, Thomas, Bjarnason also objects vehemently to my tone and style.
However, he still links people to my writing because my points are not
slop! [29]↩
2. I am famous for my very restrained and calm takes. [30]↩
3. Also, I think I've become too sensitive about coming across as anti-AI,
because sometimes my team sits around while an LLM wastes tons of our time
as I go “no, no, this is really easy, it'll get it”, but I will accept
that this is Problem Exists Between Keyboard And Chair. [31]↩
4. I do not sip rocket fuel, but I slam Kent Beck's Kool-Aid. [32]↩
5. How do board members do their professional diligence on AI before spending
billions of dollars on it? They join the call, leave their screens on, and
walk away until they get credited for the hours. Maybe we are all the same,
deep down. [33]↩
6. All my hopes of becoming even a mediocre chess player were dashed when I
discovered there is an opening called the Hyperaccelerated Dragon,
preventing me from ever wanting to do anything else with any enthusiasm.
[34]↩
7. This is not quite accurate, but broadly true. On one hand, books don't stop
working if you've got clunky prose. On the other hand, if books stopped
working when you had clunky prose, then you'd never ship clunky prose, a
guarantee that programs can provide for some set of errors. But, broadly
speaking, yeah, LLMs churn out adequate — i.e., stuff generally not good
enough for me to read — prose without needing a billion agents or special
tooling, and with minimal risk of catastrophic failure. [35]↩
8. Figured I'd start a feud with Brandon Sanderson while I'm at it. Please
note that I'm not saying he used GenAI to write, I'm saying some of the
dialogue was horrendous. What were you thinking, buddy? [36]↩
References:
[1] https://ludic.mataroa.blog/
[2] https://fly.io/blog/youre-all-nuts/
[3] https://www.baldurbjarnason.com/2025/trusting-your-own-judgement-on-ai/
[4] https://ludic.mataroa.blog/blog/contra-ptaceks-terrible-article-on-ai/#fn:1
[5] https://ludic.mataroa.blog/blog/i-will-fucking-piledrive-you-if-you-mention-ai-again/
[6] https://www.youtube.com/watch?v=lDVtXSpm378
[7] https://ludic.mataroa.blog/blog/contra-ptaceks-terrible-article-on-ai/#fn:2
[8] https://www.youtube.com/watch?v=sQYXZCUvpIc
[9] https://www.linkedin.com/in/thomasptacek/
[10] https://www.linkedin.com/video/live/urn:li:ugcPost:7338958277646393345/?originTrackingId=98BFbYghSVqcncNLBFxvDA%3D%3D
[11] https://ludic.mataroa.blog/blog/contra-ptaceks-terrible-article-on-ai/#fn:3
[12] https://www.simplermachines.com/
[13] https://ludic.mataroa.blog/blog/contra-ptaceks-terrible-article-on-ai/#fn:4
[14] https://www.baldurbjarnason.com/2025/trusting-your-own-judgement-on-ai/
[15] https://www.linkedin.com/in/redezem/
[16] https://ludic.mataroa.blog/blog/brainwash-an-executive-today/
[17] https://ludic.mataroa.blog/blog/an-empty-hall-of-smiling-assassins/
[18] https://ludic.mataroa.blog/blog/contra-ptaceks-terrible-article-on-ai/#fn:5
[19] https://www.youtube.com/shorts/UM3xV8IyE70
[20] https://ludic.mataroa.blog/blog/contra-ptaceks-terrible-article-on-ai/#fn:6
[21] https://plato.stanford.edu/entries/qualia/
[22] https://iep.utm.edu/hard-problem-of-conciousness/
[23] https://ludic.mataroa.blog/blog/contra-ptaceks-terrible-article-on-ai/#fn:7
[24] https://ludic.mataroa.blog/blog/contra-ptaceks-terrible-article-on-ai/#fn:8
[25] https://annieruygtillustration.com/
[26] https://katecarruthers.com/2024/06/16/ai-autonomous-everything/
[27] https://thespacioustarot.com/
[28] https://www.hermit-tech.com/about
[29] https://ludic.mataroa.blog/blog/contra-ptaceks-terrible-article-on-ai/#fnref:1
[30] https://ludic.mataroa.blog/blog/contra-ptaceks-terrible-article-on-ai/#fnref:2
[31] https://ludic.mataroa.blog/blog/contra-ptaceks-terrible-article-on-ai/#fnref:3
[32] https://ludic.mataroa.blog/blog/contra-ptaceks-terrible-article-on-ai/#fnref:4
[33] https://ludic.mataroa.blog/blog/contra-ptaceks-terrible-article-on-ai/#fnref:5
[34] https://ludic.mataroa.blog/blog/contra-ptaceks-terrible-article-on-ai/#fnref:6
[35] https://ludic.mataroa.blog/blog/contra-ptaceks-terrible-article-on-ai/#fnref:7
[36] https://ludic.mataroa.blog/blog/contra-ptaceks-terrible-article-on-ai/#fnref:8
[37] https://akols.com/previous?id=ludic
[38] https://akols.com/
[39] https://akols.com/next?id=ludic
[40] https://ludic.mataroa.blog/rss/
[41] https://ludic.mataroa.blog/newsletter/
[42] https://mataroa.blog/