Stop Sloppypasta

slop·py·pas·ta n. Verbatim LLM output copy-pasted at someone, unread, unrefined, and unrequested. From slop (low-quality AI-generated content) + copypasta (text copied and pasted, often as a meme, without critical thought). It is considered rude because it asks the recipient to do work the sender did not bother to do themselves.

[2]A few examples [3]Why it's rude [4]Guidelines to do better

You just got an unread message notification.

Maybe it's on Slack (or Teams), a text message, or an email. Maybe you were tagged in Notion or an Office doc.

You open it to find several paragraphs of text - or perhaps a listicle - [5]with all the hallmarks of AI-generated writing: headings, heavy formatting, and "it's not X, it's Y", with em-dashes sprinkled gratuitously throughout.

The person who sent it probably spent about ten seconds on it. They asked a chatbot and forwarded its response to you verbatim, without validation or critical review.

But now you're expected to read, vet, and act on this wall of text. When someone forwards text they themselves have not considered, they are asking you to do work they chose not to do. The asymmetric effort makes it rude.

━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━

A few examples

❌ Example
#strategy

The team is mid-discussion about whether to expand into the German market.

👤:

I asked Claude about this! Here's what it said:

Expanding into the German market presents both significant opportunities and notable challenges. Germany is the largest economy in the EU and offers a robust consumer base. However, businesses should consider the following key factors:

1. Regulatory Environment

Germany has strict regulations including GDPR compliance, local labour laws...

[6 more sections follow. None mention your company, your product, your budget, or reference the discussion already underway.]

buries the live discussion · unrequested · generic

The Eager Beaver

A conversation participant wants to contribute to the topic at hand, so they ask a chatbot and share whatever comes back. The intention is good - they genuinely want to help - but the wall of generic AI text blocks the discussion already underway. Now other participants have to scroll past it to continue, or stop to read and validate it.

It feels helpful to send. It creates work to receive.

❌ Example

you:

Does anyone know why our email open rates have been dropping? We changed the subject line format last month.

👤:

ChatGPT says:

Email open rate declines can be attributed to several factors. These include changes in subject line strategy, sender reputation issues, list hygiene problems, and deliverability concerns. Here are the most common causes:

1. Subject Line Fatigue

If subject lines have become too similar or predictable, subscribers may stop engaging…

[Provides 5 more sections of generic email open-rate diagnostics. Does not mention your subject line change, your audience, or your platform.]

irrelevant to the specific question · generic

The OrAIcle

Someone asks a specific question. Another person puts it into a chatbot and pastes the response as the answer.

"ChatGPT says" is the enshittified LLM-era equivalent of [6]LMGTFY (Let Me Google That For You). Shared as a link or a GIF, LMGTFY was easy to ignore, and clear about what it was (sarcastic commentary). Sloppypasta is neither. Recipients are left to figure out whether it's AI-generated, whether it's correct, and which part actually answers the question (if any of it is relevant at all). If you ask a person a question, you're looking for their perspective and expertise. In this sense, both LMGTFY and sloppypasta are etiquette failures in which the sender denies the recipient the dignity of a basic human reply.

❌ Example

👤:

Hey team - I did some research on our competitors this week. Here's a summary:

Competitive Landscape Overview

The market is highly competitive, with several established players and emerging challengers. Key competitors offer distinct value propositions across pricing tiers…

[It's a 5-page essay with handwavy assertions and no concrete details. No dates. No sources. No live pricing.]

presented as personal work · no one knows to check · hallucinated details possible

The Ghostwriter

The sender shares AI output as their own work, with no indication a chatbot wrote it. Recipients have no reason to question it, and may act on information that is out of date, incomplete, or simply wrong.

Using AI as a ghostwriter borrows the sender's credibility. If the content turns out to be wrong, that credibility is what gets spent.

Why it's rude

Effort
  As a recipient: Previously, the effort to read was balanced by the effort to write. Now LLMs make writing "free" and increase the effort to read due to the additional verification burden.
  As a sender: Writing requires effort, which contributes to comprehension. LLMs increase cognitive debt by reducing struggle.
  Feedback loop: The sender's skipped effort becomes the recipient's added effort, increasing frustration as incidence increases.

Trust
  As a recipient: The LLM propensity for hallucination and capability to bullshit convincingly mean that "trust but verify" is broken. All correspondence must be untrusted by default.
  As a sender: What you share directly influences your reputation. Sharing raw LLM output - especially unvetted - burns your credibility.
  Feedback loop: Eroding trust from sloppypasta is the modern 'Boy Who Cried Wolf.'

Sharing raw AI output is like eating junk food: it's easy and may feel good, but it's not in your best interest. You'll damage your relationship with the recipient, and do yourself a disservice by reducing your own comprehension.

"For the longest time, writing was more expensive than reading. If you encountered a body of written text, you could be sure that at the very least, a human spent some time writing it down. The text used to have an innate proof-of-thought, a basic token of humanity."

— Alex Martsinovich, [7]It's rude to show AI output to people

Before LLMs, writing took effort. Authors spent time selecting their words with intention, effort that was balanced by the time the audience spent reading. LLMs break this balance: the effort to produce text is effectively free, but the effort required to read it hasn't changed. [8]The increasing verbosity of LLMs widens the asymmetry further. In some circumstances (like pasting raw LLM output into a chat thread), the sloppypasta effectively becomes a filibuster, crowding out the existing conversation and blocking the viewport.

"Cognitive effort — and even getting painfully stuck — is likely important for fostering mastery."

— Anthropic, [9]How AI assistance impacts the formation of coding skills

Writing is thinking. The writing process forces the author to work through their thoughts, building their comprehension and retention. [10]Multiple [11]studies have found that delegating tasks to LLMs creates cognitive debt: shortcutting thinking with LLMs ultimately reduces comprehension of, and recall about, the delegated subject.

"A polished AI response feels dismissive even if the content is correct"

— Blake Stockton, [12]AI Writing Etiquette Manifesto

Before LLMs, trust was the default. Authors wrote from their personal expertise and perspective, and readers could judge an author's understanding of the subject by the coherence of their writing. LLMs generate the most probable next token under an overarching goal to be helpful, which explains their propensity for hallucination ([13]confabulation) and why many people feel that [14]LLMs are bullshit generators. Modern LLMs are typically given tools to look up grounding information, which reduces (but does not eradicate) their likelihood of outright making up facts. But that still doesn't solve the trust problem: the reader has no way to know what the sender checked and what they didn't. LLM responses therefore cannot be trusted by default, and they compound the effort asymmetry by adding a verification tax on the reader.

Beyond accuracy, LLMs write authoritatively, with the tone and confidence of an expert. This adds to the reader's burden: they have no way to gauge the sender's actual level of expertise with the subject matter. The result is a further erosion of trust, because the AI's voice removes signal that recipients previously used to distinguish expertise from plausible-sounding slop.

"I think it's rude to publish text that you haven't even read yourself. I won't publish anything that will take someone longer to read than it took me to write."

— Simon Willison, [15]Personal AI Ethics

Formerly, "trust but verify" ruled. Readers would trust until that trust was broken; an author was trustworthy or they weren't. Shared LLM output obfuscates this chain of trust. Did the prompter do the due diligence to validate the LLM response? If errors are discovered, who is to blame: the prompter or the AI? Was it an oversight, a missed verification step, or was verification skipped altogether? The uncertainty means the recipient doesn't know what they can trust or what has been verified; they must treat everything as untrusted. Just like the Boy Who Cried Wolf, once trust is broken, the uncertainty spreads to all future messages from the sender.

Assumptions of balanced effort and presumed trust are no longer guaranteed in a post-LLM world. Sloppypasta creates a compounding negative feedback loop in which the sender forfeits learning and credibility while the recipient burns effort and loses trust. Receiving raw AI output feels bad because of the cognitive dissonance of having these assumptions violated.

Read the full essay

Simple guidelines to do better

Read.

Read the output before you share it. If you haven't read it, you don't know whether it's correct, relevant, or current.

Delegating work to AI creates cognitive debt. Working with the results helps run damage control for your own understanding.

Verify.

Check the facts before you forward them. Anything you forward carries your implicit endorsement - your reputation depends on the quality of what you share.

LLMs are trained to "be helpful", and will produce outdated facts, wrong figures, and plausible nonsense in order to provide a response to your request. Further, an LLM is inherently out of date; its knowledge cutoff means it contains, at best, information on the state of the world when its training started (months ago).

Distill.

Cut the response down to what matters. Distilling the generated response to its useful essence is your job.

LLMs are incentivized to use many words when few would do: providers of API-priced models have a per-token incentive to train chatty LLMs, and [17]research shows that longer, highly formatted posts are often preferred as more engaging.

Disclose.

Share how AI helped.

If you've read, verified, and edited it, send it as yours - preferably with a note that you worked with AI assistance. If you're sharing raw output, say so explicitly. In both cases, it may be useful to share your prompt and how you worked with the AI to get the final output.

Disclosure restores the trust signals that sloppypasta destroys and tells the recipient what you checked and what they may be on the hook for.

Share only when requested.

Never share unsolicited AI output in a conversation.

Remember that AI generation creates effort asymmetry, and be respectful of those you share with. Sloppypasta delegates the full burden of reading, verifying, and distilling to a recipient who didn't ask for it and may not realize the effort required of them.

Share as a link.

Share AI output as a link or attached document rather than dropping the full text inline.

In messaging environments, a large paste takes over the viewport and crowds out the existing conversation. A link lets the recipient choose when - and whether - to engage, rather than having that choice imposed on them.

AI capabilities keep increasing, and using AI to draft, brainstorm, or accelerate your work will be increasingly useful. But using AI should not make your productivity someone else's burden. New tools require new manners.

Use AI to accelerate your work or improve what you send.
Don't use it to replace thinking about what you're sending.

Further reading

• [18]It's Rude to Show AI Output to People
• [19]Personal AI Ethics by Simon Willison
• [20]AI Manifesto
• [21]Using AI Responsibly in Development & Collaboration
• [22]AI Writing Etiquette Manifesto

inspired by [23]nohello.net · [24]dontasktoask.com · [25]open source

References:

[1] https://stopsloppypasta.ai/en/#main-content
[2] https://stopsloppypasta.ai/en/#types
[3] https://stopsloppypasta.ai/en/#why
[4] https://stopsloppypasta.ai/en/#rules
[5] https://tropes.fyi/directory
[6] https://lmgtfy.app/?q=what+is+lmgtfy
[7] https://distantprovince.by/posts/its-rude-to-show-ai-output-to-people/
[8] https://epoch.ai/data-insights/output-length
[9] https://www.anthropic.com/research/AI-assistance-coding-skills
[10] https://www.media.mit.edu/publications/your-brain-on-chatgpt/
[11] https://www.anthropic.com/research/AI-assistance-coding-skills
[12] https://www.blakestockton.com/ai-writing-etiquette-manifesto/
[13] https://pmc.ncbi.nlm.nih.gov/articles/PMC10619792/
[14] https://machine-bullshit.github.io/
[15] https://simonwillison.net/2023/Aug/27/wordcamp-llms/#personal-ai-ethics
[17] https://arxiv.org/abs/2310.10076
[18] https://distantprovince.by/posts/its-rude-to-show-ai-output-to-people/
[19] https://simonwillison.net/2023/Aug/27/wordcamp-llms/#personal-ai-ethics
[20] https://noellevandijk.com/ai-manifesto/
[21] https://ai-manifesto.dev/
[22] https://www.blakestockton.com/ai-writing-etiquette-manifesto/
[23] https://nohello.net/
[24] https://dontasktoask.com/
[25] https://github.com/ahgraber/stopsloppypasta