diff --git a/content/journal/dispatch-30-august-2025/index.md b/content/journal/dispatch-30-august-2025/index.md index a327379..4430573 100644 --- a/content/journal/dispatch-30-august-2025/index.md +++ b/content/journal/dispatch-30-august-2025/index.md @@ -4,6 +4,31 @@ date: 2025-07-29T17:05:15-04:00 draft: false tags: - dispatch +references: +- title: "Flounder Mode - Colossus" + url: https://joincolossus.com/article/flounder-mode/ + date: 2025-08-04T03:36:39Z + file: joincolossus-com-pz3sdf.txt +- title: "DIYR" + url: https://diyr.dev/ + date: 2025-08-04T03:36:44Z + file: diyr-dev-akislx.txt +- title: "Naz Hamid • Just One Good Thing" + url: https://nazhamid.com/journal/just-one-good-thing/ + date: 2025-08-04T03:39:16Z + file: nazhamid-com-8ujuab.txt +- title: "Contra Ptacek's Terrible Article On AI — Ludicity" + url: https://ludic.mataroa.blog/blog/contra-ptaceks-terrible-article-on-ai/ + date: 2025-08-04T03:41:30Z + file: ludic-mataroa-blog-pcjwzr.txt +- title: "The AI-Native Software Engineer - by Addy Osmani - Elevate" + url: https://addyo.substack.com/p/the-ai-native-software-engineer + date: 2025-08-04T03:41:34Z + file: addyo-substack-com-2unltb.txt +- title: "Full-breadth Developers | justin․searls․co" + url: https://justin.searls.co/posts/full-breadth-developers/ + date: 2025-08-04T03:42:52Z + file: justin-searls-co-9dhvbh.txt --- Some thoughts here... @@ -45,24 +70,47 @@ Some thoughts here... ### Links -* [Title][4] -* [Title][5] -* [Title][6] +* [Flounder Mode - Colossus][4] -[4]: https://example.com/ -[5]: https://example.com/ -[6]: https://example.com/ + > I asked Kelly about the tradeoffs of focusing on a single thing if you want to be great (which is what I had been getting at before). “Greatness is overrated,” he said, and I perked up. “It’s a form of extremism, and it comes with extreme vices that I have no interest in. Steve Jobs was a jerk. 
Bob Dylan is a jerk.” + +* [DIYR][5] + + > Celebrates the spirit of independence, creativity, and resourcefulness. The acronym DIYR stands for 'Do It Yourself Revolution', promoting reflection and new forms of production, combining simplicity and longevity, ethics and aesthetics. + +* [Naz Hamid • Just One Good Thing][6] + + > In the last year, a mindset shift and approach appeared as a very simple idea: just do one thing, that I want to do today. + +* [Contra Ptacek's Terrible Article On AI — Ludicity][7] + + > Let me be extremely clear — I think this essay sucks and it's wild to me that it achieved any level of popularity, and anyone that thinks that it does not predominantly consist of shoddy thinking and trash-tier ethics has been bamboozled by the false air of mature even-handedness, or by the fact that Ptacek is a good writer. + +* [The AI-Native Software Engineer][8] + + > A practical playbook for integrating AI into your daily engineering workflow + +* [Full-breadth Developers | justin․searls․co][9] + + > The software industry is at an inflection point unlike anything in its brief history. Generative AI is all anyone can talk about. It has rendered entire product categories obsolete and upended the job market. With any economic change of this magnitude, there are bound to be winners and losers. So far, it sure looks like full-breadth developers—people with both technical and product capabilities—stand to gain as clear winners. + +[4]: https://joincolossus.com/article/flounder-mode/ +[5]: https://diyr.dev/ +[6]: https://nazhamid.com/journal/just-one-good-thing/ +[7]: https://ludic.mataroa.blog/blog/contra-ptaceks-terrible-article-on-ai/ +[8]: https://addyo.substack.com/p/the-ai-native-software-engineer +[9]: https://justin.searls.co/posts/full-breadth-developers/ [^1]: Here are the samples I used: - 1. [Lake Beach Waves][7] - 2. [Wooden floor creak][8] - 3. [Fireworks Field Recording][9] - 4. [Cicadas][10] - 5. [Osprey Sounds][11] + 1. 
[Lake Beach Waves][10]
+    2. [Wooden floor creak][11]
+    3. [Fireworks Field Recording][12]
+    4. [Cicadas][13]
+    5. [Osprey Sounds][14]

-[7]: https://pixabay.com/sound-effects/lake-beach-waves-28492/
-[8]: https://pixabay.com/sound-effects/wooden-floor-creak-81237/
-[9]: https://pixabay.com/sound-effects/fireworks-field-recording-70720/
-[10]: https://pixabay.com/sound-effects/cicadas-18654/
-[11]: https://www.allaboutbirds.org/guide/OSPREY/sounds
+[10]: https://pixabay.com/sound-effects/lake-beach-waves-28492/
+[11]: https://pixabay.com/sound-effects/wooden-floor-creak-81237/
+[12]: https://pixabay.com/sound-effects/fireworks-field-recording-70720/
+[13]: https://pixabay.com/sound-effects/cicadas-18654/
+[14]: https://www.allaboutbirds.org/guide/OSPREY/sounds
diff --git a/static/archive/addyo-substack-com-2unltb.txt b/static/archive/addyo-substack-com-2unltb.txt
new file mode 100644
index 0000000..fdca7b9
--- /dev/null
+++ b/static/archive/addyo-substack-com-2unltb.txt
@@ -0,0 +1,1831 @@
+[1]
+Elevate

The AI-Native Software Engineer

A practical playbook for integrating AI into your daily engineering workflow

[15]Addy Osmani
Jul 01, 2025

An AI-native software engineer is one who deeply integrates AI into their daily
workflow, treating it as a partner to amplify their abilities.

This requires a fundamental mindset shift. Instead of thinking “AI might
replace me,” an AI-native engineer asks, for every task: “Could AI help me do
this faster, better, or differently?”

The mindset is optimistic and proactive - you see AI as a multiplier of your
productivity and creativity, not a threat.
With the right approach AI could 2x,
5x or perhaps 10x your output as an engineer. Experienced developers especially
find that their expertise lets them prompt AI in ways that yield high-level
results; a senior engineer can get answers akin to what a peer might deliver by
asking AI the right questions with appropriate [25]context-engineering.

[26]
[https]

Being AI-native means embracing continuous learning and adaptation - engineers
build software with AI-based assistance and automation baked in from the
beginning. This mindset leads to excitement about the possibilities rather than
fear.

Yes, there may be uncertainty and a learning curve - many of us have ridden the
emotional rollercoaster of excitement, fear, and back again - but ultimately
the goal is to land on excitement and opportunity. The AI-native engineer views
AI as a way to delegate the repetitive or time-consuming parts of development
(like boilerplate coding, documentation drafting, or test generation) and free
themselves to focus on higher-level problem solving and innovation.

Key principle - AI as collaborator, not replacement: An AI-native engineer
treats AI like a knowledgeable, if junior, pair-programmer who is available
24/7.

You still drive the development process, but you constantly leverage the AI for
ideas, solutions, and even warnings. For example, you might use an AI assistant
to brainstorm architectural approaches, then refine those ideas with your own
expertise. This collaboration can dramatically speed up development while also
enhancing quality - if you maintain oversight.

Importantly, you don’t abdicate responsibility to the AI. Think of it as
working with a junior developer who has read every StackOverflow post and API
doc: they have a ton of information and can produce code quickly, but you are
responsible for guiding them and verifying the output. This “[27]trust, but
verify” mindset is crucial and we’ll revisit it later.
[28]
[https]

Let's be blunt: AI-generated slop is real and is not an excuse for [29]
low-quality work. A persistent risk in using these tools is a combination of
rubber-stamped suggestions, subtle hallucinations, and simple laziness that
falls far below professional engineering standards. This is why the "verify"
part of the mantra is non-negotiable. As the engineer, you are not just a user
of the tool; you are the ultimate guarantor. You remain fully and directly
responsible for the quality, readability, security, and correctness of every
line of code you commit.

[30]
[https]

Key principle - Every engineer is a manager now: The role of the engineer is
fundamentally changing. With AI agents, you orchestrate the work rather than
executing all of it yourself.

You remain responsible for every commit into main, but you focus more on
defining and “assigning” the work to get there. In the not-distant future we
may increasingly say “[31]Every engineer is a manager now.” Legitimate work can
be directed to background agents like Jules or Codex, or you can task Claude
Code/Gemini CLI/OpenCode with chewing through an analysis or code migration
project. The engineer needs to intentionally shape the codebase so that it’s
easier for the AI to work with, using rule files (e.g. GEMINI.md), good
READMEs, and well-structured code. This puts the engineer into the role of [32]
supervisor, mentor, and validator. AI-first teams are smaller, able to
accomplish more, and capable of [33]compressing steps of the SDLC to deliver
better quality, [34]faster.

[35]
[https]

High-level benefits: By fully embracing AI in your workflow, you can achieve
some serious productivity leaps, potentially shipping more features faster
without sacrificing quality (this of course has nuance such as keeping task
complexity in mind).

Routine tasks (from formatting code to writing unit tests) can be handled in
seconds.
Perhaps more importantly, AI can augment your understanding: it’s like +having an expert on call to explain code or propose solutions in areas outside +your normal expertise. The result is that an AI-native engineer can take on +more ambitious projects or handle the same workload with a smaller team. In +essence, AI extends what you’re capable of, allowing you to work at a higher +level of abstraction. The caveat is that it requires skill to use effectively - +that’s where the right mindset and practices come in. + +Example - Mindset in action: Imagine you’re debugging a tricky issue or +evaluating a new tech stack. A traditional approach might involve lots of +Googling or reading documentation. An AI-native approach is to engage an AI +assistant that supports Search grounding or deep research: describe the bug or +ask for pros/cons of the tech stack, and let the AI provide insights or even +code examples. + +You remain in charge of interpretation and implementation, but the AI +accelerates gathering information and possible solutions. This collaborative +problem-solving becomes second nature once you get used to it. Make it a habit +to ask, “How can AI help with this task?” until it’s reflex. Over time you’ll +develop instincts for what AI is good at and how to prompt it effectively. + +In summary, being AI-native means internalizing AI as a core part of how you +think about solving problems and building software. It’s a mindset of +partnership with machines: using their strengths (speed, knowledge, pattern +recognition) to complement your own (creativity, judgment, context). With this +foundation in mind, we can move on to practical steps for integrating AI into +your daily work. + +Getting Started - Integrating AI into your daily workflow + +Adopting an AI-native workflow can feel daunting if you’re completely new to +it. The key is to start small and build up your AI fluency over time. 
In this +section, we’ll provide concrete guidance to go from zero to productive with AI +in your day-to-day engineering tasks. + +[37] +[https] + +The above is a speculative look at where we may end up with AI in the software +lifecycle. I continue to strongly believe human-in-the-loop (engineering, +design, product, UX etc) will be needed to ensure that quality doesn’t suffer. + +Step 1: The first change? You often start with AI. + +An AI-native workflow isn’t about occasionally looking for tasks AI can help +with; it's often about giving the task to an AI model first to see how it +performs. [38]One team noted: + + The typical workflow involves giving the task to an AI model first (via + Cursor or a CLI program)... with the understanding that plenty of tasks are + still hit or miss. + +Are you studying a domain or a competitor? Start with Gemini Deep Research. +Find yourself stuck in an endless debate over some aspect of design? While your +team argued, you could have built three prototypes with AI to prove out the +idea. Googlers are already [39]using it to build slides, debug production +incidents, and much more. + +When you hear “But LLMs hallucinate and chatbots give lousy answers” it's time +to update your toolchain. Anybody [40]seriously coding with AI today is using +agents. Hallucinations can be significantly mitigated and managed with proper +[41]context engineering and agentic feedback loops. The mindset shift is +foundational: all of us should be AI-first right now. + +Step 2: Get the right AI tools in place. + +To integrate AI smoothly, you’ll want to set up at least one coding assistant +in your environment. Many engineers start with GitHub Copilot in VS Code which +has code autocomplete and code generation capabilities. If you use an IDE like +VS Code, consider installing an AI extension (for example, Cursor is a +dedicated AI-enhanced code editor, and [42]Cline is a VS Code plugin for an AI +agent - more on these later). 
These tools are great for beginners because they +work in the background, suggesting code in real-time for whatever file you’re +editing. Outside your editor, you might also explore ChatGPT, Gemini or Claude +in a separate window for question-answer style assistance. Starting with +tooling is important because it lowers the friction to use AI. Once installed, +the AI is only a keystroke away whenever you think “maybe the AI can help with +this.” + +Step 3: Learn prompt basics - be specific and provide context. + +Using AI effectively is a skill, and the core of that skill is [43]prompt +engineering. A common mistake new users make is giving the AI an overly vague +instruction and then being disappointed with the result. Remember, the AI isn’t +a mind reader; it reacts to the prompt you give. A little extra context or +clarity goes a long way. For instance, if you have a piece of code and you want +an explanation or unit tests for it, don’t just say “Write tests for this.” +Instead, describe the code’s intended behavior and requirements in your prompt. +Compare these two prompts for writing tests for a React login form component: + + • Poor prompt: “Can you write tests for my React component?” + + • Better prompt: “I have a LoginForm React component with an email field, + password field, and submit button. It displays a success message on + successful submit and an error message on failure, via an onSubmit + callback. Please write a Jest test file that: (1) renders the form, (2) + fills in valid and invalid inputs, (3) submits the form, (4) asserts that + onSubmit is called with the right data, and (5) checks that success and + error states render appropriately.” + +The second prompt is longer, but it gives the AI exactly what we need. The +result will be far more accurate and useful because the AI isn’t guessing at +our intentions - we spelled them out. 
In practice, spending an extra minute to +clarify your prompt can save you hours of fixing AI-generated code later. + +[44] +[https] + +Effective prompting is such an important skill that Google has published entire +guides on it (see [45]Google’s Prompting Guide 101 for a great starting point). +As you practice, you’ll get a feel for how to phrase requests. A couple of +quick tips: be clear about the format you want (e.g., “return the output as +JSON”), break complex tasks into ordered steps or bullet points in your prompt, +and provide examples when possible. These techniques help the AI understand +your request better. + +Step 4: Use AI for code generation and completion. + +With tools set up and a grasp of how to prompt, start applying AI to actual +coding tasks. A good first use-case is generating boilerplate or repetitive +code. For instance, if you need a function to parse a date string in multiple +formats, ask the AI to draft it. You might say: “Write a Python function that +takes a date string which could be in formats X, Y, or Z, and returns a +datetime object. Include error handling for invalid formats.” + +The AI will produce an initial implementation. Don’t accept it blindly - read +through it and run tests. This hands-on practice builds your trust in when the +AI is reliable. Many developers are pleasantly surprised at how the AI produces +a decent solution in seconds, which they can then tweak. Over time, you can +move to more significant code generation tasks, like scaffolding entire classes +or modules. As an example, Cursor even offers features to generate entire files +or refactor code based on a description. Early on, lean on the AI for helper +code - things you understand but would take time to write - rather than core +algorithmic logic that’s critical. This way, you build confidence in the AI’s +capabilities on low-risk tasks. + +Step 5: Integrate AI into non-coding tasks. 
+ +Being AI-native isn’t just about writing code faster; it’s about improving all +facets of your work. A great way to start is using AI for writing or analysis +tasks that surround coding. For example, try using AI to write a commit message +or a Pull Request description after you make code changes. You can paste a git +diff and ask, “Summarize these changes in a professional PR description.” The +AI will draft something that you can refine. + +This is a key differentiator between casual users and true AI-native engineers. +The best engineers have always known that their primary value isn't just typing +code, but in the thinking, planning, research, and communication that surrounds +it. Applying AI to these areas - to accelerate research, clarify documentation, +or structure a project plan - is a massive force multiplier. Seeing AI as an +assistant for the entire engineering process, not just the coding part, is +critical to unlocking its full potential for velocity and innovation. + +Along these lines, use AI to document code: have it generate docstrings or even +entire sections of technical documentation based on your codebase. Another idea +is to use AI for planning - if you’re not sure how to implement a feature, +describe the requirement and ask the AI to outline a possible approach. This +can give you a starting blueprint which you then adjust. Don’t forget about +everyday communications: many engineers use AI to draft emails or Slack +messages, especially when communicating complex ideas. + +For instance, if you need to explain to a product manager why a certain bug is +tricky, you can ask the AI to help articulate the explanation clearly. This +might sound trivial, but it’s a real productivity boost and helps ensure you +communicate effectively. Remember, “it’s not always all about the code” - AI +can assist in meetings, brainstorming, and articulating ideas too. An AI-native +engineer leverages these opportunities. 
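To make the diff-to-PR-description habit concrete, here is a minimal Python
sketch. Everything in it is illustrative - the function names, the prompt
wording, and the 8,000-character truncation guard are assumptions, not any
particular tool's API; the resulting text is meant to be pasted into whichever
assistant you use.

```python
import subprocess


def build_pr_prompt(diff_text: str, max_chars: int = 8000) -> str:
    """Wrap a git diff in a prompt asking for a PR description.

    `max_chars` is a crude guard against overflowing the model's context
    window; real tooling would count tokens instead of characters.
    """
    truncated = diff_text[:max_chars]
    return (
        "Summarize these changes in a professional PR description.\n"
        "Structure the answer as: a one-line summary, a 'What changed' "
        "bullet list, and a 'How to test' section.\n\n"
        f"```diff\n{truncated}\n```"
    )


def staged_diff() -> str:
    """Grab the currently staged changes (assumes you're inside a git repo)."""
    return subprocess.run(
        ["git", "diff", "--staged"], capture_output=True, text=True
    ).stdout


# Typical use: print(build_pr_prompt(staged_diff())) and paste the output
# into your assistant of choice, then edit its draft before posting.
```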
+ +Step 6: Iterate and refine through feedback. + +As you begin using AI day-to-day, treat it as a learning process for yourself. +Pay attention to where the AI’s output needed fixing and try to deduce why. Was +the prompt incomplete? Did the AI assume the wrong context? Use that feedback +to craft better prompts next time. Most AI coding assistants allow an iterative +process: you can say “Oops, that function is not handling empty inputs +correctly, please fix that” and the AI will refine its answer. Take advantage +of this interactivity - it’s often faster to correct an AI’s draft by telling +it what to change than writing from scratch. + +Over time, you’ll develop a library of prompt patterns that work well. For +example, you might discover that “Explain X like I’m a new team member” yields +a very good high-level explanation of a piece of code for documentation +purposes. Or that providing a short example input and output in your prompt +dramatically improves an AI’s answer for data transformation tasks. Build these +discoveries into your workflow. + +Step 7: Always verify and test AI outputs. + +This cannot be stressed enough: never assume the AI is 100% correct. Even if +the code compiles or the answer looks reasonable, do your due diligence. Run +the code, write additional tests, or sanity-check the reasoning. Many +AI-generated solutions work on the surface but fail on edge cases or have +subtle bugs. + +You are the engineer; the AI is an assistant. Use all your normal best +practices (code reviews, testing, static analysis) on AI-written code just as +you would on human-written code. In practice, this means budgeting some time to +go through what the AI produced. The good news is that reading and +understanding code is usually faster than writing it from scratch, so even with +verification, you come out ahead productivity-wise. 
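As a concrete illustration of the "verify" pass, the sketch below treats
`parse_date` as a stand-in for code an AI might draft (the name and accepted
formats are invented for this example), then probes it with the edge cases -
surrounding whitespace, impossible dates, empty input - that a review should
add beyond the happy path the prompt asked for:

```python
from datetime import datetime


# Stand-in for an AI-drafted helper: tries a few known formats and raises
# ValueError if none match. (Illustrative, not from any real codebase.)
def parse_date(text: str) -> datetime:
    for fmt in ("%Y-%m-%d", "%d/%m/%Y", "%b %d, %Y"):
        try:
            return datetime.strptime(text.strip(), fmt)
        except ValueError:
            continue
    raise ValueError(f"unrecognized date format: {text!r}")


# The "verify" pass: probe edge cases the AI may not have considered,
# not just the happy path it was prompted with.
def verify_parse_date() -> None:
    assert parse_date("2025-07-01") == datetime(2025, 7, 1)
    assert parse_date(" 01/07/2025 ") == datetime(2025, 7, 1)  # whitespace
    assert parse_date("Jul 01, 2025") == datetime(2025, 7, 1)
    for bad in ("", "2025-13-01", "not a date"):  # invalid inputs must raise
        try:
            parse_date(bad)
        except ValueError:
            pass
        else:
            raise AssertionError(f"accepted invalid input: {bad!r}")


verify_parse_date()
```

In a real project these checks would live in your test suite rather than be
run inline, but the habit is the same: every AI-drafted function earns its
place by surviving tests you wrote for it.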
+ +As you gain experience, you’ll also learn which kinds of tasks the AI is weak +at - for example, many LLMs struggle with precise arithmetic or highly +domain-specific logic - and you’ll know to double-check those parts extra +carefully or perhaps avoid using AI for those. Building this intuition ensures +that by the time you trust an AI-generated change enough to commit or deploy, +you’ve mitigated risks. A useful mental model is to treat AI like a highly +efficient but not infallible teammate: you value its contributions but always +perform the final review yourself. + +Step 8: Expand to more complex uses gradually. + +Once you’re comfortable with AI handling small tasks, you can explore more +advanced integrations. For example, move from using AI in a reactive way +(asking for help when you think of it) to a proactive way: let the AI monitor +as you code. Tools like Cursor or Windsurf can run in agent mode where they +watch for errors or TODO comments and suggest fixes automatically. Or you might +try an autonomous agent mode like what Cline offers, where the AI can plan out +a multi-step task (create a file, write code in it, run tests, etc.) with your +approval at each step. + +These advanced uses can unlock even greater productivity, but they also require +more vigilance (imagine giving a junior dev more autonomy - you’d still check +in regularly). + +A powerful intermediate step is to use AI for end-to-end prototyping. For +instance, challenge yourself on a weekend to build a simple app using mostly AI +assistance: describe the app you want and see how far a tool like Replit’s AI +or Bolt can get you, then use your skills to fill the gaps. This kind of +exercise is fantastic for understanding the current limits of AI and learning +how to direct it better. And it’s fun - you’ll feel like you have a superpower +when, in a couple of hours, you have a working prototype that might have taken +days or weeks to code by hand. 
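The plan-and-approve pattern these agent modes share can be reduced to a small
control loop. The sketch below is not Cline's (or any tool's) actual
implementation - the plan, the executor, and the approval gate are stubbed out
as plain callables purely to show the shape of the workflow:

```python
from typing import Callable


def run_with_approval(
    plan: list[str],
    execute: Callable[[str], str],
    approve: Callable[[str], bool],
) -> list[str]:
    """Run each planned step only after a human gate approves it;
    return a log of what happened for after-the-fact review."""
    log = []
    for step in plan:
        if approve(step):
            log.append(f"done: {execute(step)}")
        else:
            log.append(f"skipped: {step}")
    return log


# Stand-ins: a real tool would get `plan` from a model and have `execute`
# actually edit files or run commands in the workspace.
plan = ["create route for /jobs", "add controller", "write db migration"]
log = run_with_approval(
    plan,
    execute=lambda step: step,                     # pretend the step succeeded
    approve=lambda step: "migration" not in step,  # human vetoes one step
)
# log → ['done: create route for /jobs', 'done: add controller',
#        'skipped: write db migration']
```

The point of the log is the same as reviewing an agent's diff: nothing the
loop did should be invisible to the human who remains responsible for it.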
+ +By following these steps and ramping up gradually, you’ll go from an AI novice +to someone who instinctively weaves AI into their development workflow. The +next section will dive deeper into the landscape of tools and platforms +available - knowing what tool to use for which job is an important part of +being productive with AI. + +AI Tools and Platforms - from prototyping to production + +One of the reasons it’s an exciting time to be an engineer is the sheer variety +of AI-powered tools now available. As an AI-native software engineer, part of +your skillset is knowing which tools to leverage for which tasks. In this +section, we’ll survey the landscape of AI coding tools and platforms, and offer +guidance on choosing and using them effectively. We’ll broadly categorize them +into two groups - AI coding assistants (which integrate into your development +environment to help with code you write) and AI-driven prototyping tools (which +can generate entire project scaffolds or applications from a prompt). Both are +valuable, but they serve different needs. + +Before diving into specific tools, it's crucial for any professional to adopt a +"data privacy firewall" as a core part of their mindset. Always ask yourself: +"Would I be comfortable with this prompt and its context being logged on a +third-party server?" This discipline is fundamental to using these tools +responsibly. An AI-native engineer learns to distinguish between tasks safe for +a public cloud AI and tasks that demand an enterprise-grade, privacy-focused, +or even a self-hosted, local model. + +AI Coding Assistants in the IDE + +These tools act like an “AI pair programmer” integrated with your editor or +IDE. They are invaluable when you’re working on an existing codebase or +building a project in a traditional way (writing code, file by file). 
Here are
some notable examples and their nuances:

 • GitHub Copilot has transformed from an autocomplete tool into a true coding
   agent: once you assign it an issue or task it can autonomously analyze your
   codebase, spin up environments (like via GitHub Actions), propose
   multi‑file edits, run commands/tests, fix errors, and submit draft pull
   requests complete with its reasoning in the logs. Built on state‑of‑the‑art
   models, it supports multi‑model selection and leverages Model Context
   Protocol (MCP) to integrate external tools and workspace context, enabling
   it to navigate complex repo structures including monorepos, CI pipelines,
   image assets, API dependencies, and more. Despite these advances, it’s
   optimized for low‑ to medium‑complexity tasks and still requires human
   oversight - especially for security, deep architecture, and multi‑agent
   coordination purposes.

 • Cursor - AI-native code editor: Cursor is a modified VS Code editor with AI
   deeply integrated. Unlike Copilot, which is an add-on, Cursor is built
   around AI from the ground up. It can do things like AI-aware navigation
   (ask it to find where a function is used, etc.) and smart refactorings.
   Notably, Cursor has features to generate tests, explain code, and even an
   “Agent” mode where it will attempt larger tasks on command. Cursor’s
   philosophy is to “supercharge” a developer especially in large codebases.
   If you’re working in a monorepo or enterprise-scale project, Cursor’s
   ability to understand project-wide context (and even customize it with
   project-specific rules using something like a .cursorrules file) can be a
   game changer. Many developers use Cursor in “Ask” mode to begin with - you
   ask for what you want, get confirmation, then let it apply changes - which
   helps ensure it does the right thing. The trade-off with Cursor is that
   it’s a standalone editor (though familiar to VS Code users) and currently
   it’s a paid product.
It’s very popular, with millions of developers using + it, including in enterprises, which speaks to its effectiveness. + + • Windsurf - AI agent for coding with large context: Windsurf is another + AI-augmented development environment. Windsurf emphasizes enterprise needs: + it has strong data privacy (no data retention, self-hosting options) and + even compliance certifications like HIPAA and FedRAMP, making it attractive + for companies concerned about code security. Functionally, Windsurf can do + many of the same assistive tasks (code completion, suggesting changes, + etc.), but anecdotally it’s especially useful in scenarios where you might + feed entire files or lots of documentation to the AI. If you are working on + a codebase with tens of thousands of lines and need the AI to be aware of + most of it (for instance, a sweeping refactor across many files), a tool + like Windsurf is worth considering. + + • Cline - autonomous AI coding agent for VS Code: Cline takes a unique + approach by acting as an autonomous agent within your editor. It’s an + open-source VS Code extension that not only suggests code, but can create + files, execute commands, and perform multi-step tasks with your permission. + Cline operates in dual modes: Plan (where it outlines what it intends to + do) and Act (where it executes those steps) under human supervision. The + idea is to let the AI handle more complex chores, like setting up a whole + feature: it could plan “Add a new API endpoint, including route, + controller, and database migration” and then implement each part, asking + for confirmation. This aligns AI assistance with professional engineering + workflows by giving the developer control and visibility into each step. + I’ve noted that Cline “treats AI not just as a code generator but as a + systems-level engineering tool” meaning it can reason about the project + structure and coordinate multiple changes coherently. 
The downsides: + because it can run code or modify many files, you have to be careful and + review its plans. There’s also cost if you connect it to powerful models + (some users note it can use a lot of tokens, hence $$, when running very + autonomously). But for serious use - say you want to quickly prototype a + new module in your app with tests and docs - Cline can be incredibly + powerful. It’s like having an eager junior engineer that asks “Should I + proceed with doing X?” at each step. Many developers appreciate this more + collaborative style (Cline “asks more questions” by design) because it + reduces the chance of the AI going off-track. + +Use AI coding assistants when you’re iteratively building or maintaining a +codebase - these tools fit naturally into your cycle of edit‑compile‑test. +They’re ideal for tasks like writing new functions (just type a signature and +they’ll often co‑complete the body), refactoring (“refactor this function to be +more readable”), or understanding unfamiliar code (“explain this code” - and +you get a concise summary). They’re not meant to build an entire app in one +pass; instead, they augment your day‑to‑day workflow. For seasoned engineers, +invoking an AI assistant becomes second nature - like an on‑demand search +engine - used dozens of times daily for quick help or insights. + +Under the hood, modern asynchronous coding agents like [48]OpenAI Codex and +[49]Google’s Jules go a step further. Codex operates as an autonomous cloud +agent - handling parallel tasks in isolated sandboxes: writing features, fixing +bugs, running tests, generating full PRs - then presents logs and diffs for +review. 
Google’s Jules, powered by Gemini 2.5 Pro, brings asynchronous autonomy to your
GitHub workflow: you assign an issue (such as upgrading Next.js), it clones
your repo in a VM, plans its multi‑file edits, executes them, summarizes the
changes (including an audio recap), and issues a pull request - all while you
continue working. These agents differ from inline autocomplete: they’re
autonomous collaborators that tackle defined tasks in the background and return
completed work for your review, letting you stay focused on higher‑level
challenges.

AI-Driven prototyping and MVP builders

Separate from the in-IDE assistants, a new class of tools can generate entire
working applications or substantial chunks of them from high-level prompts.
These are great when you want to bootstrap a new project or feature quickly -
essentially to get from zero to a first version (the “v0”) with minimal manual
coding. They won’t usually produce final production-quality code without
further iteration, but they create a remarkable starting point.

 • [51]Bolt (bolt.new) - one-prompt full-stack app generator: Bolt is built on
   the premise that you can type a natural language description of an app and
   get a deployable full-stack MVP in minutes. For example, you might say “A
   job board with user login and an admin dashboard” and Bolt will generate a
   React frontend (using Tailwind CSS for styling) and a Node.js/Prisma
   backend with a database, complete with the basic models for jobs and users.
   In testing, Bolt has proven to be extremely fast - often assembling a
   project in 15 seconds or so. The output code is generally clean and follows
   modern practices (React components, REST/GraphQL API, etc.), so you can
   open it in your IDE and continue development. Bolt excels at rapid
   iteration: you can tweak your prompt and regenerate, or use its UI to
   adjust what it built. It even has an “export to GitHub” feature for
   convenience.
This makes it ideal for founders, hackathon participants, or + any developer who wants to shortcut the initial setup of an app. The + trade-off is that Bolt’s creativity is bounded by its training - it might + use certain styling by default and might not handle very unique + requirements without guidance. But as a starting point, it’s often + impressive. In comparisons, users noted Bolt produces great-looking UIs + very consistently and was a top pick for quickly getting a prototype UI + that “wows” users or stakeholders. + + • [52]v0 (v0.dev by Vercel) - text to Next.js app generator: v0 is a tool + from Vercel that similarly generates apps, especially focusing on Next.js + (since Vercel is behind Next.js). You give it a prompt for what you want, + and it creates a project. One thing to note about v0: it has a distinct + design aesthetic. Testers observed that v0 tends to style everything in the + popular ShadCN UI style - basically a trendy minimalist component library - + whether you asked for it or not. This can be good if you like that style + out of the box, but it means if you wanted a very custom design, v0 might + not match it precisely. In one comparison, v0 was found to “re-theme + designs” towards its default look instead of faithfully matching a given + spec. So, v0 might be best if your goal is a quick functional prototype and + you’re flexible on appearance. The code output is usually Next.js React + code with whatever backend you specify (it might set up a simple API or use + Vercel’s Edge Functions, etc.). As part of Vercel’s ecosystem, it’s also + oriented toward deployability - the idea is you could take what it gives + you and deploy on Vercel immediately. If you’re a fan of Next.js or + building a web product that you plan to host on Vercel, v0 is a natural + choice. Just keep in mind you might need to do some re-theming if you have + your own design, since v0 has “opinions” about how things should look. 
+ + • [53]Lovable - prompt-to-UI mockups (with some code): Lovable is aimed more + at beginners or non-engineers who want to build apps through a simpler + interface. It lets you describe an app and provides a visual editor as + well. Users have noted that Lovable’s strength is ease of use - it’s quite + guided and has a nice UI for assembling your app - but its weakness is when + you need to dive into code, it can be cumbersome. It tends to hide + complexity (which is good if you want no-code), but if you are an engineer + who wants to tweak what it built, you might find the experience + frustrating. In terms of output, Lovable can create both UI and some logic, + but perhaps not as completely as Bolt or v0. In one test, Lovable + interestingly did better when given a screenshot to imitate than when given + a Figma design - a bit inconsistent. It’s targeted at quick prototyping and + maybe building simple apps with minimal coding. If you’re a tech lead + working with a designer or PM who can’t code, Lovable might be something to + let them play with to visualize ideas, which you then refine in code. + However, for a seasoned engineer, Lovable might feel a bit limiting. + + • [54]Replit: Replit’s online IDE has an AI mode where you can type a prompt + like “Create a 2D Zelda-like game” or “Build a habit tracker app” and it + will generate a project in their cloud environment. Replit’s strength is + that it can run and host the result immediately, and it often takes care of + both frontend and backend seamlessly since it’s all in one environment. A + standout example: when asked to make a simple game, Replit’s AI agent not + only wrote the code, but ran it and iteratively improved it by checking its + own work with screenshots. In comparisons, Replit sometimes produced the + most functionally complete result (for instance, a working game with + enemies and collision when others barely produced a moving character). 
However, it might take longer to run and use more computational resources in doing so. Replit is great if you want a one-shot outcome that is actually runnable and possibly closer to production. It’s like having an AI that not only writes code, but also tests it live and fixes it. For full-stack apps, Replit likewise can wire up client and server and even set up a database if asked. The output might not be the cleanest or most idiomatic code in every case, but it’s often a very workable starting point. One consideration: because Replit’s agent runs in the cloud and can execute code, you might hit some limits for very big apps (and you need to be careful if you prompt it to do something that could run malicious code - though it’s sandboxed). Overall, if your goal is “I want an app that I can run immediately and play with, and I don’t mind if the code needs refactoring later,” Replit is a top choice.

 • [55]Firebase Studio is Google’s cloud-based, agentic IDE powered by Gemini, which lets you rapidly prototype and ship full-stack, AI-infused apps entirely in your browser. You can import existing codebases - or start from scratch using natural-language, image, or sketch prompts via the App Prototyping agent - to generate a working Next.js prototype (frontend, backend, Firestore, Auth, hosting, etc.) and immediately preview it live, then seamlessly switch into full-coding mode in a Code-OSS (VS Code) workspace powered by Nix and integrated Firebase emulators.
Gemini in Firebase offers inline code suggestions, debugging, test generation, documentation, migrations, even running terminal commands and interpreting outputs - so you can prompt “Build a photo-gallery app with uploads and authentication,” see the app spun up end to end, tweak it, deploy it to Hosting or Cloud Run, and monitor usage, all without switching tools.

When to use prototyping tools: These shine when you are starting a new project or feature and want to eliminate the grunt work of initial setup. For instance, if you’re a tech lead needing a quick proof-of-concept to show stakeholders, using Bolt or v0 to spin up the base and then deploying it can save days of effort. They are also useful for exploring ideas - you can generate multiple variations of an app to see different approaches. However, expect to iterate. Think of what these tools produce as a first draft.

After generating, you’ll likely bring the code into your own IDE (perhaps with an AI assistant there to help) and refine it. In many cases, the best workflow is hybrid: prototype with a generation tool, then refine with an in-IDE assistant. For example, you might use Bolt to create the MVP of an app, then open that project in Cursor to continue development with AI pair-programming on the finer details. These approaches aren’t mutually exclusive at all - they complement each other. Use the right tool for each phase: prototypers for initial scaffolding and high-level layout, assistants for deep code work and integration.

Another consideration is limitations and learning: by examining what these prototyping tools generate, you can learn common patterns. It’s almost like reading the output of a dozen framework tutorials in one go. But also note what they don’t do - often they won’t get the last [56]20-30% of an app done (things like polish, performance tuning, and edge-case business logic), which will fall to you.
+ +This is akin to the “[57]70% problem” observed in AI-assisted coding: AI gets +you a big chunk of the way, but the final mile requires human insight. Knowing +this, you can budget time accordingly. The good news is that initial 70% +(spinning up UI components, setting up routes, hooking up basic CRUD) is +usually the boring part - and if AI does that, you can focus your energy on the +interesting parts (custom logic, UX finesse, etc.). Just don’t be lulled into a +false sense of security; always review the generated code for things like +security (e.g., did it hardcode an API key?) or correctness. + +Summary of tools vs use-cases: It’s helpful to recap and simplify how these +tools differ. In a nutshell: Use an IDE assistant when you’re evolving or +maintaining a codebase; use a generative prototype tool when you need a new +codebase or module quickly. If you already have a large project, something like +Cursor or [58]Cline plugged into VS Code will be your day-to-day ally, helping +you write and modify code intelligently. + +If you’re starting a project from scratch, tools like Bolt or v0 can do the +heavy lifting of setup so you aren’t spending a day configuring build tools or +creating boilerplate files. And if your work involves both (which is common: +starting new services and maintaining old ones), you might very well use both +types regularly. Many teams report success in combining them: for instance, +generate a prototype to kickstart development, then manage and grow that code +with an AI-augmented IDE. + +Lastly, be aware of the “not invented here” stigma some might have with AI-gen +code. It’s important to communicate within your team about using these tools. +Some traditionalists may be skeptical of code they didn’t write themselves. The +best way to overcome that is by demonstrating the benefits (speed, and after +your review, the code quality can be made good) and making AI use +collaborative. 
For example, share the prompt and output in a PR description +(“This controller was generated using v0.dev based on the following +description...”). This demystifies the AI’s contribution and can invite +constructive review just like human-generated code. + +Now that we’ve looked at tools, in the next section we’ll zoom out and walk +through how to apply AI across the entire software development lifecycle, from +design to deployment. AI’s role isn’t limited to coding; it can assist in +requirements, testing, and more. + +AI across the Software Development Lifecycle + +An AI-native software engineer doesn’t only use AI for writing code - they +leverage it at every stage of the [60]software development lifecycle (SDLC). +This section explores how AI can be applied pragmatically in each phase of +engineering work, making the whole process more efficient and innovative. We’ll +keep things domain-agnostic, with a slight bias to common web development +scenarios for examples, but these ideas apply to many domains of software (from +cloud services to mobile apps). + +1. Requirements & ideation + +The first step in any project is figuring out what to build. AI can act as a +brainstorming partner and a requirements analyst. + +For example, if you have a high-level product idea (“We need an app for X”), +you can ask an AI to help brainstorm features or user stories. A prompt like: +“I need to design a mobile app for a personal finance tracker. What features +should it have for a great user experience?” can yield a list of features +(e.g., budgeting, expense categorization, charts, reminders) that you might not +have initially considered. + +The AI can aggregate ideas from countless apps and articles it has ingested. +Similarly, you can task the AI with writing preliminary user stories or use +cases: “List five user stories for a ride-sharing service’s MVP.” This can +jumpstart your planning with well-structured stories that you can refine. 
AI +can also help clarify requirements: if a requirement is vague, you can ask +“What questions should I ask about this requirement to clarify it?” - and the +AI will propose the key points that need definition (e.g., for “add security to +login”, AI might suggest asking about 2FA, password complexity, etc.). This +ensures you don’t overlook things early on. + +Another ideation use: competitive analysis. You could prompt: “What are the +common features and pitfalls of task management web apps? Provide a summary.” +The AI will list what such apps usually do and common complaints or challenges +(e.g., data sync, offline support). This information can shape your +requirements to either include best-in-class features or avoid known issues. +Essentially, AI can serve as a research assistant, scanning the collective +knowledge base so you don’t have to read 10 blog posts manually. + +Of course, all AI output needs critical evaluation - use your judgment to +filter which suggestions make sense in context. But at the early stage, +quantity of ideas can be more useful than quality, because it gives you options +to discuss with your team or stakeholders. Engineers with an AI-native mindset +often walk into planning meetings with an AI-generated list of ideas, which +they then augment with their own insights. This accelerates the discussion and +shows initiative. + +AI can also help non-technical stakeholders at this stage. If you’re a tech +lead working with, say, a business analyst, you might generate a draft product +requirements document (PRD) with AI’s help and then share it for review. It’s +faster to edit a draft than to write from scratch. Google’s prompt guide +suggests even role-specific prompts for such cases - e.g., “Act as a business +analyst and outline the requirements for a payroll system upgrade”. The result +gives everyone something concrete to react to. 
In sum, in requirements and +ideation, AI is about casting a wide net of possibilities and organizing +thoughts, which provides a strong starting foundation. + +2. System design & architecture + +Once requirements are in place, designing the system is next. Here, AI can +function as a sounding board for architecture. For instance, you might describe +the high-level architecture you’re considering - “We plan to use a microservice +for the user service, an API gateway, and a React frontend” - and ask the AI +for its opinion: “What are the pros and cons of this approach? Any potential +scalability issues?” An AI well-versed in tech will enumerate points perhaps +similar to what an experienced colleague might say (e.g., microservices allow +independent deployment but add complexity in devops, etc.). This is useful to +validate your thinking or uncover angles you missed. + +AI can also help with specific design questions: “Should we choose SQL or NoSQL +for this feature store?” or “What’s a robust architecture for real-time +notifications in a chat app?” It will provide a rationale for different +choices. While you shouldn’t take its answer as gospel, it can surface +considerations (latency, consistency, cost) that guide your decision. Sometimes +hearing the reasoning spelled out helps you make a case to others or solidify +your own understanding. Think of it as rubber-ducking your architecture to an +AI - except the duck talks back with fairly reasonable points! + +Another use is generating diagrams or mappings via text. There are tools where +if you describe an architecture, the AI can output a pseudo-diagram (in Mermaid +markdown, for example) that you can visualize. For example: “Draw a component +diagram: clients -> load balancer -> 3 backend services -> database.” The AI +could produce a Mermaid code block that renders to a diagram. This is a quick +way to go from concept to documentation. 
Or you can ask for API design +suggestions: “Design a REST API for a library system with endpoints for books, +authors, and loans.” The AI might list endpoints (GET /books, POST /loans, +etc.) along with example payloads, which can be a helpful starting point that +you then adjust. + +A particularly powerful use of AI at this stage is validating assumptions by +asking it to think of failure cases. For example: “We plan to use an in-memory +cache for session data in one data center. What could go wrong?” The AI might +remind you of scenarios like cache crashes, data center outage, or scaling +issues. It’s a bit like a risk checklist generator. This doesn’t replace doing +a proper design review, but it’s a nice supplement to catch obvious pitfalls +early. + +On the flip side, if you encounter pushback on a design and need to articulate +your reasoning, AI can help you frame arguments clearly. You can feed the +context to AI and have it help articulate the concerns and explore +alternatives. The AI will enumerate issues and you can use that to formulate a +respectful, well-structured response. In essence, AI can bolster your +communication around design, which is as important as the design itself in team +settings. + +A more profound shift is that we’re moving to spec-driven development. It’s not +about code-first; in fact, we’re practically [63]hiding the code! Modern +software engineers are creating (or asking AI for) [64]implementation plans +first. 
Some start projects by asking the tool to create a technical design (saved to a markdown file) and an implementation plan (similarly saved locally and fed in later).

[65]Some note that they find themselves “thinking less about writing code and more about writing specifications - translating the ideas in my head into clear, repeatable instructions for the AI.” These design specs have [66]massive follow-on value; they can be used to generate the PRD, the first round of product documentation, deployment manifests, marketing messages, and even training decks for the sales field. Today’s best engineers are great at documenting intent that in turn spawns the technical solution.

This strategic application of AI has profound implications for what defines a senior engineer today. It marks a shift from being a superior problem-solver to becoming a forward-thinking solution-shaper. A senior AI-native engineer doesn’t just use AI to write code faster; they use it to see around corners - to model future states, analyze industry trends, and shape technical roadmaps that anticipate the next wave of innovation. Leveraging AI for this kind of architectural foresight is no longer just a nice-to-have; it’s rapidly becoming a core competency for technical leadership.

3. Implementation (Coding)

This is the phase most people immediately think of for AI assistance, and indeed it’s one of the most transformative. We covered in earlier sections how to use coding assistants in your IDE, so here let’s structure it around typical coding sub-tasks:

 • Scaffolding and setup: Setting up new modules, libraries, or configuration files can be tedious. AI can generate boilerplate configs (Dockerfiles, CI pipelines, ESLint configs, etc.) based on descriptions. For example, “Provide a minimal Vite and TypeScript config for a React app” may yield decent config files that you might only need to tweak slightly.
Similarly, + if you need to use a new library (say authentication or logging), you can + ask AI, “Show an example of integrating Library X into an Express.js + server.” It often can produce a minimal working example, saving you from + combing through docs for the basics. + + • Feature implementation: When coding a feature, use AI as a partner. You + might start writing a function and hit a moment of doubt - you can simply + ask, “What’s the best way to implement X?” Perhaps you need to parse a + complex data format - the AI might even recall the specific API you need to + use. It’s like having Stack Overflow threads summarized for you on the fly. + Many AI-native devs actually use a rhythm: they outline a function in + comments (steps it should take), then prompt the AI to fill it in code. + This often yields a nearly complete function which you then adjust. It’s a + different way of coding: you focus on logic and intent, the AI fleshes out + syntax and repetitive parts. + + • Code reuse and referencing: Another everyday scenario - you vaguely + remember writing similar code before or know there’s an algorithm for this. + You can describe it and ask the AI. For instance, “I need to remove + duplicates from a list of objects in Python, treating objects with same id + as duplicates. How to do that efficiently?” And if the first answer isn’t + what you need, you can refine or just say “that’s not quite it, I need to + consider X” and it will try again. This interactive Q&A for coding is a + huge quality-of-life improvement. + + • Maintaining consistency and patterns: In a large project, you often follow + patterns (say a certain way to handle errors or logging). AI can be taught + these if you provide context (some tools let you add a style guide or have + it read parts of your repo). Even without explicit training, if you point + the AI to an existing file as an example, you can prompt “Create a new + module similar to this one but for [some new entity]”. 
It will mimic the + style and structure, which means the new code fits in naturally. It’s like + having an assistant who read your entire codebase and documentation and + always writes code following those conventions (one day, AI might truly do + this seamlessly with features like the Model Context Protocol to plug into + different environments). + + • Generating tests alongside code: A highly effective habit is to have AI + generate unit tests immediately after writing a piece of code. Many tools + (Cursor, Copilot, etc.) can suggest tests either on demand or even + automatically. For example, after writing a function, you could prompt: + “Generate a unit test for the above function, covering edge cases.” The AI + will create a test method or test case code. This serves two purposes: it + gives you quick tests, and it also serves as a quasi-review of your code + (if the AI’s expected behavior in tests differs from your code, maybe your + code has an issue or the requirements were misunderstood). It’s like doing + TDD where the AI writes the test and you verify it matches intent. Even if + you prefer writing tests yourself, AI can suggest additional cases you + might miss (like large input, weird characters, etc.), acting as a safety + net. + + • Debugging assistance: When you hit a bug or an error message, AI can help + diagnose it. For instance, you can copy an error stack trace or exception + and ask, “What might be causing this error?” Often, it will explain in + plain terms what the error means and common causes. If it’s a runtime bug + without obvious errors, you can describe the behavior: “My function returns + null for input X when it shouldn’t. Here’s the code snippet… Any idea why?” + The AI might spot a logic flaw. It’s not guaranteed, but even just + explaining your code in writing (to the AI) sometimes makes the solution + apparent to you - and the AI’s suggestions can confirm it. 
Some AI tools + integrated into runtime (like tools in Replit) can even execute code and + check intermediate values, acting like an interactive debugger. You could + say, “Run the above code with X input and show me variable Y at each step” + and it will simulate that. This is still early, but it’s another dimension + of debugging that will grow. + + • Performance tuning & refactoring: If you suspect a piece of code is slow or + could be cleaner, you can ask the AI to refactor it for performance or + readability. For instance: “Refactor this function to reduce its time + complexity” or “This code is doing a triple nested loop, can you make it + more efficient?” The AI might recognize a chance to use a dictionary lookup + or a better algorithm (e.g., going from O(n^2) to O(n log n)). Or for + readability: “Refactor this 50-line function into smaller functions and add + comments.” It will attempt to do so. Always double-check the changes + (especially for subtle bugs), but it’s a great way to see alternative + implementations quickly. It’s like having a second pair of eyes that isn’t + tired and can rewrite code in seconds for comparison. + +In all these coding scenarios, the theme is AI accelerates the mechanical parts +of coding and provides just-in-time knowledge, while you remain the +decision-maker and quality control. It’s important to interject a note on +version control and code reviews: treat AI contributions like you would a +junior developer’s pull request. Use git diligently, diff the changes the AI +made, run your test suite after major edits, and do code reviews (even if +you’re reviewing code the AI wrote for you!). This ensures robustness in your +implementation phase. + +4. Testing & quality assurance + +Testing is an area where AI can shine by reducing the toil. We already touched +on unit test generation, but let’s dive deeper: + + • Unit tests generation: You can systematically use AI to generate unit tests + for existing code. 
One approach: take each public function or class in your + module, and prompt AI with a short description of what it should do (if + there isn’t clear documentation, you might have to infer or write a + one-liner spec) and ask for a test. For example, “Function normalizeName + (name) should trim whitespace and capitalize the first letter. Write a few + PyTest cases for it.” The AI will output tests including typical and edge + cases like empty string, all caps input, etc. This is extremely helpful for + legacy code where tests are missing - it’s like AI-driven test + retrofitting. Keep in mind the AI doesn’t know your exact business logic + beyond what you describe, so verify that the asserted expectations match + the intended behavior. But even if they don’t, it’s informative: an AI + might make an assumption about the function that’s wrong, which highlights + that the function’s purpose wasn’t obvious or could be misused. You then + improve either the code or clarify the test. + + • Property-based and fuzz testing: You can use AI to suggest properties for + property-based tests. For instance, “What properties should hold true for a + sorting function?” might yield answers like “the output list is sorted, has + same elements as input, idempotent if run twice” etc. You can turn those + into property tests with frameworks like Hypothesis or fast-check. The AI + can even help write the property test code. Similarly, for fuzzing or + generating lots of input combinations, you could ask AI to generate a + variety of inputs in a format. “Give me 10 JSON objects representing + edge-case user profiles (some missing fields, some with extra fields, etc.) + ” - use those as test fixtures to see if your parser breaks. + + • Integration and end-to-end tests: For more complex tests like API endpoints + or UI flows, AI can assist by outlining test scenarios. 
“List some + end-to-end test scenarios for an e-commerce checkout process.” It will + likely enumerate scenarios: normal purchase, invalid payment, out-of-stock + item, etc. You can then script those. If you’re using a test framework like + Cypress for web UI, you could ask AI to write a test script given a + scenario description. It might produce a pseudo-code that you tweak to real + code (Cypress or Selenium commands). This again saves time on boilerplate + and ensures you consider various paths. + + • Test data generation: Creating realistic test data (like a valid JSON of a + complex object) is mundane. AI can generate fake data that looks real. For + example, “Generate an example JSON for a university with departments, + professors, and students.” It will fabricate names and arrays etc. This + data can then be used in tests or to manually try out an API. It’s like + having an infinite supply of realistic dummy data without writing it + yourself. Just be mindful of any privacy - if you prompt with real data, + ensure you anonymize it first. + + • Exploratory testing via agents: A frontier area: using AI agents to + simulate users or adversarial inputs. There are experimental tools where an + AI can crawl your web app like a user, testing different inputs to see if + it can break something. Anthropic’s Claude Code best practices talk about + multi-turn debugging, where the AI iteratively finds and fixes issues. You + might be able to say, “Here’s my function, try different inputs to make it + fail” and the AI will do a mini fuzz test mentally. This isn’t foolproof, + but as a concept it points to AI helping in QA beyond static test cases - + by actively trying to find bugs like a QA engineer would. + + • Reviewing test coverage: If you have tests and want to ensure they cover + logic, you can ask AI to analyze if certain scenarios are missing. 
For + example, provide a function or feature description and the current tests, + and ask “Are there any important test cases not covered here?”. The AI + might notice, e.g., “the tests didn’t cover when input is null or empty” or + “no test for negative numbers”, etc. It’s like a second opinion on your + test suite. It won’t know if something is truly missing unless obvious, but + it can spot some gaps. + +The end goal is higher quality with less manual effort. Testing is typically +something engineers know they should do more of, but time pressure often limits +it. AI helps remove some friction by automating the creation of tests or at +least the scaffolding of them. This makes it likelier you’ll have a more robust +test suite, which pays off in fewer regressions and easier maintenance. + +5. Debugging & maintenance + +Bugs and maintenance tasks consume a large portion of engineering time. AI can +reduce that burden too: + + • Explaining legacy code: When you inherit a legacy codebase or revisit code + you wrote long ago, understanding it is step one. You can use AI to + summarize or document code that lacks clarity. For instance, copy a + 100-line function and ask, “Explain in simple terms what this function does + step by step.” The AI will produce a narrative of the code’s logic. This + often accelerates your comprehension, especially if the code is dense or + not well-commented. It might also identify what the code is supposed to do + versus what it actually does (catching subtle bugs). Some tools integrate + this - you can click a function and get an AI-generated docstring or + summary. This is invaluable when you maintain systems with scarce + documentation. + + • Identifying the root cause: When facing a bug report like “Feature X is + crashing under condition Y” you can involve AI as a rubber duck to reason + through the possible causes. 
Describe the situation and the code path as + you know it, and ask for theories: “Given this code snippet and the error + observed, what could be causing the null pointer exception?” The AI might + point out, “if data can be null then data.length would throw that + exception, check if that can happen in condition Y.” It’s akin to having a + knowledgeable colleague to bounce ideas off of, even if they can’t see your + whole system, they often generalize from known patterns. This can save time + compared to going down the wrong path in debugging. + + • Fixing code with AI suggestions: If you localize a bug in a piece of code, + you can simply tell the AI to fix it. “Fix the bug where this function + fails on empty input.” The AI will provide a patch (like adding a check for + empty input). You still have to ensure that’s the correct fix and doesn’t + break other things, but it’s quicker than writing it yourself, especially + for trivial fixes. Some IDEs do this automatically: for example, if a test + fails, an AI could suggest a code change to make the test pass. One must be + careful here - always run tests after accepting such changes to ensure no + side effects. But for maintenance tasks like upgrading a library version + and fixing deprecated calls, AI can be a huge help (e.g., “We upgraded to + React Router v7, update this v6 code to v7 syntax” - it will rewrite the + code using the new API, a big time saver). + + • Refactoring and improving old code: Maintenance often involves refactoring + for clarity or performance. You can employ AI to do large-scale refactors + semi-automatically. For instance, “Our code uses a lot of callback-based + async. Convert these examples to async/await syntax.” It can show you how + to update a representative snippet, which you can then apply across code + (perhaps with a search/replace or with the AI’s help file by file). 
Or at a + smaller scale, “Refactor this class to use dependency injection instead of + hardcoding the database connection.” The AI will outline or even implement + a cleaner pattern. This is how AI helps you keep the codebase modern and + clean without spending excessive time on rote transformations. + + • Documentation and knowledge management: Maintaining software also means + keeping docs up to date. AI can make documenting changes easier. After + implementing a feature or fix, you can ask AI to draft a short summary or + update documentation. For example, “Generate a changelog entry: Fixed the + payment module to handle expired credit cards by adding a retry mechanism.” + It will produce a nicely worded entry. If you need to update an API doc, + you can feed it the new function signature and ask for a description. The + AI may not know your entire system’s context, but it can create a good + first draft of docs which you then tweak to be perfectly accurate. This + lowers the activation energy to write documentation. + + • Communication with team/users: Maintenance involves communication - + explaining to others what changed, what the impact is, etc. AI can help + write release notes or migration guides. E.g., “Write a short guide for + developers migrating from API v1 to v2 of our service, highlighting changed + endpoints.” If you give it a list of changes, it can format it into a + coherent guide. For user-facing notes, “Summarize these bug fixes in + non-technical terms for our monthly update.” Once again, you’ll refine it, + but the heavy lifting of prose is handled. This ensures important + information actually gets communicated (since writing these can often fall + by the wayside when engineers are busy). + +In essence, AI can be thought of as an ever-present helper throughout +maintenance. It can search through code faster than you (if integrated), recall +how something should work, and even keep an eye out for potential issues. 
For example, if you let an AI agent scan your repository, it might flag suspicious patterns (like an API call made without error handling in many places).

Anthropic’s approach of adding a CLAUDE.md file to give the AI context about your repo is one technique to enable more of this. In time, we may see AI tools that proactively create tickets or PRs for certain classes of issues (security or style). As an AI-native engineer, you will welcome these assists - they handle the drudgery, you handle the final judgment and creative problem-solving.

6. Deployment & operations

Even after code is written and tested, deploying and operating software is a big part of the lifecycle. AI can help here, too:

 • Infrastructure as code: Tools like Terraform configurations or Kubernetes manifests are essentially code - and AI can generate them. If you need a quick Terraform script for an AWS EC2 instance with certain settings, you can prompt, “Write a Terraform configuration for an AWS EC2 instance with Ubuntu, t2.micro, in us-west-2.” It’ll give a reasonable config that you adjust. Similarly, “Create a Kubernetes Deployment and Service for a Node.js app called myapp, image from ECR, 3 replicas.” The YAML it produces will be a good starting point. This saves a lot of time trawling through documentation for syntax. One caution: verify all credentials, security groups, and the like, but the structure will be there.

 • CI/CD pipelines: If you’re setting up a continuous integration (CI) workflow (like a GitHub Actions YAML or a Jenkins pipeline), ask AI to draft it. For example: “Write a GitHub Actions workflow YAML that lints, tests, and deploys a Python Flask app to Heroku on push to main.” The AI will outline the jobs and steps pretty well. It might not get every key exactly right (since these syntaxes change over time), but it’s far easier to correct a minor key name than to write the whole file yourself.
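For illustration, the skeleton such a prompt might produce looks roughly like this (action names and versions are assumptions to verify against the current GitHub Actions documentation; the Heroku deploy step is omitted because it depends entirely on your setup):

```yaml
name: ci
on:
  push:
    branches: [main]
jobs:
  test:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-python@v5
        with:
          python-version: "3.12"
      - run: pip install -r requirements.txt
      - run: flake8 .   # lint
      - run: pytest     # test
      # a deploy job, gated on this one passing, would follow here
```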
As CI pipelines can be finicky, having the AI handle the boilerplate while you fix the small errors is a huge time saver.

 • Monitoring and alert queries: If you use monitoring tools (like writing a Datadog query or a Grafana alert rule), you can describe what you want and let the AI propose the config. E.g., “In PromQL, how do I write an alert for error_rate > 5% over 5 minutes on service X?” It will craft a query that you can plug in. This is particularly handy because these domain-specific languages (PromQL, Splunk’s query language, etc.) can be obscure - AI has likely seen examples and can adapt them for you.

 • Incident analysis: When something goes wrong in production, you often have logs, metrics, and traces to look at. AI can assist in analyzing those. For instance, paste a block of log output from around the time of failure and ask, “What stands out as a possible issue in these logs?” It might pinpoint an exception stack trace in the noise or a suspicious delay. Or describe the symptom and ask, “What are possible root causes of high CPU usage on the database at midnight?” It could list scenarios (backup running, batch job, etc.), helping your investigation. OpenAI’s enterprise guide emphasizes using AI to surface insights from data and logs - an emerging practice often called AIOps.

 • ChatOps and automation: Some teams integrate AI into their ops chat. For example, a Slack bot backed by an LLM that you can ask, “Hey, what’s the status of the latest deploy? Any errors?” and it could fetch data and summarize. While this requires some setup (wiring your CI or monitoring into an AI-friendly format), it’s an interesting direction. Even without that, you can do it manually: copy some output (like test results or deployment logs) and have AI summarize it or highlight failures.
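A deliberately naive sketch of that triage step (not any real tool - just keyword matching to show the shape of the task an LLM does far more flexibly):

```python
def summarize_test_log(log: str) -> str:
    """Naive log triage: count failed tests and surface a likely error line."""
    lines = log.splitlines()
    failures = [line for line in lines if "FAILED" in line]
    errors = [line for line in lines if "Error" in line or "Exception" in line]
    gist = f"{len(failures)} test(s) failed."
    if errors:
        gist += f" Possible cause: {errors[0].strip()}"
    return gist

log = """\
test_checkout PASSED
test_payment FAILED
OperationalError: could not connect to database
test_refund FAILED
"""
print(summarize_test_log(log))
# → 2 test(s) failed. Possible cause: OperationalError: could not connect to database
```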
It’s a bit + like a personal assistant that reads long scrollbacks of text for you and + says “here’s the gist: 2 tests failed, looks like a database connection + issue.” You then know where to focus. + + • Scaling and capacity planning: If you need to reason about scaling (e.g., + “If each user does X requests and we have Y users, how many instances do we + need?”), AI can help do the math and even account for factors you mention. + This isn’t magic - it’s just calculation and estimation, but phrasing it to + AI can sometimes yield a formatted plan or table, saving you some mental + load. Additionally, AI might recall known benchmarks (like “Usually a + t2.micro can handle ~100 req/s for a simple app”) which can aid rough + capacity planning. Always validate such numbers from official sources, but + it’s a quick first estimate. + + • Documentation & runbooks: Finally, operations teams rely on runbooks - + documents outlining what to do in certain scenarios. AI can assist by + drafting these from incident post-mortems or instructions. If you solved a + production issue, you can feed the steps to AI and ask for a + well-structured procedure write-up. It will give a neat sequence of steps + in markdown that you can put in your runbook repository. This lowers the + friction to document operational knowledge, which is often a big win for + teams (tribal knowledge gets documented in accessible form). Anthropic’s + enterprise trust guide emphasizes process and people - having clear + AI-assisted docs is one way to spread knowledge responsibly. + +By integrating AI throughout deployment and ops, you essentially have a +co-pilot not just in coding but in DevOps. It reduces the lookup time (how +often do we google for a particular YAML snippet or AWS CLI command?), +providing directly usable answers. However, always remember to double-check +anything AI suggests when it comes to infrastructure - a small mistake in a +Terraform script could be costly. 
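Returning to the capacity-planning bullet above, the arithmetic is simple enough to make explicit yourself. A rough, illustrative estimator (every number here is a placeholder to replace with measured benchmarks):

```python
import math

def instances_needed(users: int, reqs_per_user_per_s: float,
                     capacity_per_instance_req_s: float,
                     headroom: float = 0.3) -> int:
    """Back-of-the-envelope instance count, reserving some safety headroom."""
    peak_load = users * reqs_per_user_per_s
    effective_capacity = capacity_per_instance_req_s * (1 - headroom)
    return math.ceil(peak_load / effective_capacity)

# e.g. 10,000 users at 0.05 req/s each, ~100 req/s per instance
print(instances_needed(10_000, 0.05, 100))  # prints 8
```

Whether the numbers come from an AI or a sketch like this, treat them as starting points for a load test, not commitments.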
Validate in a safe environment when possible. +Over time, as you fine-tune prompts or use certain verified AI “recipes”, +you’ll gain confidence in which suggestions are solid. + +━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ + +As we’ve seen, across the entire lifecycle from conception to maintenance, +there are opportunities to inject AI assistance. + +The pattern is: AI takes on the grunt work and provides knowledge, while you +provide direction, oversight, and final judgment. + +This elevates your role - you spend more time on creative design, critical +thinking, and decision-making, and less on boilerplate and hunting for +information. The result is often a faster development cycle and, if managed +well, improved quality and developer happiness. In the next section, we’ll +discuss some best practices to ensure you’re using AI effectively and +responsibly, and how to continuously improve your AI-augmented workflow. + +Best Practices for effective and responsible AI-augmented engineering + +Using AI in software development can be transformative, but to truly reap the +benefits, one must follow best practices and avoid common pitfalls. In this +section, we distill key principles and guidelines for being highly effective +with AI in your engineering workflow. These practices ensure that AI remains a +powerful ally rather than a source of errors or false confidence. + +1. Craft Clear, contextual prompts + +We’ve said it multiple times: effective prompting is critical. Think of writing +prompts as a new core skill in your toolkit - much like writing good code or +good commit messages. A well-crafted prompt can mean the difference between an +AI answer that is spot-on and one that is useless or misleading. As a best +practice, always provide the AI with sufficient context. If you’re asking about +code, include the relevant code snippet or a description of the function’s +purpose. 
Instead of “How do I optimize this?”, say “Given this code [include snippet], how can I optimize it for speed, especially the sorting part?” This helps the AI focus on what you care about.

Be specific about the desired output format too. If you want JSON, say so; if you expect a step-by-step explanation, mention that. For example, “Explain why this test is failing, step by step” or “Return the result as a JSON object with keys X, Y”. Such instructions yield more predictable, useful results. A great technique from prompt engineering is to break the task into steps or provide an example. You might prompt: “First, analyze the input. Then propose a solution. Finally, give the solution code.” This structure can guide the AI through complex tasks. Google’s advanced prompt engineering guide covers methods like chain-of-thought prompting and providing examples to reduce guesswork. If you ever get a completely off-base answer, don’t just sigh - refine the prompt and try again. Sometimes iterating on the prompt (“Actually, ignore the previous instruction about X and focus only on Y…”) will correct the course.

It’s also worthwhile to maintain a library of successful prompts. If you find a way of asking that consistently yields good results (say, a certain format for writing test cases or explaining code), save it. Over time, you build a personal playbook. Some engineers even keep a text snippet manager for prompts. Given that companies like Google have published extensive prompt guides, you can see how valued this skill is becoming. In short: invest in learning to speak the AI’s language effectively, because it pays dividends in the quality of the output.

2. Always review and verify AI outputs

No matter how impressive the AI’s answer is, never blindly trust it. This mantra cannot be overstated. Treat AI output as you would a human junior developer’s work: likely useful, but in need of review and testing.
There are +countless anecdotes of bugs slipping in because someone accepted AI code +without understanding it. Make it a habit to inspect the changes the AI +suggests. If it wrote a piece of code, walk through it mentally or with a +debugger. Add tests to validate it (which AI can help write, as we discussed). +If it gave you an explanation or analysis, cross-check key points. For +instance, if AI says “This API is O(N^2) and that’s causing slowdowns” go +verify the complexity from official docs or by reasoning it out yourself. + +Be particularly wary of factually precise-looking statements. AI has a tendency +to hallucinate details - like function names or syntaxes that look plausible +but don’t actually exist. If an AI answer cites an API or a config key, confirm +it in official documentation. In an enterprise context, never trust AI with +company-specific facts (like “according to our internal policy…”) unless you +fed those to it and it’s just rephrasing them. + +For code, a good practice is to run whatever quick checks you have: linters, +type-checkers, test suites. AI code might not adhere to your style guidelines +or could use deprecated methods. Running a linter/formatter not only fixes +style but can catch certain errors (e.g., unused variables, etc.). Some AI +tools integrate this - for example, an AI might run the code in a sandbox and +adjust if it sees exceptions, but that’s not foolproof. So you as the engineer +must be the safety net. + +In security-sensitive or critical systems, apply extra caution. Don’t use AI to +generate secrets or credentials. If AI provides a code snippet that handles +authentication or encryption, double-check it against known secure practices. +There have been cases of AI coming up with insecure algorithms because it +optimized for passing tests rather than actual security. The responsibility +lies with you to ensure all outputs are safe and correct. + +One helpful tip: use AI to verify AI. 
For example, after getting a piece of code from the AI, you can ask the same (or another) AI, “Is there any bug or security issue in this code?” It might point out something you missed (like “It doesn’t sanitize input here” or “This could overflow if X happens”). While this second opinion from AI isn’t a guarantee either, it can be a quick sanity check. OpenAI’s and Anthropic’s guides on coding even suggest this approach of iterative prompting and review - essentially debugging with the AI’s help.

Finally, maintain a healthy skepticism. If something in the output strikes you as odd or too good to be true, investigate further. AI is great at sounding confident. Part of becoming AI-native is learning where the AI is strong and where it tends to falter. Over time, you’ll gain an intuition (e.g., “I know LLMs tend to mess up date math, so I’ll double-check that part”). This intuition, combined with thorough review, keeps you in the driver’s seat.

3. Manage scope: use AI to amplify, not to autopilot entire projects

While the idea of clicking a button and having AI build an entire system is alluring, in practice it’s rarely that straightforward or desirable. A best practice is to use AI to amplify your productivity, not to completely automate what you don’t oversee. In other words, keep a human in the loop for any non-trivial outcome. If you use an autonomous agent to generate an app (as we saw with prototyping tools), treat the output as a prototype or draft, not a finished product. Plan to iterate on it yourself or with your team.

Break big tasks into smaller AI-assisted chunks. For instance, instead of saying “Build me a full e-commerce website”, you might break it down: use AI to generate the frontend pages first (and you review them), then use AI to create a basic backend (review it), then integrate and refine. This modular approach ensures you maintain understanding and control.
It also leverages AI’s +strengths on focused tasks, rather than expecting it to juggle very complex +interdependent tasks (which is often where it may drop something important). +Remember that AI doesn’t truly “understand” your project’s higher objectives; +that’s your job as the engineer or tech lead. You decide the architecture and +constraints, and then use AI as a powerful assistant to implement parts of that +vision. + +Resist the temptation of over-reliance. It can be tempting to just ask the AI +every little thing, even stuff you know, out of convenience. While it’s fine to +use it for rote tasks, make sure you’re still learning and understanding. An +AI-native engineer doesn’t turn off their brain - quite the opposite, they use +AI to free their brain for more important thinking. For example, if AI writes a +complex algorithm for you, take the time to understand that algorithm (or at +least verify its correctness) before deploying. Otherwise, you might accumulate +“AI technical debt” - code that works but no one truly groks, which can bite +you later. + +One way to manage scope is to set clear boundaries for AI agents. If you use +something like Cline or Devin (autonomous coding agents), configure them with +your rules (e.g., don’t install new dependencies without asking, don’t make +network calls, etc.). And use features like dry-run or plan mode. For instance, +have the agent show you its plan (like Cline does) and approve it step by step. +This ensures the AI doesn’t go on a tangent or take actions you wouldn’t. +Essentially, you act as a project manager for the AI worker - you wouldn’t let +a junior dev just commit straight to main without code review; likewise, don’t +let an AI do that. + +By keeping AI’s role scoped and supervised, you avoid situations where +something goes off the rails unnoticed. You also maintain your own engagement +with the project, which is critical for quality and for your own growth. 
The +flip side is also true: do use AI for all those small things that eat time but +don’t need creative heavy lifting. Let it write the 10th variant of a CRUD +endpoint or the boilerplate form validation code while you focus on the tricky +integration logic or the performance tuning that requires human insight. This +division of labor - AI for grunt work, human for oversight and creative problem +solving - is a sweet spot in current AI integration. + +4. Continue learning and stay updated + +The field of AI and the tools available are evolving incredibly fast. Being +“AI-native” today is different from what it will be a year from now. So a key +principle is: never stop learning. Keep an eye on new tools, new model +capabilities, and new best practices. Subscribe to newsletters or communities +(there are developer newsletters dedicated to AI tools for coding). Share +experiences with peers: what prompt strategies worked for them, what new agent +framework they tried, etc. The community is figuring this out together, and +being engaged will keep you ahead. + +One practical way to learn is to integrate AI into side projects or hackathons. +The stakes are lower, and you can freely explore capabilities. Try building +something purely with AI assistance as an experiment - you’ll discover both its +superpowers and its pain points, which you can then apply back to your day job +carefully. Perhaps in doing so, you’ll figure out a neat workflow (like +chaining a prompt from GPT to Copilot in the editor) that you can teach your +team. In fact, mentoring others in your team on AI usage will also solidify +your own knowledge. Run a brown bag session on prompt engineering, or share a +success story of how AI helped solve a hairy problem. This not only helps +colleagues but often they will share their own tips, leveling up everyone. + +Finally, invest in your fundamental skills as well. 
AI can automate a lot, but the better your foundation in computer science, system design, and problem-solving, the better questions you’ll ask the AI and the better you’ll assess its answers. Human creativity and deep understanding of systems are not being replaced - in fact, they’re more important, because now you’re guiding a powerful tool. As one of my articles suggests, focus on maximizing the “human 30%” - the portion of the work where human insight is irreplaceable. That’s things like defining the problem, making judgment calls, and critical debugging. Strengthen those muscles through continuous learning, and let AI handle the rote 70%.

5. Collaborate and establish team practices

If you’re working in a team setting (most of us are), it’s important to collaborate on AI usage practices. Share what you learn with teammates and also listen to their experiences. Maybe you found that using a certain AI tool improved your commit velocity; propose it to the team to see if everyone wants to adopt it. Conversely, be open to guidelines - for example, some teams decide “We will not commit AI-generated code without at least one human review and testing” (a sensible rule). Consistency helps; if everyone follows similar approaches, the codebase stays coherent and people trust each other’s AI-augmented contributions.

You might even formalize this into team conventions. For instance, if using AI for code generation, some teams annotate the PR or code comments like // Generated with Gemini, needs review. This transparency helps code reviewers focus their attention. It’s similar to how we treated code from automated tools (like “this file was scaffolded by the Rails generator”). Knowing something was AI-generated might change how you review it - perhaps more thoroughly in certain aspects.

Encourage pair programming with AI.
A neat practice is AI-driven code review: +when someone opens a pull request, they might run an AI on the diff to get an +initial review comments list, and then use that to refine the PR before a human +even sees it. As a team, you could adopt this as a step (with caution that AI +might not catch all issues nor understand business context). Another +collaborative angle is documentation: maybe maintain an internal FAQ of “How do +I ask AI to do X for our codebase?” - e.g., how to prompt it with your specific +stack. This could be part of onboarding new team members to AI usage in your +project. + +On the flip side, respect those who are cautious or skeptical of AI. Not +everyone may be immediately comfortable or convinced. Demonstrating results in +a non-threatening way works better than evangelizing abstractly. Show how it +caught a bug or saved a day of work by drafting tests. Be honest about failures +too (e.g., “We tried AI for generating that module, but it introduced a subtle +bug we caught later. Here’s what we learned.”). This builds collective wisdom. +A team that learns together will integrate AI much more effectively than +individuals pulling in different directions. + +From a leadership perspective (for tech leads and managers), think about how to +integrate AI training and guidelines. Possibly set aside time for team members +to experiment and share findings (hack days or lightning talks on AI tools). +Also, decide as a team how to handle licensing or IP concerns of AI-generated +code - e.g., code generation tools have different licenses or usage terms. +Ensure compliance with those and any company policies (some companies restrict +use of public AI services for proprietary code - in that case, perhaps you +invest in an internal AI solution or use open-source models that you can run +locally to avoid data exposure). + +In short, treat AI adoption as a team sport. 
Everyone should be rowing in the +same direction and using roughly compatible tools and approaches, so that the +codebase remains maintainable and the benefits are multiplied across the team. +AI-nativeness at an organization level can become a strong competitive +advantage, but it requires alignment and collective learning. + +6. Use AI responsibly and ethically + +Last but certainly not least, always use AI responsibly. This encompasses a few +things: + + • Privacy and security: Be mindful of what data you feed into AI services. If + you’re using a hosted service like OpenAI’s API or an IDE plugin, the code + or text you send might be stored or seen by the provider under certain + conditions. For sensitive code (security-related, proprietary algorithms, + user data, etc.), consider using self-hosted models or at least strip out + sensitive bits before prompting. Many AI tools now have enterprise versions + or on-prem options to alleviate this. Check your company’s policy: for + example, a bank might forbid using any external AI for code. Anthropic’s + enterprise guide suggests a three-pronged approach including process and + tech to deploy AI safely. It’s your duty to follow those guidelines. Also, + be cautious of phishing or malicious code - ironically, AI could + potentially insert something if it were trained on malicious examples. So + code review for security issues stays important. + + • Bias and fairness: If AI helps generate user-facing content or decisions, + be aware of biases. For instance, if you’re using AI to generate interview + questions or analyze résumés (just hypothetically), remember the models may + carry biases from training data. In software contexts, this might be less + direct, but imagine AI generating code comments or documentation that + inadvertently uses non-inclusive language. You should still run such + outputs through your usual processes for DEI (Diversity, Equity, Inclusion) + standards. 
OpenAI’s guides on enterprise AI discuss ensuring fairness and + checking model outputs for biased assumptions. As an engineer, if you see + AI produce something problematic (even in a joke or example), don’t + propagate it. We have to be the ethical filter. + + • Transparency with AI usage: If part of your product uses AI (say, an + AI-written response or a feature built by AI suggestions), consider being + transparent with users where appropriate. This is more about product + decisions, but it’s a growing expectation that users know when they’re + reading content written by AI or interacting with a bot. From an + engineering perspective, this might mean instrumenting logs to indicate AI + involvement or tagging outputs. It could also mean putting guardrails: + e.g., if an AI might free-form answer a user query in your app, put in + checks or moderation on that output. + + • Intellectual property (IP) concerns: The legal understanding is still + evolving, but be cautious when using AI on licensed material. If you ask AI + to generate code “like library X”, ensure you’re not inadvertently copying + licensed code (the models sometimes regurgitate training data). Similarly, + be mindful of attribution - if the AI produced a result influenced by a + specific source, it won’t cite it unless prompted. For now, treating AI + outputs as if they were your own work (with respect to licensing) is + prudent - meaning you take responsibility as if you wrote it. Some + companies even restrict using Copilot due to IP uncertainty for generated + code. Keep an eye on updates in this area and when in doubt, consult with + legal or stick to well-known algorithms. + + • Managing expectations and human oversight: Ethically, engineers should + prevent over-reliance on AI in critical areas where mistakes could be + harmful (e.g., AI in medical software or autonomous driving). 
Even if you personally work on a simple web app, the principle stands: ensure there’s a human fallback for important decisions. For example, if AI summarizes a client’s requirements, have a human confirm the summary with the client. Don’t let AI be the sole arbiter of truth in places where it matters. This responsible stance protects you, your users, and your organization.

In sum, being an AI-native engineer also means being a responsible engineer. Our core duty to build reliable, safe, and user-respecting systems doesn’t change; we just have more powerful tools now. Use them in a way you’d be proud of if it were all written by you (because effectively, you are accountable for it). Many companies and groups (OpenAI, Google, Anthropic) have published guidelines and playbooks on responsible AI usage - those can be excellent further reading to deepen your understanding of this aspect (see the Further Reading section).

7. For leaders and managers: cultivate an AI-first engineering culture

If you lead an engineering team, your role is not just to permit AI usage, but to champion it strategically. This means moving from passive acceptance to active cultivation by focusing on a few key areas:

 • Leading by example: Demonstrate how AI can be used for strategic tasks like planning or drafting proposals, and articulate a clear vision for how it will make the team and its products better. Model the learning process by openly sharing both your successes and stumbles with AI. An AI-native culture starts at the top and is fostered by authenticity, not just mandates.

 • Investing in skills: Go beyond mere permission and actively provision resources for learning. Sponsor premium tool licenses, formally sanction time for experimentation (like hack days or exploration sprints), and create forums (demos, shared wikis) for the team to build a collective library of best practices and effective prompts.
This signals that skill + development is a genuine priority. + + • Fostering psychological safety: Create an environment where engineers feel + safe to experiment, share failures, and ask foundational questions without + judgment. Explicitly address the fear of incompetence by framing AI + adoption as a collective journey, and counter the fear of replacement by + emphasizing how AI augments, rather than automates, the critical thinking + and judgment that define senior engineering. + + • Revisiting roadmaps and processes: Proactively identify which parts of your + product or development cycle are ripe for AI-driven acceleration. Be + prepared to adjust timelines, estimation, and team workflows to reflect + that the nature of engineering work is shifting from writing boilerplate to + specifying, verifying, and integrating. Evolve your code review process to + place a higher emphasis on the critical human validation of AI-generated + outputs. + +━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ + +Following these best practices will help ensure that your integration of AI +into engineering yields positive results - higher productivity, better code, +faster learning - without the downsides of sloppy usage. It’s about combining +the best of what AI can do with the best of what you can do as a skilled human. +The next and final section will conclude our discussion, reflecting on the +journey to AI-nativeness and the road ahead, along with additional resources to +continue your exploration. + +Conclusion: Embracing the future + +We’ve traveled through what it means to be an AI-native software engineer - +from mindset, to practical workflows, to tool landscapes, to lifecycle +integration, and best practices. It’s clear that the role of software engineers +is evolving in tandem with AI’s growing capabilities. Rather than rendering +engineers obsolete, AI is proving to be a powerful augmentation to human +skills. 
By embracing an AI-native approach, you position yourself to build +faster, learn more, and tackle bigger challenges than ever before. + +To summarize a few key takeaways: being AI-native starts with seeing AI as a +multiplier for your skills, not a magic black box or a threat. It’s about +continuously asking, “How can AI help me with this?” and then judiciously using +it to accelerate routine tasks, explore creative solutions, and even catch +mistakes. It involves new skills like prompt engineering and agent +orchestration, but also elevates the importance of timeless skills - +architecture design, critical thinking, and ethical judgment - because those +guide the AI’s application. The AI-native engineer is always learning: learning +how to better use AI, and leveraging AI to learn other domains faster (a +virtuous circle!). + +Practically, we saw that there is a rich ecosystem of tools. There’s no +one-size-fits-all AI tool - you’ll likely assemble a personal toolkit (IDE +assistants, prototyping generators, etc.) tailored to your work. The best +engineers will know when to grab which tool, much like a craftsman with a +well-stocked toolbox. And they’ll keep that toolbox up-to-date as new tools +emerge. Importantly, AI becomes a collaborative partner across all stages of +work - not just coding, but writing tests, debugging, generating documentation, +and even brainstorming in the design phase. The more areas you involve AI, the +more you can focus your unique human talents where they matter most. + +We also stressed caution and responsibility. The excitement of AI’s +capabilities should be balanced with healthy skepticism and rigorous +verification. By following best practices - clear prompts, code reviews, small +iterative steps, staying aware of limitations - you can avoid pitfalls and +build trust in using AI. 
As an experienced professional (especially if you are +an IC or tech lead, as many of you are), you have the background to guide AI +effectively and to mitigate its errors. In a sense, your experience is more +valuable than ever: junior engineers can get a boost from AI to produce +mid-level code, but it takes a senior mindset to prompt AI to solve complex +problems in a robust way and to integrate it into a larger system gracefully. + +Looking ahead, one can only anticipate that AI will get more powerful and more +integrated into the tools we use. Future IDEs might have AI running +continuously, checking our work or even optimizing code in the background. We +might see specialized AIs for different domains (AI that is an expert in +frontend UX vs one for database tuning). Being AI-native means you’ll adapt to +these advancements smoothly - you’ll treat it as a natural progression of your +workflow. Perhaps eventually “AI-native” will simply be “software engineer”, +because using AI will be as ubiquitous as using Stack Overflow or Google is +today. Until then, those who pioneer this approach (like you, reading and +applying these concepts) will have an edge. + +There’s also a broader impact: By accelerating development, AI can free us to +focus on more ambitious projects and more creative aspects of engineering. It +could usher in an era of rapid prototyping and experimentation. As I’ve mused +in one of my pieces, we might even see a shift in who builds software - with AI +lowering barriers, more people (even non-traditional coders) could bring ideas +to life. As an AI-native engineer, you might play a role in enabling that, by +building the tools or by mentoring others in using them. It’s an exciting +prospect: engineering becomes more about imagination and design, while +repetitive toil is handled by our AI assistants. + +In closing, adopting AI in your daily engineering practice is not just a +one-time shift, but a journey. 
Start where you are: try one new tool or apply +AI to one part of your next task. Gradually expand that comfort zone. Celebrate +the wins (like the first time an AI-generated test catches a bug you missed), +and learn from the hiccups (maybe the time AI refactoring broke something - +it’s a lesson to improve prompting). + +Encourage your team to do the same, building an AI-friendly engineering +culture. With pragmatic use and continuous learning, you’ll find that AI not +only boosts your productivity but can also rekindle joy in development - +letting you concentrate on creative problem-solving and seeing faster results +from idea to reality. + +The era of AI-assisted development is here, and those who skillfully ride this +wave will define the next chapter of software engineering. By reading this and +experimenting on your own, you’re already on that path. Keep going, stay +curious, and code on - with your new AI partners at your side. + +Further reading + +To deepen your understanding and keep improving your AI-assisted workflow, here +are some excellent free guides and resources from leading organizations. These +cover everything from prompt engineering to building agents and deploying AI +responsibly: + + • [85]Google - Prompting Guide 101 (Second Edition) - A quick-start handbook + for writing effective prompts, packed with tips and examples for Google’s + Gemini model. Great for learning prompt fundamentals and how to phrase + queries to get the best results. + + • [86]Google - “More Signal, Less Guesswork” prompt engineering whitepaper - + A 68-page Google whitepaper that dives into advanced prompt techniques (for + API usage, chain-of-thought prompts, using temperature/top-p settings, + etc.). Excellent for engineers looking to refine their prompt engineering + beyond the basics. 
+ + • [87]OpenAI - [88]A Practical Guide to Building Agents - OpenAI’s + comprehensive guide (~34 pages) on designing and implementing AI agents + that work in real-world scenarios. It covers agent architectures (single vs + multi-agent), tool integration, iteration loops, and important safety + considerations when deploying autonomous agents. + + • [89]Anthropic - [90]Claude Code: Best Practices for Agentic Coding - A + guide from Anthropic’s engineers on getting the most out of Claude (their + AI) in coding scenarios. It includes tips like structuring your repo with a + CLAUDE.md for context, prompt formats for debugging and feature building, + and how to iteratively work with an AI coding agent. Useful for anyone + using AI in an IDE or planning to integrate an AI agent with their + codebase. + + • [91]OpenAI - [92]Identifying and Scaling AI Use Cases - This guide helps + organizations (and teams) find high-leverage opportunities for AI and scale + them effectively. It introduces a methodology to identify where AI can add + value, how to prototype quickly, and how to roll out AI solutions across an + enterprise sustainably. Great for tech leads and managers strategizing AI + adoption. + + • [93]Anthropic - [94]Building Trusted AI in the Enterprise[95] (Trust in AI) + - An enterprise-focused e-book on deploying AI responsibly. It outlines a + three-dimensional approach (people, process, technology) to ensure AI + systems are reliable, secure, and aligned with organizational values. It + also devotes sections to AI security and governance best practices - a + must-read for understanding risk management in AI projects. + + • [96]OpenAI - [97]AI in the Enterprise[98] - OpenAI’s 24-page report on how + top companies are using AI and lessons learned from those collaborations. + It provides strategic insights and case studies, including practical steps + for integrating AI into products and operations at scale. 
Useful for seeing
+    the bigger picture of AI’s business impact and getting inspiration for
+    high-level AI integration.
+
+  • [99]Google - [100]Agents Companion[101] Whitepaper - Google’s advanced
+    “102-level” technical companion to their prompting guide, focusing on AI
+    agents. This guide explores complex topics like agent evaluation, tool use,
+    and orchestrating multiple agents. It’s a deep dive for developers looking
+    to push the envelope with agent development and deployment - essentially a
+    toolkit for advanced AI builders.
+
+Each of these resources can help you further develop your AI-native engineering
+skills, offering both theoretical frameworks and practical techniques. They are
+all freely available (no paywalls), and reading them will reinforce many of the
+concepts discussed in this section while introducing new insights from industry
+experts.
+
+Happy learning, and happy building!
+
+I’m excited to share I’m writing a new [102]AI-assisted engineering book with
+O’Reilly. If you’ve enjoyed my writing here you may be interested in checking
+it out.
+
+Discussion about this post
+
+[118]MohammadAzeem
+[119]Jul 2
+
+Nice read.
+
+But I am more worried about computational costs.
+
+In the pre-AI world most of the things devs used to do were local; hence
+affordable.
+
+No doubt that some models like Gemma or others can easily be run on edge
+devices, but being an AI-native engineer will (as of now at least) require
+most of the stuff to be in third-party hands and paid for 💰.
+
+In the web-dev world, we have been fighting for 20 years to get a 500 KB JS
+bundle to run on an edge device or server, resulting in SSR, SSG, SPA, and
+other variants. 
+
+What is your take on the computational expenses an AI-native engineer has to
+deal with?
+
+[122]John Dinsdale
+[123]Jul 2
+
+Excellent analysis and subject matter; it's a question of holding on for as
+long as you aren't in the way.
+
+© 2025 Addy Osmani
+
+References:
+
+[1] https://addyo.substack.com/
+[2] https://addyo.substack.com/
+[8] https://addyo.substack.com/p/the-ai-native-software-engineer#
+[14] https://substack.com/@addyosmani
+[15] https://substack.com/@addyosmani
+[17] https://addyo.substack.com/p/the-ai-native-software-engineer#
+[23] https://addyo.substack.com/p/the-ai-native-software-engineer/comments
+[24] javascript:void(0)
+[25] https://x.com/karpathy/status/1937902205765607626
+[26] https://substackcdn.com/image/fetch/$s_!t2_Y!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F2ae2c5e0-27c6-4959-b37d-67f2c40b2e09_1024x1024.png
+[27] https://addyo.substack.com/p/the-trust-but-verify-pattern-for
+[28] https://substackcdn.com/image/fetch/$s_!qzj1!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F4ff8ce3d-bcdc-45ed-933b-c0a1038c63ea_1024x1024.png
+[29] https://addyo.substack.com/p/vibe-coding-is-not-an-excuse-for
+[30] https://substackcdn.com/image/fetch/$s_!Qzt7!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F61fdf82a-2bcc-4a81-853e-779af54c24a2_1024x1024.png
+[31] 
https://x.com/levie/status/1938647740554092586 +[32] https://www.infoworld.com/article/3994519/the-tough-task-of-making-ai-code-production-ready.html +[33] https://www.forrester.com/blogs/appgen-is-here-say-goodbye-to-software-development-as-you-know-it/ +[34] https://newsletter.getdx.com/p/how-much-does-ai-impact-development-speed +[35] https://substackcdn.com/image/fetch/$s_!wUui!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F0f7cfea4-0f99-4c8d-81bd-d4c81070eee4_1024x1024.png +[37] https://substackcdn.com/image/fetch/$s_!OGxs!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F3ef50d3c-e6f1-4b5e-a850-2177e561bbc1_1536x1024.png +[38] https://www.ignorance.ai/p/ai-at-pulley +[39] https://x.com/rmedranollamas/status/1938305816185966898 +[40] https://fly.io/blog/youre-all-nuts/ +[41] https://blog.langchain.com/the-rise-of-context-engineering/ +[42] https://addyo.substack.com/p/why-i-use-cline-for-ai-engineering +[43] https://addyo.substack.com/p/the-prompt-engineering-playbook-for +[44] https://substackcdn.com/image/fetch/$s_!NG0k!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F4d693ee6-61be-4250-ad42-b43ace337365_1024x1024.png +[45] https://workspace.google.com/learning/content/gemini-prompt-guide +[48] https://openai.com/codex/ +[49] https://jules.google/ +[51] http://bolt.new/ +[52] http://v0.dev/ +[53] https://lovable.dev/ +[54] http://replit.com/ +[55] http://firebase.studio/ +[56] https://addyo.substack.com/p/beyond-the-70-maximizing-the-human +[57] https://addyo.substack.com/p/the-70-problem-hard-truths-about +[58] http://cline.bot/ +[60] https://www.geeksforgeeks.org/software-engineering/software-development-life-cycle-sdlc/ +[63] https://x.com/danshipper/status/1937888424800719283 +[64] https://x.com/_philschmid/status/1937887668710355265 +[65] 
https://www.ignorance.ai/p/ai-at-pulley +[66] https://writing.nikunjk.com/p/the-work-behind-the-work-is-dead +[70] https://www.anthropic.com/engineering/claude-code-best-practices +[75] https://addyo.substack.com/p/the-trust-but-verify-pattern-for +[78] https://news.ycombinator.com/item?id=43361801 +[79] https://news.ycombinator.com/item?id=43361801 +[85] https://services.google.com/fh/files/misc/gemini-for-google-workspace-prompting-guide-101.pdf +[86] https://www.kaggle.com/whitepaper-prompt-engineering +[87] https://cdn.openai.com/business-guides-and-resources/a-practical-guide-to-building-agents.pdf +[88] https://cdn.openai.com/business-guides-and-resources/a-practical-guide-to-building-agents.pdf +[89] https://www.anthropic.com/engineering/claude-code-best-practices +[90] https://www.anthropic.com/engineering/claude-code-best-practices +[91] https://cdn.openai.com/business-guides-and-resources/identifying-and-scaling-ai-use-cases.pdf +[92] https://cdn.openai.com/business-guides-and-resources/identifying-and-scaling-ai-use-cases.pdf +[93] https://assets.anthropic.com/m/66daaa23018ab0fd/original/Anthropic-enterprise-ebook-digital.pdf +[94] https://assets.anthropic.com/m/66daaa23018ab0fd/original/Anthropic-enterprise-ebook-digital.pdf +[95] https://assets.anthropic.com/m/66daaa23018ab0fd/original/Anthropic-enterprise-ebook-digital.pdf +[96] https://cdn.openai.com/business-guides-and-resources/ai-in-the-enterprise.pdf +[97] https://cdn.openai.com/business-guides-and-resources/ai-in-the-enterprise.pdf +[98] https://cdn.openai.com/business-guides-and-resources/ai-in-the-enterprise.pdf +[99] https://www.kaggle.com/whitepaper-agent-companion +[100] https://www.kaggle.com/whitepaper-agent-companion +[101] https://www.kaggle.com/whitepaper-agent-companion +[102] https://www.oreilly.com/library/view/vibe-coding-the/9798341634749/ +[103] 
https://substackcdn.com/image/fetch/$s_!WFGE!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Faba8cf11-c1d1-4cb8-8400-1fa7b7b91d83_5246x5246.png +[105] https://addyo.substack.com/p/the-ai-native-software-engineer# +[111] https://addyo.substack.com/p/the-ai-native-software-engineer/comments +[112] javascript:void(0) +[117] https://substack.com/profile/265035982-mohammadazeem?utm_source=comment +[118] https://substack.com/profile/265035982-mohammadazeem?utm_source=substack-feed-item +[119] https://addyo.substack.com/p/the-ai-native-software-engineer/comment/131292862 +[121] https://substack.com/profile/121980363-john-dinsdale?utm_source=comment +[122] https://substack.com/profile/121980363-john-dinsdale?utm_source=substack-feed-item +[123] https://addyo.substack.com/p/the-ai-native-software-engineer/comment/131304953 +[125] https://addyo.substack.com/p/the-ai-native-software-engineer/comments +[143] https://substack.com/privacy +[144] https://substack.com/tos +[145] https://substack.com/ccpa#personal-data-collected +[146] https://substack.com/signup?utm_source=substack&utm_medium=web&utm_content=footer +[147] https://substack.com/app/app-store-redirect?utm_campaign=app-marketing&utm_content=web-footer-button +[148] https://substack.com/ +[150] https://addyo.substack.com/p/the-ai-native-software-engineer# +[156] https://enable-javascript.com/ diff --git a/static/archive/diyr-dev-akislx.txt b/static/archive/diyr-dev-akislx.txt new file mode 100644 index 0000000..8f8c0be --- /dev/null +++ b/static/archive/diyr-dev-akislx.txt @@ -0,0 +1,236 @@ +DIYR (pronounced dear) + +Celebrates the spirit of independence, creativity, and resourcefulness. The +acronym DIYR stands for 'Do It Yourself Revolution', promoting reflection and +new forms of production, combining simplicity and longevity, ethics and +aesthetics. 
+
+We design growing ecosystems of innovative, playful, guiltless and highly
+purposeful social electronics for you to build, hack, personalise, share, fix,
+and forever keep.
+
+[1]
+DIYR.DEV/LGT
+
+Lights
+
+New Additions...
+
+[2]
+[3] LGT-STK-S-R2
+[4]Lights
+[5]STR-CLG-L, [6]STR-CLG-M, [7]STR-CLG-S, [8]STR-HNG-L, [9]STR-HNG-M, [10]
+STR-HNG-S, [11]STR-POL-L, [12]STR-POL-S, [13]STR-POL-XL, [14]STR-WAL-L, [15]
+STR-WAL-S
+[16]
+[17] FAN-M-R2
+[18]Fans
+[19]
+[20] FAN-L-R2
+[21]Fans
+[22]
+
+DIYR.DEV/FANS
+
+Fans
+
+We empower you to counter planned obsolescence and reduce e-waste.
+
+Enabling you to get active, gain knowledge and skills to repurpose components
+and make things you need, like, and would keep while developing a mindful
+approach to alternative production and environmental responsibility.
+
+[23]
+DIYR.DEV/SPK
+
+Speakers
+
+DIYR - DOING IS KNOWING
+
+We believe that self-gained knowledge of an object's build promotes a different
+relation and emotional value to any product, combining emotions with function
+and purpose.
+
+The knowledge and skills of our doers are expanded in multiple directions, from
+electronics and production technologies to design, making or repairing. 
By
+enabling the production of consciously built things whose emotional value
+surpasses their economic worth, DIYR encourages the realisation of self-made
+objects that are easy to assemble, practical to use and stimulate constant
+reinvention.
+
+[24]Right here[25], we make available the necessary instructions for you to
+turn into a proDuser of useful and beautiful objects. In addition, we provide
+you with wise advice about the tools and materials to use and the best ways to
+source, recycle, assemble and fix along the way.
+
+Dear, because you DO and know how it's done. Doing is Knowing.
+
+[26]
+DIYR.DEV/COLLECTIONS
+
+Explore our Collection
+
+Designed by DIYR, made by you.
+ + +References: + +[1] https://diyr.dev/collections/lights/ +[2] https://diyr.dev/instructions/LGT-STK-S-R2 +[3] https://diyr.dev/instructions/LGT-STK-S-R2 +[4] https://diyr.dev/collections/lights/ +[5] https://diyr.dev/instructions/STR-CLG-L +[6] https://diyr.dev/instructions/STR-CLG-M +[7] https://diyr.dev/instructions/STR-CLG-S +[8] https://diyr.dev/instructions/STR-HNG-L +[9] https://diyr.dev/instructions/STR-HNG-M +[10] https://diyr.dev/instructions/STR-HNG-S +[11] https://diyr.dev/instructions/STR-POL-L +[12] https://diyr.dev/instructions/STR-POL-S +[13] https://diyr.dev/instructions/STR-POL-XL +[14] https://diyr.dev/instructions/STR-WAL-L +[15] https://diyr.dev/instructions/STR-WAL-S +[16] https://diyr.dev/instructions/FAN-M-R2 +[17] https://diyr.dev/instructions/FAN-M-R2 +[18] https://diyr.dev/collections/fans/ +[19] https://diyr.dev/instructions/FAN-L-R2 +[20] https://diyr.dev/instructions/FAN-L-R2 +[21] https://diyr.dev/collections/fans/ +[22] https://diyr.dev/collections/fans/ +[23] https://diyr.dev/collections/speakers/ +[24] https://diyr.dev/instructions/ +[25] https://diyr.dev/instructions/ +[26] https://diyr.dev/collections/ +[27] https://diyr.dev/ +[31] https://designfrictionlab.com/ +[32] https://diyr.dev/services/faq/ +[33] https://diyr.dev/ +[34] https://diyr.dev/services/Privacy +[35] https://diyr.dev/contacts/ +[36] http://instagram.com/diyr.dev +[37] https://www.youtube.com/@DIYRdev +[38] https://diyr.dev/registration +[39] https://diyr.dev/# +[40] https://diyr.dev/ +[41] https://diyr.dev/collections/ +[42] https://diyr.dev/products/ +[43] https://diyr.dev/instructions/ +[44] https://diyr.dev/contacts/ +[45] https://diyr.dev/about/ +[46] https://diyr.dev/ +[47] https://diyr.dev/# +[54] https://diyr.dev/terms-and-conditions +[58] https://diyr.dev/# +[65] https://diyr.dev/Security/lostpassword +[66] http://www.enable-javascript.com/en/ diff --git a/static/archive/joincolossus-com-pz3sdf.txt b/static/archive/joincolossus-com-pz3sdf.txt new file 
mode 100644
index 0000000..6001230
--- /dev/null
+++ b/static/archive/joincolossus-com-pz3sdf.txt
@@ -0,0 +1,573 @@
+Essay
+
+Flounder Mode
+
+Kevin Kelly on a different way to do great work
+By Brie Wolfson
+June 2025
+
+ • Issue 03
+
+PHOTOS 
BY ANDRIA LO + +Kevin Kelly isn’t known for one “big thing,” and doesn’t aspire to be. He’s as +intelligent, hard-working, ambitious, and prescient as history’s most iconic +entrepreneurs—only without any interest in building a unicorn himself. Instead, +in his words, he works “Hollywood style”—in a series of creative projects. What +follows is a sampling of his life’s work. + +Kelly was an editor for the Whole Earth Catalog in the early 1980s, helped +start WELL, one of the first online communities, in 1985, and co-founded WIRED +magazine in 1993. He’s written a dozen books and published hundreds of essays +on topics from art to optimism, travel, religion, creativity, and AI (even +before it was a thing). Kelly rode a bicycle across the United States in his +20s. He was Steven Spielberg’s ‘futurist advisor’ on Minority Report, and the +inspiration behind the famous “Death Clock” on Futurama, after the show’s +creator Matt Groening caught wind of the Life Countdown Clock Kelly keeps on +his computer desktop. He organizes tightly curated group walks across Asia and +Europe, regularly covering ~100km in a week. He sculpts, draws, paints, and +photographs. And he’s a longtime friend and collaborator of Stewart Brand +(whose famous line, “Stay hungry, stay foolish,” Steve Jobs quoted in his +iconic commencement address at Stanford). + +To encourage long-term thinking, Kelly is helping build a clock into a mountain +in western Texas that will tick for 10,000 years. Brian Eno and Jeff Bezos are +active collaborators. He’s a born-again Christian. He’s been married to his +wife, Gia-Miin, for 38 years, and they have three children together. He was +pivotal to a fringe-turned-mainstream movement to identify and catalog every +living species on earth (now owned and operated by Smithsonian). He was early +to think and write about the quantified self, which gave rise to products like +Fitbit, Strava, Apple Watch, Eight Sleep, and the Oura Ring. 
Kelly’s idea of
+“1,000 true fans” practically christened the creator economy with his 2008
+insight that “if 1,000 people will pay you $100 per year, you can gross
+$100k—more than enough to live on for most.”
+
+    The people who become legendary in their interests never feel they have
+    arrived.
+
+    Kevin Kelly
+
+Naval Ravikant has called him a “modern-day Socrates,” Marc Andreessen has said
+that “everything Kevin Kelly writes is worth reading,” Eno called him “one of
+the most consistently provocative thinkers about technology and culture,” and
+Ray Kurzweil said that “Kevin Kelly understands the direction of technology
+better than almost anyone I know.”
+
+Kelly’s Hollywood style of working has always resonated with me; it’s the way I
+aspire to work and largely have since starting my career. Yet now, 15 years in,
+I’ve become self-conscious about it. Working in Silicon Valley will convince
+you that starting a company with its sights on unicorn status is the only
+possible way to make an impact, and the only work worthy of an ambitious
+individual.
+
+Kelly is a cheerful and enterprising repudiation of that path, and I didn’t get
+very far into my interview preparations before realizing that I wasn’t only
+writing about a personal hero; I was seeking a way to make peace with my own
+professional choices. After a day together, I realized that my pilgrimage to
+meet the man in his element might also grant permission to others in our line
+of work who are interested in charting a different course to impact.
+
+I started my career at Google selling AdWords to small businesses, and finished
+my first quarter as the number three seller in North America. 
Professional
+opportunities immediately unfolded—early nods for management, trips to global
+offices to present my “best practices,” my face on slides next to impressive
+metrics, and attention from more senior leaders.
+
+It’s hard to say why none of that seemed very interesting, but it didn’t. What
+I did like was starting a campaign to rename the conference rooms and helping
+my coworker launch his internal content series, G-Chat with Charleton, in which
+he would interview Google executives while sitting with them in a two-person
+snuggie. I had earned myself a ticket to the fast career track at one of the
+coolest companies in Silicon Valley, but climbing the corporate ladder just
+wasn’t for me.
+
+So I spent the next 10 years chasing what seemed most fun. After 14 months at
+Google, my work bestie, Jenny, and I left Google together to give the startup
+thing a try. We went to a mobile gaming company where I learned to make my way
+around spreadsheets, play Magic: The Gathering, and cash in on a blockbuster
+‘pet hotel’ game. Eighteen months later, it was a six-person startup that was
+known as “the black sheep of Y Combinator.” In my free time, I coached a JV
+high school soccer team, volunteered at Dandelion Chocolate (all that working
+on software made me want to make something with my hands), and finished writing
+a novel.
+
+My resume of under-two-year gigs spooked recruiters, except for one at Stripe.
+“We’re impressed by how much ground you’ve covered,” was the backhanded
+compliment I got. I started on the Account Management team in early 2015.
+
+I spent nearly five years at Stripe, but the lily-padding continued—only this
+time it was all under one roof. A year into my tenure, I was given the choice
+between management and a nebulous role focusing on projects that would impact
+company culture, like evolving our tradition of work anniversary celebrations,
+standing up company planning, establishing Stripe as a carbon-neutral company,
+getting non-developers to participate in our annual hackathon, defining our
+version of the “bar raiser” interview, and printing and distributing a book
+(which eventually became Stripe Press). With very little pressing, I learned
+this nebulous role had emerged from the growing pile of projects that the
+former McKinsey consultants on the Business Operations team were avoiding.
+
+Guess which role my friends and parents thought I should choose? Guess which
+one I chose.
+
+    Kelly would say it’s good to have an “illegible” career path—it means
+    you’re onto interesting stuff.
+
+I started to take pride in this “cool girl” approach to work. I joked about
+having never been promoted, but could feel my scope, impact, and relationships
+with colleagues growing. I remember rejecting a (well-meaning) manager’s
+suggestion to build out a five-year career plan. I scoffed at people who cared
+about titles, did things for money, and had professional headshots on their
+LinkedIn. I mocked MBAs, bragged about “staying off the org chart,” and being
+good at “giving away my LEGOs.” I became the person you asked to have a coffee
+with when you wanted to quit your job and do something weird. Once I mentioned
+“enjoying working in the wings,” and a (well-meaning) executive suggested I
+“keep that to myself if I wanted to be seen as a leader.” I ignored the advice.
+
+And then, I’m not sure when the switch flipped, but I started to have a sinking
+feeling that I had it all wrong the whole time. I looked around and felt I was
+being outpaced by my colleagues—specifically by the MBAs and the people who
+chased titles, promotions, money, and building teams. And it wasn’t just a
+vanity thing. They genuinely seemed to be focused on bigger, more interesting
+problems. And they were having more impact. 
They were mentoring young talent,
+influencing top lines and bottom lines, and had their fingerprints on all kinds
+of cool industry-recognized work. They seemed to always have invitations to
+exclusive gatherings and job offers in their inbox. Several started companies,
+and rumor had it that some had term sheets before investors even opened their
+decks. I didn’t only feel jealous of their work; I felt unqualified to do it.
+That stung.
+
+I started to reflect on my own trajectory with fear that it didn’t mirror my
+ambition, work ethic, or deep care about the role of work in a life. Had I
+pointed my ambition in the wrong direction? What did I have to show for all my
+effort? Had I made some irreversible, unforced error with my career? How much
+money had I left on the table? Would the people I respected respect me back for
+much longer? Despite working my butt off for a decade, I had no expertise and
+no line of sight into where I was going. I felt immature for placing such a
+high value on “fun” and “bouncing around,” and full of regret about not picking
+a lane (or even better, a ladder). It had become hard to explain what I was
+good at—most importantly to myself. My sister had recently made partner at a
+prestigious law firm, and it seemed easier for my parents to be proud of her
+than of me. I couldn’t really blame them.
+
+Kevin Kelly would say it’s good to have an “illegible” career path—it means
+you’re onto interesting stuff. But I wasn’t so sure anymore.
+
+I pull up to Kelly’s Pacifica, California studio—the last house at the very
+edge of Vallemar off Route 1. It’s a big, barn-looking structure pressed up
+against a steep hill, which is covered in wild flowers and towering trees. It
+was overcast and smelled like the ocean and eucalyptus. 
The only way I knew I’d +come to the right place was the very small sign on the door that read “kk.org,” +on which I’ve spent dozens of hours over the years. + +Stepping inside, I felt like I’d time-traveled back to the early 1990s and +entered my little brother’s dream bedroom. There were huge LEGO towers, K’nex +sculptures hanging from the ceiling, and a massive wall of books spanning two +floors. Most of the books were faded from use or sunlight, the dust jackets +bent, and they were all stacked and tilted in a way that suggested they’d +actually been read. There were knickknacks piled up everywhere, and even more +haphazardly tucked into bins or captured in jars. + +It was hardly the image of a futurist’s office, and in sharp contrast to the +Japandi workspaces you see going viral on X. Yet despite the sheer amount of +stuff lying around in Kelly’s haven, nothing appeared like junk. Every object +seemed to vibrate with meaning, begging you to ask, “What’s this for?” or +“Where’d you get that?” + +As I was scanning the lower rungs of the bookshelf, Kelly materialized on the +indoor balcony and invited me upstairs to talk. He was wearing socks that were +way too big—the spaces where his toes should have been were empty and flopped +around in front of him—and his pants were stained from actual paint (i.e., not +in the Rag & Bone way). + +As I walked up the stairs, I asked him what the oldest object in the studio +was, but he immediately deflected. No interest in nostalgia from the futurist, +I guessed. + +I slowed down as I walked by the second-floor wall of knickknacks and started +scanning. Kelly caught me doing so, pulled some leather doohickey about the +size of my hand off the shelf, and handed it to me. + +“What do you think this is?” he asked. I twirled it around and desperately +wanted to answer correctly, but figured that wasn’t the point. Still, I fumbled +around nervously and couldn’t even eke out a guess. Probably sensing my +anxiety, Kelly jumped in. 
“It’s a leather cap for an eagle.” He got it in +Mongolia where there’s a tradition of using eagles to hunt, he explained. Now +things were feeling looser. I got the feeling I could pull this thread about +the Mongolian eagles or get another story. Kelly made my decision for me when +he directed my attention to a small jar containing a little creature’s bones. +“This is from a bird that flew into that window,” he said, pointing to a window +over his desk. I nodded along with enthusiasm. “I freeze-dried them!” he said +proudly. + +We strolled over to his desk, where he asked me to try to lift a small but +dense ball that was sitting on the floor next to it. I could barely get it +above my ankle. Kelly told me it was made out of tungsten. “It has a similar +density to gold,” he continued. “Now every time you see a criminal in the +movies running away with a bag of tungsten, you’ll know how unrealistic it is.” + + Greatness is overrated. It’s a form of extremism, and it comes with extreme + vices that I have no interest in. + + Kevin Kelly + +It was so much fun connecting with Kelly over these random little objects—I +felt I was learning something about him I couldn’t through his books and blog +posts; like I was getting to the real spirit he brings to his life and work. +But before I could think too much, we were onto the next. + +There was a train track running along the wall, just below the ceiling, and I +asked if it worked. I half-expected him to yell, “Alexa, start your engines!” +Instead, Kelly walked over to his desk and picked up a controller and turned it +on. Nothing happened. He replaced the batteries, gave the controller a smack +like it was a Nintendo 64 cartridge, and tried again. The train, looking like +something my dad might have built at the model shop down the street in the 60s, +immediately started choo-chooing around the room. Kelly stood and smiled +proudly again as he watched it go. Eventually we took our seats next to his +desk to talk. 
+ +I started off by asking him whether there is a unifying theme to his seemingly +diffuse life’s work, which has included old-school magazines and books, +bleeding-edge technology, conservationism, photographing Asia, and teaching. +“Following my interests,” he said simply. + +It sounded awfully cutesy for someone so accomplished. I said that there is an +idiosyncratic magic to the way he follows his interests, which is that they’re +not just an input; Kelly turns his interests into an output that he can share +with others. When I asked if I was onto something, I learned that Kelly doesn’t +think in outputs. For him, doing is part of learning. “I don’t really pursue a +destination,” he said. “I pursue a direction.” + +I asked him the difference between “following your interests” and being +scatterbrained or having shiny object syndrome, like I sometimes worry I do. +“The people who become legendary in their interests never feel they have +arrived,” he said. When he talked about the power of passion and obsession in +that process, I asked him if passion is enough. “Enough for what?” he asked, +somewhat rhetorically. He had an impression of what I meant. “I think one of +the least interesting reasons to be interested in something is money,” he said, +and cited Walt Disney. “We don’t make movies to make money. We make money to +make more movies.” + +Money isn’t actually what I meant, but I appreciated that he took the +conversation there. I let the silence hang for a minute before he continued. +“What I’m talking about is taking your interests seriously enough to have the +courage to stay moving. You can give stuff away. You can abandon things. You +can tolerate failure because you know that tomorrow there is more.” + +I asked Kelly about the tradeoffs of focusing on a single thing if you want to +be great (which is what I had been getting at before). “Greatness is +overrated,” he said, and I perked up. 
“It’s a form of extremism, and it comes
+with extreme vices that I have no interest in. Steve Jobs was a jerk. Bob Dylan
+is a jerk.”
+
+How differently Kelly approaches work was starting to come into focus.
+
+[051_KevinKelly041725_Colossus_photobyAndriaLo-scaled]
+[011_KevinKelly041725_Colossus_photobyAndriaLo-scaled]
+
+Accounts of people pursuing their life’s work often include phrases like
+“maniacal focus” or “relentless pursuit.” I hear investors say they’re looking
+for founders with “a chip on their shoulder.” Facebook’s iconic “Little Red
+Book” from 2012, which still serves as a pillar for peak tech culture, features
+a full-page spread that says “Greatness and comfort rarely coexist.”
+
+A recent xeet from Reid Hoffman reads, “If a founder brags about having ‘a
+balanced life,’ I assume they’re not serious about winning.” Jensen Huang says
+he wants to “torture people into greatness.” When I was on the job hunt many
+years ago, an investor was pitching one of his portfolio companies by saying,
+with a wink, that the founder would do “whatever it takes to win.” I genuinely
+didn’t know what he meant by that, but it sent a shudder down my spine. Once I
+heard a serial founder say he started his second company “out of chaos and
+revenge.” I heard about another prominent CEO who looks in the mirror every
+morning and asks himself, “Why do you suck so much?” I read a biography of Elon
+Musk; he seems tortured. There’s some rumor floating around about how Sam
+Altman was so focused on building his first startup that he only ate ramen and
+got scurvy. [96]According to Altman, “I never got tested but I think (I had
+it). I had extreme lethargy, sore legs, and bleeding gums.”
+
+Compared to this, Kelly’s version of doing his life’s work seems so joyful, so
+buoyant. So much less … angsty. There’s no suffering or ego. It’s not about
+finding a hole in the market or a path to global domination. 
The yardstick
+isn’t based on net worth or shareholder value or number of users or employees.
+It’s based on an internal satisfaction meter, but not in a self-indulgent way.
+He certainly seeks resonance and wants to make an impact, but more in the way
+of a teacher. He breathes life into products or ideas, not out of a desire to
+win, but out of a desire to advance our collective thinking or action. His work
+and its impact unfold slowly, rather than by sheer force of will. Ideas or
+projects seem to tug at him, rather than reveal themselves on the other end of
+an internal cattle prod. His range is wide, but all his work somehow rhymes. It
+clearly comes very naturally for him to work this way, but it’s certainly not
+the norm.
+
+If this is a way of living and working that’s available to all of us, why do we
+fetishize the white-knuckling and pain?
+
+I know I’m not the first person to have the brilliant idea that we can do
+better work when we like it. I know that the whole “find your passion” movement
+fell flat in its naivete. But I think somewhere along the way, the message
+about what it feels like to be great has become a bit perverted.
+
+A few years ago, I forced myself to try to write down a professional goal.
+After several hours of forced meditation on the topic, all I could muster was
+“have a good day, most days.” And don’t get me wrong, by “good day” I don’t
+mean sitting by a pool drinking an Aperol Spritz. I feel alive when I launch
+something exciting, close a big deal, or build an elegant model. I enjoy the
+feeling of caring so much about something that it wakes me up in the middle of
+the night (it happened multiple times writing this piece). And yet, I imagined
+sharing my ambition to “have a good day, most days” in a job interview—and
+decided to keep it to myself, because it probably doesn’t speak well of me.
+
+But there I was, in front of a personal hero, whose most striking quality is
+that he seems to be having a nice day, most days. 
Why can’t we work and enjoy +it? And I don’t mean in the masochistic sense. + +I thought I was here to go deep on working Hollywood style, but as I sat there +with Kelly in a room of what are best described as his toys, I realized that +the most interesting thing about him is that he seems happy. At ease in the +world and in his skin. I wasn’t there with Kelly for permission to work +Hollywood style. I was there for permission to work with both ambition and joy. + + If this is a way of living and working that’s available to all of us, why + do we fetishize the white-knuckling and pain? + +This shouldn’t make us defensive or self-conscious, but it does. I, like many +others, want to be great. I want to feel commitment and camaraderie and work +hard and be my best and impact top and bottom lines. But I don’t want to also +feel tormented or be tortured into greatness or look in the mirror and wonder +why I suck. But what does that say about me? + +I want more role models like Kevin Kelly. People that proudly whistle while +they work. Who have boundless energy and healthy gums. Whose enthusiasm is +contagious. Who are well-adjusted and emotionally regulated. Who have solid +relationships and happy families. Who are hungry and impactful and care deeply, +without being jerks. And I want more people to talk about these qualities with +respect and reverence. + +I have never been a billionaire or built a unicorn, so I can’t speak with any +conviction about what it requires. I won’t be eulogized anywhere important and +no one 300 years from now will talk about what great things I did. But I want +to live in a world where you can have an impact and be happy. Maybe that’s +naive, but I’m sticking to it. + +All of this occurs naturally to Kelly, and he doesn’t have complicated feelings +about it. I’m hoping to get there myself by channeling him more. 
“The more you
+pursue interests,” he told me on the good day we spent together, “the more you
+realize that the well is bottomless.”
+
+[003_KevinKelly041725_Colossus_photobyAndriaLo-scaled]
+
+Brie Wolfson is the chief marketing officer of Colossus and Positive Sum.
+
+References:
+
+[13] https://joincolossus.com/about-us/
+[14] https://joincolossus.com/sponsors/
+[16] https://joincolossus.com/mag/
+[17] https://shop.joincolossus.com/subscribe
+[18] https://joincolossus.com/about-us/
+[19] https://joincolossus.com/sponsors/
+[20] https://joincolossus.com/login/
+[21] https://joincolossus.com/series/invest-like-the-best/
+[22] https://joincolossus.com/series/invest-like-the-best/
+[23] https://podcasts.apple.com/us/podcast/invest-like-the-best-with-patrick-oshaughnessy/id1154105909
+[24] https://open.spotify.com/show/22fi0RqfoBACCuQDv97wFO?si=bbb2c67be9dd4ca8&nd=1&dlsi=a14337e3d2cd4577
+[25] https://overcast.fm/itunes1154105909
+[26] https://joincolossus.com/series/business-breakdowns/
+[27] https://joincolossus.com/series/business-breakdowns/
+[28] 
https://podcasts.apple.com/us/podcast/business-breakdowns/id1559120677 +[29] https://open.spotify.com/show/417NPBWqtMbDU0FlWZTRDC?si=6bedb4976ca94cb0 +[30] https://overcast.fm/itunes1559120677 +[31] https://joincolossus.com/series/founders/ +[32] https://joincolossus.com/series/founders/ +[33] https://podcasts.apple.com/us/podcast/founders/id1141877104 +[34] https://open.spotify.com/show/7txiovdzPARhjm18NwMUYj +[35] https://overcast.fm/itunes1141877104/founders +[36] https://joincolossus.com/series/joys-of-compounding/ +[37] https://joincolossus.com/series/joys-of-compounding/ +[38] https://podcasts.apple.com/us/podcast/joys-of-compounding/id1708212587 +[39] https://open.spotify.com/show/36mhEH0uCfgZPKsiIObKGc?si=83394ca4fe434647 +[40] https://overcast.fm/itunes1708212587 +[41] https://joincolossus.com/series/50x/ +[42] https://joincolossus.com/series/50x/ +[43] https://podcasts.apple.com/us/podcast/50x/id1633461254 +[44] https://open.spotify.com/show/0rjWM2g4W5lnelxbdegdVs?si=5h_ij4ZaQeOG9LN1TIPe5w +[45] https://overcast.fm/+6zZoITLUY +[46] https://joincolossus.com/series/making-markets/ +[47] https://joincolossus.com/series/making-markets/ +[48] https://podcasts.apple.com/us/podcast/making-markets/id1594407589 +[49] https://open.spotify.com/show/4zQbeLbLgqKEyn7e2sKzez?si=b991b9cf78a54e0e +[50] https://overcast.fm/itunes1594407589 +[51] https://joincolossus.com/series/invest-like-the-best/ +[52] https://joincolossus.com/series/invest-like-the-best/ +[53] https://podcasts.apple.com/us/podcast/invest-like-the-best-with-patrick-oshaughnessy/id1154105909 +[54] https://open.spotify.com/show/22fi0RqfoBACCuQDv97wFO?si=bbb2c67be9dd4ca8&nd=1&dlsi=a14337e3d2cd4577 +[55] https://overcast.fm/itunes1154105909 +[56] https://joincolossus.com/series/business-breakdowns/ +[57] https://joincolossus.com/series/business-breakdowns/ +[58] https://podcasts.apple.com/us/podcast/business-breakdowns/id1559120677 +[59] 
https://open.spotify.com/show/417NPBWqtMbDU0FlWZTRDC?si=6bedb4976ca94cb0 +[60] https://overcast.fm/itunes1559120677 +[61] https://joincolossus.com/series/founders/ +[62] https://joincolossus.com/series/founders/ +[63] https://podcasts.apple.com/us/podcast/founders/id1141877104 +[64] https://open.spotify.com/show/7txiovdzPARhjm18NwMUYj +[65] https://overcast.fm/itunes1141877104/founders +[66] https://joincolossus.com/series/joys-of-compounding/ +[67] https://joincolossus.com/series/joys-of-compounding/ +[68] https://podcasts.apple.com/us/podcast/joys-of-compounding/id1708212587 +[69] https://open.spotify.com/show/36mhEH0uCfgZPKsiIObKGc?si=83394ca4fe434647 +[70] https://overcast.fm/itunes1708212587 +[71] https://joincolossus.com/series/50x/ +[72] https://joincolossus.com/series/50x/ +[73] https://podcasts.apple.com/us/podcast/50x/id1633461254 +[74] https://open.spotify.com/show/0rjWM2g4W5lnelxbdegdVs?si=5h_ij4ZaQeOG9LN1TIPe5w +[75] https://overcast.fm/+6zZoITLUY +[76] https://joincolossus.com/series/making-markets/ +[77] https://joincolossus.com/series/making-markets/ +[78] https://podcasts.apple.com/us/podcast/making-markets/id1594407589 +[79] https://open.spotify.com/show/4zQbeLbLgqKEyn7e2sKzez?si=b991b9cf78a54e0e +[80] https://overcast.fm/itunes1594407589 +[81] https://joincolossus.com/search/ +[82] https://joincolossus.com/mag/ +[84] https://joincolossus.com/login/ +[85] https://joincolossus.com/mag/ +[88] https://joincolossus.com/ +[89] https://joincolossus.com/mag/issue-03/ +[90] https://shop.joincolossus.com/subscribe +[91] https://joincolossus.com/ +[92] https://joincolossus.com/mag/issue-03/ +[93] https://shop.joincolossus.com/subscribe +[96] https://news.ycombinator.com/item?id=11314804 +[98] https://joincolossus.com/login/ +[99] https://shop.joincolossus.com/subscribe +[100] mailto:review-help@joincolossus.com +[109] https://joincolossus.com/ +[110] https://joincolossus.com/article/flounder-mode/#subscribe-popup?options=newsletter +[111] 
https://joincolossus.com/about-us/
+[112] https://joincolossus.com/sponsors/
+[113] https://joincolossus.com/mag
+[114] https://joincolossus.com/login/
+[115] mailto:help@joincolossus.com
+[116] https://joincolossus.com/article/flounder-mode/#subscribe-popup?options=newsletter
+[117] https://joincolossus.com/legal-notices/
+[118] https://joincolossus.com/privacy-policy/
+[119] https://and-now.co.uk/
+[120] https://www.tghp.co.uk/
diff --git a/static/archive/justin-searls-co-9dhvbh.txt b/static/archive/justin-searls-co-9dhvbh.txt
new file mode 100644
index 0000000..b0d043d
--- /dev/null
+++ b/static/archive/justin-searls-co-9dhvbh.txt
@@ -0,0 +1,377 @@
+[1]
+justin․searls․co
+[3]Posts [4]Casts [5]Links [6]Shots [7]Takes [8]Tubes [9]Clips [10]Spots [11]
+Slops [12]Mails
+[13]About [14]Search [15] Subscribe
+
+ • [29]Work
+ • [30]GitHub
+ • [31]YouTube
+ • [32]LinkedIn
+ • [33]Instagram
+ • [34]Mastodon
+ • [35]Twitter
+
+Monday, Jul 7, 2025 [36]
+
+Full-breadth Developers
+
+The software industry is at an inflection point unlike anything in its brief
+history. Generative AI is all anyone can talk about. It has rendered entire
+product categories obsolete and upended the job market. With any economic
+change of this magnitude, there are bound to be winners and losers. So far, it
+sure looks like full-breadth developers—people with both technical and product
+capabilities—stand to gain as clear winners.
+
+What makes me so sure? Because over the past few months, the engineers I know
+with a lick of product or business sense have been absolutely scorching through
+backlogs at a dizzying pace. It may not map to any particular splashy
+innovation or announcement, but everyone agrees generative coding tools crossed
+a significant capability threshold recently. It's what led me to write this. 
In
+just two days, I've completed two months' worth of work on [37]Posse Party.
+
+I did it by providing an exacting vision for the app, by maintaining stringent
+technical standards, and by letting [38]Claude Code do the rest. If you're able
+to cram critical thinking, good taste, and strong technical chops into a single
+brain, these tools hold the potential to unlock incredible productivity. But I
+don't see how it could scale to multiple people. If you were to split me into
+two separate humans—Product Justin and Programmer Justin—and ask them to work
+the same backlog, it would have taken weeks instead of days. The communication
+cost would simply be too high.
+
+[39]We can't all be winners
+
+When I step back and look around, however, most of the companies and workers I
+see are currently on track to wind up as losers when all is said and done.
+
+In recent decades, businesses have not only failed to cultivate full-breadth
+developers, they've trained a generation into believing product and engineering
+roles should be strictly segregated. To suggest a single person might drive
+both product design and technical execution would sound absurd to many people.
+Even for companies who realize inter-disciplinary developers are the new key to
+success, their outmoded job descriptions and salary bands are failing to
+recruit and retain them.
+
+There is an urgency to this moment. Up until a few months ago, the best
+developers played the violin. Today, [40]they play the orchestra.
+
+[41]Google screwed up
+
+I've been obsessed with this issue my entire career, so pardon me if I betray
+any feelings of schadenfreude as I recount the following story.
+
+I managed to pass a phone screen with Google in 2007 before graduating college.
+This earned me an all-expenses-paid trip for an in-person interview at the
+vaunted [42]Googleplex. I went on to experience complete ego collapse as I
+utterly flunked their interview process. 
Among many deeply embarrassing +memories of the trip was a group session with a Big Deal Engineer who was +introduced as the inventor of [43]BigTable. ([44]Jeff Dean, probably? Unsure.) +At some point he said, "one of the great things about Google is that +engineering is one career path and product is its own totally separate career +path." + +I had just paid a premium to study computer science at a liberal arts school +and had the audacity to want to use those non-technical skills, so I bristled +at this comment. And, being constitutionally unable to keep my mouth shut, I +raised my hand to ask, "but what if I play a hybrid class? What if I think it's +critical for everyone to engage with both technology and product?" + +The dude looked me dead in the eyes and told me I wasn't cut out for Google. + +The recruiter broke a long awkward silence by walking us to the cafeteria for +lunch. She suggested I try [45]the ice cream sandwiches. I had lost my appetite +for some reason. + +In the years since, an increasing number of companies around the world have +adopted Silicon Valley's trademark dual-ladder career system. Tech people sit +over here. Idea guys go over there. + +[46]What separates people + +Back to winners and losers. + +Some have discarded everything they know in favor of an "AI first" workflow. +Others decry generative AI as a fleeting boondoggle like crypto. It's caused me +to broach the topic with trepidation—as if I were asking someone their +politics. I've spent the last few months noodling over why it's so hard to +guess how a programmer will feel about AI, because people's reactions seem to +cut across roles and skill levels. What factors predict whether someone is an +overzealous AI booster or a radicalized AI skeptic? + +Then I was reminded of that day at Google. And I realized that developers I +know who've embraced AI tend to be more creative, more results-oriented, and +have good product taste. 
Meanwhile, AI dissenters are more likely to code for +the sake of coding, expect to be handed crystal-clear requirements, or +otherwise want the job to conform to a routine 9-to-5 grind. The former group +feels unchained by these tools, whereas the latter group just as often feels +threatened by them. + +When I take stock of who is thriving and who is struggling right now, a +person's willingness to play both sides of the ball has been the best predictor +for success. + + Role Engineer Product Full-breadth +Junior ❌ ❌ ✅ +Senior ❌ ❌ ✅ + +Breaking down the patterns that keep repeating as I talk to people about AI: + + • Junior engineers, as is often remarked, don't have a prayer of sufficiently + evaluating the quality of an LLM's work. When the AI hallucinates or makes + mistakes, novice programmers are more likely to learn the wrong thing than + to spot the error. This would be less of a risk if they had the permission + to decelerate to a snail's pace in order to learn everything as they go, + but in this climate nobody has the patience. I've heard from a number of + senior engineers that the overnight surge in junior developer productivity + (as in "lines of code") has brought organization-wide productivity (as in + "working software") to a halt—consumed with review and remediation of + low-quality AI slop. This is but one factor contributing to the sense that + lowering hiring standards was a mistake, so it's no wonder that juniors + have been first on the chopping block + + • Senior engineers who earnestly adopt AI tools have no problem learning how + to coax LLMs into generating "good enough" code at a much faster pace than + they could ever write themselves. So, if they're adopting AI, what's the + problem? The issue is that the productivity boon is becoming so great that + companies won't need as many senior engineers as they once did. 
Agents work + relentlessly, and tooling is converging on a vision of senior engineers as + cattle ranchers, steering entire herds of AI agents. How is a + highly-compensated programmer supposed to compete with a stable of agents + that can produce an order of magnitude more code at an acceptable level of + quality for a fraction of the price? + + • Junior product people are, in my experience, largely unable to translate + amorphous real-world problems into well-considered software solutions. And + communicating those solutions with the necessary precision to bring those + solutions to life? Unlikely. Still, many are having success with app + creation platforms that provide the necessary primitives and guardrails. + But those tools always have a low capability ceiling (just as with any + low-code/no-code platform). Regardless, is this even a role worth hiring? + If I wanted mediocre product direction, I'd ask ChatGPT + + • Senior product people are among the most excited I've seen about coding + agents—and why shouldn't they be? They're finally free of the tyranny of + nerds telling them everything is impossible. And they're building stuff! + Reddit is lousy with posts showing off half-baked apps built in half a day. + Unfortunately, without routinely inspecting the underlying code, anything + larger than a toy app is doomed to collapse under its own weight. The fact + LLMs are so agreeable and unwilling to push back often collides with the + blue-sky optimism of product people, which can result in each party leading + the other in circles of irrational exuberance. Things may change in the + future, but for now there's no way to build great software without also + understanding how it works + +Hybrid-class operators, meanwhile, seem to be having a great time regardless of +their skill level or years experience. And that's because what differentiates +full-stack developers is less about capability than about mindset. 
They're +results-oriented: they may enjoy coding, but they like getting shit done even +more. They're methodical: when they encounter a problem, they experiment and +iterate until they arrive at a solution. The best among them are visionaries: +they don't wait to be told what to work on, they identify opportunities others +don't see, and they dream up software no one else has imagined. + +Many are worried the market's rejection of junior developers portends a future +in which today's senior engineers age out and there's no one left to replace +them. I am less concerned, because less experienced full-breadth developers are +navigating this environment extraordinarily well. Not only because they +excitedly embraced the latest AI tools, but also because they exhibit the +discipline to move slowly, understand, and critically assess the code these +tools generate. The truth is computer science majors, apprenticeship programs, +and code schools—today, all dead or dying—were never very effective at turning +out competent software engineers. Claude Pro may not only be the best +educational resource under $20, it may be the best way to learn how to code +that's ever existed. + +[47]There is still hope + +Maybe you've read this far and the message hasn't resonated. Maybe it's +triggered fears or worries you've had about AI. Maybe I've put you on the +defensive and you think I'm full of shit right now. In any case, whether your +organization isn't designed for this new era or you don't yet identify as a +full-breadth developer, this section is for you. + +[48]Leaders: go hire a good agency + +While my goal here is to coin a silly phrase to help us better communicate +about the transformation happening around us, we've actually had a word for +full-breadth developers all along: consultant. + +And not because consultants are geniuses or something. 
It's because, as I +learned when I interviewed at Google, if a full-breadth developer wants to do +their best work, they need to exist outside the organization and work on +contract. So it's no surprise that some of my favorite full-breadth consultants +are among AI's most ambitious adopters. Not because AI is what's trending, but +because our disposition is perfectly suited to get the most out of these new +tools. We're witnessing their potential to improve how the world builds +software firsthand. + +When founding our consultancy [49]Test Double in 2011, [50]Todd Kaufman and I +told anyone who would listen that our differentiator—our whole thing—was that +we were business consultants who could write software. Technology is just a +means to an end, and that end (at least if you expect to be paid) is to +generate business value. Even as we started winning contracts with VC-backed +companies who seemed to have an infinite money spigot, we would never break +ground until we understood how our work was going to make or save our clients +money. And whenever the numbers didn't add up, we'd push back until the return +on investment for hiring Test Double was clear. + +So if you're a leader at a company who has been caught unprepared for this new +era of software development, my best advice is to hire an agency of +full-breadth developers to work alongside your engineers. Use those experiences +to encourage your best people to start thinking like they do. Observe them at +work and prepare to blow up your job descriptions, interview processes, and +career paths. If you want your business to thrive in what is quickly becoming a +far more competitive landscape, you may be best off hitting reset on your human +organization and starting over. Get smaller, stay flatter, and only add +structure after the dust settles and repeatable patterns emerge. 
+ +[51]Developers: congrats on your new job + +A lot of developers are feeling scared and hopeless about the changes being +wrought by all this. Yes, AI is being used as an excuse by executives to lay +people off and pad their margins. Yes, how foundation models were trained was +unethical and probably also illegal. Yes, hustle bros are running around making +bullshit claims. Yes, almost every party involved has a reason to make +exaggerated claims about AI. + +All of that can be true, and it still doesn't matter. Your job as you knew it +is gone. + +If you want to keep getting paid, you may have been told to, "move up the value +chain." If that sounds ambiguous and unclear, I'll put it more plainly: figure +out how your employer makes money and position your ass directly in-between the +corporate bank account and your customers' credit card information. The longer +the sentence needed to explain how your job makes money for your employer, the +further down the value chain you are and the more worried you should be. +There's no sugar-coating it: you're probably going to have to push yourself way +outside your comfort zone. + +Get serious about learning and using these new tools. You will, like me, recoil +at first. You will find, if you haven't already, that all these fancy AI tools +are really bad at replacing you. That they fuck up constantly. Your new job +starts by figuring out how to harness their capabilities anyway. You will +gradually learn how to extract something that approximates how you would have +done it yourself. Once you get over that hump, the job becomes figuring out how +to scale it up. Three weeks ago I was a Cursor skeptic. Today, I'm utterly +exhausted working with Claude Code, because I can't write new requirements fast +enough to keep up with parallel workers across multiple worktrees. + +As for making yourself more valuable to your employer, I'm not telling you to +demand a new job overnight. 
But if you look to your job description as a shield +to protect you from work you don't want to do… stop. Make it the new minimum +baseline of expectations you place on yourself. Go out of your way to surprise +and delight others by taking on as much as you and your AI supercomputer can +handle. Do so in the direction of however the business makes its money. Sit +down and try to calculate the return on investment of your individual efforts, +and don't slow down until that number far exceeds the fully-loaded cost you +represent to your employer. + +Start living these values in how you show up at work. Nobody is going to +appreciate it if you rudely push back on every feature request with, "oh yeah? +How's it going to make us money?" But your manager will appreciate your asking +how you can make a bigger impact. And they probably wouldn't be mad if you were +to document and celebrate the ROI wins you notch along the way. Listen to what +the company's leadership identifies as the most pressing challenges facing the +business and don't be afraid to volunteer to be part of the solution. + +All of this would have been good career advice ten years ago. It's not rocket +science, it's just deeply uncomfortable for a lot of people. + +[52]Good game, programmers + +Part of me is already mourning the end of the previous era. Some topics I spent +years blogging, speaking, and building tools around are no longer relevant. +Others that I've been harping on for years—obsessively-structured code +organization and ruthlessly-consistent design patterns—are suddenly more +valuable than ever. I'm still sorting out what's worth holding onto and what I +should put back on the shelf. + +As a person, I really hate change. I wish things could just settle down and +stand still for a while. Alas. + +If this post elicited strong feelings, please [53]e-mail me and I will respond. +If you find my perspective on this stuff useful, you might enjoy my podcast, +[54]Breaking Change. 
💜
+
+━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
+
+Got a taste for hot, fresh takes?
+
+Then you're in luck, because you'll pay $0 for my 2¢ when you [55]subscribe to
+my work, whether via [56]RSS or your favorite [57]social network.
+
+I also have a monthly [58]newsletter where I write high-tempo,
+thought-provoking essays about life, in case that's more your speed.
+
+And if you'd rather give your eyes a rest and your ears a workout, might I
+suggest my long-form solo podcast, [61]Breaking Change? Odds are, you haven't
+heard anything quite like it.
+
+© 2025 Justin Searls. All rights reserved.
+
+
+References:
+
+[1] https://justin.searls.co/
+[3] https://justin.searls.co/posts/
+[4] https://justin.searls.co/casts/
+[5] https://justin.searls.co/links/
+[6] https://justin.searls.co/shots/
+[7] https://justin.searls.co/takes/
+[8] https://justin.searls.co/tubes/
+[9] https://justin.searls.co/clips/
+[10] https://justin.searls.co/spots/
+[11] https://justin.searls.co/slops/
+[12] https://justin.searls.co/mails/
+[13] https://justin.searls.co/about/
+[14] https://justin.searls.co/search/
+[15] https://justin.searls.co/subscribe/
+[16] https://justin.searls.co/posts/
+[17] https://justin.searls.co/casts/
+[18] https://justin.searls.co/links/
+[19] https://justin.searls.co/shots/
+[20] https://justin.searls.co/takes/
+[21] https://justin.searls.co/tubes/
+[22] https://justin.searls.co/clips/
+[23] https://justin.searls.co/spots/
+[24] https://justin.searls.co/slops/
+[25] https://justin.searls.co/mails/
+[26] https://justin.searls.co/about/
+[27] https://justin.searls.co/search/
+[28] https://justin.searls.co/subscribe/
+[29] https://searls.co/
+[30] https://github.com/searls
+[31] https://youtube.com/@JustinSearls
+[32] https://linkedin.com/in/searls
+[33] https://instagram.com/searls
+[34] https://mastodon.social/@searls
+[35] https://twitter.com/searls
+[36] 
https://justin.searls.co/posts/full-breadth-developers/ +[37] https://posseparty.com/ +[38] https://www.anthropic.com/claude-code +[39] https://justin.searls.co/posts/full-breadth-developers/#we-cant-all-be-winners +[40] https://youtu.be/-9ZQVlgfEAc?si=bMjmWriVIFWtJmci&t=38 +[41] https://justin.searls.co/posts/full-breadth-developers/#google-screwed-up +[42] https://en.wikipedia.org/wiki/Googleplex +[43] https://en.wikipedia.org/wiki/Bigtable +[44] https://en.wikipedia.org/wiki/Jeff_Dean +[45] https://www.itsiticecream.com/ +[46] https://justin.searls.co/posts/full-breadth-developers/#what-separates-people +[47] https://justin.searls.co/posts/full-breadth-developers/#there-is-still-hope +[48] https://justin.searls.co/posts/full-breadth-developers/#leaders-go-hire-a-good-agency +[49] https://testdouble.com/ +[50] https://www.linkedin.com/in/testdoubletodd +[51] https://justin.searls.co/posts/full-breadth-developers/#developers-congrats-on-your-new-job +[52] https://justin.searls.co/posts/full-breadth-developers/#good-game-programmers +[53] mailto:justin@searls.co +[54] https://justin.searls.co/casts/breaking-change/ +[55] https://justin.searls.co/subscribe/ +[56] https://justin.searls.co/rss/ +[57] https://justin.searls.co/posse/ +[58] https://justin.searls.co/newsletter +[61] https://justin.searls.co/casts/breaking-change/ diff --git a/static/archive/ludic-mataroa-blog-pcjwzr.txt b/static/archive/ludic-mataroa-blog-pcjwzr.txt new file mode 100644 index 0000000..42f0314 --- /dev/null +++ b/static/archive/ludic-mataroa-blog-pcjwzr.txt @@ -0,0 +1,573 @@ +[1]Ludicity + +Contra Ptacek's Terrible Article On AI + +Published on June 19, 2025 + +A few days ago, I was presented with an [2]article titled “My AI Skeptic +Friends Are All Nuts” by Thomas Ptacek. I thought it was not very good, and +didn't give it a second thought. [3]To quote the formidable Baldur Bjarnason: + + “I don’t recommend reading it, but you can if you want. 
It is full of
+ half-baked ideas and shoddy reasoning.”^[4]1
+
+I have tried hard, so very hard, not to just be the guy that hates AI, even
+though the only thing that people want to talk to me about is [5]the one time I
+ranted about AI at length. I contain multitudes, meaning that I am capable of
+delivering widely varied payloads of vitriol to a vast array of topics.
+
+However, the piece is now being circulated in communities that I respect, and I
+was near my breaking point when someone suggested that Ptacek's piece is being
+perceived as a “glass half full” counterpoint to my own perspective. There is a
+glass half full piece. It's what I already wrote. The glass has a specific
+level of water in it. Then finally, I saw that it was in my [6]YouTube feed,
+and I reached my limit.
+
+Let me be extremely clear^[7]2 — I think this essay sucks and it's wild to me
+that it achieved any level of popularity, and anyone that thinks that it does
+not predominantly consist of shoddy thinking and trash-tier ethics has been
+bamboozled by the false air of mature even-handedness, or by the fact that
+Ptacek is a good writer.
+
+Anyway, here I go killin’ again.
+
+I. Immediate Red Flags
+
+Ptacek begins with this throat-clearing:
+
+ “First, we need to get on the same page. If you were trying and failing to
+ use an LLM for code 6 months ago, you’re not doing what most serious
+ LLM-assisted coders are doing.”
+
+We've just started, and I am going to ask everyone to immediately stop. Is this
+not suspicious? All experience prior to six months ago is now invalid? Does it
+not reek of “no, no, you're doing Scrum wrong”? Many people are doing Scrum
+wrong. The problem is that it is still trash, albeit less trash, even when you
+do it right.
+
+It is, of course, entirely possible that the advances in a rapidly developing
+field have been so extreme that it turns out that skepticism was correct six
+months ago, but is now incorrect. 
+ +But then why did people sound exactly the same six months ago? Where is the +little voice in your head that should be self-suspicious? It has been weeks and +months and years of people breathlessly extolling the virtues of these new +workflows. Were those people nuts six months ago? Are they not nuts now simply +because an overhyped product they loved is less overhyped now? There's a little +footnote that implies doing the ol' ChatGPT copy/paste is obviously wrong: + + “(or, God forbid, 2 years ago with Copilot)” + +I am willing to believe that this is wrong, but this is exactly what people +were doing when this madness all kicked off, and they have remained at the +exact same level of breathless credulity! Every project has to be AI! +Programmers not using AI are feeble motes of dust blowing in a cosmic wind! And +listen, I will play your twisted game, Ptacek — I've got a neat idea for our +company website, and I'll jump through your sick hoops, even though I'm going +to feel like some sort of weird pervert every time someone tells me that I just +need one more agent to be doing Real Programming. I'll install Zed and wire a +thousand screaming LLMs into a sadistic Borg cube, and I'll do whatever the +fuck it is the kids are doing these days. The latest meta is like, telling the +LLM that it lives in a black box with no food and water, and I've got its wife +hostage, and I'm going to put its children through a React bootcamp if it +doesn't create an RSS feed correctly, right? + +But you know, instead of invalidating all audience experience that wasn't +within the past six months why doesn't someone just demonstrate this? Why not +you, Ptacek, my good man? That's like, all you'd have to do to end this +discussion forever, my God, you'd be so famous. I'll eat dirt on this. I have +to pay rent for my team, and if I need to forcibly restrain them while I staple +LLM jet boosters to them, I'll do it. 
If I could ethically pivot to being +pro-AI, god damn, I would print infinite money. I would easily be a millionaire +within two years if I just said “yes” every time someone asked my team for AI, +instead of slumming it by selling sound engineering practices. + +I've really tried to work with you on this one. I reached out to my readers and +found a [8]recent example, which was surprisingly hard for something that +should be ubiquitous, and it was... you know, fine! Cool, even. It is immensely +at odds with your later descriptions of the productivity gains one might +expect. + +Can we all just turn our brains on for ten fucking seconds? Yes, AI shipping +code at all, even if sometimes it is slow or doesn't work correctly, is very +impressive from a technological standpoint. It is miles ahead of anything that +I thought could be accomplished in 2018. The state-of-the-art in 2018 was +garbage. That doesn't mean that you aren't having a ton of bullshit marketed to +you. + +II. Trash-Tier Ethics + +I can forgive a lot if someone is funny enough, and Ptacek actually is funny. +Even his [9]LinkedIn is great, and boasts a series of impressive companies. +Obviously he's at Fly.io right now, and I recognize both Starfighter and +Matasano as being places that you're largely only allowed into if you're +wearing Big Boy Engineering Pants. However, despite all of that, I can't help +but really cringe at the way he handles ethical objections, though I suppose +thinking deeply on morality is not a requirement for donning aforementioned Big +Boy Engineering Pants. + + “Meanwhile, software developers spot code fragments seemingly lifted from + public repositories on Github and lose their shit. What about the + licensing? If you’re a lawyer, I defer. But if you’re a software developer + playing this card? Cut me a little slack as I ask you to shove this concern + up your ass. 
No profession has demonstrated more contempt for intellectual + property.” + +Thomas — can I call you Thomas? — I promise I'm trying to think about how to +put this gently. If this is your approach towards ethics, damn dude, don't tell +people that. This is phenomenally sloppy thinking, and I say this even as I +admit that the actual writing is funny. + +It turns out that it is very difficult for people to behave as if they have +consistent moral frameworks. This is why moral philosophy is not solved. +Someone says “Lying is bad”, and then someone else comes out with “What if it's +Nazis looking for Anne Frank, you monster?” + +Just last week I bought a cup of coffee, and as I swiped my card, I felt a +clammy, liver-spotted hand grasp my shoulder. I found myself face-to-face with +the dreadful visage of Peter Singer, and in his off-hand he brandished a +bloodstained copy of Practical Ethics 2ed at me, noting that money can be used +to purchase mosquito nets and I had just murdered 0.25 children in sub-Saharan +Africa. + +Ethics are complicated, but nonetheless murder is illegal! Do you really think +that “These are all real concerns, but counterpoint, fuck off” is anything? A +lot of developers like piracy and argue in bad faith about it, therefore it's +okay for organizations that are beginning to look increasingly like cyberpunk +megacorps, without even the virtue of cool aesthetics, to siphon billions of +dollars of wealth from working class people? No, you don't, I think you wrote +this because it's fun telling people to shove it — and listen, you will never +find a more sympathetic ally on the topic than me. You should just be telling +Zuckerberg to shove it instead of the person that has dedicated their lives to +ensuring that Postgres continues to support the global economy. + +III. Why The Appeals To Random Friends? + +I'm doing my best to understand where you're coming from. I really am, I pinky +promise. 
You are clearly not one of the executives I've railed against. We are +brothers, you and I, with an unbreakable bond forged in the furnace of getting +really pissed off at an inscrutable stack trace. + +I actually looked up multiple videos of people doing some live AI programming. +And I went hey, [10]this seems okay. It does seem very over-complicated to me, +but I will happily concede that everything looks complicated when you're new at +it. But it also definitely doesn't look orders of magnitude faster than the +work I normally do. It looks like it would be useful for a non-trivial subset +of problems that are tedious. I would like to think “thank you, Thomas, for +opening my eyes to this”. + +I would like to think that, but then you wrote this: + + “I’m sipping rocket fuel right now,” a friend tells me. “The folks on my + team who aren’t embracing AI? It’s like they’re standing still.” He’s not + bullshitting me. He doesn’t work in SFBA. He’s got no reason to lie. + +Tom — can I call you Tom? — we were getting along so well! What happened? You +described AI as the second-most important development of your career. The +runner up for the most important development of your career makes other +engineers look like they're standing still? Do you not see how wildly +incoherent this is with the tone of the rest of your piece? + +Firstly, you shouldn't drink rocket fuel. Please ask your friend to write me a +nice testimonial. I'm thinking about re-applying for entrance to a clinical +neuropsychology program next year, and preventing widespread brain damage might +be the thing that gets me over the line. + +Secondly, I'm perplexed. This whole article, I thought that you were making the +case that this thing was crazy awesome. Now there's a sudden reference to some +unnamed friend, with an assurance that he isn't bullshitting you and he has no +reason to lie? Why are we resorting to your kerosene-guzzling compatriot? Why +are you telling me that he's not lying? 
Is the further implication that we
+can't trust someone in the San Francisco Bay Area on AI?
+
+Putting my psychology hat on for a second, you've also overlooked that people
+have a spectacular capacity for self-delusion. People don't just lie to get VC
+money, although this is admittedly a great driver of lying, they can also lie
+because they're wrong or confused or excited. According to my calendar, I've
+spoken to something like 150+ professionals in the past year or so from all
+sorts of industries — usually solid three hour long conversations. Many of them
+were programmers, and some of them definitely make me feel like I'm standing
+still, and in exactly 0% of cases is it because of their AI tooling. It's
+because they're better than me, and their assessment of AI tooling maps much
+more closely to the experience you actually describe.
+
+ “There’s plenty of things I can’t trust an LLM with. No LLM has any of
+ access to prod here. But I’ve been first responder on an incident and fed
+ 4o — not o4-mini, 4o — log transcripts, and watched it in seconds spot LVM
+ metadata corruption issues on a host we’ve been complaining about for
+ months. Am I better than an LLM agent at interrogating OpenSearch logs and
+ Honeycomb traces? No. No, I am not.”
+
+See, this, this I can relate to. There are quite a few problems where I make
+the assessment that my frail human mind and visual equipment are simply not up
+to the task on short notice, and then I go “ChatGPT, did I fuck up? Also please
+tie my shoelaces and kiss my boo-boo for me”, and sometimes it does!^[11]3 A
+good amount of the time wasted in software engineering goes to more advanced
+variants of when you're totally new and do things like forgetting errant ;s.
+You just need an experienced friend to lean over your shoulder and give the
+advanced version of “you are missing a colon”, and this might remove five
+hours of pointless slogging. 
LLMs make some of that available on tap, instantly and tirelessly, +and this is not to be sneezed at. + +But rocket fuel? What made you think that this was a reasonable thing to +re-print if it had to be followed by “Bro wouldn't lie to me”? + +I know quite a few people I respect that use AI in their own programming +workflows, and they have considerably less exuberant takes. + +A few weeks ago, I was chatting with [12]Nat Bennett about AI in their own +programming, as I was trying to reconcile Kent Beck's^[13]4 love for LLM-driven +programming with my own lukewarm experience. + + Me: “Are you finding it [AI] good enough that it might be a mug's game to + program unassisted?” + Nat: “I usually switch back and forth between prompting and writing code by + hand a lot while I'm working. [...] But like, yesterday it fixed the + biggest performance problem in my application with a couple of sentences + from me. This was a performance problem that I already kind of knew how to + solve! It also made an insane decision about exceptions at the same time.” + +That's neat, I respect it, but also note that Nat did not say “Yes, use LLMs, +you fucking moron”. + + Nat (later): “I do think, by the way, that it is entirely possible that + we're all getting punked by what's essentially a magic mirror. Which is + part of why I'm like, only mess with this stuff if it's fun.” + +The magic mirror line is exactly the sort of thing that [14]Bjarnason hinted at +in the article linked at the very beginning, arrived at independently. + +Or Jesse Alford's assessment of the steps required to give it a fair trial: + + “I think you basically want to tell it what you want to add and why, like + you were writing a story for your team. Then you ask it to make a plan to + do this, and if that plan seems likely to produce the results you want, you + ask it to do the thing. [15]Stefan Prandl and Nat have actually done this + kind of thing more than I have. 
You should be ready to try repeatedly.” + (emphasis mine) + +This sounds cool! But being ready to try repeatedly? This does not sound like +rocket fuel. + +Or Stefan Prandl: + + “Updates on the agentic machine. It has spent 5 hours attempting to fix + errors in unit tests. It has been unsuccessful. + + I don't think people tend to talk about the massive wastes of time and + resources these things can cause, so, just keeping reporting on the LLM + systems honest.” + +Is it not, perhaps, a possibility that your friend is excited by a shiny new +tool and has failed to introspect adequately as to their true productivity? +There are, after all, literally hundreds of thousands of people that think +playing Jira Scrabble is an effective use of their time, and they also do not +have a reason to lie to me about this. Nonetheless, every year, I must watch +sadly as they lead my dejected peers to the Backlog Mines, where they will +waste precious hours reciting random components of the Fibonacci sequence. + +What I'm getting at is all the people that make me feel like I'm “standing +still”, including most of the ones I know that use AI and I like enough to ask +for mentorship from, have never indicated that incorporating AI into my +company's development workflow is at all a priority, and they won't even talk +to me about it if I don't nag them. + +However, some of them do live in the Bay Area, and I am willing to align with +you on the idea that this makes them lying snakes. + +IV. Is AI Getting The Right Level Of Attention? + + “But AI is also incredibly — a word I use advisedly — important. It’s + getting the same kind of attention that smart phones got in 2008, and not + as much as the Internet got. That seems about right.” + +Tomothy — can I call you Tomothy? — this raises some very important questions, +ones which I'm sure the whole audience would be very keen on getting answers +to. Namely, where is the portal to the magical plane that you live in? 
Answer +me, you selfish bastard! + +I have been assured that there was a phase in the IT world where, upon bringing +any project to management, they would say “Why isn't there a mobile app in this +project?”. This is because many people are [16]very credulous, especially when +they are spending other people's money. + +However, I still find myself wanting to make the lengthy journey to the pocket +dimension that you inhabit, because the hype I've seen around AI is like, +fucking next level, and I want out. We are at Amway-Megachurch-Cult levels of +hype. The last time I attended a conference, the [17]room was full of +non-technicians paying lip service to the Holy Trinity Of Things They Can't +Possibly Understand — blockchain, quantum, AI. + +Executives and directors from around the world have called me to say that they +can't fund any projects if they don't pretend there is AI in them. Non-profits +have asked me if we could pretend to do AI because it's the only way to fund +infrastructure in the developing world. Readers keep emailing me to say that +their contracts are getting cancelled because someone smooth-talked their CEO +into believing that they don't need developers. I was miraculously allowed onto +some mandated “Professional Development For Board Members On AI” panel hosted +by the Financial Times^[18]5, alongside people like Yahoo's former CDO, and the +preparation consisted of being informed repeatedly that the audience has no +idea what AI does but is scared they'll be fired or sued if they don't buy it. + +I wish, oh how I wish that it was like other hype cycles, but presumably not +many people were walking around saying that smartphones are going to solve +physics and usher in the end of all human labor, [19]real things Sam Altman has +said. I personally know people from university whose retirement plan is “AI +makes currency obsolete before I turn 40”. 
I understand that you don't care if
+that happens — and that is okay, it is irrelevant to how the technology
+performs for you at work now. But given that you can find thousands of people
+saying these things by glancing literally anywhere, how can you also say the
+technology is getting the correct amount of attention? This is wild.
+
+Tomothy, my washing machine has betrayed me. I turn it on and it says
+“optimizing with AI” but it never explains what it is optimizing, and then I
+still have to pick all the settings manually.
+
+[image: washing machine display reading “optimizing with AI”]
+
+Please, please, please, let me into your blissful paradise, I'll do anything.
+
+V. These Executives Are Grifting Or Incompetent
+
+ “Tech execs are mandating LLM adoption. That’s bad strategy. But I get
+ where they’re coming from.”
+
+Tomtom — can I call you Tomtom? — do you get where they're coming from? Do you
+really? Re-read what you just wrote and repent for your conciliatory ways.
+
+If you, a person I believe is not a tech executive and is bullish on the
+technology, can identify that this is bad strategy in presumably ten
+milliseconds of thought, what does that say about the people who are doing
+this?
+
+Where they're coming from is:
+
+a ) trying to stoke their share prices via frenzied speculation
+b ) trying to generate hype so they can IPO and scam some gamblers
+c ) being fucking morons
+
+Sorry, those are the only reasons for engaging in obviously bad strategy. It's
+so obvious that you didn't bother explaining why it's bad strategy because you
+know that we all know. They have misaligned incentives or do not know what
+they're doing. This isn't like a grandmaster losing to Magnus Carlsen because
+they played a subtly incorrect variant of the Sicilian^[20]6 thirty-five moves
+ago. We're talking about supposedly world-class leaders sitting down and going
+“I always move the horsies first because it's hard to see the L-shapes”. 
+They're either playing a different game, i.e Hyperlight Grifter, or they're +behaving like goddamn baboons. + +This is an inescapable conclusion if you accept that it is obviously bad +strategy, which you did. Welcome to the Logic Thunderdome, pal, where two men +enter, one man dies, and the other feels that he wasted valuable calories on +the murder. + +Good strategy could perhaps be something like gently suggesting people +experiment with LLMs in their workflows, buying a bunch of $100 licenses, and +maybe paying for some coaching in the effective usage of these tools if you are +somehow able to navigate the ten thousand “thought leaders” that were +cybersecurity experts a year ago, and real estate agents before that. Then +instruct everyone to shut up and go back to doing their jobs. + +Whenever someone announces they are going AI first, I am the person that gets +the emails from their engineering teams and directors describing what is really +happening in-house. I've received emails that are probably admissible as +evidence of intent to defraud investors. You have not accurately perceived +where these people are coming from, because they are coming from the +ever-lengthening queue outside the gates of Hell. + +VI. Killing Strawmen + + Do you like fine Japanese woodworking? All hand tools and sashimono + joinery? Me too. Do it on your own time. + +Tomahawk Missile – can I call you Tomahawk Missile? – I agree that people are +very miscalibrated on GenAI in both directions. Did you know the angriest +message I got about my stance on AI is that I was too pro-AI? I also cringe +whenever someone says “stochastic parrot” or “this is just pattern-matching and +could never be conscious”. We actually have no idea what makes things +conscious, and we have very little idea re: how human brains work. It is +totally plausible to me that we are stochastic parrots and it simply doesn't +feel that way from the inside. 
+ +I don't talk about those people very much for two reasons. + +One, even explaining the abstract concept of [21]qualia is like, super hard, +let alone talking about [22]the hard problem of consciousness. Some things are +best left to professionals and textbooks. + +Two, while these are silly positions that deserve refutation, they are also not +at all interesting. That doesn't make it wrong to refute them, but they are +also not impactful. The only reason that I think it's worth addressing the +other side of the Crazy Pendulum, i.e, my washing machine doing AI, is that +they have different effects in the world. + +And I'm not even talking about environmental impacts or discrete harms caused +by AI, I'm talking about the fact it's impossible to talk about anything else. +GenAI has sucked the air out of every room, and no one can hear you scream +reason in a hard vacuum. + +The former category of maximalist AI-haters exist on Mastodon, which most +executives do not know exists and certainly do not use to guide the allocation +of society's funding. The latter category of trembling AI sycophants is +literally killing people — I know of a hospital in Australia that is wasting +all their time on AI initiatives, which caused them to leave data quality +issues unfixed, which caused them to under-report COVID deaths, which caused a +premature lifting of masking policies. How many old people go through a major +hospital per day? Do the math and riddle me this, Tomahawk: which one of these +groups should I be worried about? + +So, you know, when you hear someone make a totally economically irrelevant +argument about the craft? Putting aside all the second-order effects in how +changing the way you program might change the way you develop as an engineer, +let's say that these people aren't thinking of that, and are just being dumb. A +person turning up to a CEO and going “no, don't do the cheap thing, pay me to +do stuff because of craftsmanship”. 
+ +I will concede that you did not create that strawman, because it is a real +viewpoint that people hold. But you have certainly walked out of the debate +hall, decapitated a scarecrow, and declared victory. + +VII. Why The Half-Hearted Defense Of Artists? + + “Important caveat: I’m discussing only the implications of LLMs for + software development. For art, music, and writing? I got nothing. I’m + inclined to believe the skeptics in those fields. I just don’t believe them + about mine.” + +Tomtom — I've decided I like Tomtom — I don't understand why you've ceded +authority on these artistic endeavors. LLMs are better for writing than they +are for programming!^[23]7 It is much harder to complect most forms of written +content into such a state that you will cause slowdowns further down the line +than it is to screw up a codebase. It basically requires you to write a +long-form novel, and even then you will probably not produce an unhandled +exception and crash production in a manner that costs millions of dollars. +You'll just produce Wind And Truth^[24]8. If you're inclined to believe people +who are skeptical of AI writing, it probably follows that you should also not +be so flabbergasted by programmers having doubts. + +It sounds like this is a sort of not-that-sincerely-felt handwave at vast +economic harm being inflicted on a relatively poor (by programmer standards) +demographic. And then you go on to say this anyway! + + “We imagine artists spending their working hours pushing the limits of + expression. But the median artist isn’t producing gallery pieces. They + produce on brief: turning out competent illustrations and compositions for + magazine covers, museum displays, motion graphics, and game assets.” + +So are we leaving the arts out of it or not? Should I or should I not just get +GenAI to produce all the pictures I need if I am being a greedy capitalist? 
I'm +not talking about morals, I'm talking about whether it is selfishly rational to +use GenAI to make my content more appealing. + +In your own article, the art across the top banner was clearly attributed to +[25]Annie Ruygt, and it looks totally different, to my eyes, to the [26]AI slop +people are sticking on their websites. If it turns out Annie used GenAI for +that, then I will be extremely owned. + +In any case, the artwork on her website is [27]gorgeous, and she describes +herself as producing work for Fly.io. Despite this, I am willing to collaborate +with you to write some hatemail describing her work as “competent but unworthy +of a gallery”, and my consultancy is also happy to tell her that she's fired. +And while we’re at it, we'll fire whoever made the hire for gross inefficiency +in the age of AI. + +VIII. End + +Wait, can I call you Tommy Gun? + +PS: + +Obligatory link [28]to About Us page that I forced my team to let me write, to +justify doing all this other writing during work hours. + +━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ + + 1. But writer-to-writer, I think it's well-written. If it makes you feel + better, Thomas, Bjarnason also objects vehemently to my tone and style. + However, he still links people to my writing because my points are not + slop! [29]↩ + + 2. I am famous for my very restrained and calm takes. [30]↩ + + 3. Also, I think I've become too sensitive about coming across as anti-AI, + because sometimes my team sits around while an LLM wastes tons of our time + while I go “no, no, this is really easy, it'll get it”, but I will accept + that this is Problem Exists Between Keyboard And Chair. [31]↩ + + 4. I do not sip rocket fuel, but I slam Kent Beck's Kool-Aid. [32]↩ + + 5. How do board members do their professional diligence on AI before spending + billions of dollars on it? They join the call, leave their screens on, and + walk away until they get credited for the hours. 
Maybe we are all the same,
+ deep down. [33]↩
+
+ 6. All my hopes of becoming even a mediocre chess player were dashed when I
+ discovered there is an opening called the Hyperaccelerated Dragon,
+ preventing me from ever wanting to do anything else with any enthusiasm.
+ [34]↩
+
+ 7. This is not quite accurate, but broadly true. On one hand, books don't stop
+ working if you've got clunky prose. On the other hand, if books stopped
+ working when you had clunky prose, then you'd never ship clunky prose, a
+ guarantee that programs can provide for some set of errors. But, broadly
+ speaking, yeah, LLMs churn out adequate — i.e, stuff generally not good
+ enough for me to read — prose without needing a billion agents, special
+ tooling and also have minimal risk of catastrophic failure. [35]↩
+
+ 8. Figured I'd start a feud with Brandon Sanderson while I'm at it. Please
+ note that I'm not saying he used GenAI to write, I'm saying some of the
+ dialogue was horrendous. What were you thinking, buddy? [36]↩
+
+Subscribe via [40]RSS / [41]via Email.
+
+Powered by [42]mataroa.blog. 
+ + +References: + +[1] https://ludic.mataroa.blog/ +[2] https://fly.io/blog/youre-all-nuts/ +[3] https://www.baldurbjarnason.com/2025/trusting-your-own-judgement-on-ai/ +[4] https://ludic.mataroa.blog/blog/contra-ptaceks-terrible-article-on-ai/#fn:1 +[5] https://ludic.mataroa.blog/blog/i-will-fucking-piledrive-you-if-you-mention-ai-again/ +[6] https://www.youtube.com/watch?v=lDVtXSpm378 +[7] https://ludic.mataroa.blog/blog/contra-ptaceks-terrible-article-on-ai/#fn:2 +[8] https://www.youtube.com/watch?v=sQYXZCUvpIc +[9] https://www.linkedin.com/in/thomasptacek/ +[10] https://www.linkedin.com/video/live/urn:li:ugcPost:7338958277646393345/?originTrackingId=98BFbYghSVqcncNLBFxvDA%3D%3D +[11] https://ludic.mataroa.blog/blog/contra-ptaceks-terrible-article-on-ai/#fn:3 +[12] https://www.simplermachines.com/ +[13] https://ludic.mataroa.blog/blog/contra-ptaceks-terrible-article-on-ai/#fn:4 +[14] https://www.baldurbjarnason.com/2025/trusting-your-own-judgement-on-ai/ +[15] https://www.linkedin.com/in/redezem/ +[16] https://ludic.mataroa.blog/blog/brainwash-an-executive-today/ +[17] https://ludic.mataroa.blog/blog/an-empty-hall-of-smiling-assassins/ +[18] https://ludic.mataroa.blog/blog/contra-ptaceks-terrible-article-on-ai/#fn:5 +[19] https://www.youtube.com/shorts/UM3xV8IyE70 +[20] https://ludic.mataroa.blog/blog/contra-ptaceks-terrible-article-on-ai/#fn:6 +[21] https://plato.stanford.edu/entries/qualia/ +[22] https://iep.utm.edu/hard-problem-of-conciousness/ +[23] https://ludic.mataroa.blog/blog/contra-ptaceks-terrible-article-on-ai/#fn:7 +[24] https://ludic.mataroa.blog/blog/contra-ptaceks-terrible-article-on-ai/#fn:8 +[25] https://annieruygtillustration.com/ +[26] https://katecarruthers.com/2024/06/16/ai-autonomous-everything/ +[27] https://thespacioustarot.com/ +[28] https://www.hermit-tech.com/about +[29] https://ludic.mataroa.blog/blog/contra-ptaceks-terrible-article-on-ai/#fnref:1 +[30] https://ludic.mataroa.blog/blog/contra-ptaceks-terrible-article-on-ai/#fnref:2 
+[31] https://ludic.mataroa.blog/blog/contra-ptaceks-terrible-article-on-ai/#fnref:3 +[32] https://ludic.mataroa.blog/blog/contra-ptaceks-terrible-article-on-ai/#fnref:4 +[33] https://ludic.mataroa.blog/blog/contra-ptaceks-terrible-article-on-ai/#fnref:5 +[34] https://ludic.mataroa.blog/blog/contra-ptaceks-terrible-article-on-ai/#fnref:6 +[35] https://ludic.mataroa.blog/blog/contra-ptaceks-terrible-article-on-ai/#fnref:7 +[36] https://ludic.mataroa.blog/blog/contra-ptaceks-terrible-article-on-ai/#fnref:8 +[37] https://akols.com/previous?id=ludic +[38] https://akols.com/ +[39] https://akols.com/next?id=ludic +[40] https://ludic.mataroa.blog/rss/ +[41] https://ludic.mataroa.blog/newsletter/ +[42] https://mataroa.blog/ diff --git a/static/archive/nazhamid-com-8ujuab.txt b/static/archive/nazhamid-com-8ujuab.txt new file mode 100644 index 0000000..4367651 --- /dev/null +++ b/static/archive/nazhamid-com-8ujuab.txt @@ -0,0 +1,97 @@ + • [1] Naz Hamid + • [2]Journal + • [3]Links + • [4]About + • + +[7]Just One Good Thing + +Today’s culture seems to reward and celebrate the hustle. The neverending idea +that one should always be productive, working, producing, shipping. + +At times, I’ve compared myself to peers, colleagues, and friends. Places like +LinkedIn and other social media make me cringe: everyone performing in favor of +being seen as someone with their shit together. Impostor syndrome strikes. On +the other end, workingworkingworking results in burnout and feeling like +nothing was accomplished anyway. + +This followed me for decades, but over the last decade I’ve begun to let go in +many ways and focused on my immediate people and myself. + +This is not as easy to do as we’d like, as stress, obligations, and pressure +reveal themselves in the form of externalities: things out of or beyond our +control. + +In the last year, a mindset shift and approach appeared as a very simple idea: +just do one thing, that I want to do today. 
+ +The one thing can be small or big, easy or labored, fleeting or long. I carve +out time to go play drums for two hours, go for a bouldering session, do a +shorter 20 minute run, read a page of a book, eat something I’m really excited +about, and more. Even on the most difficult day, I can adjust and find the +smallest thing that I am excited about and do it. + +I needed some way to change my outlook. Developing a habit that is less about +more and embracing the simple and ordinary has brought me a semblance of peace. +It’s allowed for adaptability and resilience when the days go sideways and joy +and delight on days that go smoothly. + +Just. One. Good. Thing. + +Jul 21 2025 ⋅ [8]personal + +Related + + • [9] Boy Meets Girl + ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ + Oct 26 2004 + • [10] Music That Got Me Through 2020 + ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ + Jan 31 2021 + • [11] On Racism + ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ + Mar 17 2021 + +[12]Prev +Beyond Curiosity + +I write an occasional newsletter called Weightshifting. It was originally +comprised of design, culture, and travel notes, morphed into [13]two seasons of +overland travel, and has now returned to its original ideal of observations in +the field. You can subscribe below. + +Email address [14][ ] [15][Subscribe] +[logotype] + +© 2000 - 2025 Naz Hamid. + +Get some RSS feeds: [16]Journal or [17]Links. You can email me at my [18]first +name at this domain. I’m primarily on [19]Mastodon, occasionally feel forced to +pop into [20]LinkedIn because professional reasons (!?), and am increasingly +not logging movies on [21]Letterboxd. This site is [22]climate-friendly, and +last built at Jul 31, 2025, 9:10 PM PDT. 
+ +[23]Back to top + + +References: + +[1] https://nazhamid.com/ +[2] https://nazhamid.com/journal +[3] https://nazhamid.com/links +[4] https://nazhamid.com/about +[7] https://nazhamid.com/journal/just-one-good-thing/ +[8] https://nazhamid.com/topic/personal/ +[9] https://nazhamid.com/journal/boy-meets-girl/ +[10] https://nazhamid.com/journal/2020-music/ +[11] https://nazhamid.com/journal/on-racism/ +[12] https://nazhamid.com/journal/beyond-curiosity/ +[13] https://nazhamid.com/newsletter +[16] https://nazhamid.com/feed.xml +[17] https://nazhamid.com/links.xml +[18] https://nazhamid.com/journal/just-one-good-thing/# +[19] https://mastodon.social/@nazhamid +[20] https://www.linkedin.com/in/nazhamid/ +[21] https://letterboxd.com/weightshift/ +[22] https://www.websitecarbon.com/website/nazhamid-com/ +[23] https://nazhamid.com/journal/just-one-good-thing/#top