NSWE

All Posts

  • Published on
    The jump from a text-heavy menu to an AI-enhanced visual guide is more than a UX trick; it is a concrete example of Software 3.0. Building on Andrej Karpathy’s Sequoia talk, this post explores the transition from Software 1.0 (explicit code) to Software 2.0 (trained neural networks) to Software 3.0 (LLMs as interpreters). As models increasingly operate directly on user context, many “middleman” apps and interfaces will disappear. The engineer’s value shifts from writing glue code to directing outcomes with judgment, taste, and systems-level understanding.
  • Published on
    AI-generated outputs from tools like Claude and GitHub Copilot are not independent artifacts; they are the direct result of how the tools are guided through prompts, context, and constraints. This means the engineer fully owns both the strengths and the flaws of the output. Selective attribution, where success is claimed and failure is blamed on the model, is inconsistent and undermines standards. Effective use of AI requires deliberate input, rigorous review, and full accountability, with the understanding that anything produced and shipped is ultimately the engineer’s responsibility.
  • Published on
    Open-source AI in cybersecurity is accelerating both offense and defense. Small teams now have capabilities that previously required large budgets, while companies simultaneously rethink security headcount and role design. This post breaks down the opportunity, the disruption risk, and the practical adaptation strategy for engineers and security professionals who want to stay ahead.