• Software 3.0 is speeding up coding - but delivery is a different story

    From TechnologyDaily@1337:1/100 to All on Tuesday, April 21, 2026 11:45:24
    Software 3.0 is speeding up coding - but delivery is a different story

    Date:
    Tue, 21 Apr 2026 10:29:45 +0000

    Description:
    AI writes code faster than ever but smart teams know speed isn't the whole story.

    FULL STORY ======================================================================

    AI has made writing code dramatically faster. That much is undeniable. But what I consistently see across teams building mobile and digital platforms is that the speed gain rarely translates directly into faster delivery. It often just shifts the bottleneck.

    Code that once took days to write now appears in hours, only to queue up in code review or wait for testing. The coding phase accelerates; everything around it struggles to keep pace.

    Jerzy Biernacki, Chief AI Officer at Miquido

    AI researcher Andrej Karpathy described this shift as Software 3.0: instead of writing every line manually, teams now describe what they want the system to do and let AI produce large sections of the implementation.

    In a recent interview, Karpathy revealed that by late 2024 his own working ratio had flipped, from writing roughly 80% of code himself to delegating 80% to agents. The new verb, he argues, is no longer coding but manifesting, expressing intent to systems that implement it.

    The agentic era is here, as tools like Claude Code, released in May 2025, and OpenAI's Codex agent, released in October 2025, have moved far beyond autocomplete. They can now autonomously plan, write, and debug entire features.

    The initial phase of any project feels almost frictionless. You can go from a vague idea to a working proof of concept in a single afternoon. However, complications appear once that initial version has to fit into the actual product.

    The new code still needs to work with existing services, handle real user traffic, and stay reliable as the rest of the platform evolves. Faster generation doesn't remove these steps. It moves them downstream, and concentrates them.

    Engineering teams end up spending more time reviewing, integrating, and stabilizing output that was produced quickly but without full visibility into the wider system. Code review queues grow. Test suites have to work harder.

    Features that look complete in isolation reveal subtle inconsistencies only once everything is connected and running under realistic load.

    There is a subtler challenge that rarely gets discussed: what developers actually do while they wait. Working with an agent means delegating a task and then sitting with downtime.

    The developers who use that time well, by preparing the next prompt, spinning up a parallel agent on another part of the system, or reviewing architecture, are seeing compounding gains. Those who don't use it well lose their deep-work rhythm entirely. In practice, many developers also gravitate toward AI tools to reduce effort rather than multiply output.

    Individual efficiency rises, but team delivery velocity doesn't always follow. I see this regularly in our own teams and in the organizations we work with. Managing this gap requires active project leadership, clear expectations, and a genuine shift in developer mindset. The tools are only half the story.

    When the whole process catches up

    The teams that do achieve real delivery acceleration across the full cycle, not just the coding phase, have redesigned how they work, not just which tools they use. Three things make the difference. First, upfront architecture investment.

    Agents produce far better output when given clear structural constraints. Investing serious time in system design before prompting pays back many times over in review and integration effort saved.
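    The idea of "clear structural constraints" can be made concrete. As a minimal sketch, the constraints might be encoded as structured data and prepended to every task an agent receives, rather than trusted to a vague prompt. All names here (ArchitectureSpec, build_agent_prompt, the layer names) are illustrative, not from any specific tool.

```python
# Hypothetical sketch: encoding architectural constraints as structured
# context for a coding agent. The spec and prompt format are illustrative.
from dataclasses import dataclass, field

@dataclass
class ArchitectureSpec:
    """Structural constraints an agent must respect."""
    layers: list = field(default_factory=list)             # allowed layers, outer to inner
    forbidden_imports: dict = field(default_factory=dict)  # layer -> layers it may not import
    conventions: list = field(default_factory=list)        # project-wide rules

def build_agent_prompt(task: str, spec: ArchitectureSpec) -> str:
    """Prepend explicit structural constraints to the task description."""
    lines = ["You must follow these architectural constraints:"]
    lines.append(f"- Layers (outer to inner): {' -> '.join(spec.layers)}")
    for layer, banned in spec.forbidden_imports.items():
        lines.append(f"- {layer} must not import from: {', '.join(banned)}")
    lines.extend(f"- {rule}" for rule in spec.conventions)
    lines.append(f"\nTask: {task}")
    return "\n".join(lines)

spec = ArchitectureSpec(
    layers=["api", "service", "repository"],
    forbidden_imports={"repository": ["api", "service"]},
    conventions=["All I/O goes through the repository layer."],
)
prompt = build_agent_prompt("Add an endpoint to list invoices", spec)
```

    The point is not the specific format, but that the design decisions are made once, up front, and then repeated verbatim to every agent instead of being rediscovered in review.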

    Second, agents checking agents. This means using dedicated review agents to check generated code for security vulnerabilities, architectural consistency, and compliance with your quality standards. These agents catch issues early before they move further down the pipeline.

    It also includes test generation agents that create tests from tester-written specifications and run them continuously. On large projects, regression testing that once took weeks of manual effort now runs in a fraction of the time.
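    The shape of such a pipeline can be sketched briefly. A production review agent would itself be LLM-backed; this stand-in uses simple static checks purely to show the structure: generated code passes through review stages, and only an empty findings list lets it proceed. All function names and the banned-import list are illustrative assumptions.

```python
# Hypothetical sketch of an "agents checking agents" pipeline: generated
# code passes through review stages before it moves down the pipeline.
# Real review agents would be LLM-backed; these are static stand-ins.
import re

def security_review(code: str) -> list:
    """Flag obvious security smells in generated code."""
    findings = []
    if re.search(r"\beval\s*\(", code):
        findings.append("use of eval()")
    if re.search(r"(?i)(password|api_key)\s*=\s*['\"]", code):
        findings.append("hard-coded credential")
    return findings

def consistency_review(code: str, banned_imports=("requests",)) -> list:
    """Flag imports the project's architecture forbids (illustrative list)."""
    return [f"forbidden import: {m}" for m in banned_imports
            if re.search(rf"^\s*import\s+{m}\b", code, re.M)]

def review_pipeline(code: str) -> list:
    """Run every review agent; an empty result lets the code proceed."""
    return security_review(code) + consistency_review(code)

generated = "import requests\napi_key = 'sk-123'\n"
issues = review_pipeline(generated)
```

    The same structure extends naturally to the test-generation agents described above: a generation stage produces tests from specifications, and the pipeline runs them continuously alongside the reviews.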

    Third, giving agents the right context and capabilities. An agent working
    from vague instructions will produce vague results. This starts with how requirements are written: well-structured product requirements documents that are precise and detailed enough for an agent to execute from, not just for humans to read and interpret.

    It extends to connecting agents to the right sources of truth: your design system so UI output stays consistent, your project management tools so agents understand current requirements, your documentation so they are not working from guesswork. This is where institutional knowledge compounds into a
    durable edge.
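    One way to picture this wiring, as a minimal sketch: a single function bundles the sources of truth into one context block the agent receives with every task. The fetch_* functions here are stubs standing in for calls to your design-system, project-management, and documentation APIs; all names and return values are hypothetical.

```python
# Hypothetical sketch: assembling agent context from the team's sources of
# truth instead of letting the agent work from guesswork. The fetch_*
# functions are stubs for real design-system, PM-tool, and docs lookups.
def fetch_design_tokens() -> str:
    return "primary color: #0A84FF; spacing unit: 8px"        # stubbed design system

def fetch_ticket(ticket_id: str) -> str:
    return f"[{ticket_id}] Add invoice export as CSV"         # stubbed PM tool

def fetch_docs(topic: str) -> str:
    return f"Docs for {topic}: exports run through ExportService"  # stubbed docs

def assemble_context(ticket_id: str, topic: str) -> str:
    """Bundle the sources of truth into one context block for the agent."""
    return "\n".join([
        "## Requirement\n" + fetch_ticket(ticket_id),
        "## Design system\n" + fetch_design_tokens(),
        "## Relevant docs\n" + fetch_docs(topic),
    ])

context = assemble_context("INV-42", "exports")
```

    The more of this plumbing a team builds, the less each prompt has to re-explain, which is how the institutional knowledge mentioned above compounds.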

    Adoption looks different depending on context. Startups are leading the charge. With funding harder to secure than a few years ago, there is real pressure to show results fast, and early-stage teams can afford to move quickly without deep security or compliance constraints. Vibe coding a first version is now simply how startups operate.

    Larger enterprises tend to move more cautiously because their systems are
    more complex, compliance requirements are tighter, and the risks to their reputation are much greater. Adoption is happening, but generated code goes through significantly more review before reaching production.

    According to the JetBrains State of Developer Ecosystem 2025 survey, 85% of developers now regularly use AI coding tools and 41% of all code written in 2025 was AI-generated. The tools are ubiquitous; the discipline around them is not.

    The changing role of engineers

    What is shifting most fundamentally is the nature of the engineering role itself. Developers are becoming system directors rather than implementers. The day-to-day work is now less about writing beautiful code and more about defining architecture, managing agent output, ensuring security, and thinking about scalability.

    The weight has moved from writing to verifying and orchestrating. Karpathy puts it precisely: the bottleneck is no longer the keyboard. Strong engineers can now work effectively in languages they have never used before. The barriers between frontend and backend are dissolving.

    Entire MVPs ship from teams of one or two people. A proof of concept that would once have taken weeks can be built in an afternoon and sent to a client the same day, something that genuinely changes competitive dynamics in
    pitches and early engagements.

    The advantages are clearest where patterns are well-established: standard integrations, repeatable workflows, and routine business logic. The further you move from that territory into complex, long-lived systems with years of accumulated context, or into questions of security and scalability, the more human judgment remains essential.

    Software 3.0 is real. The acceleration in the coding phase is genuine and significant.

    But the teams extracting the most value are not the ones generating the most code; they are the ones who have rebuilt their processes around the new reality: investing in architecture up front, using agents to verify agents, giving agents the right context to work from, and managing the human dynamics of a fundamentally changed working day.

    The bottleneck is no longer writing code. It is judgment about what to build, how to structure it, and whether what the agent produced actually belongs in a system that has to perform reliably under real conditions. That is what engineering discipline looks like in the Software 3.0 era.

    This article was produced as part of TechRadar Pro's Expert Insights channel, where we feature the best and brightest minds in the technology industry today. The views expressed here are those of the author and are not necessarily those of TechRadar Pro or Future plc.



    ======================================================================
    Link to news story: https://www.techradar.com/pro/software-3-0-is-speeding-up-coding-but-delivery-is-a-different-story


    --- Mystic BBS v1.12 A49 (Linux/64)
    * Origin: tqwNet Technology News (1337:1/100)