
VIBE Coding

Introduction

Vibe coding is an emerging programming paradigm in which developers work continuously with AI code assistants to build software in a conversational, iterative way. Instead of manually writing every line, the programmer describes goals or fixes in natural language, and a large language model (LLM) generates or modifies the code accordingly. The term was coined in early 2025 by AI researcher Andrej Karpathy, who described “fully giv[ing] in to the vibes” of AI-generated code. In practice, vibe coding shifts the human role from writing syntax to guiding, testing, and refining the AI’s output. Through 2025, vibe coding has gone from a viral idea to a significant trend in software development, promising faster delivery and broader access to programming – but also raising challenges around code quality, security, and trust.

 
Enterprise Adoption and Mission-Critical Use Cases

Large software companies and enterprises are actively exploring vibe coding to boost developer productivity. By 2025, many organizations have integrated AI coding assistants (like GitHub Copilot or internal LLMs) into their development workflows. For example, ANZ Bank (Australia) was an early adopter of GitHub Copilot and reported that over 7% of its code was AI-generated in a six-month period, a figure expected to rise. Citi likewise announced plans in 2024 to roll out Copilot to its entire 40,000-person developer workforce. Table 1 highlights these and other notable adoption examples:

| Organization / Example | Date | Key VIBE Coding Adoption Highlight |
| --- | --- | --- |
| ANZ Bank (Australia) | 2024 | Early Copilot adopter; ~7% of code in mid-2024 was AI-generated. |
| Citi (U.S.) | 2024 | Deploying GitHub Copilot to ~40k developers enterprise-wide. |
| Y Combinator Startups | 2025 | ~25% of Winter 2025 startups had ~95% of code AI-generated. |
| Solo Dev Game (fly.pieter.com) | 2025 | Single developer used AI to build a flight sim, reaching $1M ARR in 17 days. |

Large tech firms (e.g. Microsoft, Google, IBM) similarly leverage AI-assisted coding internally, though often under strict guidelines to ensure quality and security. Enterprises see productivity gains as a primary driver – early case studies showed AI pair-programmers can speed up coding significantly. Y Combinator’s CEO Garry Tan even touted moving from “10x speedups” to “100x productivity gains” in months, allowing teams one-fifth the size to accomplish the same work. Such claims, while optimistic, reflect the pressure on companies to adopt AI to remain competitive. Companies also view vibe coding as a way to optimize resources – enabling smaller engineering teams to deliver more and democratizing development by empowering non-traditional coders (like domain experts or product managers) to contribute via natural language.

That said, enterprises remain cautious about mission-critical applications. Industry analysts warn that vibe coding lacks the rigorous guarantees needed for safety-critical or regulated systems. In high-stakes software (financial transaction engines, medical devices, aerospace control systems, etc.), companies still rely on traditional development with thorough review and testing – AI assistance is applied only in peripheral ways. For example, vibe coding might be used to prototype a user interface or internal tool, but not to auto-generate the core of a life-critical algorithm without human scrutiny. Enterprises often limit AI coding to non-sensitive code or require that all AI-generated code be reviewed and tested by experienced engineers before merging. As Simon Willison put it, “vibe coding your way to a production codebase is clearly risky” – professional teams emphasize maintainable, correct code, regardless of who (or what) wrote it. In 2025, the prevailing approach at large firms is to embrace AI coding for its speed, but to “trust, yet verify” every output when it comes to mission-critical software.

Startups and Rapid Innovation with VIBE Coding

Startups have arguably been the most aggressive adopters of vibe coding. With limited resources and time, new companies use AI copilots to accelerate product development and reduce the need for large engineering teams. Y Combinator reported that in its Winter 2025 batch, 25% of startups had codebases ~95% generated by AI. This is a remarkable shift – it means many new products are being built with only minimal human-written code. The efficiency gains let startups reach milestones with far fewer employees than historically possible. In fact, some early-stage companies in 2024–25 achieved $1–10 million in revenue with under 10 employees, “something that’s really never happened before in early stage venture”. VIBE coding is a key enabler of these “tiny but mighty” startups, allowing a couple of founders to build software that used to require dozens of engineers.

One vivid example is the story of Pieter Levels’ flight simulator game. Pieter, a solo entrepreneur with no prior game development experience, used an AI-powered IDE (Cursor) and natural language prompts to create a 3D multiplayer flight sim (fly.pieter.com) in just 3 hours. He simply described the desired game (“make a 3D flying game in a web browser”) and let the AI generate the code, tweaking through dialogue. The result went viral – within 17 days the game had 320,000 players and was earning about $87k in monthly revenue (>$1M ARR) via in-game ads. This “vibe-coded” product, built almost entirely by one person guiding an AI, demonstrates how startups (or even individual creators) can launch viable products at breakneck speed. It’s given rise to the notion of the “Minimum Vibable Product” (MVP) – an initial product version built largely through AI assistance, functional enough to attract users and even revenue. Entrepreneurs increasingly aim to quickly spin up an MVP via vibe coding, then iterate based on market feedback.

Beyond games, startups are using vibe coding to build SaaS apps, mobile apps, and internal tools rapidly. Non-engineer founders can prototype ideas by describing features to an AI (often using plain English). For instance, journalist Kevin Roose (a self-professed non-coder) built a “LunchBox Buddy” app by conversing with an AI co-developer, simply telling it what he wanted. Other novice developers have created everything from retro-style video games to personalized AI assistants using conversational code generation tools. This trend suggests software creation is becoming more accessible – a savvy individual with a problem to solve can “talk” an application into existence without deep programming expertise. In 2025, countless startups are effectively “AI-first” in their development approach, relying on LLMs to handle the heavy lifting of coding while they focus on product vision and domain knowledge.

Tools and Platforms Enabling VIBE Coding

A rich ecosystem of tools, platforms, and frameworks has emerged to support vibe coding workflows. These range from enhanced IDEs with built-in AI, to powerful code-generating chatbots, to agent frameworks that can autonomously write and refactor code. Some of the notable categories and examples include:

  • AI Pair Programming Assistants: The most widespread tools are AI coding assistants integrated into editors. GitHub Copilot, launched in 2021 and continually improved, is a prime example. Copilot suggests code completions and entire functions in real-time as developers code or even when they ask questions in natural language. By 2025, Copilot has a chat mode (“Copilot Chat”) that developers can consult for bug fixes or generating unit tests. Competing products like Amazon CodeWhisperer and Google’s Codey (in Google Cloud) offer similar code suggestion capabilities trained on vast code corpora. These assistants act like “pair programmers”, watching the context and offering help – allowing a single developer to effectively work with an AI partner. Studies in enterprise settings have shown such tools can improve productivity and even code quality when used carefully.

  • AI-First IDEs and Editors: A new breed of development environments is built around AI from the ground up. Cursor (by AnySphere) is a standalone code editor forked from VS Code that tightly embeds a conversational agent into the coding workflow. Developers can type or speak commands like “add a function for user login” and Cursor will generate and insert the code, or even execute high-level tasks in “agentic mode” with minimal hand-holding. Similarly, Windsurf (from the Codeium team) brands itself as the first “agentic IDE”, with deep project-wide context awareness and the ability to coordinate multi-file edits intelligently. These IDEs often include features like voice input, global codebase understanding, and one-click running of generated code – aiming to keep the developer in a creative “flow state” while the AI handles boilerplate and syntax. They blur the line between writing code and instructing an intelligent agent.

  • Editor Extensions and Autonomous Agents: For developers who prefer traditional IDEs, there are extensions that add vibe coding capabilities. Cline is an open-source VS Code extension that can plan and execute development tasks via natural language commands. It operates in a two-phase approach: a Plan Mode where it discusses design and proposes a solution (without writing code), and an Act Mode where it implements the agreed plan into code under human supervision. This ensures the developer stays in control (no surprise code changes). Roo Code goes a step further – it’s an extension that acts as an autonomous junior developer, cycling through plan, code, run, and debug steps with minimal intervention. Roo Code can even adopt different “personas” like Architect or QA, to analyze the code from different angles during its autonomous loop. These agent-style tools are experimental but point towards a future where parts of coding (especially trivial or repetitive tasks) can be delegated entirely to an AI agent running inside your IDE.

  • Natural Language App Builders (No-Code AI): Another class of platforms lets users build entire applications by describing them, without writing any code manually. For example, Lovable and Vitara (new in 2025) provide browser-based studios where you can say “Create a web app with a React frontend and a Supabase backend for a todo list” and the platform will generate the project structure, code, and even deploy it. These tools combine the concepts of no-code/low-code platforms with LLM-powered generation. They handle integration with databases, authentication, and deployment while giving the user high-level control via chat. Such platforms often emphasize “full code ownership” (letting users download or modify the code) and support exporting to standard frameworks. While currently they may be limited to specific tech stacks (e.g. React + Supabase), their appeal is strong for startups and non-developers who want to materialize an idea quickly. We are also seeing niche AI coding tools like Sweep AI, which integrates with GitHub to automatically generate pull request fixes for issues, and Devin (by Cognition AI), an agent positioned as an autonomous AI software engineer that can plan and carry out entire coding tasks. Dozens of such tools exist, and more are launching each month as the demand for AI-driven development grows.

 

Notably, nearly every major developer tool vendor has added AI features by 2025. JetBrains introduced AI assistance in preview for its popular IDEs (e.g. IntelliJ, PyCharm), and Visual Studio has built-in AI-powered IntelliCode. This ubiquity of AI tooling means vibe coding is increasingly accessible regardless of one’s environment. However, developers must still apply software engineering discipline on top of these tools. Experts recommend treating AI suggestions as a first draft – to be reviewed, tested, and improved by the human developer. As one guide put it, “coding without solid engineering practices can be a real vibe killer”. Therefore, new best practices and “vibe coding guidelines” have emerged to help teams use these tools effectively (for example, setting up AI code review checks, maintaining clear documentation even for AI-written code, and preventing sensitive code from being sent to external APIs). The tooling is powerful, but how it’s integrated and governed determines whether vibe coding leads to maintainable, high-quality software or a tangle of AI-generated bugs.
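One of those guardrails – keeping sensitive code from being sent to external APIs – can be approximated with a prompt-sanitizing filter run before any code leaves the developer’s machine. A minimal sketch, with illustrative (and far from exhaustive) secret patterns:

```python
import re

# Patterns for common secret formats (illustrative only, not exhaustive).
SECRET_PATTERNS = [
    re.compile(r"(?i)(api[_-]?key|secret|token|password)\s*[:=]\s*['\"][^'\"]+['\"]"),
    re.compile(r"AKIA[0-9A-Z]{16}"),  # shape of an AWS access key ID
]

def redact(source: str) -> str:
    """Replace likely secrets with a placeholder before sending code to an LLM API."""
    for pattern in SECRET_PATTERNS:
        source = pattern.sub("[REDACTED]", source)
    return source

snippet = 'API_KEY = "sk-live-1234567890abcdef"\nprint("hello")'
print(redact(snippet))
```

A real deployment would pair pattern matching with entropy checks and an allowlist, but even this simple gate illustrates how governance can sit between the editor and the model.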

 
Integration into Development Workflows and Pipelines

A critical aspect of vibe coding’s evolution is how it integrates into broader development workflows. Rather than existing in a vacuum, AI code generation is being woven into each stage of the software lifecycle:

  • AI-Augmented Pair Programming: In day-to-day development, a common pattern is human + AI pair programming. Developers “pair” with an AI agent, whether through an in-IDE chatbot or an auto-complete engine. This is essentially an always-available collaborator that can suggest code, explain errors, or even brainstorm solutions in natural language. Companies like Microsoft have noted that this shifts the developer’s job more towards code review. Instead of writing from scratch, engineers spend more time reading AI-generated code and validating it. Some teams formally adopt “AI partner” as part of their workflow – e.g. one engineer writes a spec or high-level prompt, then the AI writes an implementation, then a second engineer reviews and refines it. This resembles a pair programming rotation, except one “partner” is non-human. It speeds up writing boilerplate or doing repetitive refactors, while the humans focus on logic and verification.

  • Code Review and Quality Assurance: Vibe coding has spurred new tools for code review and QA integration. Linters and CI pipelines now often include AI-based static analysis that can catch common errors in AI-written code. There are GitHub apps (like Sweep or Amazon CodeGuru) that will automatically comment on a pull request with potential issues or even propose fixes using an LLM. Teams are also experimenting with AI-in-the-loop testing: for instance, when a CI test fails, an AI could analyze the failing test and propose a code patch to fix the bug. Early agent frameworks such as Patchwork aim to automate grunt work like opening PRs for minor bug fixes or dependency updates using AI. While full automation is not yet widespread, these assistive behaviors are increasingly common. Developers might receive a GitHub issue description and invoke an AI to draft the solution code, then iterate. The AI can even generate the unit tests for the new code, which the developer then approves and merges. In essence, vibe coding is expanding from the editor to the repository and CI/CD level.

  • Continuous Integration & Deployment (CI/CD): In 2025 we see AI being used to maintain the DevOps pipeline itself. Build configs, deployment scripts, and infrastructure-as-code can be managed with natural language prompts to AI tools. For example, a developer could “vibe code” a GitHub Actions workflow by describing the desired CI steps, rather than writing YAML by hand. Testing is also enhanced by AI – LLMs can generate test cases based on code or even based on past bugs to increase coverage. Some companies incorporate LLM-based code explainers in their CI, so when a build fails or a security scan flags something, an AI generates a report explaining the issue in plain language for the team. Moreover, the concept of ephemeral environments for vibe coding is emerging: platforms like Shipyard suggest spinning up temporary testing environments for AI-generated code to immediately verify it in isolation. By plugging these AI agents into CI/CD, organizations aim to catch mistakes from vibe coding early and use the AI itself to remediate them.

  • Project Management and Collaboration: Vibe coding blurs roles, so workflows have adapted to keep everyone in sync. When non-developers (like product managers or designers) contribute via natural language prompts, version control and collaboration tools must track those changes. Solutions like Nasuni’s UniFS global file system have promoted features to track both human and AI-generated changes, ensuring version history and auditability for code produced through vibe coding. Some teams practice an “AI change log” – documenting what portions of the code were generated by AI and what prompts were used, which can be useful for compliance or later debugging. We’re also seeing AI assistants in ticketing systems: e.g. an issue in Jira might have an AI suggestion attached, or a Slack bot could monitor a chat channel and spontaneously offer code snippets to solve a discussed problem. All these integrations seek to incorporate the AI agent as a true team member in the development process.

  • Multi-Agent Systems: A forward-looking trend is the use of multiple AI agents that collaborate on development tasks. In experimental setups, one agent might generate code, another review it, and a third run tests – mimicking a team of virtual developers. For instance, an “Architect” agent can break a feature request into subtasks, then a “Coder” agent implements them, and a “Tester” agent writes tests or checks the execution. While this is mostly in research or early-stage tools (like the aforementioned Roo Code’s personas), it showcases how CI pipelines might one day have autonomous agent teams handling certain classes of changes end-to-end. Even today, though, the human developers act as the critical reviewers and gatekeepers for any AI-driven changes. Organizations integrate vibe coding in pipelines with a strong human-in-the-loop principle: AI helps draft and even execute tasks, but human engineers approve and guide the overall direction.
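The AI-in-the-loop CI pattern described above – a failing test triggers a drafted patch that still requires human approval – can be sketched end to end. Here `run_tests` is a toy test runner and `propose_patch` is a hypothetical stand-in for a real LLM call:

```python
def run_tests(module_source: str) -> list[str]:
    """Stand-in for a CI test run: returns a list of failure messages."""
    namespace: dict = {}
    exec(module_source, namespace)
    failures = []
    if namespace["add"](2, 2) != 4:
        failures.append("add(2, 2) != 4")
    return failures

def propose_patch(source: str, failures: list[str]) -> str:
    """Stand-in for an LLM call that drafts a fix from the failure log.
    A real pipeline would send `source` and `failures` to a model here."""
    return source.replace("a - b", "a + b")

buggy = "def add(a, b):\n    return a - b\n"
failures = run_tests(buggy)
if failures:
    patched = propose_patch(buggy, failures)
    # Gate: the patch only becomes a draft PR if the tests now pass,
    # and it is merged only after a human reviews it.
    assert run_tests(patched) == []
    print("patch drafted for human review:\n" + patched)
```

The human-in-the-loop principle lives in the final gate: the AI drafts and verifies, but a person approves the merge.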

 

In summary, vibe coding is increasingly interwoven with standard DevOps practices. By 2025 it’s common to find AI assistance from the moment a feature is conceived (AI helping spec it out) to when the code is written (AI generating it) to when it’s tested and deployed (AI verifying and even adjusting it). This tight integration is changing how software teams operate: development cycles are shorter and more continuous, and the traditional boundaries between coding, testing, and ops are fading as AI can fluidly jump between these contexts. The challenge for teams is to harness these AI integrations to enhance quality and speed, without letting the “automation” run away unchecked. Thus far, the most successful teams treat AI as a cooperative agent – deeply integrated, but always under mindful human oversight.

LLM Self-Improvement via VIBE Coding Techniques

An intriguing frontier in 2025 is using vibe coding not just to build software, but to improve the AI models themselves. Large language models can employ the same iterative, code-based approach to refine their own outputs or capabilities. In essence, an LLM can “vibe code” as a form of self-improvement: writing code, executing it, and learning from the results in a feedback loop. Researchers are experimenting with several mechanisms to enable LLMs to self-correct and learn in this way:

  • Automated Feedback Loops: Rather than relying solely on human feedback, an LLM can generate its own feedback by testing its outputs. For example, if the task is to write a function, the model might also generate a set of unit tests, run the code, and then analyze the failures to adjust its answer. This approach has been shown to significantly improve accuracy on coding challenges – one method known as “self-reflection” had GPT-4 iteratively refine its code and outperformed the standard GPT-4 by over 20% on certain problems. Essentially, the model engages in a dialogue with itself: propose solution → evaluate → refine. OpenAI’s tools (such as the Code Interpreter mode) hint at this, as they allow the model to execute Python code during a session and use the results to inform the next step. This auto-evaluation paradigm gives the model a form of memory and learning within a single session, akin to a developer running and debugging their code.

  • Reinforcement Learning (RL) with Self-Generated Data: To achieve more durable improvements, researchers in late 2024 began applying reinforcement learning so that LLMs learn from the process of coding and correcting. A notable work is SCoRe (Self-Correction via Reinforcement), which trains a model to identify and fix its own mistakes via multi-turn interactions. The innovation of SCoRe is having the model generate its own training data: it produces initial answers and then corrects them, using the success of those corrections as a reward signal to update the model. This avoids needing an external “teacher” model. Early results showed that LLMs are largely poor at self-correcting out-of-the-box (they tend to repeat errors or require a smarter model to guide them). But with reinforcement learning on its own correction attempts, a model can internalize a strategy for error-finding and fixing. In other words, the LLM practices vibe coding on itself during training – writing potential solutions and learning from failures – and becomes better at problem-solving in the process. Researchers reported that this approach yielded more reliable self-corrections and could be generalized to various domains beyond coding.

  • Learning from Software Evolution (Logs as Feedback): Another promising technique is feeding the model data from real software development cycles to teach it how to improve. In 2025, a team introduced SWE-RL, an RL-based method that trained an LLM on hundreds of thousands of code changes from open-source projects (issues, pull requests, bug fixes, etc.). Essentially, they treated the history of code edits and bug resolutions as demonstrations of how to go from a flawed state to an improved state – i.e., how to debug and refine code. By rewarding the model for producing the correct patch given an issue, the LLM learned to emulate the reasoning of developers fixing their code. The result was a model that could solve real-world coding issues (as tested on a benchmark of GitHub issues) at a success rate on par with some top proprietary models. Interestingly, this training on code evolution also boosted the model’s general reasoning abilities, not just coding – suggesting that learning the “process of iterative improvement” is a powerful form of self-improvement for LLMs. In summary, by exposing LLMs to the process of vibe coding (propose -> critique -> fix), either through direct practice or via historical data, we can make them more adept problem solvers.

  • Autonomous Agents and Self-Play: Taking inspiration from self-play in game AI, there are efforts to let multiple instances of an LLM engage in a collaborative improvement loop. For instance, one agent generates a solution, another evaluates it (perhaps tries to find flaws or adversarial cases), and the first agent learns from that feedback. This can be done iteratively until they converge on a high-quality solution. In the coding realm, an LLM might play the role of a “student” and “teacher” simultaneously – generating code and then explaining or grading it. If the grading finds issues, the student part tries again. While this is still experimental, it mirrors techniques that made systems like AlphaGo succeed via self-play. We also see domain-specific versions: an LLM might improve its understanding by writing code to query a database or simulate a scenario (for example, writing a short program to test a hypothesis and reading the output). By treating code as a tool for reasoning, LLMs can extend their effective intelligence. A noteworthy example is Voyager (2023), an agent that learned to play Minecraft by continuously writing, executing, and refining code (scripts) to achieve in-game goals. Voyager used the game environment as a sandbox to get feedback on its code’s effectiveness and adjusted accordingly, acquiring more skills over time without additional human training. This demonstrates how an LLM can leverage vibe coding in an external environment to iteratively become more capable.

 

Looking ahead, these self-improvement methods suggest that LLMs will increasingly “learn on the job”. Instead of only being updated through large offline training runs, future models might have ongoing improvement loops where they use vibe coding techniques to refine their knowledge. For example, an AI coding assistant could monitor which of its suggested code snippets were accepted vs. rejected by developers and adapt its suggestions (a form of implicit feedback learning). Techniques like reinforcement learning from human feedback (RLHF) are already standard; what’s coming is reinforcement learning from AI’s own feedback. By the end of 2025, we anticipate more LLMs will be equipped with built-in evaluation routines – essentially, an inner critic that helps them polish their answers before presenting them. This is a direct outgrowth of the vibe coding philosophy: try something, see what happens, and improve through many small iterations.
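Implicit feedback learning of this kind can be sketched as a simple acceptance-rate tally that re-ranks suggestion patterns – a toy illustration; a real assistant would feed such signals into fine-tuning or retrieval rather than a lookup table:

```python
from collections import defaultdict

class SuggestionRanker:
    """Toy implicit-feedback learner: prefer suggestion patterns developers accept."""

    def __init__(self):
        self.accepted = defaultdict(int)
        self.shown = defaultdict(int)

    def record(self, pattern: str, accepted: bool) -> None:
        self.shown[pattern] += 1
        if accepted:
            self.accepted[pattern] += 1

    def acceptance_rate(self, pattern: str) -> float:
        # Laplace smoothing so unseen patterns get a neutral prior of 0.5.
        return (self.accepted[pattern] + 1) / (self.shown[pattern] + 2)

    def rank(self, patterns: list[str]) -> list[str]:
        return sorted(patterns, key=self.acceptance_rate, reverse=True)

ranker = SuggestionRanker()
for _ in range(8):
    ranker.record("list-comprehension", accepted=True)
for _ in range(8):
    ranker.record("manual-loop", accepted=False)
print(ranker.rank(["manual-loop", "list-comprehension"]))
```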

Future Trends and Predictions for 2025

Vibe coding’s trajectory through 2025 points to several key trends and changes in practice that we expect to deepen:

  • Mainstream Enterprise Integration: We predict that by late 2025, most large development organizations will have formally incorporated vibe coding tools into their standard toolchains. With companies like Microsoft and Amazon promoting these assistants, and success stories from banks and tech firms, the hesitation around AI coding is fading. Enterprises will invest in secure, internal LLM solutions to alleviate concerns about proprietary code leakage (for example, using on-premises or fine-tuned models so that prompts don’t leave their network). Mission-critical code will still have human oversight, but as confidence grows and AI models improve, the threshold of what is “safe” to let AI handle will expand. We may see the first instances of AI-generated code being certified for use in regulated industries under strict validation. By year’s end, it wouldn’t be surprising if over 10% of new code in enterprise codebases is being machine-generated, given current trajectories (indeed some organizations are already near this range).

  • Evolution of Developer Roles: The role of a software developer will continue shifting from coder to architect and orchestrator. As one industry blog noted, developers are becoming “orchestrators of outcomes” – focusing on communicating intent and ensuring the final product meets requirements, rather than crafting every algorithm by hand. Skills like prompt engineering, system design, and quality assurance will be at a premium. The most valuable engineers will be those with strong product intuition and domain knowledge, who know what to build and can guide AI on how it should be built. We’ll likely see job titles or roles like “AI-assisted developer” or “prompt engineer” solidify, and training programs adjusted to teach collaboration with AI (for example, how to write effective prompts, how to debug AI-generated code, etc.). Meanwhile, the line between developer and non-developer blurs: a savvy business analyst might build an app via vibe coding, while a traditional coder might spend more time curating and verifying AI outputs. Organizations will adapt by fostering cross-functional teams where product managers and designers directly partake in vibe coding sessions, and engineers act as mentors/curators in that process.

  • Improved AI Reliability and Trust: Through 2025, we expect significant improvements in the reliability of code generated by LLMs. The ongoing research in self-correcting models and better training data (like incorporating more real-world code fixes) will reduce instances of “fragile” or incorrect code from AI. Current AI assistants sometimes produce syntactically correct but subtly wrong code or insecure code. By year’s end, advances such as RL-trained code models and larger context windows (allowing models to consider an entire codebase at once) will make AI suggestions more context-aware and safer. Tools will also provide more transparency – for example, an AI might explain why it generated a certain approach, or highlight uncertainties (“I am not fully sure about concurrency issues here, please double-check this part.”). This could increase human trust in using vibe coding for larger chunks of a project. We also anticipate improved AI governance tools: e.g. systems that can detect if an AI’s code was not reviewed or if it deviates from best practices, and then flag it before it reaches production (an extension of today’s static analysis, but AI-focused).

  • Integration of AI into Creative Design and UX: Vibe coding isn’t just about back-end logic; it’s influencing how we design software. In 2025, many UI/UX design tools (Figma, Adobe XD, etc.) are adding AI features to generate interface code or styling from natural language (“Make this layout responsive and in dark mode”). The concept of “intent is the new syntax” means that designers can express the intent (“I want a dashboard with these data visuals”) and have AI draft the implementation. This trend will accelerate, leading to tighter coupling between design and development phases via AI. For instance, a workflow could involve designing visually, having AI generate the React code for that design, and then a developer finalizing the integration. By the end of 2025, we might see early versions of fully AI-generated modules in production – for example, a minor feature entirely specced by product, coded by AI, and approved by dev, without a single line typed manually. Continuous deployment pipelines might eventually handle such features autonomously for A/B testing (with human oversight on the outcomes).

  • Growth of VIBE Coding Communities and Practices: As vibe coding solidifies, expect the ecosystem to mature similarly to how agile or open-source movements did. We’ll see more community-driven best practices, workshops, and hackathons (like the community-run “Orange Vibe Jam”) to share knowledge. Open-source “vibe-coded” projects may become common, where much of the code is AI-generated and the project maintainers focus on curating prompts and test cases. There may also be ethical and educational discussions: for example, how does one learn programming in an era of AI assistance? Curricula might include vibe coding exercises but also stress fundamental coding knowledge to avoid over-reliance on AI. Some skeptics foresee a backlash or “hype cycle” dip – indeed, not everyone is convinced vibe coding will suit complex long-term software development. We anticipate a balanced view emerging: vibe coding will not replace the need for skilled engineers, but it will become an indispensable part of the programmer’s toolkit. Like a calculator to a mathematician, AI code assistants will handle routine work so developers can tackle higher-level problems.

  • Self-Improving AI Agents: The research progress in self-improving LLMs suggests that by late 2025 we might see more autonomous coding agents that can update themselves. For instance, an AI coding assistant might notice it keeps making a particular mistake and adjust its outputs on the fly (perhaps by doing a quick internal training round or retrieving a corrected pattern from a knowledge base). While full online learning in production LLMs is a hard problem (with risks of drift), constrained forms of it may appear. Imagine an AI that, after a debugging session, stores the error and solution so it won’t repeat it. This could be done via plug-in modules or memory systems attached to the model. Projects like Facebook’s SWE-RL already proved benefits of training on software evolution data; productizing that means AI systems could continuously ingest anonymized code diffs from many users to get better (with privacy safeguards). Such feedback loops, if realized, will create a virtuous cycle: the more vibe coding is used, the smarter the AIs get, leading to even wider adoption.

 

In conclusion, 2025 is shaping up as the year vibe coding transitions from novelty to normality. Large enterprises are onboard (albeit carefully), startups are scaling faster with smaller teams, and a plethora of tools are making “AI-in-the-loop” development a daily reality. Software development is becoming more conversational and iterative, with natural language and AI collaboration at its core. By continuously refining how we integrate these AI helpers – and by improving the helpers themselves – the industry is moving toward a future where writing software is less about wrestling with syntax and more about shaping ideas into reality. Vibe coding exemplifies this shift, and its evolution through 2025 will likely set the stage for how humans and AI build technology hand-in-hand in the years to come.

Sources:

  1. Willison, S. (2025). Not all AI-assisted programming is vibe coding (but vibe coding rocks).

  2. Wikipedia. Vibe coding.

  3. Liddle, J. (2025). Is Your Enterprise Organization Vibe Coding? (Nasuni Blog).

  4. Everest Group (2025). Intent Is The New Syntax: Why Vibe Coding Represents a Shift....

  5. Fairbank, B. (2025). Minimum Vibable Product: Vibe Coding an Application.

  6. Naughton, J. (2025). Now you don’t even need code to be a programmer... (The Guardian).

  7. OnePromptMan (2025). Incredible! The 2025 Vibe Coding Game Jam Story.

  8. LLM Watch (2025). Don’t Believe the Vibe: Best Practices for Coding with AI Agents.

  9. iTnews (2024). Seven percent of ANZ code in past six months generated by AI.

  10. TreasurUp (2024). Generative AI in Banking – Recent Developments.

  11. LinkedIn – Pascal Biese (2024). LLMs Are Improving Themselves.

  12. Wei et al. (2025). SWE-RL: Advancing LLM Reasoning via RL on Software Evolution.
