Large language models (LLMs) are reshaping software engineering by enabling vibe coding: building software primarily through prompts rather than by writing code. Although vibe coding is widely publicized as a productivity breakthrough, little is known about how practitioners actually define and engage in it. To shed light on this emerging phenomenon, we conducted a grounded theory study of 20 vibe-coding videos, comprising 7 live-streamed coding sessions (approximately 16 hours, 254 prompts) and 13 opinion videos (approximately 5 hours), supplemented by a quantitative analysis of activity durations and prompt intents. Our findings reveal a spectrum of behaviors: some vibe coders rely almost entirely on AI without inspecting the code, while others examine and adapt the generated outputs. Across approaches, all must contend with the stochastic nature of generation, with debugging and refinement described as “rolling the dice.” Further, divergent mental models, shaped by vibe coders’ expertise and reliance on AI, influence prompting strategies, evaluation practices, and levels of trust. Our quantitative analysis shows that vibe coders spend, on average, over 20% of session time waiting for model responses, with some sessions exceeding 50%. We also observe prompt redundancy: for some participants, nearly 40% of prompts repeat prior intents. These findings open new directions for research on the future of software engineering and point to practical opportunities for tool design and education.
The paper is available at the ACM Digital Library, arXiv, and ResearchGate.
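To make the two session-level metrics above concrete, here is a minimal sketch of how a waiting-time share and a prompt-redundancy rate could be computed from annotated session logs. The record format, field names, and example values are hypothetical illustrations, not the paper's actual coding scheme or measurement procedure.

```python
from dataclasses import dataclass

@dataclass
class Activity:
    kind: str       # hypothetical activity label, e.g. "prompting", "waiting", "reviewing"
    seconds: float  # duration of this activity segment

@dataclass
class Prompt:
    intent: str     # intent label assigned during qualitative coding (hypothetical)

def waiting_share(activities: list[Activity]) -> float:
    """Fraction of total session time spent waiting for model responses."""
    total = sum(a.seconds for a in activities)
    waiting = sum(a.seconds for a in activities if a.kind == "waiting")
    return waiting / total if total else 0.0

def redundancy_rate(prompts: list[Prompt]) -> float:
    """Fraction of prompts whose intent repeats an earlier prompt's intent."""
    seen: set[str] = set()
    repeats = 0
    for p in prompts:
        if p.intent in seen:
            repeats += 1
        seen.add(p.intent)
    return repeats / len(prompts) if prompts else 0.0

# Hypothetical session: about 27% of the time is spent waiting,
# and 2 of 5 prompts repeat an earlier intent (40%).
session = [
    Activity("prompting", 120), Activity("waiting", 300),
    Activity("reviewing", 400), Activity("waiting", 150),
    Activity("testing", 700),
]
prompts = [Prompt("add login page"), Prompt("fix build error"),
           Prompt("fix build error"), Prompt("style login page"),
           Prompt("fix build error")]

print(f"waiting share: {waiting_share(session):.0%}")       # ~27%
print(f"redundant prompts: {redundancy_rate(prompts):.0%}")  # 40%
```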