Vibe coding is more than just letting AI write your code—it’s about finding a rhythm between human creativity and machine intelligence. Over the past couple of months, I’ve tried both asynchronous and synchronous vibe coding using VSCode Copilot and GPT-5.
*Image: a successful asynchronous session and a synchronous session output.*
*Image credit: Wikideas1, CC0, via Wikimedia Commons.*
Here are my top four takeaways from this journey.
1. 🧠 You Should Control the Software Architecture
One of my realizations was this: the agent should assist you—not architect for you. When you define the structure, modules, and data flow upfront, GPT-5 and Copilot become powerful collaborators. But if you let them dictate architecture, you risk ending up with fragmented logic, inconsistent naming conventions, or bloated abstractions.
This is especially critical in healthcare informatics, where modularity and traceability are non-negotiable. I’ve found that sketching out the architecture (even in pseudocode or markdown) before invoking the agent ensures coherence and maintainability. It’s like setting the tempo before the jam session begins.
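A pre-session sketch does not need to be elaborate. Here is a minimal, hypothetical markdown outline of the kind I might write before invoking the agent (the module names and data flow are illustrative, not from a real project):

```markdown
# Architecture sketch: lab-results ingestion (hypothetical)

## Modules
- ingest/   : pull HL7v2 messages from the interface engine; no business logic here
- parse/    : map raw segments to internal `LabResult` records; all vocabulary lookups live here
- validate/ : range and unit checks; rejects go to a quarantine table with a reason code
- store/    : persistence only; never called directly by ingest/

## Data flow
ingest -> parse -> validate -> store

## Constraints for the agent
- One module per concern; no cross-imports that skip a layer
- Every record must carry a source-message ID for traceability
```

With this in place, the agent fills in implementations rather than inventing structure.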
2. 🔁 Asynchronous Works Best with Clear Requirements
The Issue → PR workflow works when your requirements are crisp. If you can articulate the constraints and expected behaviour clearly, GPT-5 can generate surprisingly robust pull requests. This async mode is ideal for batch tasks, refactoring, or implementing well-scoped features.
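To make "crisp" concrete, here is a hypothetical issue of the kind that tends to produce a clean PR (the file names and behaviour are illustrative):

```markdown
## Issue: Normalize glucose units in LabResult records

**Scope:** `parse/units.py` only; do not touch the validation layer.

**Expected behaviour**
- Convert `mg/dL` values to `mmol/L` (divide by 18.0), rounding to 2 decimals.
- Leave records already in `mmol/L` unchanged.
- Raise an error on unknown units; never guess.

**Out of scope:** other analytes, database migrations.

**Done when:** unit tests cover all three cases and pass.
```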
However, when ambiguity creeps in—say, during exploratory prototyping or edge-case handling—synchronous chat in VSCode becomes indispensable. Real-time back-and-forth lets you clarify intent, iterate quickly, and debug collaboratively. It’s the difference between sending a memo and having a conversation.
3. 🧐 Comprehend the Generated Code
This one’s non-negotiable, especially for students and junior developers: always understand what the agent produces. AI-generated code can be syntactically correct but semantically off. It might use outdated libraries, introduce subtle bugs, or violate domain-specific constraints.
I encourage my students to treat Copilot’s output as a draft (applies to any LLM output, not just code!). Read it line by line. Ask why it chose a particular approach. Refactor it if needed. This habit not only improves code quality but also deepens your understanding of the language and the problem space.
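One way I frame this for students is a short review checklist. This is my own habit rather than any standard, but it turns "read it line by line" into something actionable:

```markdown
- [ ] Can I explain every line, including the ones I would not have written myself?
- [ ] Are the libraries current and actually needed?
- [ ] Do edge cases (empty input, nulls, unit mismatches) behave as the spec requires?
- [ ] Does the code respect domain constraints (e.g., no patient data in logs)?
- [ ] Would I approve this in a code review if a human colleague had written it?
```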
4. 🧩 Use AGENTS.md or Similar Configs to Tune Behaviour
Another helpful practice is using configuration files like AGENTS.md to define agent behaviour, tone, and scope. By setting expectations—e.g., “use functional programming,” “avoid external dependencies,” or “optimize for readability”—you steer the agent toward your coding philosophy. This is especially useful in collaborative settings or open-source projects. It ensures consistency across contributions and reduces the cognitive load of reviewing AI-generated code. Think of it as a vibe contract between you and your assistant.
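As a concrete illustration, a minimal AGENTS.md might look like this. The directives below are examples of the kind of expectations I set, not a required schema; agents read the file as plain instructions:

```markdown
# AGENTS.md

## Style
- Prefer a functional style: pure functions, no hidden state.
- Optimize for readability over cleverness; name things for the domain.

## Dependencies
- Avoid new external dependencies; ask before adding one.

## Scope
- Do not modify persistence code without an explicit request.
- Every PR must include tests and a one-paragraph summary of the approach.
```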
Final Thoughts
Vibe coding isn’t just a productivity hack—it’s a mindset shift. It invites us to blend intuition with structure, creativity with control. But like any powerful tool, it demands intentionality. Define your architecture. Choose the right mode for the task. Understand the code. Configure wisely.
Whether you’re mentoring students, building clinical decision support tools, or prototyping the next big thing, these lessons can help you vibe with confidence—and code with clarity.