Using Cursor and VS Code as a Hybrid
Sustainable hybrid development workflow combining VS Code, GitHub Copilot, Cursor, and external LLM engines for long-term stability and high-performance AI-assisted engineering.
This section is where I document how I use AI tools in my development workflow. The objective is to effectively use AI assistance inside the editor while keeping execution authority, cost, and context under control.
My current approach is the Cursor and VS Code hybrid workflow.
Cursor is used where deep multi-file reasoning and persistent AI context are required.
VS Code remains the authoritative execution environment connected to containers, remote servers, and runtime shells.
Web AI subscriptions ≠ API model access ≠ IDE-integrated access
These are three different ways to think about AI access:
Services like ChatGPT Plus, Claude Pro, and Gemini Advanced give you access to models in the browser.
Cursor gives you access to multiple models inside an IDE-like interface. It keeps context tied to the project and remembers chats across sessions.
Cursor plans meter usage internally based on tokens, but you don’t manually supply API keys (supplying your own keys is possible, but not convenient).
Copilot (used in VS Code) uses a Premium Request system:
GitHub lets you set budgets, monitor usage, and watch trends in the billing UI.
Internally models still operate on token context windows, but billing is abstracted as requests, not per-token charges.
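Because billing is abstracted as requests while the models themselves are bounded by token context windows, it helps to have a rough sense of how large a prompt is before pasting it into a chat. The sketch below uses the common "about 4 characters per token" heuristic for English text; the ratio is an assumption, not a real tokenizer, and actual counts vary by model.

```python
def estimate_tokens(text: str, chars_per_token: float = 4.0) -> int:
    """Rough token estimate using the ~4 characters/token heuristic
    for English prose. Real tokenizers vary by model; this is only
    for budgeting intuition, not billing accuracy."""
    return max(1, round(len(text) / chars_per_token))

# A quick sanity check before dropping a large file into a chat window:
prompt = open(__file__).read()
print(f"~{estimate_tokens(prompt)} tokens")
```

This is deliberately crude: it answers "is this paste 500 tokens or 50,000?" which is usually the only question that matters when deciding whether to hand an AI the whole file or a summary.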
This is the balance I need:
Cursor is my reasoning workspace.
VS Code is my execution workspace.
I keep these roles separate:
| Function | Tool |
|---|---|
| Persistent AI reasoning and multi-file refactors | Cursor |
| Code execution, containers, remote shells | VS Code |
| Runtime validation and deployment | VS Code |
| Long-running design context | Cursor |
This separation avoids attaching multiple editors to the same runtime container while maintaining continuous AI context.
This pattern keeps costs predictable and context manageable.
Note on Copilot Chat Persistence: VS Code stores Copilot chat history per workspace, and you can export/import sessions via VS Code commands. This is not stored in your GitHub account web history.
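If you export and import chat sessions often, a keybinding saves a trip through the Command Palette. A minimal `keybindings.json` sketch is below; the command IDs shown are my assumption of the current identifiers, so verify them in your VS Code build by searching "Chat: Export" in the Command Palette before relying on this.

```json
// keybindings.json — command IDs are assumed; confirm via the Command Palette
[
  { "key": "ctrl+alt+e", "command": "workbench.action.chat.export" },
  { "key": "ctrl+alt+i", "command": "workbench.action.chat.import" }
]
```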
The prompting conventions in the sub-articles remain authoritative:
How I work with both editors operationally, including summaries and context handoff.
→ Using Cursor and VS Code as a Hybrid Environment
Although I now use the Cursor / VS Code hybrid approach, this article is still relevant: it covers the different agents and their data persistence.
→ AI Multi-Engine Workflow in VS Code
Managing context windows, request budgets, and token efficiency in the hybrid workflow.
AI is treated as an interactive reasoning instrument — not an autonomous code generator.
Execution authority always remains with the human operator and the runtime environment.
Operational guide and prompt templates for GPT, Gemini, Claude, and GitHub Copilot integration in VS Code workflow.
How do we get the best return from our limited token and request resources, using all available options for access to our AI systems?
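One way to make that question concrete is simple budget arithmetic: premium-request systems weight each chat turn by a per-model multiplier, so routing routine work to cheap models and reserving expensive ones for hard problems stretches the allowance. The numbers below are hypothetical placeholders, not GitHub's actual allowance or multiplier table; check the current billing documentation before using real values.

```python
# Sketch of request-budget reasoning. The allowance and multipliers
# are HYPOTHETICAL examples, not official Copilot pricing.
MONTHLY_PREMIUM_REQUESTS = 300

MODEL_MULTIPLIER = {
    "base-model": 0.0,      # included model, assumed free
    "mid-tier-model": 1.0,  # assumed 1 premium request per turn
    "frontier-model": 10.0, # assumed expensive reasoning model
}

def requests_consumed(usage: dict) -> float:
    """usage maps model name -> number of chat turns that month."""
    return sum(MODEL_MULTIPLIER[model] * turns for model, turns in usage.items())

month = {"base-model": 200, "mid-tier-model": 80, "frontier-model": 5}
spent = requests_consumed(month)
print(f"{spent} of {MONTHLY_PREMIUM_REQUESTS} premium requests used")
```

The takeaway is the shape of the arithmetic, not the numbers: a heavy month of base-model turns can cost nothing, while a handful of frontier-model turns can dominate the bill.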