Give Copilot context from your entire AI stack
Swylink gives GitHub Copilot intelligent, persistent memory. Decisions from Cursor, Claude Code, and Windsurf become searchable, so Copilot draws on richer context for better suggestions.
Connect GitHub Copilot in 4 steps
- 01 Install: `npx swylink@latest init`
- 02 Authenticate: `npx swylink auth`
- 03 Connect: `npx swylink connect`
- 04 Code: open GitHub Copilot and go
Where Swylink writes the GitHub Copilot config
`.vscode/mcp.json`: running `npx swylink connect` automatically detects GitHub Copilot and writes the MCP bridge configuration to this path. No manual editing required.
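As a rough sketch, the generated file follows the shape VS Code expects for MCP servers. The server name `swylink` and the launch arguments shown here are assumptions for illustration; `npx swylink connect` writes the real values for you:

```json
{
  "servers": {
    "swylink": {
      "command": "npx",
      "args": ["swylink", "mcp"]
    }
  }
}
```

If the file already exists, the connect command adds the Swylink entry alongside any servers you have configured rather than replacing the file.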
Why GitHub Copilot needs cross-tool persistent memory
GitHub Copilot is the most widely adopted AI coding assistant, with inline suggestions that complete code as you type and Copilot Chat for conversational coding in VS Code. But Copilot operates with a narrow context window — it sees the current file, open tabs, and recent chat history. It has no memory of what happened in previous sessions, and it has no visibility into decisions made in other AI tools.
With MCP support now available in VS Code, Swylink gives Copilot a persistent memory layer that extends far beyond its built-in context. If you spent an hour in Claude Code debugging a race condition and documented the fix, Copilot can now access that context. If Cursor's Composer restructured your API layer last week, Copilot's inline suggestions reflect those architectural decisions.
This transforms Copilot from a stateless autocomplete engine into a context-aware coding partner. Its inline suggestions become more accurate because they are informed by the full history of decisions across your AI stack. Copilot Chat conversations become more productive because the AI already knows what has been tried, what worked, and what was deliberately avoided. The workspace context that Copilot already uses is now enriched with cross-tool, cross-session intelligence.
How Copilot inline decisions flow to other tools
Copilot interactions generate specific types of context that Swylink captures:
- code completion patterns that reveal preferred coding conventions
- Copilot Chat decisions about implementation approaches
- workspace-level architectural discussions
- test generation strategies and coverage decisions
- code review feedback from Copilot-assisted reviews
- refactoring patterns chosen during Chat-guided restructuring

This context flows to every other tool in your stack: when Claude Code starts a new session, it can search for what Copilot Chat discussed and continue from there.
Frequently asked questions about Swylink and GitHub Copilot
Does Swylink work with Copilot Chat in VS Code?
Yes. Swylink integrates with VS Code's MCP support, which means Copilot Chat can save and search context through the Swylink MCP server. Decisions made during Copilot Chat conversations are captured with structured metadata and become searchable from any other AI tool.
Where is the MCP config for GitHub Copilot?
GitHub Copilot in VS Code reads MCP configuration from .vscode/mcp.json in your project root. Running npx swylink connect detects VS Code and writes the Swylink server block to this file automatically.
Will Swylink slow down Copilot's inline suggestions?
No. Swylink operates as a separate MCP server that Copilot Chat calls on demand. Inline code completions are unaffected — they continue to use Copilot's own model and context window. Swylink enriches the broader context available to Copilot Chat without interfering with the real-time suggestion engine.