# Codetyper.nvim - LLM Documentation

> This file helps LLMs understand the Codetyper.nvim plugin structure and functionality.

## Overview

Codetyper.nvim is a Neovim plugin written in Lua that acts as an AI-powered coding partner. It integrates with multiple LLM APIs (Claude, OpenAI, Gemini, Copilot, Ollama) to help developers write code faster using a unique prompt-based workflow.

## Core Concept

Instead of having an AI generate entire files, Codetyper lets developers maintain control:

1. The developer opens a source file (e.g., `index.ts`)
2. A companion "coder file" is created (`index.coder.ts`)
3. The developer writes prompts using special tags: `/@ prompt @/`
4. When the closing tag is typed, the LLM generates code
5. The generated code is injected into the target file

## Plugin Architecture

```
lua/codetyper/
├── init.lua          # Main entry, setup function, module initialization
├── config.lua        # Configuration management, defaults, validation
├── types.lua         # Lua type definitions for LSP/documentation
├── utils.lua         # Utility functions (file ops, notifications)
├── commands.lua      # Vim command definitions (:Coder, :CoderOpen, etc.)
├── window.lua        # Split window management (open, close, toggle)
├── parser.lua        # Parses /@ @/ tags from buffer content
├── gitignore.lua     # Manages .gitignore entries for coder files
├── autocmds.lua      # Autocommands for tag detection, filetype, auto-index
├── inject.lua        # Code injection strategies
├── health.lua        # Health check for :checkhealth
├── tree.lua          # Project tree logging (.coder/tree.log)
├── logs_panel.lua    # Standalone logs panel UI
├── cost.lua          # LLM cost tracking with persistent history
├── credentials.lua   # Secure credential storage (API keys, models)
├── llm/
│   ├── init.lua      # LLM interface, provider selection
│   ├── claude.lua    # Claude API client (Anthropic)
│   ├── openai.lua    # OpenAI API client (with custom endpoint support)
│   ├── gemini.lua    # Google Gemini API client
│   ├── copilot.lua   # GitHub Copilot client (uses OAuth from copilot.lua/vim)
│   └── ollama.lua    # Ollama API client (local LLMs)
├── agent/
│   ├── init.lua      # Agent system entry point
│   ├── ui.lua        # Agent panel UI
│   ├── logs.lua      # Logging system with listeners
│   ├── tools.lua     # Tool definitions (read_file, edit_file, write_file, bash)
│   ├── executor.lua  # Tool execution logic
│   ├── parser.lua    # Parse tool calls from LLM responses
│   ├── queue.lua     # Event queue with priority heap
│   ├── patch.lua     # Patch candidates with staleness detection
│   ├── confidence.lua # Response confidence scoring heuristics
│   ├── worker.lua    # Async LLM worker wrapper
│   ├── scheduler.lua # Event scheduler with completion-awareness
│   ├── scope.lua     # Tree-sitter scope resolution
│   └── intent.lua    # Intent detection from prompts
├── ask/
│   ├── init.lua      # Ask panel entry point
│   └── ui.lua        # Ask panel UI (chat interface)
└── prompts/
    ├── init.lua      # System prompts for code generation
    └── agent.lua     # Agent-specific prompts and tool instructions
```

## .coder/ Folder

The plugin automatically creates and maintains a `.coder/` folder in your project:

```
.coder/
├── tree.log           # Project structure, auto-updated on file changes
├── cost_history.json  # LLM cost tracking history (persistent)
├── brain/             # Knowledge graph storage
│   ├── nodes/         # Learning nodes by type
│   ├── indices/       # Search indices
│   └── deltas/        # Version history
├── agents/            # Custom agent definitions
└── rules/             # Project-specific rules
```

## Key Features

### 1. Multiple LLM Providers

```lua
llm = {
  provider = "claude", -- "claude", "openai", "gemini", "copilot", "ollama"
  claude = { api_key = nil, model = "claude-sonnet-4-20250514" },
  openai = { api_key = nil, model = "gpt-4o", endpoint = nil },
  gemini = { api_key = nil, model = "gemini-2.0-flash" },
  copilot = { model = "gpt-4o" },
  ollama = { host = "http://localhost:11434", model = "deepseek-coder:6.7b" },
}
```

### 2. Agent Mode

An autonomous coding assistant with tool access:

- `read_file` - Read file contents
- `edit_file` - Edit files with find/replace
- `write_file` - Create or overwrite files
- `bash` - Execute shell commands

### 3. Transform Commands

Transform `/@ @/` tags inline, without the split view:

- `:CoderTransform` - Transform all tags in the file
- `:CoderTransformCursor` - Transform the tag at the cursor
- `:CoderTransformVisual` - Transform selected tags

### 4. Auto-Index

Automatically create coder companion files when opening source files:

```lua
auto_index = true -- disabled by default
```

### 5. Logs Panel

Real-time visibility into LLM operations with token usage tracking.

### 6. Cost Tracking

Track LLM API costs across sessions:

- **Session tracking**: Monitor current session costs in real time
- **All-time tracking**: Persistent history in `.coder/cost_history.json`
- **Per-model breakdown**: See costs by individual model
- **50+ models**: Built-in pricing for GPT, Claude, O-series, Gemini

Cost window keymaps:

- `q` - Close window
- `r` - Refresh display
- `c` - Clear session costs
- `C` - Clear all history

### 7. Automatic Ollama Fallback

When an API rate limit is hit (e.g., on the Copilot free tier), the plugin:

1. Detects the rate-limit error
2. Checks if local Ollama is available
3. Automatically switches the provider to Ollama
4. Notifies the user of the provider change

### 8. Credentials Management

Store API keys securely outside of config files:

```vim
:CoderAddApiKey
```

**Features:**

- Interactive prompts for provider, API key, model, endpoint
- Stored in `~/.local/share/nvim/codetyper/configuration.json`
- Supports all providers: Claude, OpenAI, Gemini, Copilot, Ollama
- Switch providers at runtime with `:CoderSwitchProvider`

**Credential priority:**

1. Stored credentials (via `:CoderAddApiKey`)
2. Config file settings (`require("codetyper").setup({...})`)
3. Environment variables (`OPENAI_API_KEY`, etc.)

### 9. Event-Driven Scheduler

Prompts are treated as events, not commands:

```
User types /@...@/ → Event queued → Scheduler dispatches → Worker processes → Patch created → Safe injection
```

**Key concepts:**

- **PromptEvent**: Captures buffer state (changedtick, content hash) at prompt time
- **Optimistic execution**: Ollama acts as a fast scout; escalate to remote LLMs if confidence is low
- **Confidence scoring**: 5 heuristics (length, uncertainty, syntax, repetition, truncation)
- **Staleness detection**: Discard patches if the buffer changed during generation
- **Completion safety**: Defer injection while the autocomplete popup is visible

**Configuration:**

```lua
scheduler = {
  enabled = true,              -- Enable event-driven mode
  ollama_scout = true,         -- Use Ollama first
  escalation_threshold = 0.7,  -- Below this → escalate
  max_concurrent = 2,          -- Parallel workers
  completion_delay_ms = 100,   -- Wait after popup closes
}
```

### 10. Tree-sitter Scope Resolution

Prompts automatically resolve to their enclosing function/method/class:

```lua
function foo()
  /@ complete this function @/  -- Resolves to `foo`
end
```

**Scope types:** `function`, `method`, `class`, `block`, `file`

For replacement intents (complete, refactor, fix), the entire scope is extracted and sent to the LLM, then replaced with the transformed version.
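The scheduler options described above can be tuned through `setup()`. As a sketch (using only the documented scheduler keys), this keeps event-driven mode enabled but skips the Ollama scout and serializes workers:

```lua
require("codetyper").setup({
  scheduler = {
    enabled = true,             -- keep the event-driven queue
    ollama_scout = false,       -- go straight to the configured remote provider
    escalation_threshold = 0.7, -- documented default
    max_concurrent = 1,         -- serialize workers on slower machines
  },
})
```

With `ollama_scout = false`, the optimistic-execution path is bypassed and confidence scoring only gates the single provider's responses.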
### 11. Intent Detection

The system parses prompts to detect user intent:

| Intent | Keywords | Action |
|--------|----------|--------|
| complete | complete, finish, implement | replace |
| refactor | refactor, rewrite, simplify | replace |
| fix | fix, repair, debug, bug | replace |
| add | add, create, insert, new | insert |
| document | document, comment, jsdoc | replace |
| test | test, spec, unit test | append |
| optimize | optimize, performance, faster | replace |
| explain | explain, what, how, why | none |

### 12. Tag Precedence

Multiple tags in the same scope follow a "first tag wins" rule:

- Earlier (by line number) unresolved tag processes first
- Later tags in same scope are skipped with warning
- Different scopes process independently

## Commands

All commands can be invoked via `:Coder {subcommand}` or dedicated aliases.

### Core Commands

| Command | Alias | Description |
|---------|-------|-------------|
| `:Coder open` | `:CoderOpen` | Open coder split view |
| `:Coder close` | `:CoderClose` | Close coder split view |
| `:Coder toggle` | `:CoderToggle` | Toggle coder split view |
| `:Coder process` | `:CoderProcess` | Process last prompt in coder file |
| `:Coder status` | - | Show plugin status and configuration |
| `:Coder focus` | - | Switch focus between coder/target windows |
| `:Coder reset` | - | Reset processed prompts |
| `:Coder gitignore` | - | Force update .gitignore |

### Ask Panel (Chat Interface)

| Command | Alias | Description |
|---------|-------|-------------|
| `:Coder ask` | `:CoderAsk` | Open Ask panel |
| `:Coder ask-toggle` | `:CoderAskToggle` | Toggle Ask panel |
| `:Coder ask-close` | - | Close Ask panel |
| `:Coder ask-clear` | `:CoderAskClear` | Clear chat history |

### Agent Mode (Autonomous Coding)

| Command | Alias | Description |
|---------|-------|-------------|
| `:Coder agent` | `:CoderAgent` | Open Agent panel |
| `:Coder agent-toggle` | `:CoderAgentToggle` | Toggle Agent panel |
| `:Coder agent-close` | - | Close Agent panel |
| `:Coder agent-stop` | `:CoderAgentStop` | Stop running agent |

### Agentic Mode (IDE-like Multi-file Agent)

| Command | Alias | Description |
|---------|-------|-------------|
| `:Coder agentic-run` | `:CoderAgenticRun` | Run agentic task |
| `:Coder agentic-list` | `:CoderAgenticList` | List available agents |
| `:Coder agentic-init` | `:CoderAgenticInit` | Initialize .coder/agents/ and .coder/rules/ |

### Transform Commands (Inline Tag Processing)

| Command | Alias | Description |
|---------|-------|-------------|
| `:Coder transform` | `:CoderTransform` | Transform all /@ @/ tags in file |
| `:Coder transform-cursor` | `:CoderTransformCursor` | Transform tag at cursor |
| - | `:CoderTransformVisual` | Transform selected tags (visual mode) |

### Project & Index Commands

| Command | Alias | Description |
|---------|-------|-------------|
| - | `:CoderIndex` | Open coder companion for current file |
| `:Coder index-project` | `:CoderIndexProject` | Index entire project |
| `:Coder index-status` | `:CoderIndexStatus` | Show project index status |

### Tree & Structure Commands

| Command | Alias | Description |
|---------|-------|-------------|
| `:Coder tree` | `:CoderTree` | Refresh .coder/tree.log |
| `:Coder tree-view` | `:CoderTreeView` | View .coder/tree.log |

### Queue & Scheduler Commands

| Command | Alias | Description |
|---------|-------|-------------|
| `:Coder queue-status` | `:CoderQueueStatus` | Show scheduler/queue status |
| `:Coder queue-process` | `:CoderQueueProcess` | Manually trigger queue processing |

### Processing Mode Commands

| Command | Alias | Description |
|---------|-------|-------------|
| `:Coder auto-toggle` | `:CoderAutoToggle` | Toggle automatic/manual processing |
| `:Coder auto-set` | `:CoderAutoSet` | Set mode (auto/manual) |

### Memory & Learning Commands

| Command | Alias | Description |
|---------|-------|-------------|
| `:Coder memories` | `:CoderMemories` | Show learned memories |
| `:Coder forget [pattern]` | `:CoderForget [pattern]` | Clear memories |
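A typical manual-mode session using the core commands above might look like this (a usage sketch; the file name and prompt text are illustrative):

```vim
:e src/index.ts   " open the target file
:CoderOpen        " open the companion index.coder.ts in a split
" in the coder file, write a prompt: /@ add a debounce helper @/
:CoderProcess     " send the last prompt to the configured provider
:CoderClose       " close the split when done
```

With automatic processing enabled (`:CoderAutoToggle`), the `:CoderProcess` step is triggered as soon as the closing `@/` tag is typed.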
### Brain Commands (Knowledge Graph)

| Command | Alias | Description |
|---------|-------|-------------|
| - | `:CoderBrain [action]` | Brain management (stats/commit/flush/prune) |
| - | `:CoderFeedback` | Give feedback (good/bad/stats) |

### LLM Statistics & Feedback

| Command | Description |
|---------|-------------|
| `:Coder llm-stats` | Show LLM provider accuracy stats |
| `:Coder llm-feedback-good` | Report positive feedback |
| `:Coder llm-feedback-bad` | Report negative feedback |
| `:Coder llm-reset-stats` | Reset LLM accuracy stats |

### Cost Tracking

| Command | Alias | Description |
|---------|-------|-------------|
| `:Coder cost` | `:CoderCost` | Show LLM cost estimation window |
| `:Coder cost-clear` | - | Clear session cost tracking |

### Credentials Management

| Command | Alias | Description |
|---------|-------|-------------|
| `:Coder add-api-key` | `:CoderAddApiKey` | Add/update LLM provider credentials |
| `:Coder remove-api-key` | `:CoderRemoveApiKey` | Remove provider credentials |
| `:Coder credentials` | `:CoderCredentials` | Show credentials status |
| `:Coder switch-provider` | `:CoderSwitchProvider` | Switch active provider |

### UI Commands

| Command | Alias | Description |
|---------|-------|-------------|
| `:Coder type-toggle` | `:CoderType` | Show Ask/Agent mode switcher |
| `:Coder logs-toggle` | `:CoderLogs` | Toggle logs panel |

## Configuration Schema

```lua
{
  llm = {
    provider = "claude", -- "claude" | "openai" | "gemini" | "copilot" | "ollama"
    claude = {
      api_key = nil, -- string, uses ANTHROPIC_API_KEY env if nil
      model = "claude-sonnet-4-20250514",
    },
    openai = {
      api_key = nil, -- string, uses OPENAI_API_KEY env if nil
      model = "gpt-4o",
      endpoint = nil, -- custom endpoint for Azure, OpenRouter, etc.
    },
    gemini = {
      api_key = nil, -- string, uses GEMINI_API_KEY env if nil
      model = "gemini-2.0-flash",
    },
    copilot = {
      model = "gpt-4o", -- uses OAuth from copilot.lua/copilot.vim
    },
    ollama = {
      host = "http://localhost:11434",
      model = "deepseek-coder:6.7b",
    },
  },
  window = {
    width = 25, -- percentage (25 = 25% of screen)
    position = "left", -- "left" | "right"
    border = "rounded",
  },
  patterns = {
    open_tag = "/@",
    close_tag = "@/",
    file_pattern = "*.coder.*",
  },
  auto_gitignore = true,
  auto_open_ask = true,
  auto_index = false, -- auto-create coder companion files
  scheduler = {
    enabled = true, -- enable event-driven scheduler
    ollama_scout = true, -- use Ollama as fast scout
    escalation_threshold = 0.7,
    max_concurrent = 2,
    completion_delay_ms = 100,
  },
}
```

## LLM Integration

### Claude API

- Endpoint: `https://api.anthropic.com/v1/messages`
- Uses `x-api-key` header for authentication
- Supports tool use for agent mode

### OpenAI API

- Endpoint: `https://api.openai.com/v1/chat/completions` (configurable)
- Uses `Authorization: Bearer` header
- Supports tool use for agent mode
- Compatible with Azure, OpenRouter, and other OpenAI-compatible APIs

### Gemini API

- Endpoint: `https://generativelanguage.googleapis.com/v1beta/models`
- Uses API key in URL parameter
- Supports function calling for agent mode

### Copilot API

- Uses GitHub OAuth token from copilot.lua/copilot.vim
- Endpoint from token response (typically `api.githubcopilot.com`)
- OpenAI-compatible format

### Ollama API

- Endpoint: `{host}/api/generate` or `{host}/api/chat`
- No authentication required for local instances
- Tool use via prompt-based approach

## Agent Tool Definitions

```lua
tools = {
  read_file  = { path: string },
  edit_file  = { path: string, find: string, replace: string },
  write_file = { path: string, content: string },
  bash       = { command: string, timeout?: number },
}
```

## Code Injection Strategies

1. **Refactor**: Replace entire file content
2. **Add**: Insert at cursor position in target file
3. **Document**: Insert above current function/class
4. **Generic**: Prompt user for action

## File Naming Convention

| Target File | Coder File |
|-------------|------------|
| `index.ts` | `index.coder.ts` |
| `utils.py` | `utils.coder.py` |
| `main.lua` | `main.coder.lua` |

Pattern: `name.coder.extension`

## Dependencies

- **Required**: Neovim >= 0.8.0, curl
- **Optional**: telescope.nvim (enhanced file picker), copilot.lua or copilot.vim (for Copilot provider)

## Contact

- Author: cargdev
- Email: carlos.gutierrez@carg.dev
- Website: https://cargdev.io
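For reference, a minimal `setup()` call built from the documented configuration schema. This sketch assumes (as is conventional for Neovim plugins, but not stated explicitly here) that unspecified keys fall back to the defaults shown in the Configuration Schema section:

```lua
-- Minimal configuration: one provider, API key taken from environment variables.
require("codetyper").setup({
  llm = {
    provider = "ollama", -- local instance, no API key required
    ollama = {
      host = "http://localhost:11434",
      model = "deepseek-coder:6.7b",
    },
  },
  window = { position = "right" }, -- override a single window field
})
```

Switching to a hosted provider is then a matter of changing `provider` (or using `:CoderSwitchProvider` at runtime) and exporting the matching environment variable, e.g. `ANTHROPIC_API_KEY` for Claude.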