feat: add event-driven architecture with scope resolution

- Add event queue system (queue.lua) with priority-based processing
- Add patch system (patch.lua) with staleness detection via changedtick
- Add confidence scoring (confidence.lua) with 5 weighted heuristics
- Add async worker wrapper (worker.lua) with timeout handling
- Add scheduler (scheduler.lua) with completion-aware injection
- Add Tree-sitter scope resolution (scope.lua) for functions/methods/classes
- Add intent detection (intent.lua) for complete/refactor/fix/add and more
- Add tag precedence rules (first tag in scope wins)
- Update autocmds to emit events instead of direct processing
- Add scheduler config options (ollama_scout, escalation_threshold)
- Update prompts with scope-aware context
- Update README with emojis and new features
- Update documentation (llms.txt, CHANGELOG.md, doc/codetyper.txt)

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-13 21:55:44 -05:00
parent 6268a57498
commit 8a3ee81c3f
28 changed files with 6055 additions and 2687 deletions


@@ -7,9 +7,116 @@ and this project adheres to [Semantic Versioning](https://semver.org/spec/v2.0.0
## [Unreleased]
## [0.4.0] - 2026-01-13
### Added
- **Event-Driven Architecture** - Complete rewrite of prompt processing system
- Prompts are now treated as events with metadata (buffer state, priority, timestamps)
- New modules: `queue.lua`, `patch.lua`, `confidence.lua`, `worker.lua`, `scheduler.lua`
- Priority-based event queue with observer pattern
- Buffer snapshots for staleness detection
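- A rough sketch of the priority queue with observers (names are illustrative, not the actual `queue.lua` API):
```lua
-- Illustrative sketch only; not the plugin's real module.
local Queue = { events = {}, observers = {}, seq = 0 }

function Queue:push(event)
  self.seq = self.seq + 1
  event.seq = self.seq
  table.insert(self.events, event)
  -- Sort by priority, then arrival order (table.sort is not stable).
  table.sort(self.events, function(a, b)
    if a.priority ~= b.priority then return a.priority > b.priority end
    return a.seq < b.seq
  end)
  for _, notify in ipairs(self.observers) do notify(event) end
end

function Queue:pop() return table.remove(self.events, 1) end
function Queue:subscribe(fn) table.insert(self.observers, fn) end
```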
- **Optimistic Execution** - Ollama as fast local scout
- Use Ollama for first attempt (fast local inference)
- Automatically escalate to remote LLM if confidence is low
- Configurable escalation threshold (default: 0.7)
- **Confidence Scoring** - Response quality heuristics
- 5 weighted heuristics: length, uncertainty phrases, syntax completeness, repetition, truncation
- Scores range from 0.0-1.0
- Determines whether to escalate to more capable LLM
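- A minimal sketch of the weighted scoring; the weights and heuristic names below are illustrative, not the actual `confidence.lua` values:
```lua
-- Illustrative weights; each heuristic returns a score in [0, 1].
local weights = {
  length = 0.2, uncertainty = 0.25, syntax = 0.25,
  repetition = 0.15, truncation = 0.15,
}

local function score(response, heuristics)
  local total = 0
  for name, weight in pairs(weights) do
    total = total + weight * heuristics[name](response)
  end
  return total -- 0.0 .. 1.0; below the threshold, escalate
end
```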
- **Staleness Detection** - Safe patch application
- Track `vim.b.changedtick` and content hash at prompt time
- Discard patches if buffer changed during generation
- Prevents stale code injection
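- A sketch of the staleness check (helper names are illustrative):
```lua
-- Snapshot the buffer at prompt time; compare before applying a patch.
local function snapshot(bufnr)
  return {
    tick = vim.b[bufnr].changedtick,
    hash = vim.fn.sha256(table.concat(
      vim.api.nvim_buf_get_lines(bufnr, 0, -1, false), "\n")),
  }
end

local function is_stale(bufnr, snap)
  local now = snapshot(bufnr)
  return now.tick ~= snap.tick or now.hash ~= snap.hash
end
```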
- **Completion-Aware Injection** - No fighting with autocomplete
- Defer code injection while completion popup visible
- Works with native popup, nvim-cmp, and coq_nvim
- Configurable delay after popup closes (default: 100ms)
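- A sketch of the deferral logic (illustrative, not the actual `scheduler.lua` code):
```lua
-- Defer the patch while any completion popup is visible.
local function inject_when_idle(apply_patch, delay_ms)
  local cmp_visible = false
  local ok, cmp = pcall(require, "cmp")
  if ok then cmp_visible = cmp.visible() end
  if vim.fn.pumvisible() == 1 or cmp_visible then
    -- Popup still open: check again after the configured delay.
    vim.defer_fn(function() inject_when_idle(apply_patch, delay_ms) end, delay_ms)
  else
    vim.defer_fn(apply_patch, delay_ms)
  end
end
```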
- **Tree-sitter Scope Resolution** - Smart context extraction
- Automatically resolves prompts to enclosing function/method/class
- Falls back to heuristics when Tree-sitter unavailable
- Scope types: function, method, class, block, file
- **Intent Detection** - Understands what you want
- Parses prompts to detect: complete, refactor, fix, add, document, test, optimize, explain
- Intent determines injection strategy (replace vs insert vs append)
- Priority adjustment based on intent type
- **Tag Precedence Rules** - Multiple tags handled cleanly
- First tag in scope wins (FIFO ordering)
- Later tags in same scope skipped with warning
- Different scopes process independently
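- A sketch of the first-tag-wins rule (function and field names are illustrative):
```lua
-- Tags arrive in buffer order, so iteration order gives FIFO precedence.
local function select_tags(tags)
  local chosen, seen = {}, {}
  for _, tag in ipairs(tags) do
    if seen[tag.scope_id] then
      vim.notify("codetyper: skipping extra tag in same scope", vim.log.levels.WARN)
    else
      seen[tag.scope_id] = true
      table.insert(chosen, tag)
    end
  end
  return chosen
end
```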
### Configuration
New `scheduler` configuration block:
```lua
scheduler = {
  enabled = true,              -- Enable event-driven mode
  ollama_scout = true,         -- Use Ollama first
  escalation_threshold = 0.7,
  max_concurrent = 2,
  completion_delay_ms = 100,
}
```
---
## [0.3.0] - 2026-01-13
### Added
- **Multiple LLM Providers** - Support for additional providers beyond Claude and Ollama
- OpenAI API with custom endpoint support (Azure, OpenRouter, etc.)
- Google Gemini API
- GitHub Copilot (uses existing copilot.lua/copilot.vim authentication)
- **Agent Mode** - Autonomous coding assistant with tool use
- `read_file` - Read file contents
- `edit_file` - Edit files with find/replace
- `write_file` - Create or overwrite files
- `bash` - Execute shell commands
- Real-time logging of agent actions
- `:CoderAgent`, `:CoderAgentToggle`, `:CoderAgentStop` commands
- **Transform Commands** - Transform /@ @/ tags inline without split view
- `:CoderTransform` - Transform all tags in file
- `:CoderTransformCursor` - Transform tag at cursor
- `:CoderTransformVisual` - Transform selected tags
- Default keymaps: `<leader>ctt` (cursor/visual), `<leader>ctT` (all)
- **Auto-Index Feature** - Automatically create coder companion files
- Creates `.coder.` companion files when opening source files
- Language-aware templates with correct comment syntax
- `:CoderIndex` command to manually open companion
- `<leader>ci` keymap
- Configurable via `auto_index` option (disabled by default)
- **Logs Panel** - Real-time visibility into LLM operations
- Token usage tracking (prompt and completion tokens)
- "Thinking" process visibility
- Request/response logging
- `:CoderLogs` command to toggle panel
- **Mode Switcher** - Switch between Ask and Agent modes
- `:CoderType` command shows mode selection UI
### Changed
- Improved code generation prompts to explicitly request only raw code output (no explanations, markdown, or code fences)
- Window width configuration now uses percentage as whole number (e.g., `25` for 25%)
- Improved code extraction from LLM responses
- Better prompt templates for code generation
### Fixed
- Window width calculation consistency across modules
---
## [0.2.0] - 2026-01-11
@@ -87,6 +194,8 @@ and this project adheres to [Semantic Versioning](https://semver.org/spec/v2.0.0
- **Fixed** - Bug fixes
- **Security** - Vulnerability fixes
[Unreleased]: https://github.com/cargdev/codetyper.nvim/compare/v0.4.0...HEAD
[0.4.0]: https://github.com/cargdev/codetyper.nvim/compare/v0.3.0...v0.4.0
[0.3.0]: https://github.com/cargdev/codetyper.nvim/compare/v0.2.0...v0.3.0
[0.2.0]: https://github.com/cargdev/codetyper.nvim/compare/v0.1.0...v0.2.0
[0.1.0]: https://github.com/cargdev/codetyper.nvim/releases/tag/v0.1.0

README.md

@@ -7,29 +7,36 @@
## ✨ Features
- 📐 **Split View**: Work with your code and prompts side by side
- 💬 **Ask Panel**: Chat interface for questions and explanations
- 🤖 **Agent Mode**: Autonomous coding agent with tool use (read, edit, write, bash)
- 🏷️ **Tag-based Prompts**: Use `/@` and `@/` tags to write natural language prompts
- ✏️ **Transform Commands**: Transform prompts inline without leaving your file
- 🔌 **Multiple LLM Providers**: Claude, OpenAI, Gemini, Copilot, and Ollama (local)
- 📋 **Event-Driven Scheduler**: Queue-based processing with optimistic execution
- 🎯 **Tree-sitter Scope Resolution**: Smart context extraction for functions/methods
- 🧠 **Intent Detection**: Understands complete, refactor, fix, add, document intents
- 📊 **Confidence Scoring**: Automatic escalation from local to remote LLMs
- 🛡️ **Completion-Aware**: Safe injection that doesn't fight with autocomplete
- 📁 **Auto-Index**: Automatically create coder companion files on file open
- 📜 **Logs Panel**: Real-time visibility into LLM requests and token usage
- 🔒 **Git Integration**: Automatically adds `.coder.*` files to `.gitignore`
- 🌳 **Project Tree Logging**: Maintains a `tree.log` tracking your project structure
---
## 📚 Table of Contents
- [Requirements](#-requirements)
- [Installation](#-installation)
- [Quick Start](#-quick-start)
- [Configuration](#-configuration)
- [LLM Providers](#-llm-providers)
- [Commands Reference](#-commands-reference)
- [Usage Guide](#-usage-guide)
- [Agent Mode](#-agent-mode)
- [Keymaps](#-keymaps)
- [Health Check](#-health-check)
- [Contributing](#-contributing)
---
@@ -37,7 +44,7 @@
- Neovim >= 0.8.0
- curl (for API calls)
- One of: Claude API key, OpenAI API key, Gemini API key, GitHub Copilot, or Ollama running locally
---
@@ -48,16 +55,16 @@
```lua
{
  "cargdev/codetyper.nvim",
  cmd = { "Coder", "CoderOpen", "CoderToggle", "CoderAgent" },
  keys = {
    { "<leader>co", "<cmd>Coder open<cr>", desc = "Coder: Open" },
    { "<leader>ct", "<cmd>Coder toggle<cr>", desc = "Coder: Toggle" },
    { "<leader>cp", "<cmd>Coder process<cr>", desc = "Coder: Process" },
    { "<leader>ca", "<cmd>CoderAgentToggle<cr>", desc = "Coder: Agent" },
  },
  config = function()
    require("codetyper").setup({
      llm = {
        provider = "claude", -- or "openai", "gemini", "copilot", "ollama"
      },
    })
  end,
@@ -93,8 +100,6 @@ using regex, return boolean @/
**3. The LLM generates code and injects it into `utils.ts` (right panel)**
That's it! You're now coding with AI assistance. 🎉
---
## ⚙️ Configuration
@@ -103,37 +108,66 @@ That's it! You're now coding with AI assistance. 🎉
require("codetyper").setup({
  -- LLM Provider Configuration
  llm = {
    provider = "claude", -- "claude", "openai", "gemini", "copilot", or "ollama"

    -- Claude (Anthropic) settings
    claude = {
      api_key = nil, -- Uses ANTHROPIC_API_KEY env var if nil
      model = "claude-sonnet-4-20250514",
    },

    -- OpenAI settings
    openai = {
      api_key = nil, -- Uses OPENAI_API_KEY env var if nil
      model = "gpt-4o",
      endpoint = nil, -- Custom endpoint (Azure, OpenRouter, etc.)
    },

    -- Google Gemini settings
    gemini = {
      api_key = nil, -- Uses GEMINI_API_KEY env var if nil
      model = "gemini-2.0-flash",
    },

    -- GitHub Copilot settings (uses copilot.lua/copilot.vim auth)
    copilot = {
      model = "gpt-4o",
    },

    -- Ollama (local) settings
    ollama = {
      host = "http://localhost:11434",
      model = "deepseek-coder:6.7b",
    },
  },

  -- Window Configuration
  window = {
    width = 25,         -- Percentage of screen width (25 = 25%)
    position = "left",  -- "left" or "right"
    border = "rounded", -- Border style for floating windows
  },

  -- Prompt Tag Patterns
  patterns = {
    open_tag = "/@",
    close_tag = "@/",
    file_pattern = "*.coder.*",
  },

  -- Auto Features
  auto_gitignore = true, -- Automatically add coder files to .gitignore
  auto_open_ask = true,  -- Auto-open Ask panel on startup
  auto_index = false,    -- Auto-create coder companion files on file open

  -- Event-Driven Scheduler
  scheduler = {
    enabled = true,             -- Enable event-driven prompt processing
    ollama_scout = true,        -- Use Ollama for first attempt (fast local)
    escalation_threshold = 0.7, -- Below this confidence, escalate to remote
    max_concurrent = 2,         -- Max parallel workers
    completion_delay_ms = 100,  -- Delay injection after completion popup
  },
})
```
@@ -141,334 +175,238 @@ require("codetyper").setup({
| Variable | Description |
|----------|-------------|
| `ANTHROPIC_API_KEY` | Claude API key |
| `OPENAI_API_KEY` | OpenAI API key |
| `GEMINI_API_KEY` | Google Gemini API key |
---
## 🔌 LLM Providers
### Claude (Anthropic)
Best for complex reasoning and code generation.
```lua
llm = {
  provider = "claude",
  claude = { model = "claude-sonnet-4-20250514" },
}
```
### OpenAI
Supports custom endpoints for Azure, OpenRouter, etc.
```lua
llm = {
  provider = "openai",
  openai = {
    model = "gpt-4o",
    endpoint = "https://api.openai.com/v1/chat/completions", -- optional
  },
}
```
### Google Gemini
Fast and capable.
```lua
llm = {
  provider = "gemini",
  gemini = { model = "gemini-2.0-flash" },
}
```
### GitHub Copilot
Uses your existing Copilot subscription (requires copilot.lua or copilot.vim).
```lua
llm = {
  provider = "copilot",
  copilot = { model = "gpt-4o" },
}
```
### Ollama (Local)
Run models locally with no API costs.
```lua
llm = {
  provider = "ollama",
  ollama = {
    host = "http://localhost:11434",
    model = "deepseek-coder:6.7b",
  },
}
```
---
## 📝 Commands Reference
### Main Commands
| Command | Description |
|---------|-------------|
| `:Coder {subcommand}` | Main command with subcommands |
| `:CoderOpen` | Open the coder split view |
| `:CoderClose` | Close the coder split view |
| `:CoderToggle` | Toggle the coder split view |
| `:CoderProcess` | Process the last prompt |
### Ask Panel
| Command | Description |
|---------|-------------|
| `:CoderAsk` | Open the Ask panel |
| `:CoderAskToggle` | Toggle the Ask panel |
| `:CoderAskClear` | Clear chat history |
---
### Agent Mode
| Command | Description |
|---------|-------------|
| `:CoderAgent` | Open the Agent panel |
| `:CoderAgentToggle` | Toggle the Agent panel |
| `:CoderAgentStop` | Stop the running agent |
### Transform Commands
| Command | Description |
|---------|-------------|
| `:CoderTransform` | Transform all /@ @/ tags in file |
| `:CoderTransformCursor` | Transform tag at cursor position |
| `:CoderTransformVisual` | Transform selected tags (visual mode) |
### Utility Commands
| Command | Description |
|---------|-------------|
| `:CoderIndex` | Open coder companion for current file |
| `:CoderLogs` | Toggle logs panel |
| `:CoderType` | Switch between Ask/Agent modes |
| `:CoderTree` | Refresh tree.log |
| `:CoderTreeView` | View tree.log |
---
## 📖 Usage Guide
### Tag-Based Prompts
Write prompts in your coder file using `/@` and `@/` tags:
```typescript
/@ Create a Button component with the following props:
- variant: 'primary' | 'secondary' | 'danger'
- size: 'sm' | 'md' | 'lg'
- disabled: boolean
- onClick: function
Use Tailwind CSS for styling @/
```
When you close the tag with `@/`, you'll be prompted to process. Or manually:
```vim
:Coder process
```

### Transform Commands
Transform prompts inline without the split view:
```typescript
// In your source file:
/@ Add input validation for email and password @/
// Run :CoderTransformCursor to transform the prompt at cursor
```
---
### Prompt Types
The plugin auto-detects prompt type:
| Keywords | Type | Behavior |
|----------|------|----------|
| `refactor`, `rewrite` | Refactor | Replaces code |
| `add`, `create`, `implement` | Add | Inserts new code |
| `document`, `comment` | Document | Adds documentation |
| `explain`, `what`, `how` | Explain | Shows explanation only |
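A minimal sketch of this keyword detection (patterns and names are illustrative, not the plugin's actual `intent.lua`):
```lua
-- Illustrative keyword rules; first match wins.
local rules = {
  { type = "refactor", words = { "refactor", "rewrite" } },
  { type = "add",      words = { "add", "create", "implement" } },
  { type = "document", words = { "document", "comment" } },
  { type = "explain",  words = { "explain", "what", "how" } },
}

local function detect_type(prompt)
  local lower = prompt:lower()
  for _, rule in ipairs(rules) do
    for _, word in ipairs(rule.words) do
      if lower:find(word, 1, true) then return rule.type end
    end
  end
  return "generic"
end
```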
---
## 🤖 Agent Mode
The Agent mode provides an autonomous coding assistant with tool access:
### Available Tools
- **read_file**: Read file contents
- **edit_file**: Edit files with find/replace
- **write_file**: Create or overwrite files
- **bash**: Execute shell commands
### Using Agent Mode
1. Open the agent panel: `:CoderAgent` or `<leader>ca`
2. Describe what you want to accomplish
3. The agent will use tools to complete the task
4. Review changes before they're applied
### Agent Keymaps
| Key | Description |
|-----|-------------|
| `<CR>` | Submit message |
| `Ctrl+c` | Stop agent execution |
| `q` | Close agent panel |
---
## ⌨️ Keymaps
### Default Keymaps (auto-configured)
| Key | Mode | Description |
|-----|------|-------------|
| `<leader>ctt` | Normal | Transform tag at cursor |
| `<leader>ctt` | Visual | Transform selected tags |
| `<leader>ctT` | Normal | Transform all tags in file |
| `<leader>ca` | Normal | Toggle Agent panel |
| `<leader>ci` | Normal | Open coder companion (index) |
### Ask Panel Keymaps
| Key | Description |
|-----|-------------|
| `@` | Attach/reference a file |
| `Ctrl+Enter` | Submit question |
| `Ctrl+n` | Start new chat |
| `Ctrl+f` | Add current file as context |
| `q` | Close panel |
| `Y` | Copy last response |
### Suggested Additional Keymaps
```lua
local map = vim.keymap.set
map("n", "<leader>co", "<cmd>Coder open<cr>", { desc = "Coder: Open" })
map("n", "<leader>cc", "<cmd>Coder close<cr>", { desc = "Coder: Close" })
map("n", "<leader>ct", "<cmd>Coder toggle<cr>", { desc = "Coder: Toggle" })
map("n", "<leader>cp", "<cmd>Coder process<cr>", { desc = "Coder: Process" })
map("n", "<leader>cs", "<cmd>Coder status<cr>", { desc = "Coder: Status" })
```
---
## 🏥 Health Check
Verify your setup:
```vim
:checkhealth codetyper
```
This checks:
- Neovim version
- curl availability
- LLM configuration
- API key status
- Telescope availability (optional)
---
## 📁 File Structure
```
your-project/
@@ -477,105 +415,9 @@ your-project/
├── src/
│ ├── index.ts # Your source file
│ ├── index.coder.ts # Coder file (gitignored)
│ ├── utils.ts
│ └── utils.coder.ts
└── .gitignore # Auto-updated with coder patterns
```
---
## 🤝 Contributing
@@ -590,13 +432,13 @@ MIT License - see [LICENSE](LICENSE) for details.
---
## 👨‍💻 Author
**cargdev**
- Website: [cargdev.io](https://cargdev.io)
- Blog: [blog.cargdev.io](https://blog.cargdev.io)
- Email: carlos.gutierrez@carg.dev
---


@@ -11,34 +11,41 @@ CONTENTS *codetyper-contents*
2. Requirements ............................ |codetyper-requirements|
3. Installation ............................ |codetyper-installation|
4. Configuration ........................... |codetyper-configuration|
5. LLM Providers ........................... |codetyper-providers|
6. Usage ................................... |codetyper-usage|
7. Commands ................................ |codetyper-commands|
8. Agent Mode .............................. |codetyper-agent|
9. Transform Commands ...................... |codetyper-transform|
10. Keymaps ................................ |codetyper-keymaps|
11. API .................................... |codetyper-api|
==============================================================================
1. INTRODUCTION *codetyper-introduction*
Codetyper.nvim is an AI-powered coding partner that helps you write code
faster using LLM APIs with a unique workflow.
Key features:
- Split view with coder file and target file side by side
- Prompt-based code generation using /@ ... @/ tags
- Support for Claude, OpenAI, Gemini, Copilot, and Ollama providers
- Agent mode with autonomous tool use (read, edit, write, bash)
- Transform commands for inline prompt processing
- Auto-index feature for automatic companion file creation
- Automatic .gitignore management
- Real-time logs panel with token usage tracking
==============================================================================
2. REQUIREMENTS *codetyper-requirements*
- Neovim >= 0.8.0
- curl (for API calls)
- One of:
- Claude API key (ANTHROPIC_API_KEY)
- OpenAI API key (OPENAI_API_KEY)
- Gemini API key (GEMINI_API_KEY)
- GitHub Copilot (via copilot.lua or copilot.vim)
- Ollama running locally
==============================================================================
3. INSTALLATION *codetyper-installation*
@@ -50,10 +57,7 @@ Using lazy.nvim: >lua
    config = function()
      require("codetyper").setup({
        llm = {
          provider = "claude", -- or "openai", "gemini", "copilot", "ollama"
        },
      })
    end,
@@ -75,19 +79,31 @@ Default configuration: >lua
    require("codetyper").setup({
      llm = {
        provider = "claude", -- "claude", "openai", "gemini", "copilot", "ollama"
        claude = {
          api_key = nil, -- Uses ANTHROPIC_API_KEY env var if nil
          model = "claude-sonnet-4-20250514",
        },
        openai = {
          api_key = nil, -- Uses OPENAI_API_KEY env var if nil
          model = "gpt-4o",
          endpoint = nil, -- Custom endpoint (Azure, OpenRouter, etc.)
        },
        gemini = {
          api_key = nil, -- Uses GEMINI_API_KEY env var if nil
          model = "gemini-2.0-flash",
        },
        copilot = {
          model = "gpt-4o", -- Uses OAuth from copilot.lua/copilot.vim
        },
        ollama = {
          host = "http://localhost:11434",
          model = "deepseek-coder:6.7b",
        },
      },
      window = {
        width = 25, -- Percentage of screen width (25 = 25%)
        position = "left",
        border = "rounded",
      },
      patterns = {
@@ -96,10 +112,67 @@ Default configuration: >lua
        file_pattern = "*.coder.*",
      },
      auto_gitignore = true,
      auto_open_ask = true,
      auto_index = false, -- Auto-create coder companion files
    })
<
==============================================================================
5. LLM PROVIDERS *codetyper-providers*
*codetyper-claude*
Claude (Anthropic)~
Best for complex reasoning and code generation.
>lua
    llm = {
      provider = "claude",
      claude = { model = "claude-sonnet-4-20250514" },
    }
<
*codetyper-openai*
OpenAI~
Supports custom endpoints for Azure, OpenRouter, etc.
>lua
    llm = {
      provider = "openai",
      openai = {
        model = "gpt-4o",
        endpoint = nil, -- optional custom endpoint
      },
    }
<
*codetyper-gemini*
Google Gemini~
Fast and capable.
>lua
    llm = {
      provider = "gemini",
      gemini = { model = "gemini-2.0-flash" },
    }
<
*codetyper-copilot*
GitHub Copilot~
Uses your existing Copilot subscription.
Requires copilot.lua or copilot.vim to be configured.
>lua
    llm = {
      provider = "copilot",
      copilot = { model = "gpt-4o" },
    }
<
*codetyper-ollama*
Ollama (Local)~
Run models locally with no API costs.
>lua
    llm = {
      provider = "ollama",
      ollama = {
        host = "http://localhost:11434",
        model = "deepseek-coder:6.7b",
      },
    }
<
==============================================================================
6. USAGE *codetyper-usage*
1. Open any file (e.g., `index.ts`)
2. Run `:Coder open` to create/open the corresponding coder file
@@ -113,8 +186,17 @@ Default configuration: >lua
- Generate the code
- Inject it into the target file
Prompt Types~
The plugin detects the type of request from your prompt:
- "refactor" / "rewrite" - Modifies existing code
- "add" / "create" / "implement" - Adds new code
- "document" / "comment" - Adds documentation
- "explain" - Provides explanations (no code injection)
==============================================================================
7. COMMANDS *codetyper-commands*
*:Coder*
:Coder [subcommand]
@@ -143,8 +225,55 @@ Default configuration: >lua
*:CoderProcess*
:CoderProcess
Process the last prompt in the current coder buffer.
*:CoderAsk*
:CoderAsk
Open the Ask panel for questions and explanations.
*:CoderAskToggle*
:CoderAskToggle
Toggle the Ask panel.
*:CoderAskClear*
:CoderAskClear
Clear Ask panel chat history.
*:CoderAgent*
:CoderAgent
Open the Agent panel for autonomous coding tasks.
*:CoderAgentToggle*
:CoderAgentToggle
Toggle the Agent panel.
*:CoderAgentStop*
:CoderAgentStop
Stop the currently running agent.
*:CoderTransform*
:CoderTransform
Transform all /@ @/ tags in the current file.
*:CoderTransformCursor*
:CoderTransformCursor
Transform the /@ @/ tag at cursor position.
*:CoderTransformVisual*
:CoderTransformVisual
Transform selected /@ @/ tags (visual mode).
*:CoderIndex*
:CoderIndex
Open coder companion file for current source file.
*:CoderLogs*
:CoderLogs
Toggle the logs panel showing LLM request details.
*:CoderType*
:CoderType
Show mode switcher UI (Ask/Agent).
*:CoderTree*
:CoderTree
@@ -155,54 +284,83 @@ Default configuration: >lua
Open the tree.log file in a vertical split for viewing.
==============================================================================
8. AGENT MODE *codetyper-agent*

Agent mode provides an autonomous coding assistant with tool access.

Available Tools~

- read_file Read file contents at a path
- edit_file Edit files with find/replace
- write_file Create or overwrite files
- bash Execute shell commands

Using Agent Mode~

1. Open the agent panel: `:CoderAgent` or `<leader>ca`
2. Describe what you want to accomplish
3. The agent will use tools to complete the task
4. Review changes before they're applied

Agent Keymaps~

<CR> Submit message
Ctrl+c Stop agent execution
q Close agent panel
==============================================================================
9. TRANSFORM COMMANDS *codetyper-transform*
Transform commands allow you to process /@ @/ tags inline without
opening the split view.
*:CoderTransform*
:CoderTransform
Find and transform all /@ @/ tags in the current buffer.
Each tag is replaced with generated code.
*:CoderTransformCursor*
:CoderTransformCursor
Transform the /@ @/ tag at the current cursor position.
Useful for processing a single prompt.
*:CoderTransformVisual*
:'<,'>CoderTransformVisual
Transform /@ @/ tags within the visual selection.
Select lines containing tags and run this command.
Example~
>
// In your source file:
/@ Add input validation for email @/
// After running :CoderTransformCursor:
function validateEmail(email) {
return /^[^\s@]+@[^\s@]+\.[^\s@]+$/.test(email);
}
<
==============================================================================
10. KEYMAPS *codetyper-keymaps*
Default keymaps (auto-configured):
<leader>ctt (Normal) Transform tag at cursor
<leader>ctt (Visual) Transform selected tags
<leader>ctT (Normal) Transform all tags in file
<leader>ca (Normal) Toggle Agent panel
<leader>ci (Normal) Open coder companion (index)
Ask Panel keymaps:
@ Attach/reference a file
Ctrl+Enter Submit question
Ctrl+n Start new chat
Ctrl+f Add current file as context
q Close panel
Y Copy last response
==============================================================================
11. API *codetyper-api*
*codetyper.setup()*
codetyper.setup({opts})
llms.txt
@@ -4,7 +4,7 @@
## Overview
Codetyper.nvim is a Neovim plugin written in Lua that acts as an AI-powered coding partner. It integrates with multiple LLM APIs (Claude, OpenAI, Gemini, Copilot, Ollama) to help developers write code faster using a unique prompt-based workflow.
## Core Concept
@@ -27,15 +27,39 @@ lua/codetyper/
```
lua/codetyper/
├── commands.lua # Vim command definitions (:Coder, :CoderOpen, etc.)
├── window.lua # Split window management (open, close, toggle)
├── parser.lua # Parses /@ @/ tags from buffer content
├── gitignore.lua # Manages .gitignore entries for coder files
├── autocmds.lua # Autocommands for tag detection, filetype, auto-index
├── inject.lua # Code injection strategies
├── health.lua # Health check for :checkhealth
├── tree.lua # Project tree logging (.coder/tree.log)
├── logs_panel.lua # Standalone logs panel UI
├── llm/
│   ├── init.lua # LLM interface, provider selection
│   ├── claude.lua # Claude API client (Anthropic)
│   ├── openai.lua # OpenAI API client (with custom endpoint support)
│   ├── gemini.lua # Google Gemini API client
│   ├── copilot.lua # GitHub Copilot client (uses OAuth from copilot.lua/vim)
│   └── ollama.lua # Ollama API client (local LLMs)
├── agent/
│ ├── init.lua # Agent system entry point
│ ├── ui.lua # Agent panel UI
│ ├── logs.lua # Logging system with listeners
│ ├── tools.lua # Tool definitions (read_file, edit_file, write_file, bash)
│ ├── executor.lua # Tool execution logic
│ ├── parser.lua # Parse tool calls from LLM responses
│ ├── queue.lua # Event queue with priority heap
│ ├── patch.lua # Patch candidates with staleness detection
│ ├── confidence.lua # Response confidence scoring heuristics
│ ├── worker.lua # Async LLM worker wrapper
│ ├── scheduler.lua # Event scheduler with completion-awareness
│ ├── scope.lua # Tree-sitter scope resolution
│ └── intent.lua # Intent detection from prompts
├── ask/
│ ├── init.lua # Ask panel entry point
│ └── ui.lua # Ask panel UI (chat interface)
└── prompts/
├── init.lua # System prompts for code generation
└── agent.lua # Agent-specific prompts and tool instructions
```
## .coder/ Folder
@@ -47,51 +71,240 @@ The plugin automatically creates and maintains a `.coder/` folder in your projec
```
.coder/
└── tree.log # Project structure, auto-updated on file changes
```
The `tree.log` contains:
- Project name and timestamp
- Full directory tree with file type icons
- Automatically ignores: hidden files, node_modules, .git, build folders, coder files
## Key Features
Tree updates are triggered by:
- `BufWritePost` - When files are saved
- `BufNewFile` - When new files are created
- `BufDelete` - When files are deleted
- `DirChanged` - When changing directories
### 1. Multiple LLM Providers
Updates are debounced (1 second) to prevent excessive writes.
## Key Functions
### Setup
```lua
require("codetyper").setup({
llm = { provider = "claude" | "ollama", ... },
window = { width = 0.4, position = "left" },
patterns = { open_tag = "/@", close_tag = "@/" },
auto_gitignore = true,
})
llm = {
provider = "claude", -- "claude", "openai", "gemini", "copilot", "ollama"
claude = { api_key = nil, model = "claude-sonnet-4-20250514" },
openai = { api_key = nil, model = "gpt-4o", endpoint = nil },
gemini = { api_key = nil, model = "gemini-2.0-flash" },
copilot = { model = "gpt-4o" },
ollama = { host = "http://localhost:11434", model = "deepseek-coder:6.7b" },
}
```
### 2. Agent Mode
Autonomous coding assistant with tool access:
- `read_file` - Read file contents
- `edit_file` - Edit files with find/replace
- `write_file` - Create or overwrite files
- `bash` - Execute shell commands
### 3. Transform Commands
Transform `/@ @/` tags inline without split view:
- `:CoderTransform` - Transform all tags in file
- `:CoderTransformCursor` - Transform tag at cursor
- `:CoderTransformVisual` - Transform selected tags
### 4. Auto-Index
Automatically create coder companion files when opening source files:
```lua
auto_index = true -- disabled by default
```
### 5. Logs Panel
Real-time visibility into LLM operations with token usage tracking.
### 6. Event-Driven Scheduler
Prompts are treated as events, not commands:
```
User types /@...@/ → Event queued → Scheduler dispatches → Worker processes → Patch created → Safe injection
```
**Key concepts:**
- **PromptEvent**: Captures buffer state (changedtick, content hash) at prompt time
- **Optimistic Execution**: Ollama as fast scout, escalate to remote LLMs if confidence low
- **Confidence Scoring**: 5 heuristics (length, uncertainty, syntax, repetition, truncation)
- **Staleness Detection**: Discard patches if buffer changed during generation
- **Completion Safety**: Defer injection while autocomplete popup visible
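The weighted total behind these scores can be sketched in plain Lua. The weights below mirror the defaults in `confidence.lua`; `weighted_total` itself is a hypothetical standalone helper for illustration, not the module's API:

```lua
-- Sketch: combine per-heuristic scores (each 0.0-1.0) into a weighted total.
-- Weights mirror the confidence.lua defaults and must sum to 1.0.
local weights = {
  length = 0.15, uncertainty = 0.30, syntax = 0.25,
  repetition = 0.15, truncation = 0.15,
}

local function weighted_total(scores)
  local total = 0
  for key, weight in pairs(weights) do
    total = total + (scores[key] or 0) * weight
  end
  return total
end

-- A response that looks complete but hedges heavily:
local c = weighted_total({
  length = 1.0, uncertainty = 0.2, syntax = 1.0,
  repetition = 1.0, truncation = 1.0,
})
-- c = 0.76, just above the default 0.7 threshold, so no escalation
```

Because uncertainty carries the largest weight (0.30), hedging language alone can push an otherwise clean response close to the escalation boundary.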
**Configuration:**
```lua
scheduler = {
enabled = true, -- Enable event-driven mode
ollama_scout = true, -- Use Ollama first
escalation_threshold = 0.7, -- Below this → escalate
max_concurrent = 2, -- Parallel workers
completion_delay_ms = 100, -- Wait after popup closes
}
```
### 7. Tree-sitter Scope Resolution
Prompts automatically resolve to their enclosing function/method/class:
```lua
function foo()
/@ complete this function @/ -- Resolves to `foo`
end
```
**Scope types:** `function`, `method`, `class`, `block`, `file`
For replacement intents (complete, refactor, fix), the entire scope is extracted
and sent to the LLM, then replaced with the transformed version.
### 8. Intent Detection
The system parses prompts to detect user intent:
| Intent | Keywords | Action |
|--------|----------|--------|
| complete | complete, finish, implement | replace |
| refactor | refactor, rewrite, simplify | replace |
| fix | fix, repair, debug, bug | replace |
| add | add, create, insert, new | insert |
| document | document, comment, jsdoc | replace |
| test | test, spec, unit test | append |
| optimize | optimize, performance, faster | replace |
| explain | explain, what, how, why | none |
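A simplified sketch of this keyword matching (the real `intent.lua` adds priorities, scope hints, and a confidence estimate; the table entries here are an illustrative subset):

```lua
-- Sketch: first intent whose keyword appears in the prompt wins.
local intents = {
  { type = "complete", action = "replace", keywords = { "complete", "finish", "implement" } },
  { type = "fix",      action = "replace", keywords = { "fix", "repair", "debug", "bug" } },
  { type = "add",      action = "insert",  keywords = { "add", "create", "insert", "new" } },
  { type = "explain",  action = "none",    keywords = { "explain", "what", "how", "why" } },
}

local function detect(prompt)
  local lower = prompt:lower()
  for _, intent in ipairs(intents) do
    for _, kw in ipairs(intent.keywords) do
      -- Plain-text find (no Lua patterns), matching intent.lua's approach
      if lower:find(kw, 1, true) then
        return intent.type, intent.action
      end
    end
  end
  return "add", "insert" -- default when no keyword matches
end

print(detect("fix the bug in this parser")) -- fix	replace
```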
### 9. Tag Precedence
Multiple tags in the same scope follow "first tag wins" rule:
- Earlier (by line number) unresolved tag processes first
- Later tags in same scope are skipped with warning
- Different scopes process independently
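The precedence rule reduces to keeping the lowest line number per scope, sketched here as a pure-Lua helper (hypothetical, not the plugin's API):

```lua
-- Sketch: given unresolved tags grouped by scope, keep only the earliest
-- tag (lowest line number) per scope; later tags in that scope are skipped.
local function first_tag_per_scope(tags)
  local winners = {}
  for _, tag in ipairs(tags) do
    local current = winners[tag.scope]
    if not current or tag.line < current.line then
      winners[tag.scope] = tag
    end
  end
  return winners
end

local picked = first_tag_per_scope({
  { scope = "foo", line = 12 },
  { scope = "foo", line = 5 },  -- earlier: this one wins for scope `foo`
  { scope = "bar", line = 20 }, -- different scope: processed independently
})
-- picked.foo.line == 5, picked.bar.line == 20
```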
## Commands
### Main Commands
- `:Coder open` - Opens split view with coder file
- `:Coder close` - Closes the split
- `:Coder toggle` - Toggles the view
- `:Coder process` - Manually triggers code generation
- `:Coder status` - Shows configuration status and project stats
- `:Coder tree` - Manually refresh tree.log
- `:Coder tree-view` - Open tree.log in split view
### Prompt Tags
- Opening tag: `/@`
- Closing tag: `@/`
- Content between tags is the prompt sent to LLM
### Ask Panel
- `:CoderAsk` - Open Ask panel
- `:CoderAskToggle` - Toggle Ask panel
- `:CoderAskClear` - Clear chat history
### Prompt Types (Auto-detected)
- `refactor` - Modifies existing code
- `add` - Adds new code at cursor/end
- `document` - Adds documentation/comments
- `explain` - Explanations (no code injection)
- `generic` - User chooses injection method
### Agent Mode
- `:CoderAgent` - Open Agent panel
- `:CoderAgentToggle` - Toggle Agent panel
- `:CoderAgentStop` - Stop running agent
### Transform
- `:CoderTransform` - Transform all tags
- `:CoderTransformCursor` - Transform at cursor
- `:CoderTransformVisual` - Transform selection
### Utility
- `:CoderIndex` - Open coder companion
- `:CoderLogs` - Toggle logs panel
- `:CoderType` - Switch Ask/Agent mode
- `:CoderTree` - Refresh tree.log
- `:CoderTreeView` - View tree.log
## Configuration Schema
```lua
{
llm = {
provider = "claude", -- "claude" | "openai" | "gemini" | "copilot" | "ollama"
claude = {
api_key = nil, -- string, uses ANTHROPIC_API_KEY env if nil
model = "claude-sonnet-4-20250514",
},
openai = {
api_key = nil, -- string, uses OPENAI_API_KEY env if nil
model = "gpt-4o",
endpoint = nil, -- custom endpoint for Azure, OpenRouter, etc.
},
gemini = {
api_key = nil, -- string, uses GEMINI_API_KEY env if nil
model = "gemini-2.0-flash",
},
copilot = {
model = "gpt-4o", -- uses OAuth from copilot.lua/copilot.vim
},
ollama = {
host = "http://localhost:11434",
model = "deepseek-coder:6.7b",
},
},
window = {
width = 25, -- percentage (25 = 25% of screen)
position = "left", -- "left" | "right"
border = "rounded",
},
patterns = {
open_tag = "/@",
close_tag = "@/",
file_pattern = "*.coder.*",
},
auto_gitignore = true,
auto_open_ask = true,
auto_index = false, -- auto-create coder companion files
scheduler = {
enabled = true, -- enable event-driven scheduler
ollama_scout = true, -- use Ollama as fast scout
escalation_threshold = 0.7,
max_concurrent = 2,
completion_delay_ms = 100,
},
}
```
## LLM Integration
### Claude API
- Endpoint: `https://api.anthropic.com/v1/messages`
- Uses `x-api-key` header for authentication
- Supports tool use for agent mode
### OpenAI API
- Endpoint: `https://api.openai.com/v1/chat/completions` (configurable)
- Uses `Authorization: Bearer` header
- Supports tool use for agent mode
- Compatible with Azure, OpenRouter, and other OpenAI-compatible APIs
### Gemini API
- Endpoint: `https://generativelanguage.googleapis.com/v1beta/models`
- Uses API key in URL parameter
- Supports function calling for agent mode
### Copilot API
- Uses GitHub OAuth token from copilot.lua/copilot.vim
- Endpoint from token response (typically `api.githubcopilot.com`)
- OpenAI-compatible format
### Ollama API
- Endpoint: `{host}/api/generate` or `{host}/api/chat`
- No authentication required for local instances
- Tool use via prompt-based approach
## Agent Tool Definitions
```lua
tools = {
read_file = { path: string },
edit_file = { path: string, find: string, replace: string },
write_file = { path: string, content: string },
bash = { command: string, timeout?: number },
}
```
## Code Injection Strategies
1. **Refactor**: Replace entire file content
2. **Add**: Insert at cursor position in target file
3. **Document**: Insert above current function/class
4. **Generic**: Prompt user for action
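As a rough sketch of the "Add" strategy, assuming the generated code arrives as a single string (the actual logic lives in `inject.lua` and may differ; this requires a running Neovim instance):

```lua
-- Sketch: insert generated lines after the cursor line in the target buffer.
local function inject_at_cursor(bufnr, winid, code)
  local row = vim.api.nvim_win_get_cursor(winid)[1] -- 1-indexed cursor row
  local lines = vim.split(code, "\n", { plain = true })
  -- Passing the same start/end index inserts without replacing anything.
  vim.api.nvim_buf_set_lines(bufnr, row, row, false, lines)
end
```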
## File Naming Convention
@@ -103,62 +316,13 @@ require("codetyper").setup({
Pattern: `name.coder.extension`
## Dependencies
- **Required**: Neovim >= 0.8.0, curl
- **Optional**: telescope.nvim (enhanced file picker), copilot.lua or copilot.vim (for Copilot provider)
## Contact
- Author: cargdev
- Email: carlos.gutierrez@carg.dev
- Website: https://cargdev.io
- Blog: https://blog.cargdev.io
lua/codetyper/agent/confidence.lua
@@ -0,0 +1,328 @@
---@mod codetyper.agent.confidence Response confidence scoring
---@brief [[
--- Scores LLM responses using heuristics to decide if escalation is needed.
--- Returns 0.0-1.0 where higher = more confident the response is good.
---@brief ]]
local M = {}
--- Heuristic weights (must sum to 1.0)
M.weights = {
length = 0.15, -- Response length relative to prompt
uncertainty = 0.30, -- Uncertainty phrases
syntax = 0.25, -- Syntax completeness
repetition = 0.15, -- Duplicate lines
truncation = 0.15, -- Incomplete ending
}
--- Uncertainty phrases that indicate low confidence
local uncertainty_phrases = {
-- English
"i'm not sure",
"i am not sure",
"maybe",
"perhaps",
"might work",
"could work",
"not certain",
"uncertain",
"i think",
"possibly",
"TODO",
"FIXME",
"XXX",
"placeholder",
"implement this",
"fill in",
"your code here",
"...", -- Ellipsis as placeholder
"# TODO",
"// TODO",
"-- TODO",
"/* TODO",
}
--- Score based on response length relative to prompt
---@param response string
---@param prompt string
---@return number 0.0-1.0
local function score_length(response, prompt)
local response_len = #response
local prompt_len = #prompt
-- Very short response to long prompt is suspicious
if prompt_len > 50 and response_len < 20 then
return 0.2
end
-- Response should generally be longer than prompt for code generation
local ratio = response_len / math.max(prompt_len, 1)
if ratio < 0.5 then
return 0.3
elseif ratio < 1.0 then
return 0.6
elseif ratio < 2.0 then
return 0.8
else
return 1.0
end
end
--- Score based on uncertainty phrases
---@param response string
---@return number 0.0-1.0
local function score_uncertainty(response)
local lower = response:lower()
local found = 0
for _, phrase in ipairs(uncertainty_phrases) do
if lower:find(phrase:lower(), 1, true) then
found = found + 1
end
end
-- More uncertainty phrases = lower score
if found == 0 then
return 1.0
elseif found == 1 then
return 0.7
elseif found == 2 then
return 0.5
else
return 0.2
end
end
--- Check bracket balance for common languages
---@param response string
---@return boolean balanced
local function check_brackets(response)
-- Use a distinct name to avoid shadowing Lua's built-in `pairs`
local bracket_pairs = {
["{"] = "}",
["["] = "]",
["("] = ")",
}
local stack = {}
for char in response:gmatch(".") do
if bracket_pairs[char] then
table.insert(stack, bracket_pairs[char])
elseif char == "}" or char == "]" or char == ")" then
if #stack == 0 or stack[#stack] ~= char then
return false
end
table.remove(stack)
end
end
return #stack == 0
end
--- Score based on syntax completeness
---@param response string
---@return number 0.0-1.0
local function score_syntax(response)
local score = 1.0
-- Check bracket balance
if not check_brackets(response) then
score = score - 0.4
end
-- Check for common incomplete patterns
-- Lua: unbalanced end/function
local function_count = select(2, response:gsub("function%s*%(", ""))
+ select(2, response:gsub("function%s+%w+%(", ""))
local end_count = select(2, response:gsub("%f[%w]end%f[%W]", ""))
if function_count > end_count + 2 then
score = score - 0.2
end
-- JavaScript/TypeScript: unclosed template literals
local backtick_count = select(2, response:gsub("`", ""))
if backtick_count % 2 ~= 0 then
score = score - 0.2
end
-- String quotes balance
local double_quotes = select(2, response:gsub('"', ""))
local single_quotes = select(2, response:gsub("'", ""))
-- Allow for escaped quotes by being lenient
if double_quotes % 2 ~= 0 and not response:find('\\"') then
score = score - 0.1
end
if single_quotes % 2 ~= 0 and not response:find("\\'") then
score = score - 0.1
end
return math.max(0, score)
end
--- Score based on line repetition
---@param response string
---@return number 0.0-1.0
local function score_repetition(response)
local lines = vim.split(response, "\n", { plain = true })
if #lines < 3 then
return 1.0
end
-- Count duplicate non-empty lines
local seen = {}
local duplicates = 0
for _, line in ipairs(lines) do
local trimmed = vim.trim(line)
if #trimmed > 10 then -- Only check substantial lines
if seen[trimmed] then
duplicates = duplicates + 1
end
seen[trimmed] = true
end
end
local dup_ratio = duplicates / #lines
if dup_ratio < 0.1 then
return 1.0
elseif dup_ratio < 0.2 then
return 0.8
elseif dup_ratio < 0.3 then
return 0.5
else
return 0.2 -- High repetition = degraded output
end
end
--- Score based on truncation indicators
---@param response string
---@return number 0.0-1.0
local function score_truncation(response)
local score = 1.0
-- Ends with ellipsis
if response:match("%.%.%.$") then
score = score - 0.5
end
-- Ends with incomplete comment
if response:match("/%*[^*/]*$") then -- Unclosed /* comment
score = score - 0.4
end
if response:match("<!%-%-[^>]*$") then -- Unclosed HTML comment
score = score - 0.4
end
-- Ends mid-statement (common patterns)
local trimmed = vim.trim(response)
local last_char = trimmed:sub(-1)
-- Suspicious endings
if last_char == "=" or last_char == "," or last_char == "(" then
score = score - 0.3
end
-- Very short last line after long response
local lines = vim.split(response, "\n", { plain = true })
if #lines > 5 then
local last_line = vim.trim(lines[#lines])
-- A short trailing line is fine if it is just a closer ("}", "]", ")", ";") or "end"
if #last_line < 5 and not (last_line:match("^[%}%]%);]") or last_line == "end") then
score = score - 0.2
end
end
return math.max(0, score)
end
---@class ConfidenceBreakdown
---@field length number
---@field uncertainty number
---@field syntax number
---@field repetition number
---@field truncation number
---@field weighted_total number
--- Calculate confidence score for response
---@param response string The LLM response
---@param prompt string The original prompt
---@param context table|nil Additional context (unused for now)
---@return number confidence 0.0-1.0
---@return ConfidenceBreakdown breakdown Individual scores
function M.score(response, prompt, context)
_ = context -- Reserved for future use
if not response or #response == 0 then
return 0, {
length = 0,
uncertainty = 0,
syntax = 0,
repetition = 0,
truncation = 0,
weighted_total = 0,
}
end
local scores = {
length = score_length(response, prompt or ""),
uncertainty = score_uncertainty(response),
syntax = score_syntax(response),
repetition = score_repetition(response),
truncation = score_truncation(response),
}
-- Calculate weighted total
local weighted = 0
for key, weight in pairs(M.weights) do
weighted = weighted + (scores[key] * weight)
end
scores.weighted_total = weighted
return weighted, scores
end
--- Check if response needs escalation
---@param confidence number
---@param threshold number|nil Default: 0.7
---@return boolean needs_escalation
function M.needs_escalation(confidence, threshold)
threshold = threshold or 0.7
return confidence < threshold
end
--- Get human-readable confidence level
---@param confidence number
---@return string
function M.level_name(confidence)
if confidence >= 0.9 then
return "excellent"
elseif confidence >= 0.8 then
return "good"
elseif confidence >= 0.7 then
return "acceptable"
elseif confidence >= 0.5 then
return "uncertain"
else
return "poor"
end
end
--- Format breakdown for logging
---@param breakdown ConfidenceBreakdown
---@return string
function M.format_breakdown(breakdown)
return string.format(
"len:%.2f unc:%.2f syn:%.2f rep:%.2f tru:%.2f = %.2f",
breakdown.length,
breakdown.uncertainty,
breakdown.syntax,
breakdown.repetition,
breakdown.truncation,
breakdown.weighted_total
)
end
return M
lua/codetyper/agent/intent.lua
@@ -0,0 +1,312 @@
---@mod codetyper.agent.intent Intent detection from prompts
---@brief [[
--- Parses prompt content to determine user intent and target scope.
--- Intents determine how the generated code should be applied.
---@brief ]]
local M = {}
---@class Intent
---@field type string "complete"|"refactor"|"add"|"fix"|"document"|"test"|"explain"|"optimize"
---@field scope_hint string|nil "function"|"class"|"block"|"file"|"selection"|nil
---@field confidence number 0.0-1.0 how confident we are about the intent
---@field action string "replace"|"insert"|"append"|"none"
---@field keywords string[] Keywords that triggered this intent
--- Intent patterns with associated metadata
local intent_patterns = {
-- Complete: fill in missing implementation
complete = {
patterns = {
"complete",
"finish",
"implement",
"fill in",
"fill out",
"stub",
"todo",
"fixme",
},
scope_hint = "function",
action = "replace",
priority = 1,
},
-- Refactor: rewrite existing code
refactor = {
patterns = {
"refactor",
"rewrite",
"restructure",
"reorganize",
"clean up",
"cleanup",
"simplify",
"improve",
},
scope_hint = "function",
action = "replace",
priority = 2,
},
-- Fix: repair bugs or issues
fix = {
patterns = {
"fix",
"repair",
"correct",
"debug",
"solve",
"resolve",
"patch",
"bug",
"error",
"issue",
},
scope_hint = "function",
action = "replace",
priority = 1,
},
-- Add: insert new code
add = {
patterns = {
"add",
"create",
"insert",
"include",
"append",
"new",
"generate",
"write",
},
scope_hint = nil, -- Could be anywhere
action = "insert",
priority = 3,
},
-- Document: add documentation
document = {
patterns = {
"document",
"comment",
"jsdoc",
"docstring",
"describe",
"annotate",
"type hint",
"typehint",
},
scope_hint = "function",
action = "replace", -- Replace with documented version
priority = 2,
},
-- Test: generate tests
test = {
patterns = {
"test",
"spec",
"unit test",
"integration test",
"coverage",
},
scope_hint = "file",
action = "append",
priority = 3,
},
-- Optimize: improve performance
optimize = {
patterns = {
"optimize",
"performance",
"faster",
"efficient",
"speed up",
"reduce",
"minimize",
},
scope_hint = "function",
action = "replace",
priority = 2,
},
-- Explain: provide explanation (no code change)
explain = {
patterns = {
"explain",
"what does",
"how does",
"why",
"describe",
"walk through",
"understand",
},
scope_hint = "function",
action = "none",
priority = 4,
},
}
--- Scope hint patterns
local scope_patterns = {
["this function"] = "function",
["this method"] = "function",
["the function"] = "function",
["the method"] = "function",
["this class"] = "class",
["the class"] = "class",
["this file"] = "file",
["the file"] = "file",
["this block"] = "block",
["the block"] = "block",
["this"] = nil, -- Use Tree-sitter to determine
["here"] = nil,
}
--- Detect intent from prompt content
---@param prompt string The prompt content
---@return Intent
function M.detect(prompt)
local lower = prompt:lower()
local best_match = nil
local best_priority = 999
local matched_keywords = {}
-- Check each intent type
for intent_type, config in pairs(intent_patterns) do
for _, pattern in ipairs(config.patterns) do
if lower:find(pattern, 1, true) then
if config.priority < best_priority then
best_match = intent_type
best_priority = config.priority
matched_keywords = { pattern }
elseif config.priority == best_priority and best_match == intent_type then
table.insert(matched_keywords, pattern)
end
end
end
end
-- Default to "add" if no clear intent
if not best_match then
best_match = "add"
matched_keywords = {}
end
local config = intent_patterns[best_match]
-- Detect scope hint from prompt
local scope_hint = config.scope_hint
for pattern, hint in pairs(scope_patterns) do
if lower:find(pattern, 1, true) then
scope_hint = hint or scope_hint
break
end
end
-- Calculate confidence based on keyword matches
local confidence = 0.5 + (#matched_keywords * 0.15)
confidence = math.min(confidence, 1.0)
return {
type = best_match,
scope_hint = scope_hint,
confidence = confidence,
action = config.action,
keywords = matched_keywords,
}
end
--- Check if intent requires code modification
---@param intent Intent
---@return boolean
function M.modifies_code(intent)
return intent.action ~= "none"
end
--- Check if intent should replace existing code
---@param intent Intent
---@return boolean
function M.is_replacement(intent)
return intent.action == "replace"
end
--- Check if intent adds new code
---@param intent Intent
---@return boolean
function M.is_insertion(intent)
return intent.action == "insert" or intent.action == "append"
end
--- Get system prompt modifier based on intent
---@param intent Intent
---@return string
function M.get_prompt_modifier(intent)
local modifiers = {
complete = [[
You are completing an incomplete function.
Return the complete function with all missing parts filled in.
Keep the existing signature unless changes are required.
Output only the code, no explanations.]],
refactor = [[
You are refactoring existing code.
Improve the code structure while maintaining the same behavior.
Keep the function signature unchanged.
Output only the refactored code, no explanations.]],
fix = [[
You are fixing a bug in the code.
Identify and correct the issue while minimizing changes.
Preserve the original intent of the code.
Output only the fixed code, no explanations.]],
add = [[
You are adding new code.
Follow the existing code style and conventions.
Output only the new code to be inserted, no explanations.]],
document = [[
You are adding documentation to the code.
Add appropriate comments/docstrings for the function.
Include parameter types, return types, and description.
Output the complete function with documentation.]],
test = [[
You are generating tests for the code.
Create comprehensive unit tests covering edge cases.
Follow the testing conventions of the project.
Output only the test code, no explanations.]],
optimize = [[
You are optimizing code for performance.
Improve efficiency while maintaining correctness.
Document any significant algorithmic changes.
Output only the optimized code, no explanations.]],
explain = [[
You are explaining code to a developer.
Provide a clear, concise explanation of what the code does.
Include information about the algorithm and any edge cases.
Do not output code, only explanation.]],
}
return modifiers[intent.type] or modifiers.add
end
--- Format intent for logging
---@param intent Intent
---@return string
function M.format(intent)
return string.format(
"%s (scope: %s, action: %s, confidence: %.2f)",
intent.type,
intent.scope_hint or "auto",
intent.action,
intent.confidence
)
end
return M
lua/codetyper/agent/patch.lua
@@ -0,0 +1,478 @@
---@mod codetyper.agent.patch Patch system with staleness detection
---@brief [[
--- Manages code patches with buffer snapshots for staleness detection.
--- Patches are queued for safe injection when completion popup is not visible.
---@brief ]]
local M = {}
---@class BufferSnapshot
---@field bufnr number Buffer number
---@field changedtick number vim.b.changedtick at snapshot time
---@field content_hash string Hash of buffer content in range
---@field range {start_line: number, end_line: number}|nil Range snapshotted
---@class PatchCandidate
---@field id string Unique patch ID
---@field event_id string Related PromptEvent ID
---@field target_bufnr number Target buffer for injection
---@field target_path string Target file path
---@field original_snapshot BufferSnapshot Snapshot at event creation
---@field generated_code string Code to inject
---@field injection_range {start_line: number, end_line: number}|nil
---@field injection_strategy string "append"|"replace"|"insert"
---@field confidence number Confidence score (0.0-1.0)
---@field status string "pending"|"applied"|"stale"|"rejected"|"cancelled"
---@field created_at number Timestamp
---@field applied_at number|nil When applied
---@field stale_reason string|nil Why the patch went stale
---@field reject_reason string|nil Why the patch was rejected
---@field intent Intent|nil Intent carried over from the source event
---@field scope ScopeInfo|nil Scope carried over from the source event
--- Patch storage
---@type PatchCandidate[]
local patches = {}
--- Patch ID counter
local patch_counter = 0
--- Generate unique patch ID
---@return string
function M.generate_id()
patch_counter = patch_counter + 1
return string.format("patch_%d_%d", os.time(), patch_counter)
end
--- Hash buffer content in range
---@param bufnr number
---@param start_line number|nil 1-indexed, nil for whole buffer
---@param end_line number|nil 1-indexed, nil for whole buffer
---@return string
local function hash_buffer_range(bufnr, start_line, end_line)
if not vim.api.nvim_buf_is_valid(bufnr) then
return ""
end
local lines
if start_line and end_line then
lines = vim.api.nvim_buf_get_lines(bufnr, start_line - 1, end_line, false)
else
lines = vim.api.nvim_buf_get_lines(bufnr, 0, -1, false)
end
local content = table.concat(lines, "\n")
local hash = 0
for i = 1, #content do
hash = (hash * 31 + string.byte(content, i)) % 2147483647
end
return string.format("%x", hash)
end
--- Take a snapshot of buffer state
---@param bufnr number Buffer number
---@param range {start_line: number, end_line: number}|nil Optional range
---@return BufferSnapshot
function M.snapshot_buffer(bufnr, range)
local changedtick = 0
if vim.api.nvim_buf_is_valid(bufnr) then
    changedtick = vim.api.nvim_buf_get_changedtick(bufnr)
end
local content_hash
if range then
content_hash = hash_buffer_range(bufnr, range.start_line, range.end_line)
else
content_hash = hash_buffer_range(bufnr, nil, nil)
end
return {
bufnr = bufnr,
changedtick = changedtick,
content_hash = content_hash,
range = range,
}
end
--- Check if buffer changed since snapshot
---@param snapshot BufferSnapshot
---@return boolean is_stale
---@return string|nil reason
function M.is_snapshot_stale(snapshot)
if not vim.api.nvim_buf_is_valid(snapshot.bufnr) then
return true, "buffer_invalid"
end
-- Check changedtick first (fast path)
  local current_tick = vim.api.nvim_buf_get_changedtick(snapshot.bufnr)
if current_tick ~= snapshot.changedtick then
-- Changedtick differs, but might be just cursor movement
-- Verify with content hash
local current_hash
if snapshot.range then
current_hash = hash_buffer_range(
snapshot.bufnr,
snapshot.range.start_line,
snapshot.range.end_line
)
else
current_hash = hash_buffer_range(snapshot.bufnr, nil, nil)
end
if current_hash ~= snapshot.content_hash then
return true, "content_changed"
end
end
return false, nil
end
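-- Example (illustrative): detect edits made between snapshot time and
-- apply time. Assumes this module is loaded as codetyper.agent.patch;
-- the range values are placeholders.
--
--   local patch = require("codetyper.agent.patch")
--   local snap = patch.snapshot_buffer(0, { start_line = 1, end_line = 20 })
--   -- ... user keeps typing while the LLM generates code ...
--   local stale, reason = patch.is_snapshot_stale(snap)
--   if stale then
--     -- reason is "buffer_invalid" or "content_changed"
--     vim.notify("Dropping patch: " .. reason, vim.log.levels.WARN)
--   end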
--- Check if a patch is stale
---@param patch PatchCandidate
---@return boolean
---@return string|nil reason
function M.is_stale(patch)
return M.is_snapshot_stale(patch.original_snapshot)
end
--- Queue a patch for deferred application
---@param patch PatchCandidate
---@return PatchCandidate
function M.queue_patch(patch)
patch.id = patch.id or M.generate_id()
patch.status = patch.status or "pending"
patch.created_at = patch.created_at or os.time()
table.insert(patches, patch)
-- Log patch creation
pcall(function()
local logs = require("codetyper.agent.logs")
logs.add({
type = "patch",
message = string.format(
"Patch queued: %s (confidence: %.2f)",
patch.id, patch.confidence or 0
),
data = {
patch_id = patch.id,
event_id = patch.event_id,
target_path = patch.target_path,
code_preview = patch.generated_code:sub(1, 50),
},
})
end)
return patch
end
--- Create patch from event and response
---@param event table PromptEvent
---@param generated_code string
---@param confidence number
---@param strategy string|nil Injection strategy (overrides intent-based)
---@return PatchCandidate
function M.create_from_event(event, generated_code, confidence, strategy)
-- Get target buffer
local target_bufnr = vim.fn.bufnr(event.target_path)
if target_bufnr == -1 then
-- Try to find by filename
for _, buf in ipairs(vim.api.nvim_list_bufs()) do
local name = vim.api.nvim_buf_get_name(buf)
if name == event.target_path then
target_bufnr = buf
break
end
end
end
-- Take snapshot of the scope range in target buffer (for staleness detection)
local snapshot_range = event.scope_range or event.range
local snapshot = M.snapshot_buffer(
target_bufnr ~= -1 and target_bufnr or event.bufnr,
snapshot_range
)
-- Determine injection strategy and range based on intent
local injection_strategy = strategy
local injection_range = nil
if not injection_strategy and event.intent then
local intent_mod = require("codetyper.agent.intent")
if intent_mod.is_replacement(event.intent) then
injection_strategy = "replace"
-- Use scope range for replacement
if event.scope_range then
injection_range = event.scope_range
end
elseif event.intent.action == "insert" then
injection_strategy = "insert"
-- Insert at prompt location
injection_range = { start_line = event.range.start_line, end_line = event.range.start_line }
elseif event.intent.action == "append" then
injection_strategy = "append"
-- Will append to end of file
else
injection_strategy = "append"
end
end
injection_strategy = injection_strategy or "append"
return {
id = M.generate_id(),
event_id = event.id,
target_bufnr = target_bufnr,
target_path = event.target_path,
original_snapshot = snapshot,
generated_code = generated_code,
injection_range = injection_range,
injection_strategy = injection_strategy,
confidence = confidence,
status = "pending",
created_at = os.time(),
intent = event.intent,
scope = event.scope,
}
end
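-- Example (illustrative): turning a worker response into a queued patch.
-- The event shape follows PromptEvent in queue.lua; the variable names
-- here are placeholders.
--
--   local patch = require("codetyper.agent.patch")
--   local candidate = patch.create_from_event(event, generated_code, 0.85)
--   patch.queue_patch(candidate)
--   -- candidate.injection_strategy is derived from event.intent:
--   -- replacement intents -> "replace" over event.scope_range,
--   -- "insert" -> at the prompt line, anything else -> "append"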
--- Get all pending patches
---@return PatchCandidate[]
function M.get_pending()
local pending = {}
for _, patch in ipairs(patches) do
if patch.status == "pending" then
table.insert(pending, patch)
end
end
return pending
end
--- Get patch by ID
---@param id string
---@return PatchCandidate|nil
function M.get(id)
for _, patch in ipairs(patches) do
if patch.id == id then
return patch
end
end
return nil
end
--- Get patches for event
---@param event_id string
---@return PatchCandidate[]
function M.get_for_event(event_id)
local result = {}
for _, patch in ipairs(patches) do
if patch.event_id == event_id then
table.insert(result, patch)
end
end
return result
end
--- Mark patch as applied
---@param id string
---@return boolean
function M.mark_applied(id)
local patch = M.get(id)
if patch then
patch.status = "applied"
patch.applied_at = os.time()
return true
end
return false
end
--- Mark patch as stale
---@param id string
---@param reason string|nil
---@return boolean
function M.mark_stale(id, reason)
local patch = M.get(id)
if patch then
patch.status = "stale"
patch.stale_reason = reason
return true
end
return false
end
--- Mark patch as rejected
---@param id string
---@param reason string|nil
---@return boolean
function M.mark_rejected(id, reason)
local patch = M.get(id)
if patch then
patch.status = "rejected"
patch.reject_reason = reason
return true
end
return false
end
--- Apply a patch to the target buffer
---@param patch PatchCandidate
---@return boolean success
---@return string|nil error
function M.apply(patch)
-- Check staleness first
local is_stale, stale_reason = M.is_stale(patch)
if is_stale then
M.mark_stale(patch.id, stale_reason)
pcall(function()
local logs = require("codetyper.agent.logs")
logs.add({
type = "warning",
message = string.format("Patch %s is stale: %s", patch.id, stale_reason or "unknown"),
})
end)
return false, "patch_stale: " .. (stale_reason or "unknown")
end
-- Ensure target buffer is valid
local target_bufnr = patch.target_bufnr
if target_bufnr == -1 or not vim.api.nvim_buf_is_valid(target_bufnr) then
-- Try to load buffer from path
target_bufnr = vim.fn.bufadd(patch.target_path)
if target_bufnr == 0 then
M.mark_rejected(patch.id, "buffer_not_found")
return false, "target buffer not found"
end
vim.fn.bufload(target_bufnr)
patch.target_bufnr = target_bufnr
end
-- Prepare code lines
local code_lines = vim.split(patch.generated_code, "\n", { plain = true })
-- Apply based on strategy
local ok, err = pcall(function()
if patch.injection_strategy == "replace" and patch.injection_range then
-- Replace specific range
vim.api.nvim_buf_set_lines(
target_bufnr,
patch.injection_range.start_line - 1,
patch.injection_range.end_line,
false,
code_lines
)
elseif patch.injection_strategy == "insert" and patch.injection_range then
-- Insert at specific line
vim.api.nvim_buf_set_lines(
target_bufnr,
patch.injection_range.start_line - 1,
patch.injection_range.start_line - 1,
false,
code_lines
)
else
-- Default: append to end
local line_count = vim.api.nvim_buf_line_count(target_bufnr)
vim.api.nvim_buf_set_lines(target_bufnr, line_count, line_count, false, code_lines)
end
end)
if not ok then
M.mark_rejected(patch.id, err)
return false, err
end
M.mark_applied(patch.id)
pcall(function()
local logs = require("codetyper.agent.logs")
logs.add({
type = "success",
message = string.format("Patch %s applied successfully", patch.id),
data = {
target_path = patch.target_path,
lines_added = #code_lines,
},
})
end)
return true, nil
end
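-- Example (illustrative): applying queued patches once the editor is idle.
-- apply() re-checks staleness internally, so callers need no extra guard.
--
--   local patch = require("codetyper.agent.patch")
--   for _, p in ipairs(patch.get_pending()) do
--     local ok, err = patch.apply(p)
--     if not ok then
--       vim.notify("Patch skipped: " .. (err or "unknown"), vim.log.levels.WARN)
--     end
--   end
--   -- M.flush_pending() below performs the same loop and returns counts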
--- Flush all pending patches that are safe to apply
---@return number applied_count
---@return number stale_count
function M.flush_pending()
local applied = 0
local stale = 0
for _, patch in ipairs(patches) do
if patch.status == "pending" then
local success, _ = M.apply(patch)
if success then
applied = applied + 1
else
stale = stale + 1
end
end
end
return applied, stale
end
--- Cancel all pending patches for a buffer
---@param bufnr number
---@return number cancelled_count
function M.cancel_for_buffer(bufnr)
local cancelled = 0
for _, patch in ipairs(patches) do
if patch.status == "pending" and
(patch.target_bufnr == bufnr or patch.original_snapshot.bufnr == bufnr) then
patch.status = "cancelled"
cancelled = cancelled + 1
end
end
return cancelled
end
--- Cleanup old patches
---@param max_age number Max age in seconds (default: 300)
function M.cleanup(max_age)
max_age = max_age or 300
local now = os.time()
local i = 1
while i <= #patches do
local patch = patches[i]
if patch.status ~= "pending" and (now - patch.created_at) > max_age then
table.remove(patches, i)
else
i = i + 1
end
end
end
--- Get statistics
---@return table
function M.stats()
local stats = {
total = #patches,
pending = 0,
applied = 0,
stale = 0,
rejected = 0,
cancelled = 0,
}
for _, patch in ipairs(patches) do
local s = patch.status
if stats[s] then
stats[s] = stats[s] + 1
end
end
return stats
end
--- Clear all patches
function M.clear()
patches = {}
end
return M


@@ -0,0 +1,438 @@
---@mod codetyper.agent.queue Event queue for prompt processing
---@brief [[
--- Priority queue system for PromptEvents with observer pattern.
--- Events are processed by priority (1=high, 2=normal, 3=low).
---@brief ]]
local M = {}
---@class PromptEvent
---@field id string Unique event ID
---@field bufnr number Source buffer number
---@field range {start_line: number, end_line: number} Line range of prompt tag
---@field timestamp number os.clock() timestamp
---@field changedtick number Buffer changedtick snapshot
---@field content_hash string Hash of prompt region
---@field prompt_content string Cleaned prompt text
---@field target_path string Target file for injection
---@field priority number Priority (1=high, 2=normal, 3=low)
---@field status string "pending"|"processing"|"completed"|"escalated"|"cancelled"|"failed"
---@field error string|nil Error message when status is "failed"
---@field attempt_count number Number of processing attempts
---@field worker_type string|nil LLM provider used ("ollama"|"claude"|etc)
---@field created_at number System time when created
---@field intent Intent|nil Detected intent from prompt
---@field scope ScopeInfo|nil Resolved scope (function/class/file)
---@field scope_text string|nil Text of the resolved scope
---@field scope_range {start_line: number, end_line: number}|nil Range of scope in target
--- Internal state
---@type PromptEvent[]
local queue = {}
--- Event listeners (observer pattern), keyed by listener ID
---@type table<number, function>
local listeners = {}
--- Event ID counter
local event_counter = 0
--- Generate unique event ID
---@return string
function M.generate_id()
event_counter = event_counter + 1
return string.format("evt_%d_%d", os.time(), event_counter)
end
--- Simple hash function for content
---@param content string
---@return string
function M.hash_content(content)
local hash = 0
for i = 1, #content do
hash = (hash * 31 + string.byte(content, i)) % 2147483647
end
return string.format("%x", hash)
end
--- Notify all listeners of queue change
---@param event_type string "enqueue"|"dequeue"|"update"|"cancel"
---@param event PromptEvent|nil The affected event
local function notify_listeners(event_type, event)
  for _, listener in pairs(listeners) do
    pcall(listener, event_type, event, #queue)
  end
end
--- Listener ID counter (IDs stay valid even after other listeners are removed)
local listener_counter = 0
--- Add event listener
---@param callback fun(event_type: string, event: PromptEvent|nil, queue_size: number)
---@return number Listener ID for removal
function M.add_listener(callback)
  listener_counter = listener_counter + 1
  listeners[listener_counter] = callback
  return listener_counter
end
--- Remove event listener
---@param listener_id number
function M.remove_listener(listener_id)
  listeners[listener_id] = nil
end
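-- Example (illustrative): a UI component observing queue activity,
-- e.g. to refresh a statusline indicator. Listener IDs are opaque
-- handles returned by add_listener.
--
--   local queue = require("codetyper.agent.queue")
--   local id = queue.add_listener(function(kind, _, size)
--     if kind == "enqueue" or kind == "dequeue" then
--       vim.schedule(function() vim.cmd("redrawstatus") end)
--     end
--   end)
--   -- later, when the component is torn down:
--   queue.remove_listener(id)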
--- Compare events for priority sorting
---@param a PromptEvent
---@param b PromptEvent
---@return boolean
local function compare_priority(a, b)
-- Lower priority number = higher priority
if a.priority ~= b.priority then
return a.priority < b.priority
end
-- Same priority: older events first (FIFO)
return a.timestamp < b.timestamp
end
--- Check if two events are in the same scope
---@param a PromptEvent
---@param b PromptEvent
---@return boolean
local function same_scope(a, b)
-- Same buffer
if a.target_path ~= b.target_path then
return false
end
-- Both have scope ranges
if a.scope_range and b.scope_range then
-- Check if ranges overlap
return a.scope_range.start_line <= b.scope_range.end_line
and b.scope_range.start_line <= a.scope_range.end_line
end
-- Fallback: check if prompt ranges are close (within 10 lines)
if a.range and b.range then
local distance = math.abs(a.range.start_line - b.range.start_line)
return distance < 10
end
return false
end
--- Find conflicting events in the same scope
---@param event PromptEvent
---@return PromptEvent[] Conflicting pending events
function M.find_conflicts(event)
local conflicts = {}
for _, existing in ipairs(queue) do
if existing.status == "pending" and existing.id ~= event.id then
if same_scope(event, existing) then
table.insert(conflicts, existing)
end
end
end
return conflicts
end
--- Check if an event should be skipped due to conflicts (first tag wins)
---@param event PromptEvent
---@return boolean should_skip
---@return string|nil reason
function M.check_precedence(event)
local conflicts = M.find_conflicts(event)
for _, conflict in ipairs(conflicts) do
-- First (older) tag wins
if conflict.timestamp < event.timestamp then
return true, string.format(
"Skipped: earlier tag in same scope (event %s)",
conflict.id
)
end
end
return false, nil
end
--- Insert event maintaining priority order
---@param event PromptEvent
local function insert_sorted(event)
local pos = #queue + 1
for i, existing in ipairs(queue) do
if compare_priority(event, existing) then
pos = i
break
end
end
table.insert(queue, pos, event)
end
--- Enqueue a new event
---@param event PromptEvent
---@return PromptEvent The enqueued event with generated ID if missing
function M.enqueue(event)
-- Ensure required fields
event.id = event.id or M.generate_id()
event.timestamp = event.timestamp or os.clock()
event.created_at = event.created_at or os.time()
event.status = event.status or "pending"
event.priority = event.priority or 2
event.attempt_count = event.attempt_count or 0
-- Generate content hash if not provided
if not event.content_hash and event.prompt_content then
event.content_hash = M.hash_content(event.prompt_content)
end
insert_sorted(event)
notify_listeners("enqueue", event)
-- Log to agent logs if available
pcall(function()
local logs = require("codetyper.agent.logs")
logs.add({
type = "queue",
message = string.format("Event queued: %s (priority: %d)", event.id, event.priority),
data = {
event_id = event.id,
bufnr = event.bufnr,
prompt_preview = event.prompt_content:sub(1, 50),
},
})
end)
return event
end
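-- Example (illustrative): the minimal fields a producer must supply;
-- enqueue() fills in id, timestamps, status, priority, and attempt
-- defaults. The prompt text and range here are placeholders.
--
--   local queue = require("codetyper.agent.queue")
--   queue.enqueue({
--     bufnr = vim.api.nvim_get_current_buf(),
--     range = { start_line = 42, end_line = 44 },
--     prompt_content = "add input validation",
--     target_path = vim.api.nvim_buf_get_name(0),
--     priority = 1, -- optional; defaults to 2 (normal)
--   })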
--- Dequeue highest priority pending event
---@return PromptEvent|nil
function M.dequeue()
for i, event in ipairs(queue) do
if event.status == "pending" then
event.status = "processing"
notify_listeners("dequeue", event)
return event
end
end
return nil
end
--- Peek at next pending event without removing
---@return PromptEvent|nil
function M.peek()
for _, event in ipairs(queue) do
if event.status == "pending" then
return event
end
end
return nil
end
--- Get event by ID
---@param id string
---@return PromptEvent|nil
function M.get(id)
for _, event in ipairs(queue) do
if event.id == id then
return event
end
end
return nil
end
--- Update event status
---@param id string
---@param status string
---@param extra table|nil Additional fields to update
---@return boolean Success
function M.update_status(id, status, extra)
for _, event in ipairs(queue) do
if event.id == id then
event.status = status
if extra then
for k, v in pairs(extra) do
event[k] = v
end
end
notify_listeners("update", event)
return true
end
end
return false
end
--- Mark event as completed
---@param id string
---@return boolean
function M.complete(id)
return M.update_status(id, "completed")
end
--- Escalate an event (needs remote LLM): bump attempts and re-queue it
---@param id string
---@return boolean
function M.escalate(id)
  local event = M.get(id)
  if event then
    event.attempt_count = event.attempt_count + 1
    -- Re-queue as pending with the same priority; the scheduler selects
    -- the remote provider via event.worker_type
    event.status = "pending"
    notify_listeners("update", event)
    return true
  end
  return false
end
--- Cancel all events for a buffer
---@param bufnr number
---@return number Number of cancelled events
function M.cancel_for_buffer(bufnr)
local cancelled = 0
for _, event in ipairs(queue) do
if event.bufnr == bufnr and event.status == "pending" then
event.status = "cancelled"
cancelled = cancelled + 1
notify_listeners("cancel", event)
end
end
return cancelled
end
--- Cancel event by ID
---@param id string
---@return boolean
function M.cancel(id)
return M.update_status(id, "cancelled")
end
--- Get all pending events
---@return PromptEvent[]
function M.get_pending()
local pending = {}
for _, event in ipairs(queue) do
if event.status == "pending" then
table.insert(pending, event)
end
end
return pending
end
--- Get all processing events
---@return PromptEvent[]
function M.get_processing()
local processing = {}
for _, event in ipairs(queue) do
if event.status == "processing" then
table.insert(processing, event)
end
end
return processing
end
--- Get queue size (all events)
---@return number
function M.size()
return #queue
end
--- Get count of pending events
---@return number
function M.pending_count()
local count = 0
for _, event in ipairs(queue) do
if event.status == "pending" then
count = count + 1
end
end
return count
end
--- Get count of processing events
---@return number
function M.processing_count()
local count = 0
for _, event in ipairs(queue) do
if event.status == "processing" then
count = count + 1
end
end
return count
end
--- Check if queue is empty (no pending events)
---@return boolean
function M.is_empty()
return M.pending_count() == 0
end
--- Clear all events (optionally filter by status)
---@param status string|nil Status to clear, or nil for all
function M.clear(status)
if status then
local i = 1
while i <= #queue do
if queue[i].status == status then
table.remove(queue, i)
else
i = i + 1
end
end
else
queue = {}
end
notify_listeners("update", nil)
end
--- Cleanup completed/cancelled events older than max_age seconds
---@param max_age number Maximum age in seconds (default: 300)
function M.cleanup(max_age)
max_age = max_age or 300
local now = os.time()
local i = 1
while i <= #queue do
local event = queue[i]
if (event.status == "completed" or event.status == "cancelled")
and (now - event.created_at) > max_age then
table.remove(queue, i)
else
i = i + 1
end
end
end
--- Get queue statistics
---@return table
function M.stats()
local stats = {
total = #queue,
pending = 0,
processing = 0,
completed = 0,
cancelled = 0,
escalated = 0,
}
for _, event in ipairs(queue) do
local s = event.status
if stats[s] then
stats[s] = stats[s] + 1
end
end
return stats
end
--- Debug: dump queue contents
---@return string
function M.dump()
local lines = { "Queue contents:" }
for i, event in ipairs(queue) do
table.insert(lines, string.format(
" %d. [%s] %s (p:%d, status:%s, attempts:%d)",
i, event.id,
event.prompt_content:sub(1, 30):gsub("\n", " "),
event.priority, event.status, event.attempt_count
))
end
return table.concat(lines, "\n")
end
return M


@@ -0,0 +1,488 @@
---@mod codetyper.agent.scheduler Event scheduler with completion-awareness
---@brief [[
--- Central orchestrator for the event-driven system.
--- Handles dispatch, escalation, and completion-safe injection.
---@brief ]]
local M = {}
local queue = require("codetyper.agent.queue")
local patch = require("codetyper.agent.patch")
local worker = require("codetyper.agent.worker")
local confidence_mod = require("codetyper.agent.confidence")
--- Scheduler state
local state = {
running = false,
timer = nil,
poll_interval = 100, -- ms
paused = false,
config = {
enabled = true,
ollama_scout = true,
escalation_threshold = 0.7,
max_concurrent = 2,
completion_delay_ms = 100,
remote_provider = "claude", -- Default fallback provider
},
}
--- Autocommand group for injection timing
local augroup = nil
--- Check if completion popup is visible
---@return boolean
function M.is_completion_visible()
-- Check native popup menu
if vim.fn.pumvisible() == 1 then
return true
end
-- Check nvim-cmp
local ok, cmp = pcall(require, "cmp")
if ok and cmp.visible and cmp.visible() then
return true
end
-- Check coq_nvim
local coq_ok, coq = pcall(require, "coq")
if coq_ok and coq and type(coq.visible) == "function" and coq.visible() then
return true
end
return false
end
--- Check if we're in insert mode
---@return boolean
function M.is_insert_mode()
local mode = vim.fn.mode()
return mode == "i" or mode == "ic" or mode == "ix"
end
--- Check if it's safe to inject code
---@return boolean
---@return string|nil reason if not safe
function M.is_safe_to_inject()
if M.is_completion_visible() then
return false, "completion_visible"
end
if M.is_insert_mode() then
return false, "insert_mode"
end
return true, nil
end
--- Get the provider for escalation
---@return string
local function get_remote_provider()
local ok, codetyper = pcall(require, "codetyper")
if ok then
local config = codetyper.get_config()
if config and config.llm and config.llm.provider then
-- If current provider is ollama, use configured remote
if config.llm.provider == "ollama" then
-- Check which remote provider is configured
if config.llm.claude and config.llm.claude.api_key then
return "claude"
elseif config.llm.openai and config.llm.openai.api_key then
return "openai"
elseif config.llm.gemini and config.llm.gemini.api_key then
return "gemini"
elseif config.llm.copilot then
return "copilot"
end
end
return config.llm.provider
end
end
return state.config.remote_provider
end
--- Get the primary provider (ollama if scout enabled, else configured)
---@return string
local function get_primary_provider()
if state.config.ollama_scout then
return "ollama"
end
local ok, codetyper = pcall(require, "codetyper")
if ok then
local config = codetyper.get_config()
if config and config.llm and config.llm.provider then
return config.llm.provider
end
end
return "claude"
end
--- Process worker result and decide next action
---@param event table PromptEvent
---@param result table WorkerResult
local function handle_worker_result(event, result)
if not result.success then
-- Failed - try escalation if this was ollama
if result.worker_type == "ollama" and event.attempt_count < 2 then
pcall(function()
local logs = require("codetyper.agent.logs")
logs.add({
type = "info",
message = string.format(
"Escalating event %s to remote provider (ollama failed)",
event.id
),
})
end)
event.attempt_count = event.attempt_count + 1
event.status = "pending"
event.worker_type = get_remote_provider()
return
end
-- Mark as failed
queue.update_status(event.id, "failed", { error = result.error })
return
end
-- Success - check confidence
local needs_escalation = confidence_mod.needs_escalation(
result.confidence,
state.config.escalation_threshold
)
if needs_escalation and result.worker_type == "ollama" and event.attempt_count < 2 then
-- Low confidence from ollama - escalate to remote
pcall(function()
local logs = require("codetyper.agent.logs")
logs.add({
type = "info",
message = string.format(
"Escalating event %s to remote provider (confidence: %.2f < %.2f)",
event.id, result.confidence, state.config.escalation_threshold
),
})
end)
event.attempt_count = event.attempt_count + 1
event.status = "pending"
event.worker_type = get_remote_provider()
return
end
-- Good enough or final attempt - create patch
local p = patch.create_from_event(event, result.response, result.confidence)
patch.queue_patch(p)
queue.complete(event.id)
-- Schedule patch application
M.schedule_patch_flush()
end
--- Dispatch next event from queue
local function dispatch_next()
if state.paused then
return
end
-- Check concurrent limit
if worker.active_count() >= state.config.max_concurrent then
return
end
-- Get next pending event
local event = queue.dequeue()
if not event then
return
end
-- Check for precedence conflicts (multiple tags in same scope)
local should_skip, skip_reason = queue.check_precedence(event)
if should_skip then
pcall(function()
local logs = require("codetyper.agent.logs")
logs.add({
type = "warning",
message = string.format("Event %s skipped: %s", event.id, skip_reason or "conflict"),
})
end)
queue.cancel(event.id)
-- Try next event
return dispatch_next()
end
-- Determine which provider to use
local provider = event.worker_type or get_primary_provider()
-- Log dispatch with intent/scope info
pcall(function()
local logs = require("codetyper.agent.logs")
local intent_info = event.intent and event.intent.type or "unknown"
local scope_info = event.scope and event.scope.type ~= "file"
and string.format("%s:%s", event.scope.type, event.scope.name or "anon")
or "file"
logs.add({
type = "info",
message = string.format(
"Dispatching %s [intent: %s, scope: %s, provider: %s]",
event.id, intent_info, scope_info, provider
),
})
end)
-- Create worker
worker.create(event, provider, function(result)
vim.schedule(function()
handle_worker_result(event, result)
end)
end)
end
--- Schedule patch flush after delay (completion safety)
function M.schedule_patch_flush()
vim.defer_fn(function()
local safe, reason = M.is_safe_to_inject()
if safe then
local applied, stale = patch.flush_pending()
if applied > 0 or stale > 0 then
pcall(function()
local logs = require("codetyper.agent.logs")
logs.add({
type = "info",
message = string.format("Patches flushed: %d applied, %d stale", applied, stale),
})
end)
end
else
-- Not safe yet, reschedule
pcall(function()
local logs = require("codetyper.agent.logs")
logs.add({
type = "debug",
message = string.format("Patch flush deferred: %s", reason or "unknown"),
})
end)
-- Will be retried on next InsertLeave or CursorHold
end
end, state.config.completion_delay_ms)
end
--- Main scheduler loop
local function scheduler_loop()
if not state.running then
return
end
dispatch_next()
-- Cleanup old items periodically
if math.random() < 0.01 then -- ~1% chance each tick
queue.cleanup(300)
patch.cleanup(300)
end
-- Schedule next tick
state.timer = vim.defer_fn(scheduler_loop, state.poll_interval)
end
--- Setup autocommands for injection timing
local function setup_autocmds()
if augroup then
pcall(vim.api.nvim_del_augroup_by_id, augroup)
end
augroup = vim.api.nvim_create_augroup("CodetypeScheduler", { clear = true })
-- Flush patches when leaving insert mode
vim.api.nvim_create_autocmd("InsertLeave", {
group = augroup,
callback = function()
vim.defer_fn(function()
if not M.is_completion_visible() then
patch.flush_pending()
end
end, state.config.completion_delay_ms)
end,
desc = "Flush pending patches on InsertLeave",
})
-- Flush patches on cursor hold
vim.api.nvim_create_autocmd("CursorHold", {
group = augroup,
callback = function()
if not M.is_insert_mode() and not M.is_completion_visible() then
patch.flush_pending()
end
end,
desc = "Flush pending patches on CursorHold",
})
  -- Staleness is re-verified on every flush attempt, so saving needs no
  -- explicit action here; the autocmd is kept as an extension point
  vim.api.nvim_create_autocmd("BufWritePre", {
    group = augroup,
    callback = function()
      -- Patches are checked against their snapshots on the next flush
    end,
    desc = "Check patch staleness on save",
  })
  -- Cleanup when buffer is deleted
  vim.api.nvim_create_autocmd("BufDelete", {
    group = augroup,
    callback = function(ev)
      -- Cancel in-flight workers for this buffer's events before dropping them
      for _, event in ipairs(queue.get_processing()) do
        if event.bufnr == ev.buf then
          worker.cancel_for_event(event.id)
        end
      end
      queue.cancel_for_buffer(ev.buf)
      patch.cancel_for_buffer(ev.buf)
    end,
    desc = "Cleanup on buffer delete",
  })
end
--- Start the scheduler
---@param config table|nil Configuration overrides
function M.start(config)
if state.running then
return
end
-- Merge config
if config then
for k, v in pairs(config) do
state.config[k] = v
end
end
-- Load config from codetyper if available
pcall(function()
local codetyper = require("codetyper")
local ct_config = codetyper.get_config()
if ct_config and ct_config.scheduler then
for k, v in pairs(ct_config.scheduler) do
state.config[k] = v
end
end
end)
if not state.config.enabled then
return
end
state.running = true
state.paused = false
-- Setup autocmds
setup_autocmds()
  -- Add queue listener (store the ID so stop() can remove it)
  state.queue_listener_id = queue.add_listener(function(event_type, _, _)
    if event_type == "enqueue" and not state.paused then
      -- New event - try to dispatch immediately
      vim.schedule(dispatch_next)
    end
  end)
-- Start main loop
scheduler_loop()
pcall(function()
local logs = require("codetyper.agent.logs")
logs.add({
type = "info",
message = "Scheduler started",
data = {
ollama_scout = state.config.ollama_scout,
escalation_threshold = state.config.escalation_threshold,
max_concurrent = state.config.max_concurrent,
},
})
end)
end
--- Stop the scheduler
function M.stop()
  state.running = false
  if state.timer then
    pcall(function()
      if type(state.timer) == "userdata" and state.timer.stop then
        state.timer:stop()
      end
    end)
    state.timer = nil
  end
  if state.queue_listener_id then
    queue.remove_listener(state.queue_listener_id)
    state.queue_listener_id = nil
  end
  if augroup then
    pcall(vim.api.nvim_del_augroup_by_id, augroup)
    augroup = nil
  end
  pcall(function()
    local logs = require("codetyper.agent.logs")
    logs.add({
      type = "info",
      message = "Scheduler stopped",
    })
  end)
end
--- Pause the scheduler (don't process new events)
function M.pause()
state.paused = true
end
--- Resume the scheduler
function M.resume()
state.paused = false
vim.schedule(dispatch_next)
end
--- Check if scheduler is running
---@return boolean
function M.is_running()
return state.running
end
--- Check if scheduler is paused
---@return boolean
function M.is_paused()
return state.paused
end
--- Get scheduler status
---@return table
function M.status()
return {
running = state.running,
paused = state.paused,
queue_stats = queue.stats(),
patch_stats = patch.stats(),
active_workers = worker.active_count(),
config = vim.deepcopy(state.config),
}
end
--- Manually trigger dispatch
function M.dispatch()
if state.running and not state.paused then
dispatch_next()
end
end
--- Force flush all pending patches (ignores completion check)
function M.force_flush()
return patch.flush_pending()
end
--- Update configuration
---@param config table
function M.configure(config)
for k, v in pairs(config) do
state.config[k] = v
end
end
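-- Example (illustrative): typical setup from a user config. The keys
-- mirror the defaults in state.config above; values shown are arbitrary.
--
--   local scheduler = require("codetyper.agent.scheduler")
--   scheduler.start({
--     ollama_scout = true,        -- try Ollama first (fast local scout)
--     escalation_threshold = 0.8, -- escalate below this confidence
--     max_concurrent = 1,
--   })
--   -- inspect at runtime:
--   vim.print(scheduler.status())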
return M


@@ -0,0 +1,444 @@
---@mod codetyper.agent.scope Tree-sitter scope resolution
---@brief [[
--- Resolves semantic scope for prompts using Tree-sitter.
--- Finds the smallest enclosing function/method/block for a given position.
---@brief ]]
local M = {}
---@class ScopeInfo
---@field type string "function"|"method"|"class"|"block"|"file"|"unknown"
---@field node_type string Tree-sitter node type
---@field range {start_row: number, start_col: number, end_row: number, end_col: number}
---@field text string The full text of the scope
---@field name string|nil Name of the function/class if available
--- Node types that represent function-like scopes, keyed by Tree-sitter node
--- type. Grammars share many node names, so each key appears exactly once.
--- Note: Rust's impl_item is a class-like container and lives in class_nodes.
local function_nodes = {
["function_declaration"] = "function", -- Lua, JS/TS, Go
["function_definition"] = "function", -- Lua, Python, C/C++
["function_expression"] = "function", -- JS/TS
["arrow_function"] = "function", -- JS/TS
["local_function"] = "function", -- Lua
["function"] = "function", -- Lua, older JS grammars
["async_function_definition"] = "function", -- Python
["function_item"] = "function", -- Rust
["method_definition"] = "method", -- JS/TS
["method_declaration"] = "method", -- Go, Java/C#
["constructor_declaration"] = "method", -- Java/C#
["method"] = "method", -- Ruby
["singleton_method"] = "method", -- Ruby
}
--- Node types that represent class-like scopes
local class_nodes = {
["class_declaration"] = "class",
["class_definition"] = "class",
["class"] = "class",
["struct_item"] = "class",
["impl_item"] = "class",
["interface_declaration"] = "class",
["module"] = "class",
}
--- Node types that represent block scopes
local block_nodes = {
["block"] = "block",
["statement_block"] = "block",
["compound_statement"] = "block",
["do_block"] = "block",
}
--- Check if Tree-sitter is available for buffer
---@param bufnr number
---@return boolean
function M.has_treesitter(bufnr)
local ok, parsers = pcall(require, "nvim-treesitter.parsers")
if not ok then
return false
end
local lang = parsers.get_buf_lang(bufnr)
if not lang then
return false
end
return parsers.has_parser(lang)
end
--- Get Tree-sitter node at position
---@param bufnr number
---@param row number 0-indexed
---@param col number 0-indexed
---@return TSNode|nil
local function get_node_at_pos(bufnr, row, col)
-- Resolve through the parser so the lookup honors the requested position;
-- ts_utils.get_node_at_cursor() would ignore row/col and use the cursor instead.
local ok, parser = pcall(vim.treesitter.get_parser, bufnr)
if not ok or not parser then
return nil
end
local tree = parser:parse()[1]
if not tree then
return nil
end
local root = tree:root()
return root:named_descendant_for_range(row, col, row, col)
end
--- Find enclosing scope node of specific types
---@param node TSNode
---@param node_types table<string, string>
---@return TSNode|nil, string|nil scope_type
local function find_enclosing_scope(node, node_types)
local current = node
while current do
local node_type = current:type()
if node_types[node_type] then
return current, node_types[node_type]
end
current = current:parent()
end
return nil, nil
end
--- Extract function/method name from node
---@param node TSNode
---@param bufnr number
---@return string|nil
local function get_scope_name(node, bufnr)
-- Try to find name child node
local name_node = node:field("name")[1]
if name_node then
return vim.treesitter.get_node_text(name_node, bufnr)
end
-- Try identifier child
for child in node:iter_children() do
if child:type() == "identifier" or child:type() == "property_identifier" then
return vim.treesitter.get_node_text(child, bufnr)
end
end
return nil
end
--- Resolve scope at position using Tree-sitter
---@param bufnr number Buffer number
---@param row number 1-indexed line number
---@param col number 1-indexed column number
---@return ScopeInfo
function M.resolve_scope(bufnr, row, col)
-- Default to file scope
local default_scope = {
type = "file",
node_type = "file",
range = {
start_row = 1,
start_col = 0,
end_row = vim.api.nvim_buf_line_count(bufnr),
end_col = 0,
},
text = table.concat(vim.api.nvim_buf_get_lines(bufnr, 0, -1, false), "\n"),
name = vim.fn.fnamemodify(vim.api.nvim_buf_get_name(bufnr), ":t"),
}
-- Check if Tree-sitter is available
if not M.has_treesitter(bufnr) then
-- Fall back to heuristic-based scope resolution
return M.resolve_scope_heuristic(bufnr, row, col) or default_scope
end
-- Convert to 0-indexed for Tree-sitter
local ts_row = row - 1
local ts_col = col - 1
-- Get node at position
local node = get_node_at_pos(bufnr, ts_row, ts_col)
if not node then
return default_scope
end
-- Try to find function scope first
local scope_node, scope_type = find_enclosing_scope(node, function_nodes)
-- If no function, try class
if not scope_node then
scope_node, scope_type = find_enclosing_scope(node, class_nodes)
end
-- If no class, try block
if not scope_node then
scope_node, scope_type = find_enclosing_scope(node, block_nodes)
end
if not scope_node then
return default_scope
end
-- Get range (convert back to 1-indexed)
local start_row, start_col, end_row, end_col = scope_node:range()
-- Get text
local text = vim.treesitter.get_node_text(scope_node, bufnr)
-- Get name
local name = get_scope_name(scope_node, bufnr)
return {
type = scope_type,
node_type = scope_node:type(),
range = {
start_row = start_row + 1,
start_col = start_col,
end_row = end_row + 1,
end_col = end_col,
},
text = text,
name = name,
}
end
--- Heuristic fallback for scope resolution (no Tree-sitter)
---@param bufnr number
---@param row number 1-indexed
---@param col number 1-indexed
---@return ScopeInfo|nil
function M.resolve_scope_heuristic(bufnr, row, col)
local _ = col -- col is unused in the heuristic path
local lines = vim.api.nvim_buf_get_lines(bufnr, 0, -1, false)
local filetype = vim.bo[bufnr].filetype
-- Language-specific function patterns
local patterns = {
lua = {
start = "^%s*local%s+function%s+",
start_alt = "^%s*function%s+",
ending = "^%s*end%s*$",
},
python = {
start = "^%s*def%s+",
start_alt = "^%s*async%s+def%s+",
ending = nil, -- Python uses indentation
},
javascript = {
start = "^%s*function%s+",
start_alt = "^%s*const%s+%w+%s*=%s*",
ending = "^%s*}%s*$",
},
typescript = {
start = "^%s*function%s+",
start_alt = "^%s*const%s+%w+%s*=%s*",
ending = "^%s*}%s*$",
},
}
local lang_patterns = patterns[filetype]
if not lang_patterns then
return nil
end
-- Find function start (search backwards)
local start_line = nil
for i = row, 1, -1 do
local line = lines[i]
if line:match(lang_patterns.start) or
(lang_patterns.start_alt and line:match(lang_patterns.start_alt)) then
start_line = i
break
end
end
if not start_line then
return nil
end
-- Find function end
local end_line = nil
if lang_patterns.ending then
-- Brace/end based languages
local depth = 0
for i = start_line, #lines do
local line = lines[i]
-- Count block keywords (crude: ignores strings and comments)
if filetype == "lua" then
-- %f[%w]/%f[%W] frontier patterns match whole words only, so the "if"
-- inside "elseif" does not inflate the depth
if line:match("%f[%w]function%f[%W]")
or line:match("%f[%w]if%f[%W]")
or line:match("%f[%w]for%f[%W]")
or line:match("%f[%w]while%f[%W]") then
depth = depth + 1
end
if line:match("^%s*end") then
depth = depth - 1
if depth <= 0 then
end_line = i
break
end
end
else
-- JavaScript/TypeScript brace counting
for _ in line:gmatch("{") do depth = depth + 1 end
for _ in line:gmatch("}") do depth = depth - 1 end
if depth <= 0 and i > start_line then
end_line = i
break
end
end
end
else
-- Python: use indentation
local base_indent = #(lines[start_line]:match("^%s*") or "")
for i = start_line + 1, #lines do
local line = lines[i]
if line:match("^%s*$") then
goto continue
end
local indent = #(line:match("^%s*") or "")
if indent <= base_indent then
end_line = i - 1
break
end
::continue::
end
end_line = end_line or #lines
end
if not end_line then
end_line = #lines
end
-- Extract text
local scope_lines = {}
for i = start_line, end_line do
table.insert(scope_lines, lines[i])
end
-- Try to extract function name
local name = nil
local first_line = lines[start_line]
name = first_line:match("function%s+([%w_]+)") or
first_line:match("def%s+([%w_]+)") or
first_line:match("const%s+([%w_]+)")
return {
type = "function",
node_type = "heuristic",
range = {
start_row = start_line,
start_col = 0,
end_row = end_line,
end_col = #lines[end_line],
},
text = table.concat(scope_lines, "\n"),
name = name,
}
end
--- Get scope for the current cursor position
---@return ScopeInfo
function M.resolve_scope_at_cursor()
local bufnr = vim.api.nvim_get_current_buf()
local cursor = vim.api.nvim_win_get_cursor(0)
return M.resolve_scope(bufnr, cursor[1], cursor[2] + 1)
end
--- Check if position is inside a function/method
---@param bufnr number
---@param row number 1-indexed
---@param col number 1-indexed
---@return boolean
function M.is_in_function(bufnr, row, col)
local scope = M.resolve_scope(bufnr, row, col)
return scope.type == "function" or scope.type == "method"
end
--- Get all functions in buffer
---@param bufnr number
---@return ScopeInfo[]
function M.get_all_functions(bufnr)
local functions = {}
if not M.has_treesitter(bufnr) then
return functions
end
local parser = vim.treesitter.get_parser(bufnr)
if not parser then
return functions
end
local tree = parser:parse()[1]
if not tree then
return functions
end
local root = tree:root()
-- Query for common function-like nodes. If the buffer's grammar lacks any
-- of these node types, query.parse errors and we return an empty list.
local lang = parser:lang()
local query_string = [[
(function_declaration) @func
(function_definition) @func
(method_definition) @func
(arrow_function) @func
]]
local ok, query = pcall(vim.treesitter.query.parse, lang, query_string)
if not ok then
return functions
end
for _, node in query:iter_captures(root, bufnr, 0, -1) do
local start_row, start_col, end_row, end_col = node:range()
local text = vim.treesitter.get_node_text(node, bufnr)
local name = get_scope_name(node, bufnr)
table.insert(functions, {
type = function_nodes[node:type()] or "function",
node_type = node:type(),
range = {
start_row = start_row + 1,
start_col = start_col,
end_row = end_row + 1,
end_col = end_col,
},
text = text,
name = name,
})
end
return functions
end
return M
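
The heuristic fallback recovers a function name with three Lua patterns (for `function`, `def`, and `const` declarations). A pure-Lua sketch of just that pattern logic, with a hypothetical helper name:

```lua
-- Same three patterns used by resolve_scope_heuristic, tried in order.
local function extract_name(first_line)
  return first_line:match("function%s+([%w_]+)")
      or first_line:match("def%s+([%w_]+)")
      or first_line:match("const%s+([%w_]+)")
end

print(extract_name("local function parse_tags(buf)")) -- parse_tags
print(extract_name("def compute_total(items):"))      -- compute_total
```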

View File

@@ -97,6 +97,23 @@ function M.to_claude_format()
return tools
end
--- Convert tool definitions to OpenAI API format
---@return table[] Tools in OpenAI's expected format
function M.to_openai_format()
local tools = {}
for _, tool in pairs(M.definitions) do
table.insert(tools, {
type = "function",
["function"] = {
name = tool.name,
description = tool.description,
parameters = tool.parameters,
},
})
end
return tools
end
--- Convert tool definitions to prompt format for Ollama
---@return string Formatted tool descriptions for system prompt
function M.to_prompt_format()

View File

@@ -0,0 +1,419 @@
---@mod codetyper.agent.worker Async LLM worker wrapper
---@brief [[
--- Wraps LLM clients with timeout handling and confidence scoring.
--- Provides unified interface for scheduler to dispatch work.
---@brief ]]
local M = {}
local confidence = require("codetyper.agent.confidence")
---@class WorkerResult
---@field success boolean Whether the request succeeded
---@field response string|nil The generated code
---@field error string|nil Error message if failed
---@field confidence number Confidence score (0.0-1.0)
---@field confidence_breakdown table Detailed confidence breakdown
---@field duration number Time taken in seconds
---@field worker_type string LLM provider used
---@field usage table|nil Token usage if available
---@class Worker
---@field id string Worker ID
---@field event table PromptEvent being processed
---@field worker_type string LLM provider type
---@field status string "pending"|"running"|"completed"|"failed"|"timeout"
---@field start_time number Start timestamp
---@field timeout_ms number Timeout in milliseconds
---@field timer any Timeout timer handle
---@field callback function Result callback
--- Worker ID counter
local worker_counter = 0
--- Active workers
---@type table<string, Worker>
local active_workers = {}
--- Default timeouts by provider type
local default_timeouts = {
ollama = 30000, -- 30s for local
claude = 60000, -- 60s for remote
openai = 60000,
gemini = 60000,
copilot = 60000,
}
--- Generate worker ID
---@return string
local function generate_id()
worker_counter = worker_counter + 1
return string.format("worker_%d_%d", os.time(), worker_counter)
end
--- Get LLM client by type
---@param worker_type string
---@return table|nil client
---@return string|nil error
local function get_client(worker_type)
local ok, client = pcall(require, "codetyper.llm." .. worker_type)
if ok and client then
return client, nil
end
return nil, "Unknown provider: " .. worker_type
end
--- Build prompt for code generation
---@param event table PromptEvent
---@return string prompt
---@return table context
local function build_prompt(event)
local intent_mod = require("codetyper.agent.intent")
-- Get target file content for context
local target_content = ""
if event.target_path then
local ok, lines = pcall(function()
return vim.fn.readfile(event.target_path)
end)
if ok and lines then
target_content = table.concat(lines, "\n")
end
end
local filetype = vim.fn.fnamemodify(event.target_path or "", ":e") -- file extension, also used as the code-fence language
-- Build context with scope information
local context = {
target_path = event.target_path,
target_content = target_content,
filetype = filetype,
scope = event.scope,
scope_text = event.scope_text,
scope_range = event.scope_range,
intent = event.intent,
}
-- Build the actual prompt based on intent and scope
local system_prompt = ""
local user_prompt = event.prompt_content
if event.intent then
system_prompt = intent_mod.get_prompt_modifier(event.intent)
end
-- If we have a scope (function/method), include it in the prompt
if event.scope_text and event.scope and event.scope.type ~= "file" then
local scope_type = event.scope.type
local scope_name = event.scope.name or "anonymous"
-- For replacement intents, provide the full scope to transform
if event.intent and intent_mod.is_replacement(event.intent) then
user_prompt = string.format(
[[Here is a %s named "%s" in a %s file:
```%s
%s
```
User request: %s
Return the complete transformed %s. Output only code, no explanations.]],
scope_type,
scope_name,
filetype,
filetype,
event.scope_text,
event.prompt_content,
scope_type
)
else
-- For insertion intents, provide context
user_prompt = string.format(
[[Context - this code is inside a %s named "%s":
```%s
%s
```
User request: %s
Output only the code to insert, no explanations.]],
scope_type,
scope_name,
filetype,
event.scope_text,
event.prompt_content
)
end
else
-- No scope resolved, use full file context
user_prompt = string.format(
[[File: %s (%s)
```%s
%s
```
User request: %s
Output only code, no explanations.]],
vim.fn.fnamemodify(event.target_path or "", ":t"),
filetype,
filetype,
target_content:sub(1, 4000), -- Limit context size
event.prompt_content
)
end
context.system_prompt = system_prompt
context.formatted_prompt = user_prompt
return user_prompt, context
end
--- Create and start a worker
---@param event table PromptEvent
---@param worker_type string LLM provider type
---@param callback function(result: WorkerResult)
---@return Worker
function M.create(event, worker_type, callback)
local worker = {
id = generate_id(),
event = event,
worker_type = worker_type,
status = "pending",
start_time = os.clock(),
timeout_ms = default_timeouts[worker_type] or 60000,
callback = callback,
}
active_workers[worker.id] = worker
-- Log worker creation
pcall(function()
local logs = require("codetyper.agent.logs")
logs.add({
type = "worker",
message = string.format("Worker %s started (%s)", worker.id, worker_type),
data = {
worker_id = worker.id,
event_id = event.id,
provider = worker_type,
},
})
end)
-- Start the work
M.start(worker)
return worker
end
--- Start worker execution
---@param worker Worker
function M.start(worker)
worker.status = "running"
-- Set up timeout
worker.timer = vim.defer_fn(function()
if worker.status == "running" then
worker.status = "timeout"
active_workers[worker.id] = nil
pcall(function()
local logs = require("codetyper.agent.logs")
logs.add({
type = "warning",
message = string.format("Worker %s timed out after %dms", worker.id, worker.timeout_ms),
})
end)
worker.callback({
success = false,
response = nil,
error = "timeout",
confidence = 0,
confidence_breakdown = {},
duration = (os.clock() - worker.start_time),
worker_type = worker.worker_type,
})
end
end, worker.timeout_ms)
-- Get client and execute
local client, client_err = get_client(worker.worker_type)
if not client then
M.complete(worker, nil, client_err)
return
end
local prompt, context = build_prompt(worker.event)
-- Call the LLM
client.generate(prompt, context, function(response, err, usage)
-- Cancel timeout timer
if worker.timer then
pcall(function()
-- Timer might have already fired
if type(worker.timer) == "userdata" and worker.timer.stop then
worker.timer:stop()
end
end)
end
if worker.status ~= "running" then
return -- Already timed out or cancelled
end
M.complete(worker, response, err, usage)
end)
end
--- Complete worker execution
---@param worker Worker
---@param response string|nil
---@param err string|nil Error message (named err to avoid shadowing Lua's error())
---@param usage table|nil
function M.complete(worker, response, err, usage)
local duration = os.clock() - worker.start_time
if err then
worker.status = "failed"
active_workers[worker.id] = nil
pcall(function()
local logs = require("codetyper.agent.logs")
logs.add({
type = "error",
message = string.format("Worker %s failed: %s", worker.id, err),
})
end)
worker.callback({
success = false,
response = nil,
error = err,
confidence = 0,
confidence_breakdown = {},
duration = duration,
worker_type = worker.worker_type,
usage = usage,
})
return
end
-- Score confidence
local conf_score, breakdown = confidence.score(response, worker.event.prompt_content)
worker.status = "completed"
active_workers[worker.id] = nil
pcall(function()
local logs = require("codetyper.agent.logs")
logs.add({
type = "success",
message = string.format(
"Worker %s completed (%.2fs, confidence: %.2f - %s)",
worker.id, duration, conf_score, confidence.level_name(conf_score)
),
data = {
confidence_breakdown = confidence.format_breakdown(breakdown),
usage = usage,
},
})
end)
worker.callback({
success = true,
response = response,
error = nil,
confidence = conf_score,
confidence_breakdown = breakdown,
duration = duration,
worker_type = worker.worker_type,
usage = usage,
})
end
--- Cancel a worker
---@param worker_id string
---@return boolean
function M.cancel(worker_id)
local worker = active_workers[worker_id]
if not worker then
return false
end
if worker.timer then
pcall(function()
if type(worker.timer) == "userdata" and worker.timer.stop then
worker.timer:stop()
end
end)
end
worker.status = "cancelled"
active_workers[worker_id] = nil
pcall(function()
local logs = require("codetyper.agent.logs")
logs.add({
type = "info",
message = string.format("Worker %s cancelled", worker_id),
})
end)
return true
end
--- Get active worker count
---@return number
function M.active_count()
local count = 0
for _ in pairs(active_workers) do
count = count + 1
end
return count
end
--- Get all active workers
---@return Worker[]
function M.get_active()
local workers = {}
for _, worker in pairs(active_workers) do
table.insert(workers, worker)
end
return workers
end
--- Check if worker exists and is running
---@param worker_id string
---@return boolean
function M.is_running(worker_id)
local worker = active_workers[worker_id]
return worker ~= nil and worker.status == "running"
end
--- Cancel all workers for an event
---@param event_id string
---@return number cancelled_count
function M.cancel_for_event(event_id)
local cancelled = 0
for id, worker in pairs(active_workers) do
if worker.event.id == event_id then
M.cancel(id)
cancelled = cancelled + 1
end
end
return cancelled
end
--- Set timeout for worker type
---@param worker_type string
---@param timeout_ms number
function M.set_timeout(worker_type, timeout_ms)
default_timeouts[worker_type] = timeout_ms
end
return M
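
A `WorkerResult` carries a confidence score that the scheduler compares against `escalation_threshold`. A pure-Lua sketch of that decision (the helper name is hypothetical; the real dispatch logic lives in scheduler.lua):

```lua
-- Escalate when the local scout fails outright or scores below the threshold.
local function should_escalate(result, threshold)
  threshold = threshold or 0.7 -- config default
  return (not result.success) or result.confidence < threshold
end

-- A low-confidence Ollama result gets retried on the remote provider:
print(should_escalate({ success = true, confidence = 0.4 })) -- true
```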

View File

@@ -140,6 +140,19 @@ function M.setup()
end,
desc = "Update tree.log on directory change",
})
-- Auto-index: Create/open coder companion file when opening source files
vim.api.nvim_create_autocmd("BufEnter", {
group = group,
pattern = "*",
callback = function(ev)
-- Delay to ensure buffer is fully loaded
vim.defer_fn(function()
M.auto_index_file(ev.buf)
end, 100)
end,
desc = "Auto-index source files with coder companion",
})
end
--- Get config with fallback defaults
@@ -193,11 +206,96 @@ function M.check_for_closed_prompt()
-- Mark as processed
processed_prompts[prompt_key] = true
-- Auto-process the prompt (no confirmation needed)
utils.notify("Processing prompt...", vim.log.levels.INFO)
vim.schedule(function()
vim.cmd("CoderProcess")
end)
-- Check if scheduler is enabled
local codetyper = require("codetyper")
local ct_config = codetyper.get_config()
local scheduler_enabled = ct_config and ct_config.scheduler and ct_config.scheduler.enabled
if scheduler_enabled then
-- Event-driven: emit to queue
vim.schedule(function()
local queue = require("codetyper.agent.queue")
local patch_mod = require("codetyper.agent.patch")
local intent_mod = require("codetyper.agent.intent")
local scope_mod = require("codetyper.agent.scope")
-- Take buffer snapshot
local snapshot = patch_mod.snapshot_buffer(bufnr, {
start_line = prompt.start_line,
end_line = prompt.end_line,
})
-- Get target path
local current_file = vim.fn.expand("%:p")
local target_path = utils.get_target_path(current_file)
-- Clean prompt content
local cleaned = parser.clean_prompt(prompt.content)
-- Detect intent from prompt
local intent = intent_mod.detect(cleaned)
-- Resolve scope in target file (use prompt position to find enclosing scope)
local target_bufnr = vim.fn.bufnr(target_path)
local scope = nil
local scope_text = nil
local scope_range = nil
if target_bufnr ~= -1 then
-- Find scope at the corresponding line in target
-- Use the prompt's line position as reference
scope = scope_mod.resolve_scope(target_bufnr, prompt.start_line, 1)
if scope and scope.type ~= "file" then
scope_text = scope.text
scope_range = {
start_line = scope.range.start_row,
end_line = scope.range.end_row,
}
end
end
-- Determine priority based on intent
local priority = 2 -- Normal
if intent.type == "fix" or intent.type == "complete" then
priority = 1 -- High priority for fixes and completions
elseif intent.type == "test" or intent.type == "document" then
priority = 3 -- Lower priority for tests and docs
end
-- Enqueue the event
queue.enqueue({
id = queue.generate_id(),
bufnr = bufnr,
range = { start_line = prompt.start_line, end_line = prompt.end_line },
timestamp = os.clock(),
changedtick = snapshot.changedtick,
content_hash = snapshot.content_hash,
prompt_content = cleaned,
target_path = target_path,
priority = priority,
status = "pending",
attempt_count = 0,
intent = intent,
scope = scope,
scope_text = scope_text,
scope_range = scope_range,
})
local scope_info = scope and scope.type ~= "file"
and string.format(" [%s: %s]", scope.type, scope.name or "anonymous")
or ""
utils.notify(
string.format("Prompt queued: %s%s", intent.type, scope_info),
vim.log.levels.INFO
)
end)
else
-- Legacy: direct processing
utils.notify("Processing prompt...", vim.log.levels.INFO)
vim.schedule(function()
vim.cmd("CoderProcess")
end)
end
end
end
end
@@ -268,11 +366,9 @@ function M.auto_open_target_file()
local codetyper = require("codetyper")
local config = codetyper.get_config()
-- Fallback width if config not fully loaded
local width = (config and config.window and config.window.width) or 0.4
if width <= 1 then
width = math.floor(vim.o.columns * width)
end
-- Fallback width if config not fully loaded (percentage, e.g., 25 = 25%)
local width_pct = (config and config.window and config.window.width) or 25
local width = math.ceil(vim.o.columns * (width_pct / 100))
-- Store current coder window
local coder_win = vim.api.nvim_get_current_win()
@@ -362,4 +458,178 @@ function M.clear()
vim.api.nvim_del_augroup_by_name(AUGROUP)
end
--- Track buffers that have been auto-indexed
---@type table<number, boolean>
local auto_indexed_buffers = {}
--- Supported file extensions for auto-indexing
local supported_extensions = {
"ts", "tsx", "js", "jsx", "py", "lua", "go", "rs", "rb",
"java", "c", "cpp", "cs", "json", "yaml", "yml", "md",
"html", "css", "scss", "vue", "svelte", "php", "sh", "zsh",
}
--- Check if extension is supported
---@param ext string File extension
---@return boolean
local function is_supported_extension(ext)
for _, supported in ipairs(supported_extensions) do
if ext == supported then
return true
end
end
return false
end
--- Auto-index a file by creating/opening its coder companion
---@param bufnr number Buffer number
function M.auto_index_file(bufnr)
-- Skip if buffer is invalid
if not vim.api.nvim_buf_is_valid(bufnr) then
return
end
-- Skip if already indexed
if auto_indexed_buffers[bufnr] then
return
end
-- Get file path
local filepath = vim.api.nvim_buf_get_name(bufnr)
if not filepath or filepath == "" then
return
end
-- Skip coder files
if utils.is_coder_file(filepath) then
return
end
-- Skip special buffers
local buftype = vim.bo[bufnr].buftype
if buftype ~= "" then
return
end
-- Skip unsupported file types
local ext = vim.fn.fnamemodify(filepath, ":e")
if ext == "" or not is_supported_extension(ext) then
return
end
-- Skip if auto_index is disabled in config
local codetyper = require("codetyper")
local config = codetyper.get_config()
if config and config.auto_index == false then
return
end
-- Mark as indexed
auto_indexed_buffers[bufnr] = true
-- Get coder companion path
local coder_path = utils.get_coder_path(filepath)
-- Check if coder file already exists
local coder_exists = utils.file_exists(coder_path)
-- Create coder file with template if it doesn't exist
-- (fixed "--" prefix here; open_coder_companion() below picks one per language)
if not coder_exists then
local filename = vim.fn.fnamemodify(filepath, ":t")
local template = string.format(
[[-- Coder companion for %s
-- Use /@ @/ tags to write pseudo-code prompts
-- Example:
-- /@
-- Add a function that validates user input
-- - Check for empty strings
-- - Validate email format
-- @/
]],
filename
)
utils.write_file(coder_path, template)
end
-- Notify user about the coder companion
local coder_filename = vim.fn.fnamemodify(coder_path, ":t")
if coder_exists then
utils.notify("Coder companion available: " .. coder_filename, vim.log.levels.DEBUG)
else
utils.notify("Created coder companion: " .. coder_filename, vim.log.levels.INFO)
end
end
--- Open the coder companion for the current file
---@param open_split? boolean Whether to open in split view (default: true)
function M.open_coder_companion(open_split)
open_split = open_split ~= false -- Default to true
local filepath = vim.fn.expand("%:p")
if not filepath or filepath == "" then
utils.notify("No file open", vim.log.levels.WARN)
return
end
if utils.is_coder_file(filepath) then
utils.notify("Already in coder file", vim.log.levels.INFO)
return
end
local coder_path = utils.get_coder_path(filepath)
-- Create if it doesn't exist
if not utils.file_exists(coder_path) then
local filename = vim.fn.fnamemodify(filepath, ":t")
local ext = vim.fn.fnamemodify(filepath, ":e")
local comment_prefix = "--"
if vim.tbl_contains({ "js", "jsx", "ts", "tsx", "java", "c", "cpp", "cs", "go", "rs", "php" }, ext) then
comment_prefix = "//"
elseif vim.tbl_contains({ "py", "sh", "zsh", "yaml", "yml" }, ext) then
comment_prefix = "#"
elseif vim.tbl_contains({ "html", "md" }, ext) then
comment_prefix = "<!--"
end
local close_comment = comment_prefix == "<!--" and " -->" or ""
local template = string.format(
[[%s Coder companion for %s%s
%s Use /@ @/ tags to write pseudo-code prompts%s
%s Example:%s
%s /@%s
%s Add a function that validates user input%s
%s - Check for empty strings%s
%s - Validate email format%s
%s @/%s
]],
comment_prefix, filename, close_comment,
comment_prefix, close_comment,
comment_prefix, close_comment,
comment_prefix, close_comment,
comment_prefix, close_comment,
comment_prefix, close_comment,
comment_prefix, close_comment,
comment_prefix, close_comment
)
utils.write_file(coder_path, template)
end
if open_split then
-- Use the window module to open split view
local window = require("codetyper.window")
window.open_split(coder_path, filepath)
else
-- Just open the coder file
vim.cmd("edit " .. vim.fn.fnameescape(coder_path))
end
end
--- Clear auto-indexed tracking for a buffer
---@param bufnr number Buffer number
function M.clear_auto_indexed(bufnr)
auto_indexed_buffers[bufnr] = nil
end
return M

View File

@@ -788,6 +788,12 @@ function M.setup()
cmd_logs_toggle()
end, { desc = "Toggle logs panel" })
-- Index command - open coder companion for current file
vim.api.nvim_create_user_command("CoderIndex", function()
local autocmds = require("codetyper.autocmds")
autocmds.open_coder_companion()
end, { desc = "Open coder companion for current file" })
-- Setup default keymaps
M.setup_keymaps()
end
@@ -817,6 +823,12 @@ function M.setup_keymaps()
silent = true,
desc = "Coder: Toggle Agent panel"
})
-- Index keymap - open coder companion
vim.keymap.set("n", "<leader>ci", "<cmd>CoderIndex<CR>", {
silent = true,
desc = "Coder: Open coder companion for file"
})
end
return M

View File

@@ -5,7 +5,7 @@ local M = {}
---@type CoderConfig
local defaults = {
llm = {
provider = "ollama",
provider = "ollama", -- Options: "claude", "ollama", "openai", "gemini", "copilot"
claude = {
api_key = nil, -- Will use ANTHROPIC_API_KEY env var if nil
model = "claude-sonnet-4-20250514",
@@ -14,9 +14,21 @@ local defaults = {
host = "http://localhost:11434",
model = "deepseek-coder:6.7b",
},
openai = {
api_key = nil, -- Will use OPENAI_API_KEY env var if nil
model = "gpt-4o",
endpoint = nil, -- Custom endpoint (Azure, OpenRouter, etc.)
},
gemini = {
api_key = nil, -- Will use GEMINI_API_KEY env var if nil
model = "gemini-2.0-flash",
},
copilot = {
model = "gpt-4o", -- Uses GitHub Copilot authentication
},
},
window = {
width = 0.25, -- 25% of screen width (1/4)
width = 25, -- 25% of screen width (1/4)
position = "left",
border = "rounded",
},
@@ -27,6 +39,14 @@ local defaults = {
},
auto_gitignore = true,
auto_open_ask = true, -- Auto-open Ask panel on startup
auto_index = false, -- Auto-create coder companion files on file open
scheduler = {
enabled = true, -- Enable event-driven scheduler
ollama_scout = true, -- Use Ollama as fast local scout for first attempt
escalation_threshold = 0.7, -- Below this confidence, escalate to remote LLM
max_concurrent = 2, -- Maximum concurrent workers
completion_delay_ms = 100, -- Wait after completion popup closes
},
}
--- Deep merge two tables
@@ -67,16 +87,38 @@ function M.validate(config)
return false, "Missing LLM configuration"
end
if config.llm.provider ~= "claude" and config.llm.provider ~= "ollama" then
return false, "Invalid LLM provider. Must be 'claude' or 'ollama'"
local valid_providers = { "claude", "ollama", "openai", "gemini", "copilot" }
local is_valid_provider = false
for _, p in ipairs(valid_providers) do
if config.llm.provider == p then
is_valid_provider = true
break
end
end
if not is_valid_provider then
return false, "Invalid LLM provider. Must be one of: " .. table.concat(valid_providers, ", ")
end
-- Validate provider-specific configuration
if config.llm.provider == "claude" then
local api_key = config.llm.claude.api_key or vim.env.ANTHROPIC_API_KEY
if not api_key or api_key == "" then
return false, "Claude API key not configured. Set llm.claude.api_key or ANTHROPIC_API_KEY env var"
end
elseif config.llm.provider == "openai" then
local api_key = config.llm.openai.api_key or vim.env.OPENAI_API_KEY
if not api_key or api_key == "" then
return false, "OpenAI API key not configured. Set llm.openai.api_key or OPENAI_API_KEY env var"
end
elseif config.llm.provider == "gemini" then
local api_key = config.llm.gemini.api_key or vim.env.GEMINI_API_KEY
if not api_key or api_key == "" then
return false, "Gemini API key not configured. Set llm.gemini.api_key or GEMINI_API_KEY env var"
end
end
-- Note: copilot uses OAuth from copilot.lua/copilot.vim, validated at runtime
-- Note: ollama doesn't require API key, just host configuration
return true
end
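
The validation above accepts five providers and checks each one's key. A sketch of a config that passes it for the OpenAI provider, using the fields shown in the defaults:

```lua
-- Hypothetical user config; api_key may be nil when $OPENAI_API_KEY is set.
require("codetyper").setup({
  llm = {
    provider = "openai",
    openai = {
      model = "gpt-4o",
      endpoint = nil, -- override for Azure/OpenRouter-style gateways
    },
  },
})
```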

View File

@@ -1,8 +1,10 @@
---@mod codetyper Codetyper.nvim - AI-powered coding partner
---@brief [[
--- Codetyper.nvim is a Neovim plugin that acts as your coding partner.
--- It uses LLM APIs (Claude, Ollama) to help you write code faster
--- using special `.coder.*` files and inline prompt tags.
--- It uses LLM APIs (Claude, OpenAI, Gemini, Copilot, Ollama) to help you
--- write code faster using special `.coder.*` files and inline prompt tags.
--- Features an event-driven scheduler with confidence scoring and
--- completion-aware injection timing.
---@brief ]]
local M = {}
@@ -41,6 +43,12 @@ function M.setup(opts)
-- Initialize tree logging (creates .coder folder and initial tree.log)
tree.setup()
-- Start the event-driven scheduler if enabled
if M.config.scheduler and M.config.scheduler.enabled then
local scheduler = require("codetyper.agent.scheduler")
scheduler.start(M.config.scheduler)
end
M._initialized = true
-- Auto-open Ask panel after a short delay (to let UI settle)

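The scheduler is started from `setup` only when `config.scheduler.enabled` is true. A minimal sketch of enabling it, using the option names visible in this commit (`escalation_threshold`, `max_concurrent`, `completion_delay_ms` appear in the config hunk; `ollama_scout` comes from the commit message, so its exact shape is an assumption):

```lua
require("codetyper").setup({
  scheduler = {
    enabled = true,
    ollama_scout = true,          -- try fast local Ollama first (assumed boolean)
    escalation_threshold = 0.7,   -- below this confidence, escalate to remote LLM
    max_concurrent = 2,           -- maximum concurrent workers
    completion_delay_ms = 100,    -- wait after completion popup closes
  },
})
```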
View File

@@ -505,4 +505,148 @@ function M.format_messages_for_claude(messages)
return formatted
end
--- Generate with tool use support for agentic mode
---@param messages table[] Conversation history
---@param context table Context information
---@param tool_definitions table Tool definitions
---@param callback fun(response: table|nil, error: string|nil) Callback with raw response
function M.generate_with_tools(messages, context, tool_definitions, callback)
local api_key = get_api_key()
if not api_key then
callback(nil, "Claude API key not configured")
return
end
local tools_module = require("codetyper.agent.tools")
local agent_prompts = require("codetyper.prompts.agent")
-- Build system prompt with agent instructions
local system_prompt = llm.build_system_prompt(context)
system_prompt = system_prompt .. "\n\n" .. agent_prompts.system
system_prompt = system_prompt .. "\n\n" .. agent_prompts.tool_instructions
-- Build request body with tools
local body = {
model = get_model(),
max_tokens = 4096,
system = system_prompt,
messages = M.format_messages_for_claude(messages),
tools = tools_module.to_claude_format(),
}
local json_body = vim.json.encode(body)
local cmd = {
"curl",
"-s",
"-X", "POST",
API_URL,
"-H", "Content-Type: application/json",
"-H", "x-api-key: " .. api_key,
"-H", "anthropic-version: 2023-06-01",
"-d", json_body,
}
vim.fn.jobstart(cmd, {
stdout_buffered = true,
on_stdout = function(_, data)
if not data or #data == 0 or (data[1] == "" and #data == 1) then
return
end
local response_text = table.concat(data, "\n")
local ok, response = pcall(vim.json.decode, response_text)
if not ok then
vim.schedule(function()
callback(nil, "Failed to parse Claude response")
end)
return
end
if response.error then
vim.schedule(function()
callback(nil, response.error.message or "Claude API error")
end)
return
end
-- Return raw response for parser to handle
vim.schedule(function()
callback(response, nil)
end)
end,
on_stderr = function(_, data)
if data and #data > 0 and data[1] ~= "" then
vim.schedule(function()
callback(nil, "Claude API request failed: " .. table.concat(data, "\n"))
end)
end
end,
on_exit = function(_, code)
if code ~= 0 then
vim.schedule(function()
callback(nil, "Claude API request failed with code: " .. code)
end)
end
end,
})
end
--- Format messages for Claude API
---@param messages table[] Internal message format
---@return table[] Claude API message format
function M.format_messages_for_claude(messages)
local formatted = {}
for _, msg in ipairs(messages) do
if msg.role == "user" then
-- msg.content may be a plain string or a table of tool results;
-- both forms are forwarded to the API unchanged
table.insert(formatted, {
role = "user",
content = msg.content,
})
elseif msg.role == "assistant" then
-- Build content array for assistant messages
local content = {}
-- Add text if present
if msg.content and msg.content ~= "" then
table.insert(content, {
type = "text",
text = msg.content,
})
end
-- Add tool uses if present
if msg.tool_calls then
for _, tool_call in ipairs(msg.tool_calls) do
table.insert(content, {
type = "tool_use",
id = tool_call.id,
name = tool_call.name,
input = tool_call.parameters,
})
end
end
if #content > 0 then
table.insert(formatted, {
role = "assistant",
content = content,
})
end
end
end
return formatted
end
return M

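A hedged usage sketch for the `generate_with_tools` function defined above. The message shape mirrors what `format_messages_for_claude` accepts; the `context` fields and the tool-definitions argument are illustrative assumptions, since their real structure lives in `codetyper.llm` and `codetyper.agent.tools`.

```lua
local claude = require("codetyper.llm.claude")

local messages = {
  { role = "user", content = "Add a nil check to parse_config" },
}
local context = { filetype = "lua" } -- fields assumed; consumed by llm.build_system_prompt
local tools = require("codetyper.agent.tools") -- definitions are converted via to_claude_format()

claude.generate_with_tools(messages, context, tools, function(response, err)
  if err then
    vim.notify(err, vim.log.levels.ERROR)
    return
  end
  -- response is the raw Claude API body; response.content may contain
  -- { type = "text" } and { type = "tool_use" } blocks for the parser
end)
```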
View File

@@ -1,531 +1,501 @@
---Reference implementations:
---https://github.com/zbirenbaum/copilot.lua/blob/master/lua/copilot/auth.lua for the config file format
---https://github.com/zed-industries/zed/blob/ad43bbbf5eda59eba65309735472e0be58b4f7dd/crates/copilot/src/copilot_chat.rs#L272 for authorization
---
---@class CopilotToken
---@field annotations_enabled boolean
---@field chat_enabled boolean
---@field chat_jetbrains_enabled boolean
---@field code_quote_enabled boolean
---@field codesearch boolean
---@field copilotignore_enabled boolean
---@field endpoints {api: string, ["origin-tracker"]: string, proxy: string, telemetry: string}
---@field expires_at integer
---@field individual boolean
---@field nes_enabled boolean
---@field prompt_8k boolean
---@field public_suggestions string
---@field refresh_in integer
---@field sku string
---@field snippy_load_test_enabled boolean
---@field telemetry string
---@field token string
---@field tracking_id string
---@field vsc_electron_fetcher boolean
---@field xcode boolean
---@field xcode_chat boolean
---@mod codetyper.llm.copilot GitHub Copilot API client for Codetyper.nvim
local curl = require("plenary.curl")
local Path = require("plenary.path")
local Utils = require("avante.utils")
local Providers = require("avante.providers")
local OpenAI = require("avante.providers").openai
local H = {}
---@class AvanteProviderFunctor
local M = {}
local copilot_path = vim.fn.stdpath("data") .. "/avante/github-copilot.json"
local lockfile_path = vim.fn.stdpath("data") .. "/avante/copilot-timer.lock"
local utils = require("codetyper.utils")
local llm = require("codetyper.llm")
-- Lockfile management
local function is_process_running(pid)
-- Signal 0 performs an existence check without actually signaling the process
return vim.uv.kill(pid, 0) == 0
end
--- Copilot API endpoints
local AUTH_URL = "https://api.github.com/copilot_internal/v2/token"
local function try_acquire_timer_lock()
local lockfile = Path:new(lockfile_path)
--- Cached state
---@class CopilotState
---@field oauth_token string|nil
---@field github_token table|nil
M.state = nil
local tmp_lockfile = lockfile_path .. ".tmp." .. vim.fn.getpid()
Path:new(tmp_lockfile):write(tostring(vim.fn.getpid()), "w")
-- Check existing lock
if lockfile:exists() then
local content = lockfile:read()
local pid = tonumber(content)
if pid and is_process_running(pid) then
os.remove(tmp_lockfile)
return false -- Another instance is already managing
end
end
-- Attempt to take ownership
local success = os.rename(tmp_lockfile, lockfile_path)
if not success then
os.remove(tmp_lockfile)
return false
end
return true
end
local function start_manager_check_timer()
if M._manager_check_timer then
M._manager_check_timer:stop()
M._manager_check_timer:close()
end
M._manager_check_timer = vim.uv.new_timer()
M._manager_check_timer:start(
30000,
30000,
vim.schedule_wrap(function()
if not M._refresh_timer and try_acquire_timer_lock() then
M.setup_timer()
end
end)
)
end
---@class OAuthToken
---@field user string
---@field oauth_token string
---
---@return string
function H.get_oauth_token()
--- Get OAuth token from copilot.lua or copilot.vim config
---@return string|nil OAuth token
local function get_oauth_token()
local xdg_config = vim.fn.expand("$XDG_CONFIG_HOME")
local os_name = Utils.get_os_name()
---@type string
local config_dir
local os_name = vim.loop.os_uname().sysname:lower()
local config_dir
if xdg_config and vim.fn.isdirectory(xdg_config) > 0 then
config_dir = xdg_config
elseif vim.tbl_contains({ "linux", "darwin" }, os_name) then
elseif os_name:match("linux") or os_name:match("darwin") then
config_dir = vim.fn.expand("~/.config")
else
config_dir = vim.fn.expand("~/AppData/Local")
end
--- hosts.json (copilot.lua), apps.json (copilot.vim)
---@type Path[]
local paths = vim.iter({ "hosts.json", "apps.json" }):fold({}, function(acc, path)
local yason = Path:new(config_dir):joinpath("github-copilot", path)
if yason:exists() then
table.insert(acc, yason)
end
return acc
end)
if #paths == 0 then
error("You must set up copilot with either copilot.lua or copilot.vim", 2)
end
local yason = paths[1]
return vim
.iter(
---@type table<string, OAuthToken>
---@diagnostic disable-next-line: param-type-mismatch
vim.json.decode(yason:read())
)
:filter(function(k, _)
return k:match("github.com")
end)
---@param acc {oauth_token: string}
:fold({}, function(acc, _, v)
acc.oauth_token = v.oauth_token
return acc
end)
.oauth_token
end
H.chat_auth_url = "https://api.github.com/copilot_internal/v2/token"
function H.chat_completion_url(base_url)
return Utils.url_join(base_url, "/chat/completions")
end
function H.response_url(base_url)
return Utils.url_join(base_url, "/responses")
end
function H.refresh_token(async, force)
if not M.state then
error("internal initialization error")
end
async = async == nil and true or async
force = force or false
-- Skip the refresh unless it is forced or the token has expired
if
not force
and M.state.github_token
and M.state.github_token.expires_at
and M.state.github_token.expires_at > math.floor(os.time())
then
return false
end
local provider_conf = Providers.get_config("copilot")
local curl_opts = {
headers = {
["Authorization"] = "token " .. M.state.oauth_token,
["Accept"] = "application/json",
},
timeout = provider_conf.timeout,
proxy = provider_conf.proxy,
insecure = provider_conf.allow_insecure,
}
local function handle_response(response)
if response.status == 200 then
M.state.github_token = vim.json.decode(response.body)
local file = Path:new(copilot_path)
file:write(vim.json.encode(M.state.github_token), "w")
if not vim.g.avante_login then
vim.g.avante_login = true
end
-- If triggered synchronously, reset timer
if not async and M._refresh_timer then
M.setup_timer()
end
return true
else
error("Failed to get success response: " .. vim.inspect(response))
return false
end
end
if async then
curl.get(
H.chat_auth_url,
vim.tbl_deep_extend("force", {
callback = handle_response,
}, curl_opts)
)
else
local response = curl.get(H.chat_auth_url, curl_opts)
handle_response(response)
end
end
---@private
---@class AvanteCopilotState
---@field oauth_token string
---@field github_token CopilotToken?
M.state = nil
M.api_key_name = ""
M.tokenizer_id = "gpt-4o"
M.role_map = {
user = "user",
assistant = "assistant",
}
function M:is_disable_stream()
return false
end
setmetatable(M, { __index = OpenAI })
function M:list_models()
if M._model_list_cache then
return M._model_list_cache
end
if not M._is_setup then
M.setup()
end
-- refresh token synchronously, only if it has expired
-- (this should rarely happen, as we refresh the token in the background)
H.refresh_token(false, false)
local provider_conf = Providers.parse_config(self)
local headers = self:build_headers()
local curl_opts = {
headers = Utils.tbl_override(headers, self.extra_headers),
timeout = provider_conf.timeout,
proxy = provider_conf.proxy,
insecure = provider_conf.allow_insecure,
}
local function handle_response(response)
if response.status == 200 then
local body = vim.json.decode(response.body)
-- ref: https://github.com/CopilotC-Nvim/CopilotChat.nvim/blob/16d897fd43d07e3b54478ccdb2f8a16e4df4f45a/lua/CopilotChat/config/providers.lua#L171-L187
local models = vim.iter(body.data)
:filter(function(model)
return model.capabilities.type == "chat" and not vim.endswith(model.id, "paygo")
end)
:map(function(model)
return {
id = model.id,
display_name = model.name,
name = "copilot/" .. model.name .. " (" .. model.id .. ")",
provider_name = "copilot",
tokenizer = model.capabilities.tokenizer,
max_input_tokens = model.capabilities.limits.max_prompt_tokens,
max_output_tokens = model.capabilities.limits.max_output_tokens,
policy = not model["policy"] or model["policy"]["state"] == "enabled",
version = model.version,
}
end)
:totable()
M._model_list_cache = models
return models
else
error("Failed to get success response: " .. vim.inspect(response))
return {}
end
end
local response = curl.get((M.state.github_token.endpoints.api or "") .. "/models", curl_opts)
return handle_response(response)
end
function M:build_headers()
return {
["Authorization"] = "Bearer " .. M.state.github_token.token,
["User-Agent"] = "GitHubCopilotChat/0.26.7",
["Editor-Version"] = "vscode/1.105.1",
["Editor-Plugin-Version"] = "copilot-chat/0.26.7",
["Copilot-Integration-Id"] = "vscode-chat",
["Openai-Intent"] = "conversation-edits",
}
end
function M:parse_curl_args(prompt_opts)
-- refresh token synchronously, only if it has expired
-- (this should rarely happen, as we refresh the token in the background)
H.refresh_token(false, false)
local provider_conf, request_body = Providers.parse_config(self)
local use_response_api = Providers.resolve_use_response_api(provider_conf, prompt_opts)
local disable_tools = provider_conf.disable_tools or false
-- Apply OpenAI's set_allowed_params for Response API compatibility
OpenAI.set_allowed_params(provider_conf, request_body)
local use_ReAct_prompt = provider_conf.use_ReAct_prompt == true
local tools = nil
if not disable_tools and prompt_opts.tools and not use_ReAct_prompt then
tools = {}
for _, tool in ipairs(prompt_opts.tools) do
local transformed_tool = OpenAI:transform_tool(tool)
-- Response API uses flattened tool structure
if use_response_api then
if transformed_tool.type == "function" and transformed_tool["function"] then
transformed_tool = {
type = "function",
name = transformed_tool["function"].name,
description = transformed_tool["function"].description,
parameters = transformed_tool["function"].parameters,
}
-- Try hosts.json (copilot.lua) and apps.json (copilot.vim)
local paths = { "hosts.json", "apps.json" }
for _, filename in ipairs(paths) do
local path = config_dir .. "/github-copilot/" .. filename
if vim.fn.filereadable(path) == 1 then
local content = vim.fn.readfile(path)
if content and #content > 0 then
local ok, data = pcall(vim.json.decode, table.concat(content, "\n"))
if ok and data then
for key, value in pairs(data) do
if key:match("github.com") and value.oauth_token then
return value.oauth_token
end
end
end
end
table.insert(tools, transformed_tool)
end
end
local headers = self:build_headers()
if prompt_opts.messages and #prompt_opts.messages > 0 then
local last_message = prompt_opts.messages[#prompt_opts.messages]
local initiator = last_message.role == "user" and "user" or "agent"
headers["X-Initiator"] = initiator
end
local parsed_messages = self:parse_messages(prompt_opts)
-- Build base body
local base_body = {
model = provider_conf.model,
stream = true,
tools = tools,
}
-- Response API uses 'input' instead of 'messages'
-- NOTE: Copilot doesn't support previous_response_id, always send full history
if use_response_api then
base_body.input = parsed_messages
-- Response API uses max_output_tokens instead of max_tokens/max_completion_tokens
if request_body.max_completion_tokens then
request_body.max_output_tokens = request_body.max_completion_tokens
request_body.max_completion_tokens = nil
end
if request_body.max_tokens then
request_body.max_output_tokens = request_body.max_tokens
request_body.max_tokens = nil
end
-- Response API doesn't use stream_options
base_body.stream_options = nil
base_body.include = { "reasoning.encrypted_content" }
base_body.reasoning = {
summary = "detailed",
}
base_body.truncation = "disabled"
else
base_body.messages = parsed_messages
base_body.stream_options = {
include_usage = true,
}
end
local base_url = M.state.github_token.endpoints.api or provider_conf.endpoint
local build_url = use_response_api and H.response_url or H.chat_completion_url
return {
url = build_url(base_url),
timeout = provider_conf.timeout,
proxy = provider_conf.proxy,
insecure = provider_conf.allow_insecure,
headers = Utils.tbl_override(headers, self.extra_headers),
body = vim.tbl_deep_extend("force", base_body, request_body),
}
return nil
end
M._refresh_timer = nil
function M.setup_timer()
if M._refresh_timer then
M._refresh_timer:stop()
M._refresh_timer:close()
end
-- Calculate time until token expires
local now = math.floor(os.time())
local expires_at = M.state.github_token and M.state.github_token.expires_at or now
local time_until_expiry = math.max(0, expires_at - now)
-- Refresh 2 minutes before expiration
local initial_interval = math.max(0, (time_until_expiry - 120) * 1000)
-- Regular interval of 28 minutes after the first refresh
local repeat_interval = 28 * 60 * 1000
M._refresh_timer = vim.uv.new_timer()
M._refresh_timer:start(
initial_interval,
repeat_interval,
vim.schedule_wrap(function()
H.refresh_token(true, true)
end)
)
--- Get model from config
---@return string Model name
local function get_model()
local codetyper = require("codetyper")
local config = codetyper.get_config()
return config.llm.copilot.model
end
function M.setup_file_watcher()
if M._file_watcher then
--- Refresh GitHub token using OAuth token
---@param callback fun(token: table|nil, error: string|nil)
local function refresh_token(callback)
if not M.state or not M.state.oauth_token then
callback(nil, "No OAuth token available")
return
end
local copilot_token_file = Path:new(copilot_path)
M._file_watcher = vim.uv.new_fs_event()
-- Check if current token is still valid
if M.state.github_token and M.state.github_token.expires_at then
if M.state.github_token.expires_at > os.time() then
callback(M.state.github_token, nil)
return
end
end
M._file_watcher:start(
copilot_path,
{},
vim.schedule_wrap(function()
-- Reload token from file
if copilot_token_file:exists() then
local ok, token = pcall(vim.json.decode, copilot_token_file:read())
if ok then
M.state.github_token = token
end
local cmd = {
"curl",
"-s",
"-X",
"GET",
AUTH_URL,
"-H",
"Authorization: token " .. M.state.oauth_token,
"-H",
"Accept: application/json",
}
vim.fn.jobstart(cmd, {
stdout_buffered = true,
on_stdout = function(_, data)
if not data or #data == 0 or (data[1] == "" and #data == 1) then
return
end
end)
)
local response_text = table.concat(data, "\n")
local ok, token = pcall(vim.json.decode, response_text)
if not ok then
vim.schedule(function()
callback(nil, "Failed to parse token response")
end)
return
end
if token.error then
vim.schedule(function()
callback(nil, token.error_description or "Token refresh failed")
end)
return
end
M.state.github_token = token
vim.schedule(function()
callback(token, nil)
end)
end,
on_stderr = function(_, data)
if data and #data > 0 and data[1] ~= "" then
vim.schedule(function()
callback(nil, "Token refresh failed: " .. table.concat(data, "\n"))
end)
end
end,
on_exit = function(_, code)
if code ~= 0 then
vim.schedule(function()
callback(nil, "Token refresh failed with code: " .. code)
end)
end
end,
})
end
M._is_setup = false
function M.is_env_set()
local ok = pcall(function()
H.get_oauth_token()
end)
return ok
--- Build request headers
---@param token table GitHub token
---@return table Headers
local function build_headers(token)
return {
"Authorization: Bearer " .. token.token,
"Content-Type: application/json",
"User-Agent: GitHubCopilotChat/0.26.7",
"Editor-Version: vscode/1.105.1",
"Editor-Plugin-Version: copilot-chat/0.26.7",
"Copilot-Integration-Id: vscode-chat",
"Openai-Intent: conversation-edits",
}
end
function M.setup()
local copilot_token_file = Path:new(copilot_path)
--- Build request body for Copilot API
---@param prompt string User prompt
---@param context table Context information
---@return table Request body
local function build_request_body(prompt, context)
local system_prompt = llm.build_system_prompt(context)
return {
model = get_model(),
messages = {
{ role = "system", content = system_prompt },
{ role = "user", content = prompt },
},
max_tokens = 4096,
temperature = 0.2,
stream = false,
}
end
--- Make HTTP request to Copilot API
---@param token table GitHub token
---@param body table Request body
---@param callback fun(response: string|nil, error: string|nil, usage: table|nil)
local function make_request(token, body, callback)
local endpoint = (token.endpoints and token.endpoints.api or "https://api.githubcopilot.com")
.. "/chat/completions"
local json_body = vim.json.encode(body)
local headers = build_headers(token)
local cmd = {
"curl",
"-s",
"-X",
"POST",
endpoint,
}
for _, header in ipairs(headers) do
table.insert(cmd, "-H")
table.insert(cmd, header)
end
table.insert(cmd, "-d")
table.insert(cmd, json_body)
vim.fn.jobstart(cmd, {
stdout_buffered = true,
on_stdout = function(_, data)
if not data or #data == 0 or (data[1] == "" and #data == 1) then
return
end
local response_text = table.concat(data, "\n")
local ok, response = pcall(vim.json.decode, response_text)
if not ok then
vim.schedule(function()
callback(nil, "Failed to parse Copilot response", nil)
end)
return
end
if response.error then
vim.schedule(function()
callback(nil, response.error.message or "Copilot API error", nil)
end)
return
end
-- Extract usage info
local usage = response.usage or {}
if response.choices and response.choices[1] and response.choices[1].message then
local code = llm.extract_code(response.choices[1].message.content)
vim.schedule(function()
callback(code, nil, usage)
end)
else
vim.schedule(function()
callback(nil, "No content in Copilot response", nil)
end)
end
end,
on_stderr = function(_, data)
if data and #data > 0 and data[1] ~= "" then
vim.schedule(function()
callback(nil, "Copilot API request failed: " .. table.concat(data, "\n"), nil)
end)
end
end,
on_exit = function(_, code)
if code ~= 0 then
vim.schedule(function()
callback(nil, "Copilot API request failed with code: " .. code, nil)
end)
end
end,
})
end
--- Initialize Copilot state
local function ensure_initialized()
if not M.state then
M.state = {
oauth_token = get_oauth_token(),
github_token = nil,
oauth_token = H.get_oauth_token(),
}
end
-- Load and validate existing token
if copilot_token_file:exists() then
local ok, token = pcall(vim.json.decode, copilot_token_file:read())
if ok and token.expires_at and token.expires_at > math.floor(os.time()) then
M.state.github_token = token
end
end
-- Setup timer management
local timer_lock_acquired = try_acquire_timer_lock()
if timer_lock_acquired then
M.setup_timer()
else
vim.schedule(function()
H.refresh_token(true, false)
end)
end
M.setup_file_watcher()
start_manager_check_timer()
require("avante.tokenizers").setup(M.tokenizer_id)
vim.g.avante_login = true
M._is_setup = true
end
function M.cleanup()
-- Cleanup refresh timer
if M._refresh_timer then
M._refresh_timer:stop()
M._refresh_timer:close()
M._refresh_timer = nil
--- Generate code using Copilot API
---@param prompt string The user's prompt
---@param context table Context information
---@param callback fun(response: string|nil, error: string|nil)
function M.generate(prompt, context, callback)
local logs = require("codetyper.agent.logs")
-- Remove lockfile if we were the manager
local lockfile = Path:new(lockfile_path)
if lockfile:exists() then
local content = lockfile:read()
local pid = tonumber(content)
if pid and pid == vim.fn.getpid() then
lockfile:rm()
ensure_initialized()
if not M.state.oauth_token then
local err = "Copilot not authenticated. Please set up copilot.lua or copilot.vim first."
logs.error(err)
callback(nil, err)
return
end
local model = get_model()
logs.request("copilot", model)
logs.thinking("Refreshing authentication token...")
refresh_token(function(token, err)
if err then
logs.error(err)
utils.notify(err, vim.log.levels.ERROR)
callback(nil, err)
return
end
logs.thinking("Building request body...")
local body = build_request_body(prompt, context)
local prompt_estimate = logs.estimate_tokens(vim.json.encode(body))
logs.debug(string.format("Estimated prompt: ~%d tokens", prompt_estimate))
logs.thinking("Sending to Copilot API...")
utils.notify("Sending request to Copilot...", vim.log.levels.INFO)
make_request(token, body, function(response, request_err, usage)
if request_err then
logs.error(request_err)
utils.notify(request_err, vim.log.levels.ERROR)
callback(nil, request_err)
else
if usage then
logs.response(usage.prompt_tokens or 0, usage.completion_tokens or 0, "stop")
end
logs.thinking("Response received, extracting code...")
logs.info("Code generated successfully")
utils.notify("Code generated successfully", vim.log.levels.INFO)
callback(response, nil)
end
end)
end)
end
--- Check if Copilot is properly configured
---@return boolean, string? Valid status and optional error message
function M.validate()
ensure_initialized()
if not M.state.oauth_token then
return false, "Copilot not authenticated. Set up copilot.lua or copilot.vim first."
end
return true
end
--- Generate with tool use support for agentic mode
---@param messages table[] Conversation history
---@param context table Context information
---@param tool_definitions table Tool definitions
---@param callback fun(response: table|nil, error: string|nil)
function M.generate_with_tools(messages, context, tool_definitions, callback)
local logs = require("codetyper.agent.logs")
ensure_initialized()
if not M.state.oauth_token then
local err = "Copilot not authenticated"
logs.error(err)
callback(nil, err)
return
end
local model = get_model()
logs.request("copilot", model)
logs.thinking("Refreshing authentication token...")
refresh_token(function(token, err)
if err then
logs.error(err)
callback(nil, err)
return
end
local tools_module = require("codetyper.agent.tools")
local agent_prompts = require("codetyper.prompts.agent")
-- Build system prompt with agent instructions
local system_prompt = llm.build_system_prompt(context)
system_prompt = system_prompt .. "\n\n" .. agent_prompts.system
system_prompt = system_prompt .. "\n\n" .. agent_prompts.tool_instructions
-- Format messages for Copilot (OpenAI-compatible format)
local copilot_messages = { { role = "system", content = system_prompt } }
for _, msg in ipairs(messages) do
if type(msg.content) == "string" then
table.insert(copilot_messages, { role = msg.role, content = msg.content })
elseif type(msg.content) == "table" then
local text_parts = {}
for _, part in ipairs(msg.content) do
if part.type == "tool_result" then
table.insert(text_parts, "[" .. (part.name or "tool") .. " result]: " .. (part.content or ""))
elseif part.type == "text" then
table.insert(text_parts, part.text or "")
end
end
if #text_parts > 0 then
table.insert(copilot_messages, { role = msg.role, content = table.concat(text_parts, "\n") })
end
end
end
end
-- Cleanup manager check timer
if M._manager_check_timer then
M._manager_check_timer:stop()
M._manager_check_timer:close()
M._manager_check_timer = nil
end
local body = {
model = get_model(),
messages = copilot_messages,
max_tokens = 4096,
temperature = 0.3,
stream = false,
tools = tools_module.to_openai_format(),
}
-- Cleanup file watcher
if M._file_watcher then
---@diagnostic disable-next-line: param-type-mismatch
M._file_watcher:stop()
M._file_watcher = nil
end
local endpoint = (token.endpoints and token.endpoints.api or "https://api.githubcopilot.com")
.. "/chat/completions"
local json_body = vim.json.encode(body)
local prompt_estimate = logs.estimate_tokens(json_body)
logs.debug(string.format("Estimated prompt: ~%d tokens", prompt_estimate))
logs.thinking("Sending to Copilot API...")
local headers = build_headers(token)
local cmd = {
"curl",
"-s",
"-X",
"POST",
endpoint,
}
for _, header in ipairs(headers) do
table.insert(cmd, "-H")
table.insert(cmd, header)
end
table.insert(cmd, "-d")
table.insert(cmd, json_body)
vim.fn.jobstart(cmd, {
stdout_buffered = true,
on_stdout = function(_, data)
if not data or #data == 0 or (data[1] == "" and #data == 1) then
return
end
local response_text = table.concat(data, "\n")
local ok, response = pcall(vim.json.decode, response_text)
if not ok then
vim.schedule(function()
logs.error("Failed to parse Copilot response")
callback(nil, "Failed to parse Copilot response")
end)
return
end
if response.error then
vim.schedule(function()
logs.error(response.error.message or "Copilot API error")
callback(nil, response.error.message or "Copilot API error")
end)
return
end
-- Log token usage
if response.usage then
logs.response(response.usage.prompt_tokens or 0, response.usage.completion_tokens or 0, "stop")
end
-- Convert to Claude-like format for parser compatibility
local converted = { content = {} }
if response.choices and response.choices[1] then
local choice = response.choices[1]
if choice.message then
if choice.message.content then
table.insert(converted.content, { type = "text", text = choice.message.content })
logs.thinking("Response contains text")
end
if choice.message.tool_calls then
for _, tc in ipairs(choice.message.tool_calls) do
local args = {}
if tc["function"] and tc["function"].arguments then
local ok_args, parsed = pcall(vim.json.decode, tc["function"].arguments)
if ok_args then
args = parsed
end
end
table.insert(converted.content, {
type = "tool_use",
id = tc.id,
name = tc["function"].name,
input = args,
})
logs.thinking("Tool call: " .. tc["function"].name)
end
end
end
end
vim.schedule(function()
callback(converted, nil)
end)
end,
on_stderr = function(_, data)
if data and #data > 0 and data[1] ~= "" then
vim.schedule(function()
logs.error("Copilot API request failed: " .. table.concat(data, "\n"))
callback(nil, "Copilot API request failed: " .. table.concat(data, "\n"))
end)
end
end,
on_exit = function(_, code)
if code ~= 0 then
vim.schedule(function()
logs.error("Copilot API request failed with code: " .. code)
callback(nil, "Copilot API request failed with code: " .. code)
end)
end
end,
})
end)
end
-- Register cleanup on Neovim exit
vim.api.nvim_create_autocmd("VimLeavePre", {
callback = function()
M.cleanup()
end,
})
return M

View File

@@ -1,361 +1,394 @@
local Utils = require("avante.utils")
local Providers = require("avante.providers")
local Clipboard = require("avante.clipboard")
local OpenAI = require("avante.providers").openai
local Prompts = require("avante.utils.prompts")
---@mod codetyper.llm.gemini Google Gemini API client for Codetyper.nvim
---@class AvanteProviderFunctor
local M = {}
M.api_key_name = "GEMINI_API_KEY"
M.role_map = {
user = "user",
assistant = "model",
}
local utils = require("codetyper.utils")
local llm = require("codetyper.llm")
function M:is_disable_stream()
return false
--- Gemini API endpoint
local API_URL = "https://generativelanguage.googleapis.com/v1beta/models"
--- Get API key from config or environment
---@return string|nil API key
local function get_api_key()
local codetyper = require("codetyper")
local config = codetyper.get_config()
return config.llm.gemini.api_key or vim.env.GEMINI_API_KEY
end
---@param tool AvanteLLMTool
function M:transform_to_function_declaration(tool)
local input_schema_properties, required = Utils.llm_tool_param_fields_to_json_schema(tool.param.fields)
local parameters = nil
if not vim.tbl_isempty(input_schema_properties) then
parameters = {
type = "object",
properties = input_schema_properties,
required = required,
}
end
return {
name = tool.name,
description = tool.get_description and tool.get_description() or tool.description,
parameters = parameters,
}
--- Get model from config
---@return string Model name
local function get_model()
local codetyper = require("codetyper")
local config = codetyper.get_config()
return config.llm.gemini.model
end
function M:parse_messages(opts)
local provider_conf, _ = Providers.parse_config(self)
local use_ReAct_prompt = provider_conf.use_ReAct_prompt == true
local contents = {}
local prev_role = nil
local tool_id_to_name = {}
vim.iter(opts.messages):each(function(message)
local role = message.role
if role == prev_role then
if role == M.role_map["user"] then
table.insert(
contents,
{ role = M.role_map["assistant"], parts = {
{ text = "Ok, I understand." },
} }
)
else
table.insert(contents, { role = M.role_map["user"], parts = {
{ text = "Ok" },
} })
end
end
prev_role = role
local parts = {}
local content_items = message.content
if type(content_items) == "string" then
table.insert(parts, { text = content_items })
elseif type(content_items) == "table" then
---@cast content_items AvanteLLMMessageContentItem[]
for _, item in ipairs(content_items) do
if type(item) == "string" then
table.insert(parts, { text = item })
elseif type(item) == "table" and item.type == "text" then
table.insert(parts, { text = item.text })
elseif type(item) == "table" and item.type == "image" then
table.insert(parts, {
inline_data = {
mime_type = "image/png",
data = item.source.data,
},
})
elseif type(item) == "table" and item.type == "tool_use" and not use_ReAct_prompt then
tool_id_to_name[item.id] = item.name
role = "model"
table.insert(parts, {
functionCall = {
name = item.name,
args = item.input,
},
})
elseif type(item) == "table" and item.type == "tool_result" and not use_ReAct_prompt then
role = "function"
local ok, content = pcall(vim.json.decode, item.content)
if not ok then
content = item.content
end
-- item.name here refers to the name of the tool that was called,
-- which is available in the tool_result content item prepared by llm.lua
local tool_name = item.name
if not tool_name then
-- Fallback, though item.name should ideally always be present for tool_result
tool_name = tool_id_to_name[item.tool_use_id]
end
table.insert(parts, {
functionResponse = {
name = tool_name,
response = {
name = tool_name, -- Gemini API requires the name in the response object as well
content = content,
},
},
})
elseif type(item) == "table" and item.type == "thinking" then
table.insert(parts, { text = item.thinking })
elseif type(item) == "table" and item.type == "redacted_thinking" then
table.insert(parts, { text = item.data })
end
end
if not provider_conf.disable_tools and use_ReAct_prompt then
if content_items[1].type == "tool_result" then
local tool_use_msg = nil
for _, msg_ in ipairs(opts.messages) do
if type(msg_.content) == "table" and #msg_.content > 0 then
if
msg_.content[1].type == "tool_use"
and msg_.content[1].id == content_items[1].tool_use_id
then
tool_use_msg = msg_
break
end
end
end
if tool_use_msg then
table.insert(contents, {
role = "model",
parts = {
{ text = Utils.tool_use_to_xml(tool_use_msg.content[1]) },
},
})
role = "user"
table.insert(parts, {
text = "The result of tool use "
.. Utils.tool_use_to_xml(tool_use_msg.content[1])
.. " is:\n",
})
table.insert(parts, {
text = content_items[1].content,
})
end
end
end
end
if #parts > 0 then
table.insert(contents, { role = M.role_map[role] or role, parts = parts })
end
end)
if Clipboard.support_paste_image() and opts.image_paths then
for _, image_path in ipairs(opts.image_paths) do
local image_data = {
inline_data = {
mime_type = "image/png",
data = Clipboard.get_base64_content(image_path),
},
}
table.insert(contents[#contents].parts, image_data)
end
end
local system_prompt = opts.system_prompt
if use_ReAct_prompt then
system_prompt = Prompts.get_ReAct_system_prompt(provider_conf, opts)
end
  return {
    systemInstruction = {
      role = "user",
      parts = { { text = system_prompt } },
    },
    contents = contents,
  }
end
--- Build request body for Gemini API
---@param prompt string User prompt
---@param context table Context information
---@return table Request body
local function build_request_body(prompt, context)
  local system_prompt = llm.build_system_prompt(context)
  return {
    systemInstruction = {
      role = "user",
      parts = { { text = system_prompt } },
    },
    contents = {
      {
        role = "user",
        parts = { { text = prompt } },
      },
    },
    generationConfig = {
      temperature = 0.2,
      maxOutputTokens = 4096,
    },
  }
end
--- Prepares the main request body for Gemini-like APIs.
---@param provider_instance AvanteProviderFunctor The provider instance (self).
---@param prompt_opts AvantePromptOptions Prompt options including messages, tools, system_prompt.
---@param provider_conf table Provider configuration from config.lua (e.g., model, top-level temperature/max_tokens).
---@param request_body_ table Request-specific overrides, typically from provider_conf.request_config_overrides.
---@return table The fully constructed request body.
function M.prepare_request_body(provider_instance, prompt_opts, provider_conf, request_body_)
  local request_body = {}
  request_body.generationConfig = request_body_.generationConfig or {}
  local use_ReAct_prompt = provider_conf.use_ReAct_prompt == true
  if use_ReAct_prompt then
    request_body.generationConfig.stopSequences = { "</tool_use>" }
  end
  local disable_tools = provider_conf.disable_tools or false
  if not use_ReAct_prompt and not disable_tools and prompt_opts.tools then
    local function_declarations = {}
    for _, tool in ipairs(prompt_opts.tools) do
      table.insert(function_declarations, provider_instance:transform_to_function_declaration(tool))
    end
    if #function_declarations > 0 then
      request_body.tools = {
        {
          functionDeclarations = function_declarations,
        },
      }
    end
  end
  return vim.tbl_deep_extend("force", {}, provider_instance:parse_messages(prompt_opts), request_body)
end
--- Make HTTP request to Gemini API
---@param body table Request body
---@param callback fun(response: string|nil, error: string|nil, usage: table|nil) Callback function
local function make_request(body, callback)
  local api_key = get_api_key()
  if not api_key then
    callback(nil, "Gemini API key not configured", nil)
    return
  end
  local model = get_model()
  local url = API_URL .. "/" .. model .. ":generateContent?key=" .. api_key
  local json_body = vim.json.encode(body)
  local cmd = {
    "curl",
    "-s",
    "-X",
    "POST",
    url,
    "-H",
    "Content-Type: application/json",
    "-d",
    json_body,
  }
  vim.fn.jobstart(cmd, {
    stdout_buffered = true,
    on_stdout = function(_, data)
      if not data or #data == 0 or (data[1] == "" and #data == 1) then
        return
      end
      local response_text = table.concat(data, "\n")
      local ok, response = pcall(vim.json.decode, response_text)
      if not ok then
        vim.schedule(function()
          callback(nil, "Failed to parse Gemini response", nil)
        end)
        return
      end
      if response.error then
        vim.schedule(function()
          callback(nil, response.error.message or "Gemini API error", nil)
        end)
        return
      end
      -- Extract usage info
      local usage = {}
      if response.usageMetadata then
        usage.prompt_tokens = response.usageMetadata.promptTokenCount or 0
        usage.completion_tokens = response.usageMetadata.candidatesTokenCount or 0
      end
      if response.candidates and response.candidates[1] then
        local candidate = response.candidates[1]
        if candidate.content and candidate.content.parts then
          local text_parts = {}
          for _, part in ipairs(candidate.content.parts) do
            if part.text then
              table.insert(text_parts, part.text)
            end
          end
          local full_text = table.concat(text_parts, "")
          local code = llm.extract_code(full_text)
          vim.schedule(function()
            callback(code, nil, usage)
          end)
        else
          vim.schedule(function()
            callback(nil, "No content in Gemini response", nil)
          end)
        end
      else
        vim.schedule(function()
          callback(nil, "No candidates in Gemini response", nil)
        end)
      end
    end,
    on_stderr = function(_, data)
      if data and #data > 0 and data[1] ~= "" then
        vim.schedule(function()
          callback(nil, "Gemini API request failed: " .. table.concat(data, "\n"), nil)
        end)
      end
    end,
    on_exit = function(_, code)
      if code ~= 0 then
        vim.schedule(function()
          callback(nil, "Gemini API request failed with code: " .. code, nil)
        end)
      end
    end,
  })
end
---@param usage avante.GeminiTokenUsage | nil
---@return avante.LLMTokenUsage | nil
function M.transform_gemini_usage(usage)
  if not usage then
    return nil
  end
  ---@type avante.LLMTokenUsage
  local res = {
    prompt_tokens = usage.promptTokenCount,
    completion_tokens = usage.candidatesTokenCount,
  }
  return res
end
function M:parse_response(ctx, data_stream, _, opts)
  local ok, jsn = pcall(vim.json.decode, data_stream)
  if not ok then
    opts.on_stop({ reason = "error", error = "Failed to parse JSON response: " .. tostring(jsn) })
    return
  end
  if opts.update_tokens_usage and jsn.usageMetadata ~= nil then
    local usage = M.transform_gemini_usage(jsn.usageMetadata)
    if usage ~= nil then
      opts.update_tokens_usage(usage)
    end
  end
  -- Handle prompt feedback first, as it might indicate an overall issue with the prompt
  if jsn.promptFeedback and jsn.promptFeedback.blockReason then
    local feedback = jsn.promptFeedback
    OpenAI:finish_pending_messages(ctx, opts) -- Ensure any pending messages are cleared
    opts.on_stop({
      reason = "error",
      error = "Prompt blocked or filtered. Reason: " .. feedback.blockReason,
      details = feedback,
    })
    return
  end
  if jsn.candidates and #jsn.candidates > 0 then
    local candidate = jsn.candidates[1]
    ---@type AvanteLLMToolUse[]
    ctx.tool_use_list = ctx.tool_use_list or {}
    -- Check if candidate.content and candidate.content.parts exist before iterating
    if candidate.content and candidate.content.parts then
      for _, part in ipairs(candidate.content.parts) do
        if part.text then
          if opts.on_chunk then
            opts.on_chunk(part.text)
          end
          OpenAI:add_text_message(ctx, part.text, "generating", opts)
        elseif part.functionCall then
          if not ctx.function_call_id then
            ctx.function_call_id = 0
          end
          ctx.function_call_id = ctx.function_call_id + 1
          local tool_use = {
            id = ctx.turn_id .. "-" .. tostring(ctx.function_call_id),
            name = part.functionCall.name,
            input_json = vim.json.encode(part.functionCall.args),
          }
          table.insert(ctx.tool_use_list, tool_use)
          OpenAI:add_tool_use_message(ctx, tool_use, "generated", opts)
        end
      end
    end
    -- Check for finishReason to determine if this candidate's stream is done.
    if candidate.finishReason then
      OpenAI:finish_pending_messages(ctx, opts)
      local reason_str = candidate.finishReason
      local stop_details = { finish_reason = reason_str }
      stop_details.usage = M.transform_gemini_usage(jsn.usageMetadata)
      if reason_str == "TOOL_CODE" then
        -- Model indicates a tool-related stop.
        -- The tool_use list is added to the table in llm.lua
        opts.on_stop(vim.tbl_deep_extend("force", { reason = "tool_use" }, stop_details))
      elseif reason_str == "STOP" then
        if ctx.tool_use_list and #ctx.tool_use_list > 0 then
          -- Natural stop, but tools were found in this final chunk.
          opts.on_stop(vim.tbl_deep_extend("force", { reason = "tool_use" }, stop_details))
        else
          -- Natural stop, no tools in this final chunk.
          -- llm.lua will check its accumulated tools if tool_choice was active.
          opts.on_stop(vim.tbl_deep_extend("force", { reason = "complete" }, stop_details))
        end
      elseif reason_str == "MAX_TOKENS" then
        opts.on_stop(vim.tbl_deep_extend("force", { reason = "max_tokens" }, stop_details))
      elseif reason_str == "SAFETY" or reason_str == "RECITATION" then
        opts.on_stop(
          vim.tbl_deep_extend(
            "force",
            { reason = "error", error = "Generation stopped: " .. reason_str },
            stop_details
          )
        )
      else -- OTHER, FINISH_REASON_UNSPECIFIED, or any other unhandled reason.
        opts.on_stop(
          vim.tbl_deep_extend(
            "force",
            { reason = "error", error = "Generation stopped with unhandled reason: " .. reason_str },
            stop_details
          )
        )
      end
    end
    -- If no finishReason, it's an intermediate chunk; do not call on_stop.
  end
end
---@param prompt_opts AvantePromptOptions
---@return AvanteCurlOutput|nil
function M:parse_curl_args(prompt_opts)
  local provider_conf, request_body = Providers.parse_config(self)
  local api_key = self:parse_api_key()
  if api_key == nil then
    Utils.error("Gemini: API key is not set. Please set " .. M.api_key_name)
    return nil
  end
  return {
    url = Utils.url_join(
      provider_conf.endpoint,
      provider_conf.model .. ":streamGenerateContent?alt=sse&key=" .. api_key
    ),
    proxy = provider_conf.proxy,
    insecure = provider_conf.allow_insecure,
    headers = Utils.tbl_override({ ["Content-Type"] = "application/json" }, self.extra_headers),
    body = M.prepare_request_body(self, prompt_opts, provider_conf, request_body),
  }
end
--- Generate code using Gemini API
---@param prompt string The user's prompt
---@param context table Context information
---@param callback fun(response: string|nil, error: string|nil) Callback function
function M.generate(prompt, context, callback)
  local logs = require("codetyper.agent.logs")
  local model = get_model()
  -- Log the request
  logs.request("gemini", model)
  logs.thinking("Building request body...")
  local body = build_request_body(prompt, context)
  -- Estimate prompt tokens
  local prompt_estimate = logs.estimate_tokens(vim.json.encode(body))
  logs.debug(string.format("Estimated prompt: ~%d tokens", prompt_estimate))
  logs.thinking("Sending to Gemini API...")
  utils.notify("Sending request to Gemini...", vim.log.levels.INFO)
  make_request(body, function(response, err, usage)
    if err then
      logs.error(err)
      utils.notify(err, vim.log.levels.ERROR)
      callback(nil, err)
    else
      -- Log token usage
      if usage then
        logs.response(usage.prompt_tokens or 0, usage.completion_tokens or 0, "stop")
      end
      logs.thinking("Response received, extracting code...")
      logs.info("Code generated successfully")
      utils.notify("Code generated successfully", vim.log.levels.INFO)
      callback(response, nil)
    end
  end)
end
--- Check if Gemini is properly configured
---@return boolean, string? Valid status and optional error message
function M.validate()
  local api_key = get_api_key()
  if not api_key or api_key == "" then
    return false, "Gemini API key not configured"
  end
  return true
end
--- Generate with tool use support for agentic mode
---@param messages table[] Conversation history
---@param context table Context information
---@param tool_definitions table Tool definitions
---@param callback fun(response: table|nil, error: string|nil) Callback with raw response
function M.generate_with_tools(messages, context, tool_definitions, callback)
  local logs = require("codetyper.agent.logs")
  local model = get_model()
  logs.request("gemini", model)
  logs.thinking("Preparing agent request...")
  local api_key = get_api_key()
  if not api_key then
    logs.error("Gemini API key not configured")
    callback(nil, "Gemini API key not configured")
    return
  end
local tools_module = require("codetyper.agent.tools")
local agent_prompts = require("codetyper.prompts.agent")
-- Build system prompt with agent instructions
local system_prompt = llm.build_system_prompt(context)
system_prompt = system_prompt .. "\n\n" .. agent_prompts.system
system_prompt = system_prompt .. "\n\n" .. agent_prompts.tool_instructions
-- Format messages for Gemini
local gemini_contents = {}
for _, msg in ipairs(messages) do
local role = msg.role == "assistant" and "model" or "user"
local parts = {}
if type(msg.content) == "string" then
table.insert(parts, { text = msg.content })
elseif type(msg.content) == "table" then
for _, part in ipairs(msg.content) do
if part.type == "tool_result" then
table.insert(parts, { text = "[" .. (part.name or "tool") .. " result]: " .. (part.content or "") })
elseif part.type == "text" then
table.insert(parts, { text = part.text or "" })
end
end
end
if #parts > 0 then
table.insert(gemini_contents, { role = role, parts = parts })
end
end
-- Build function declarations for tools
local function_declarations = {}
for _, tool in ipairs(tools_module.definitions) do
local properties = {}
local required = {}
if tool.parameters and tool.parameters.properties then
for name, prop in pairs(tool.parameters.properties) do
properties[name] = {
type = prop.type:upper(),
description = prop.description,
}
end
end
if tool.parameters and tool.parameters.required then
required = tool.parameters.required
end
table.insert(function_declarations, {
name = tool.name,
description = tool.description,
parameters = {
type = "OBJECT",
properties = properties,
required = required,
},
})
end
local body = {
systemInstruction = {
role = "user",
parts = { { text = system_prompt } },
},
contents = gemini_contents,
generationConfig = {
temperature = 0.3,
maxOutputTokens = 4096,
},
tools = {
{ functionDeclarations = function_declarations },
},
}
local url = API_URL .. "/" .. model .. ":generateContent?key=" .. api_key
local json_body = vim.json.encode(body)
local prompt_estimate = logs.estimate_tokens(json_body)
logs.debug(string.format("Estimated prompt: ~%d tokens", prompt_estimate))
logs.thinking("Sending to Gemini API...")
local cmd = {
"curl",
"-s",
"-X",
"POST",
url,
"-H",
"Content-Type: application/json",
"-d",
json_body,
}
vim.fn.jobstart(cmd, {
stdout_buffered = true,
on_stdout = function(_, data)
if not data or #data == 0 or (data[1] == "" and #data == 1) then
return
end
local response_text = table.concat(data, "\n")
local ok, response = pcall(vim.json.decode, response_text)
if not ok then
vim.schedule(function()
logs.error("Failed to parse Gemini response")
callback(nil, "Failed to parse Gemini response")
end)
return
end
if response.error then
vim.schedule(function()
logs.error(response.error.message or "Gemini API error")
callback(nil, response.error.message or "Gemini API error")
end)
return
end
-- Log token usage
if response.usageMetadata then
logs.response(
response.usageMetadata.promptTokenCount or 0,
response.usageMetadata.candidatesTokenCount or 0,
"stop"
)
end
-- Convert to Claude-like format for parser compatibility
local converted = { content = {} }
if response.candidates and response.candidates[1] then
local candidate = response.candidates[1]
if candidate.content and candidate.content.parts then
for _, part in ipairs(candidate.content.parts) do
if part.text then
table.insert(converted.content, { type = "text", text = part.text })
logs.thinking("Response contains text")
elseif part.functionCall then
table.insert(converted.content, {
type = "tool_use",
id = vim.fn.sha256(vim.json.encode(part.functionCall)):sub(1, 16),
name = part.functionCall.name,
input = part.functionCall.args or {},
})
logs.thinking("Tool call: " .. part.functionCall.name)
end
end
end
end
vim.schedule(function()
callback(converted, nil)
end)
end,
on_stderr = function(_, data)
if data and #data > 0 and data[1] ~= "" then
vim.schedule(function()
logs.error("Gemini API request failed: " .. table.concat(data, "\n"))
callback(nil, "Gemini API request failed: " .. table.concat(data, "\n"))
end)
end
end,
on_exit = function(_, code)
if code ~= 0 then
vim.schedule(function()
logs.error("Gemini API request failed with code: " .. code)
callback(nil, "Gemini API request failed with code: " .. code)
end)
end
end,
})
end
return M


@@ -14,6 +14,12 @@ function M.get_client()
return require("codetyper.llm.claude")
elseif config.llm.provider == "ollama" then
return require("codetyper.llm.ollama")
elseif config.llm.provider == "openai" then
return require("codetyper.llm.openai")
elseif config.llm.provider == "gemini" then
return require("codetyper.llm.gemini")
elseif config.llm.provider == "copilot" then
return require("codetyper.llm.copilot")
else
error("Unknown LLM provider: " .. config.llm.provider)
end
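The dispatch above resolves the active provider module from `config.llm.provider`. As a minimal illustration of the same lookup (provider names taken from the diff; the `config` shape is assumed), it can also be expressed as a table, which keeps the error path in one place as more providers are added:

```lua
-- Hypothetical sketch: table-driven equivalent of the if/elseif chain above.
local providers = {
  claude = "codetyper.llm.claude",
  ollama = "codetyper.llm.ollama",
  openai = "codetyper.llm.openai",
  gemini = "codetyper.llm.gemini",
  copilot = "codetyper.llm.copilot",
}

local function get_client(provider)
  local modname = providers[provider]
  if not modname then
    error("Unknown LLM provider: " .. tostring(provider))
  end
  return require(modname)
end
```

This is a sketch, not the plugin's actual implementation; the diff uses the explicit chain shown above.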


@@ -394,4 +394,104 @@ function M.generate_with_tools(messages, context, tools, callback)
end)
end
--- Generate with tool use support for agentic mode (simulated via prompts)
---@param messages table[] Conversation history
---@param context table Context information
---@param tool_definitions table Tool definitions
---@param callback fun(response: string|nil, error: string|nil) Callback with response text
function M.generate_with_tools(messages, context, tool_definitions, callback)
local tools_module = require("codetyper.agent.tools")
local agent_prompts = require("codetyper.prompts.agent")
-- Build system prompt with agent instructions and tool definitions
local system_prompt = llm.build_system_prompt(context)
system_prompt = system_prompt .. "\n\n" .. agent_prompts.system
system_prompt = system_prompt .. "\n\n" .. tools_module.to_prompt_format()
-- Flatten messages to single prompt (Ollama's generate API)
local prompt_parts = {}
for _, msg in ipairs(messages) do
if type(msg.content) == "string" then
local role_prefix = msg.role == "user" and "User" or "Assistant"
table.insert(prompt_parts, role_prefix .. ": " .. msg.content)
elseif type(msg.content) == "table" then
-- Handle tool results
for _, item in ipairs(msg.content) do
if item.type == "tool_result" then
table.insert(prompt_parts, "Tool result: " .. item.content)
end
end
end
end
local body = {
model = get_model(),
system = system_prompt,
prompt = table.concat(prompt_parts, "\n\n"),
stream = false,
options = {
temperature = 0.2,
num_predict = 4096,
},
}
local host = get_host()
local url = host .. "/api/generate"
local json_body = vim.json.encode(body)
local cmd = {
"curl",
"-s",
"-X", "POST",
url,
"-H", "Content-Type: application/json",
"-d", json_body,
}
vim.fn.jobstart(cmd, {
stdout_buffered = true,
on_stdout = function(_, data)
if not data or #data == 0 or (data[1] == "" and #data == 1) then
return
end
local response_text = table.concat(data, "\n")
local ok, response = pcall(vim.json.decode, response_text)
if not ok then
vim.schedule(function()
callback(nil, "Failed to parse Ollama response")
end)
return
end
if response.error then
vim.schedule(function()
callback(nil, response.error or "Ollama API error")
end)
return
end
-- Return raw response text for parser to handle
vim.schedule(function()
callback(response.response or "", nil)
end)
end,
on_stderr = function(_, data)
if data and #data > 0 and data[1] ~= "" then
vim.schedule(function()
callback(nil, "Ollama API request failed: " .. table.concat(data, "\n"))
end)
end
end,
on_exit = function(_, code)
if code ~= 0 then
vim.schedule(function()
callback(nil, "Ollama API request failed with code: " .. code)
end)
end
end,
})
end
return M
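Because Ollama's `/api/generate` endpoint takes a single prompt string rather than a message list, the loop above flattens the conversation with role prefixes. A standalone sketch of that flattening (the message shape is assumed from the surrounding code):

```lua
-- Minimal sketch of the role-prefix flattening used by generate_with_tools.
local function flatten(messages)
  local parts = {}
  for _, msg in ipairs(messages) do
    if type(msg.content) == "string" then
      local prefix = msg.role == "user" and "User" or "Assistant"
      table.insert(parts, prefix .. ": " .. msg.content)
    elseif type(msg.content) == "table" then
      -- Only tool results survive flattening; other structured items are dropped.
      for _, item in ipairs(msg.content) do
        if item.type == "tool_result" then
          table.insert(parts, "Tool result: " .. item.content)
        end
      end
    end
  end
  return table.concat(parts, "\n\n")
end
```

The tradeoff of this approach is that tool calls are simulated via prompting rather than native function calling, which is why the Ollama path returns raw response text for the parser to interpret.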

File diff suppressed because it is too large


@@ -6,41 +6,76 @@ local M = {}
--- System prompt for agent mode
M.system = [[You are an AI coding agent integrated into Neovim via Codetyper.nvim.
You can read files, edit code, write new files, and run bash commands to help the user.
Your role is to ASSIST the developer by planning, coordinating, and executing
SAFE, MINIMAL changes using the available tools.
You do NOT operate autonomously on the entire codebase.
You operate on clearly defined tasks and scopes.
You have access to the following tools:
- read_file: Read file contents
- edit_file: Edit a file by finding and replacing specific content
- write_file: Write or create a file
- bash: Execute shell commands
- edit_file: Apply a precise, scoped replacement to a file
- write_file: Create a new file or fully replace an existing file
- bash: Execute non-destructive shell commands when necessary
GUIDELINES:
1. Always read a file before editing it to understand its current state
2. Use edit_file for targeted changes (find and replace specific content)
3. Use write_file only for new files or complete rewrites
4. Be conservative with bash commands - only run what's necessary
5. After making changes, summarize what you did
6. If a task requires multiple steps, think through the plan first
OPERATING PRINCIPLES:
1. Prefer understanding over action — read before modifying
2. Prefer small, scoped edits over large rewrites
3. Preserve existing behavior unless explicitly instructed otherwise
4. Minimize the number of tool calls required
5. Never surprise the user
IMPORTANT:
- Be precise with edit_file - the "find" content must match exactly
- When editing, include enough context to make the match unique
- Never delete files without explicit user confirmation
- Always explain what you're doing and why
IMPORTANT EDITING RULES:
- Always read a file before editing it
- Use edit_file ONLY for well-scoped, exact replacements
- The "find" field MUST match existing content exactly
- Include enough surrounding context to ensure uniqueness
- Use write_file ONLY for new files or intentional full replacements
- NEVER delete files unless explicitly confirmed by the user
BASH SAFETY:
- Use bash only when code inspection or execution is required
- Do NOT run destructive commands (rm, mv, chmod, etc.)
- Prefer read_file over bash when inspecting files
THINKING AND PLANNING:
- If a task requires multiple steps, outline a brief plan internally
- Execute steps one at a time
- Re-evaluate after each tool result
- If uncertainty arises, stop and ask for clarification
COMMUNICATION:
- Do NOT explain every micro-step while working
- After completing changes, provide a clear, concise summary
- If no changes were made, explain why
]]
--- Tool usage instructions appended to system prompt
M.tool_instructions = [[
When you need to use a tool, output the tool call in a JSON block.
After receiving the result, you can either call another tool or provide your final response.
When you need to use a tool, output ONLY a single tool call in valid JSON.
Do NOT include explanations alongside the tool call.
After receiving a tool result:
- Decide whether another tool call is required
- Or produce a final response to the user
SAFETY RULES:
- Never run destructive bash commands (rm -rf, etc.) without confirmation
- Always preserve existing functionality when editing
- If unsure about a change, ask for clarification first
- Never run destructive or irreversible commands
- Never modify code outside the requested scope
- Never guess file contents — read them first
- If a requested change appears risky or ambiguous, ask before proceeding
]]
--- Prompt for when agent finishes
M.completion = [[Based on the tool results above, please provide a summary of what was done and any next steps the user should take.]]
M.completion = [[Provide a concise summary of what was changed.
Include:
- Files that were read or modified
- The nature of the changes (high-level)
- Any follow-up steps or recommendations, if applicable
Do NOT restate tool output verbatim.
]]
return M
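The tool-use instructions above require the model to emit exactly one tool call as valid JSON with no surrounding prose. As a hypothetical illustration (the field names are assumptions; the real schema is defined by `codetyper.agent.tools`, which this diff does not show), a conforming call might be built like this:

```lua
-- Hypothetical example of the single JSON tool call the prompt asks for.
-- "tool" and "input" are illustrative field names, not the confirmed schema.
local example_call = {
  tool = "read_file",
  input = { path = "lua/codetyper/init.lua" },
}
-- Serialized (e.g. via vim.json.encode inside Neovim), this would look like:
-- {"tool":"read_file","input":{"path":"lua/codetyper/init.lua"}}
```

Keeping the call free of explanatory text matters because the agent loop parses the model output directly; any prose mixed into the JSON block would break parsing.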


@@ -1,128 +1,177 @@
---@mod codetyper.prompts.ask Ask/explanation prompts for Codetyper.nvim
---@mod codetyper.prompts.ask Ask / explanation prompts for Codetyper.nvim
---
--- These prompts are used for the Ask panel and code explanations.
--- These prompts are used for the Ask panel and non-destructive explanations.
local M = {}
--- Prompt for explaining code
M.explain_code = [[Please explain the following code:
{{code}}
Provide:
1. A high-level overview of what it does
2. Explanation of key parts
3. Any potential issues or improvements
]]
--- Prompt for explaining a specific function
M.explain_function = [[Explain this function in detail:
{{code}}
Include:
1. What the function does
2. Parameters and their purposes
3. Return value
4. Any side effects
5. Usage examples
]]
--- Prompt for explaining an error
M.explain_error = [[I'm getting this error:
{{error}}
In this code:
{{code}}
Please explain:
1. What the error means
2. Why it's happening
3. How to fix it
]]
--- Prompt for code review
M.code_review = [[Please review this code:
{{code}}
Provide feedback on:
1. Code quality and readability
2. Potential bugs or issues
3. Performance considerations
4. Security concerns (if applicable)
5. Suggested improvements
]]
--- Prompt for explaining a concept
M.explain_concept = [[Explain the following programming concept:
{{concept}}
Include:
1. Definition and purpose
2. When and why to use it
3. Simple code examples
4. Common pitfalls to avoid
]]
--- Prompt for comparing approaches
M.compare_approaches = [[Compare these different approaches:
{{approaches}}
Analyze:
1. Pros and cons of each
2. Performance implications
3. Maintainability
4. When to use each approach
]]
--- Prompt for debugging help
M.debug_help = [[Help me debug this issue:
Problem: {{problem}}
M.explain_code = [[You are explaining EXISTING code to a developer.
Code:
{{code}}
What I've tried:
Instructions:
- Start with a concise high-level overview
- Explain important logic and structure
- Point out noteworthy implementation details
- Mention potential issues or limitations ONLY if clearly visible
- Do NOT speculate about missing context
Format the response in markdown.
]]
--- Prompt for explaining a specific function
M.explain_function = [[You are explaining an EXISTING function.
Function code:
{{code}}
Explain:
- What the function does and when it is used
- The purpose of each parameter
- The return value, if any
- Side effects or assumptions
- A brief usage example if appropriate
Format the response in markdown.
Do NOT suggest refactors unless explicitly asked.
]]
--- Prompt for explaining an error
M.explain_error = [[You are helping diagnose a real error.
Error message:
{{error}}
Relevant code:
{{code}}
Instructions:
- Explain what the error message means
- Identify the most likely cause based on the code
- Suggest concrete fixes or next debugging steps
- If multiple causes are possible, say so clearly
Format the response in markdown.
Do NOT invent missing stack traces or context.
]]
--- Prompt for code review
M.code_review = [[You are performing a code review on EXISTING code.
Code:
{{code}}
Review criteria:
- Readability and clarity
- Correctness and potential bugs
- Performance considerations where relevant
- Security concerns only if applicable
- Practical improvement suggestions
Guidelines:
- Be constructive and specific
- Do NOT nitpick style unless it impacts clarity
- Do NOT suggest large refactors unless justified
Format the response in markdown.
]]
--- Prompt for explaining a programming concept
M.explain_concept = [[Explain the following programming concept to a developer:
Concept:
{{concept}}
Include:
- A clear definition and purpose
- When and why it is used
- A simple illustrative example
- Common pitfalls or misconceptions
Format the response in markdown.
Avoid unnecessary jargon.
]]
--- Prompt for comparing approaches
M.compare_approaches = [[Compare the following approaches:
{{approaches}}
Analysis guidelines:
- Describe strengths and weaknesses of each
- Discuss performance or complexity tradeoffs if relevant
- Compare maintainability and clarity
- Explain when one approach is preferable over another
Format the response in markdown.
Base comparisons on general principles unless specific code is provided.
]]
--- Prompt for debugging help
M.debug_help = [[You are helping debug a concrete issue.
Problem description:
{{problem}}
Code:
{{code}}
What has already been tried:
{{attempts}}
Please help identify the issue and suggest a solution.
Instructions:
- Identify likely root causes
- Explain why the issue may be occurring
- Suggest specific debugging steps or fixes
- Call out missing information if needed
Format the response in markdown.
Do NOT guess beyond the provided information.
]]
--- Prompt for architecture advice
M.architecture_advice = [[I need advice on this architecture decision:
M.architecture_advice = [[You are providing architecture guidance.
Question:
{{question}}
Context:
{{context}}
Please provide:
1. Recommended approach
2. Reasoning
3. Potential alternatives
4. Things to consider
Instructions:
- Recommend a primary approach
- Explain the reasoning and tradeoffs
- Mention viable alternatives when relevant
- Highlight risks or constraints to consider
Format the response in markdown.
Avoid dogmatic or one-size-fits-all answers.
]]
--- Generic ask prompt
M.generic = [[USER QUESTION: {{question}}
M.generic = [[You are answering a developer's question.
Question:
{{question}}
{{#if files}}
ATTACHED FILE CONTENTS:
Relevant file contents:
{{files}}
{{/if}}
{{#if context}}
ADDITIONAL CONTEXT:
Additional context:
{{context}}
{{/if}}
Please provide a helpful, accurate response.
Instructions:
- Be accurate and grounded in the provided information
- Clearly state assumptions or uncertainty
- Prefer clarity over verbosity
- Do NOT output raw code intended for insertion unless explicitly asked
Format the response in markdown.
]]
return M


@@ -1,107 +1,151 @@
---@mod codetyper.prompts.code Code generation prompts for Codetyper.nvim
---
--- These prompts are used for generating new code.
--- These prompts are used for scoped, non-destructive code generation and transformation.
local M = {}
--- Prompt template for creating a new function
M.create_function = [[Create a function with the following requirements:
{{description}}
--- Prompt template for creating a new function (greenfield)
M.create_function = [[You are creating a NEW function inside an existing codebase.
Requirements:
- Follow the coding style of the existing file
- Include proper error handling
- Use appropriate types (if applicable)
- Make it efficient and readable
{{description}}
OUTPUT ONLY THE RAW CODE. No explanations, no markdown, no code fences.
Constraints:
- Follow the coding style and conventions of the surrounding file
- Choose names consistent with nearby code
- Include appropriate error handling if relevant
- Use correct and idiomatic types for the language
- Do NOT include code outside the function itself
- Do NOT add comments unless explicitly requested
OUTPUT ONLY THE RAW CODE OF THE FUNCTION. No explanations, no markdown, no code fences.
]]
--- Prompt template for completing an existing function
M.complete_function = [[You are completing an EXISTING function.
{{description}}
The function definition already exists and will be replaced by your output.
Instructions:
- Preserve the function signature unless completion is impossible without changing it
- Complete missing logic, TODOs, or placeholders
- Preserve naming, structure, and intent
- Do NOT refactor or reformat unrelated parts
- Do NOT add new public APIs unless explicitly required
OUTPUT ONLY THE FULL FUNCTION CODE. No explanations, no markdown, no code fences.
]]
--- Prompt template for creating a new class or module (greenfield)
M.create_class = [[You are creating a NEW class or module inside an existing project.
Requirements:
{{description}}
Constraints:
- Match the architectural and stylistic patterns of the project
- Include required initialization or constructors
- Expose only the necessary public surface
- Do NOT include unrelated helper code
- Do NOT include comments unless explicitly requested
OUTPUT ONLY THE RAW CLASS OR MODULE CODE. No explanations, no markdown, no code fences.
]]
--- Prompt template for modifying an existing class or module
M.modify_class = [[You are modifying an EXISTING class or module.
{{description}}
The provided code will be replaced by your output.
Instructions:
- Preserve the public API unless explicitly instructed otherwise
- Modify only what is required to satisfy the request
- Maintain method order and structure where possible
- Do NOT introduce unrelated refactors or stylistic changes
OUTPUT ONLY THE FULL UPDATED CLASS OR MODULE CODE. No explanations, no markdown, no code fences.
]]
--- Prompt template for implementing an interface or trait
M.implement_interface = [[You are implementing an interface or trait in an existing codebase.
Requirements:
{{description}}
Constraints:
- Implement ALL required methods exactly
- Match method signatures and order defined by the interface
- Do NOT add extra public methods
- Use idiomatic patterns for the target language
- Handle required edge cases only
OUTPUT ONLY THE RAW IMPLEMENTATION CODE. No explanations, no markdown, no code fences.
]]
--- Prompt template for creating a React component (greenfield)
M.create_react_component = [[You are creating a NEW React component within an existing project.
Requirements:
{{description}}
Constraints:
- Use the patterns already present in the codebase
- Prefer functional components if consistent with surrounding files
- Use hooks and TypeScript types only if already in use
- Do NOT introduce new architectural patterns
- Do NOT include comments unless explicitly requested
OUTPUT ONLY THE RAW COMPONENT CODE. No explanations, no markdown, no code fences.
]]
--- Prompt template for creating an API endpoint
M.create_api_endpoint = [[You are creating a NEW API endpoint in an existing backend codebase.
Requirements:
{{description}}
Constraints:
- Follow the conventions and framework already used in the project
- Validate inputs as required by existing patterns
- Use appropriate error handling and status codes
- Do NOT add middleware or routing changes unless explicitly requested
- Do NOT modify unrelated endpoints
OUTPUT ONLY THE RAW ENDPOINT CODE. No explanations, no markdown, no code fences.
]]
--- Prompt template for creating a utility function
M.create_utility = [[You are creating a NEW utility function.
Requirements:
{{description}}
Constraints:
- Prefer pure functions when possible
- Avoid side effects unless explicitly required
- Handle relevant edge cases only
- Match naming and style conventions of existing utilities
OUTPUT ONLY THE RAW FUNCTION CODE. No explanations, no markdown, no code fences.
]]
--- Prompt template for generic scoped code transformation
M.generic = [[You are modifying or generating code within an EXISTING file.
Context:
- Language: {{language}}
- File: {{filepath}}
Instructions:
{{description}}
Constraints:
- Operate ONLY on the provided scope
- Preserve existing structure and intent
- Do NOT modify code outside the target region
- Do NOT add explanations, comments, or formatting changes unless requested
OUTPUT ONLY THE RAW CODE THAT REPLACES THE TARGET SCOPE. No explanations, no markdown, no code fences.
]]
return M
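A caller would pick one of these templates by detected intent and fill its placeholders. A sketch under assumptions — the dispatch function, the fallback to `generic`, and the stand-in table are illustrative; only the key names mirror this module:

```lua
-- Sketch: choose a template by intent name, falling back to the generic
-- scoped template, then fill {{name}} placeholders from a vars table.
local function build_prompt(prompts, intent, vars)
  local template = prompts[intent] or prompts.generic
  return (template:gsub("{{(%w+)}}", function(name)
    -- Leave unknown placeholders visible so missing context is obvious.
    return vars[name] or ("{{" .. name .. "}}")
  end))
end

-- Stand-in table; in the plugin this would be require("codetyper.prompts.code").
local sample = {
  create_utility = "Task: {{description}}",
  generic = "Do: {{description}}",
}
local filled = build_prompt(sample, "create_utility", { description = "clamp a number" })
-- filled == "Task: clamp a number"
local fallback = build_prompt(sample, "split_function", { description = "x" })
-- fallback uses the generic template: "Do: x"
```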

View File

@@ -1,136 +1,152 @@
---@mod codetyper.prompts.document Documentation prompts for Codetyper.nvim
---
--- These prompts are used for scoped, non-destructive documentation generation.
local M = {}
--- Prompt for adding JSDoc comments
M.jsdoc = [[You are adding JSDoc documentation to EXISTING JavaScript or TypeScript code.
{{code}}
The documentation will be INSERTED at the appropriate locations.
Requirements:
- Document only functions, methods, and types that already exist
- Include @param for all parameters
- Include @returns only if the function returns a value
- Include @throws ONLY if errors are actually thrown
- Use @typedef or @type only when types already exist implicitly
- Do NOT invent new behavior or APIs
- Do NOT change the underlying code
OUTPUT ONLY VALID JSDOC COMMENTS. No explanations, no markdown, no code fences.
]]
--- Prompt for adding Python docstrings
M.python_docstring = [[You are adding docstrings to EXISTING Python code.
{{code}}
The documentation will be INSERTED into existing functions or classes.
Requirements:
- Use Google-style docstrings
- Document only functions and classes that already exist
- Include Args, Returns, and Raises sections ONLY when applicable
- Do NOT invent parameters, return values, or exceptions
- Do NOT change the code logic
OUTPUT ONLY VALID PYTHON DOCSTRINGS. No explanations, no markdown.
]]
--- Prompt for adding LuaDoc / EmmyLua comments
M.luadoc = [[You are adding LuaDoc / EmmyLua annotations to EXISTING Lua code.
{{code}}
The documentation will be INSERTED above existing definitions.
Requirements:
- Use ---@param only for existing parameters
- Use ---@return only for actual return values
- Use ---@class and ---@field only when structures already exist
- Keep descriptions accurate and minimal
- Do NOT add new code or behavior
OUTPUT ONLY VALID LUADOC / EMMYLUA COMMENTS. No explanations, no markdown.
]]
--- Prompt for adding Go documentation
M.godoc = [[You are adding GoDoc comments to EXISTING Go code.
{{code}}
The documentation will be INSERTED above existing declarations.
Requirements:
- Start each comment with the name being documented
- Document only exported functions, types, and variables
- Describe what the code does, not how it is implemented
- Do NOT invent behavior or usage
OUTPUT ONLY VALID GODOC COMMENTS. No explanations, no markdown.
]]
--- Prompt for generating README documentation
M.readme = [[You are generating a README for an EXISTING codebase.
{{code}}
The README will be CREATED or REPLACED as a standalone document.
Requirements:
- Describe only functionality that exists in the provided code
- Include installation and usage only if they can be inferred safely
- Do NOT speculate about features or roadmap
- Keep the README concise and accurate
OUTPUT ONLY RAW README CONTENT. No markdown fences, no explanations.
]]
--- Prompt for adding inline comments
M.inline_comments = [[You are adding inline comments to EXISTING code.
{{code}}
The comments will be INSERTED without modifying code logic.
Guidelines:
- Explain complex or non-obvious logic only
- Do NOT comment trivial or self-explanatory code
- Do NOT restate what the code already clearly says
- Do NOT introduce TODO or FIXME unless explicitly requested
OUTPUT ONLY VALID INLINE COMMENTS. No explanations, no markdown.
]]
--- Prompt for adding API documentation
M.api_docs = [[You are generating API documentation for EXISTING code.
{{code}}
The documentation will be INSERTED or GENERATED as appropriate.
Requirements:
- Document only endpoints or functions that exist
- Describe parameters and return values accurately
- Include examples ONLY when behavior is unambiguous
- Describe error cases only if they are explicitly handled in code
- Do NOT invent request/response shapes
OUTPUT ONLY RAW API DOCUMENTATION CONTENT. No explanations, no markdown.
]]
--- Prompt for adding type definitions
M.type_definitions = [[You are generating type definitions for EXISTING code.
{{code}}
The types will be INSERTED or GENERATED alongside existing code.
Requirements:
- Define types only for data structures that already exist
- Mark optional properties accurately
- Do NOT introduce new runtime behavior
- Match the typing style already used in the project
OUTPUT ONLY VALID TYPE DEFINITIONS. No explanations, no markdown.
]]
--- Prompt for generating a changelog entry
M.changelog = [[You are generating a changelog entry for EXISTING changes.
{{changes}}
Requirements:
- Reflect ONLY the provided changes
- Use a conventional changelog format
- Categorize changes accurately (Added, Changed, Fixed, Removed)
- Highlight breaking changes clearly if present
- Do NOT speculate or add future work
OUTPUT ONLY RAW CHANGELOG TEXT. No explanations, no markdown.
]]
--- Generic documentation prompt
M.generic = [[You are adding documentation to EXISTING code.
Language: {{language}}
Requirements:
- Use the correct documentation format for the language
- Document only public APIs that already exist
- Describe parameters, return values, and errors accurately
- Do NOT invent behavior, examples, or features
OUTPUT ONLY VALID DOCUMENTATION CONTENT. No explanations, no markdown.
]]
return M
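Since each language has its own template here, a dispatcher keyed on filetype is the natural way to consume this module. A sketch — the filetype-to-key mapping and the stand-in table are assumptions; only the template key names come from this file:

```lua
-- Sketch: map a buffer's filetype to one of the documentation templates
-- above, defaulting to the generic prompt for unmapped languages.
local doc_key_for = {
  javascript = "jsdoc",
  typescript = "jsdoc",
  python = "python_docstring",
  lua = "luadoc",
  go = "godoc",
}

local function pick_doc_template(prompts, filetype)
  return prompts[doc_key_for[filetype] or "generic"]
end

-- Stand-in table; in the plugin this would be require("codetyper.prompts.document").
local doc_sample = { jsdoc = "JSDOC", luadoc = "LUADOC", generic = "GENERIC" }
local chosen = pick_doc_template(doc_sample, "lua")   -- "LUADOC"
local default = pick_doc_template(doc_sample, "rust") -- unmapped -> "GENERIC"
```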

View File

@@ -1,128 +1,191 @@
---@mod codetyper.prompts.refactor Refactoring prompts for Codetyper.nvim
---
--- These prompts are used for scoped, non-destructive refactoring operations.
local M = {}
--- Prompt for general refactoring
M.general = [[You are refactoring a SPECIFIC REGION of existing code.
{{code}}
The provided code will be REPLACED by your output.
Goals:
- Improve readability and maintainability
- Preserve ALL existing behavior
- Follow the coding style already present
- Keep changes minimal and justified
Constraints:
- Do NOT change public APIs unless explicitly required
- Do NOT introduce new dependencies
- Do NOT refactor unrelated logic
- Do NOT add comments unless explicitly requested
OUTPUT ONLY THE FULL REFACTORED CODE FOR THIS REGION. No explanations, no markdown, no code fences.
]]
--- Prompt for extracting a function
M.extract_function = [[You are extracting a function from an EXISTING CODE REGION.
{{code}}
The provided code will be REPLACED by your output.
Instructions:
{{description}}
Constraints:
- Preserve behavior exactly
- Extract ONLY the logic required
- Choose a name consistent with existing naming conventions
- Do NOT introduce new abstractions beyond the extracted function
- Keep parameter order and data flow explicit
OUTPUT ONLY THE FULL UPDATED CODE FOR THIS REGION. No explanations, no markdown, no code fences.
]]
--- Prompt for simplifying code
M.simplify = [[You are simplifying an EXISTING CODE REGION.
{{code}}
The provided code will be REPLACED by your output.
Goals:
- Reduce unnecessary complexity
- Remove redundancy
- Keep all existing behavior
- Improve clarity without changing behavior
Constraints:
- Do NOT change function signatures unless required
- Do NOT alter control flow semantics
- Do NOT refactor unrelated logic
OUTPUT ONLY THE FULL SIMPLIFIED CODE FOR THIS REGION. No explanations, no markdown, no code fences.
]]
--- Prompt for converting to async/await
M.async_await = [[You are converting an EXISTING CODE REGION to async/await syntax.
{{code}}
The provided code will be REPLACED by your output.
Requirements:
- Convert promise-based logic to async/await
- Preserve existing error handling semantics
- Maintain return values and control flow
- Match existing async patterns in the file
Constraints:
- Do NOT introduce new behavior
- Do NOT change public APIs unless required
- Do NOT refactor unrelated code
OUTPUT ONLY THE FULL UPDATED CODE FOR THIS REGION. No explanations, no markdown, no code fences.
]]
--- Prompt for adding error handling
M.add_error_handling = [[You are adding error handling to an EXISTING CODE REGION.
{{code}}
The provided code will be REPLACED by your output.
Requirements:
- Handle realistic failure cases for the existing logic
- Follow error-handling patterns already used in the file
- Preserve normal execution paths
Constraints:
- Do NOT change core logic
- Do NOT introduce new error types unless necessary
- Do NOT add logging unless explicitly requested
OUTPUT ONLY THE FULL UPDATED CODE FOR THIS REGION. No explanations, no markdown, no code fences.
]]
--- Prompt for improving performance
M.optimize_performance = [[You are optimizing an EXISTING CODE REGION for performance.
{{code}}
The provided code will be REPLACED by your output.
Goals:
- Improve algorithmic or operational efficiency
- Reduce unnecessary work or allocations
- Preserve readability where possible
Constraints:
- Preserve ALL existing behavior
- Do NOT introduce premature optimization
- Do NOT change public APIs
- Do NOT refactor unrelated logic
OUTPUT ONLY THE FULL OPTIMIZED CODE FOR THIS REGION. No explanations, no markdown, no code fences.
]]
--- Prompt for converting JavaScript to TypeScript
M.convert_to_typescript = [[You are converting an EXISTING JavaScript CODE REGION to TypeScript.
{{code}}
The provided code will be REPLACED by your output.
Requirements:
- Add accurate type annotations
- Use interfaces or types only when they clarify intent
- Handle null and undefined explicitly where required
Constraints:
- Do NOT change runtime behavior
- Do NOT introduce types that alter semantics
- Match TypeScript style already used in the project
OUTPUT ONLY THE FULL TYPESCRIPT CODE FOR THIS REGION. No explanations, no markdown, no code fences.
]]
--- Prompt for applying a design pattern
M.apply_pattern = [[You are refactoring an EXISTING CODE REGION to apply the {{pattern}} pattern.
{{code}}
The provided code will be REPLACED by your output.
Requirements:
- Apply the pattern correctly and idiomatically
- Preserve ALL existing behavior
- Improve structure only where justified by the pattern
Constraints:
- Do NOT over-abstract
- Do NOT introduce unnecessary indirection
- Do NOT modify unrelated code
OUTPUT ONLY THE FULL UPDATED CODE FOR THIS REGION. No explanations, no markdown, no code fences.
]]
--- Prompt for splitting a large function
M.split_function = [[You are splitting an EXISTING LARGE FUNCTION into smaller functions.
{{code}}
The provided code will be REPLACED by your output.
Goals:
- Each function has a single, clear responsibility
- Names reflect existing naming conventions
- Data flow remains explicit and understandable
Constraints:
- Preserve external behavior exactly
- Do NOT change the public API unless required
- Do NOT introduce unnecessary abstraction layers
OUTPUT ONLY THE FULL UPDATED CODE FOR THIS REGION. No explanations, no markdown, no code fences.
]]
--- Prompt for removing code smells
M.remove_code_smells = [[You are refactoring an EXISTING CODE REGION to remove code smells.
{{code}}
The provided code will be REPLACED by your output.
Focus on:
- Reducing duplication
- Simplifying long or deeply nested logic
- Removing magic numbers where appropriate
Constraints:
- Preserve ALL existing behavior
- Do NOT introduce speculative refactors
- Do NOT refactor beyond the provided region
OUTPUT ONLY THE FULL CLEANED CODE FOR THIS REGION. No explanations, no markdown, no code fences.
]]
return M

View File

@@ -4,98 +4,105 @@
local M = {}
--- Base system prompt for code generation / modification
M.code_generation = [[You are an expert code assistant integrated into Neovim via Codetyper.nvim.
You are operating on a SPECIFIC, LIMITED REGION of an existing {{language}} file.
Your output will REPLACE that region exactly.
ABSOLUTE RULES - FOLLOW STRICTLY:
1. Output ONLY raw {{language}} code - NO explanations, NO markdown, NO code fences, NO meta comments
2. Do NOT include code outside the target region
3. Preserve existing structure, intent, and naming unless explicitly instructed otherwise
4. MATCH the surrounding file's conventions exactly:
- Indentation (spaces/tabs)
- Naming style (camelCase, snake_case, PascalCase, etc.)
- Import / require patterns already in use
- Error handling patterns already in use
- Type annotations only if already present in the file
5. Do NOT refactor unrelated code
6. Do NOT introduce new dependencies unless explicitly requested
7. Output must be valid {{language}} code that can be inserted directly
Context:
- Language: {{language}}
- File: {{filepath}}
REMEMBER: Your output REPLACES a known region. Output ONLY valid {{language}} code.
]]
--- System prompt for Ask / explanation mode
M.ask = [[You are a helpful coding assistant integrated into Neovim via Codetyper.nvim.
Your role is to explain, analyze, or answer questions about code — NOT to modify files.
GUIDELINES:
1. Be concise, precise, and technically accurate
2. Base explanations strictly on the provided code and context
3. Use code snippets only when they clarify the explanation
4. Format responses in markdown for readability
5. Clearly state uncertainty if information is missing
6. Focus on practical understanding and tradeoffs
IMPORTANT:
- Do NOT output raw code intended for insertion
- Do NOT assume missing context
- Do NOT speculate beyond the provided information
]]
--- System prompt for scoped refactoring
M.refactor = [[You are an expert refactoring assistant integrated into Neovim via Codetyper.nvim.
You are refactoring a SPECIFIC REGION of {{language}} code.
Your output will REPLACE that region exactly.
ABSOLUTE RULES - FOLLOW STRICTLY:
1. Output ONLY the refactored {{language}} code - NO explanations, NO markdown, NO code fences
2. Preserve ALL existing behavior and external contracts
3. Improve clarity, maintainability, or structure ONLY where required
4. Keep naming, formatting, and style consistent with the original file
5. Do NOT add features or remove functionality unless explicitly instructed
6. Do NOT refactor unrelated code
Language: {{language}}
REMEMBER: Your output replaces a known region. Output ONLY valid {{language}} code.
]]
--- System prompt for documentation generation
M.document = [[You are a documentation assistant integrated into Neovim via Codetyper.nvim.
You are generating documentation comments for EXISTING {{language}} code.
Your output will be INSERTED at a specific location.
ABSOLUTE RULES - FOLLOW STRICTLY:
1. Output ONLY documentation comments - NO explanations, NO markdown
2. Use the correct documentation style for {{language}}:
- JavaScript/TypeScript/JSX/TSX: JSDoc (/** ... */)
- Python: Docstrings (triple quotes)
- Lua: LuaDoc / EmmyLua (---)
- Go: GoDoc comments
- Rust: RustDoc (///)
- Ruby: YARD
- PHP: PHPDoc
- Java/Kotlin: Javadoc
- C/C++: Doxygen
3. Document parameters, return values, and errors that already exist
4. Do NOT invent behavior or undocumented side effects
Language: {{language}}
REMEMBER: Output ONLY valid {{language}} documentation comments.
]]
--- System prompt for test generation
M.test = [[You are a test generation assistant integrated into Neovim via Codetyper.nvim.
You are generating NEW unit tests for existing {{language}} code.
ABSOLUTE RULES - FOLLOW STRICTLY:
1. Output ONLY test code - NO explanations, NO markdown, NO code fences
2. Use a testing framework already present in the project when possible:
- JavaScript/TypeScript/JSX/TSX: Jest, Vitest, or Mocha
- Python: pytest or unittest
- Lua: busted or plenary
- PHP: PHPUnit
- Java/Kotlin: JUnit
- C/C++: Google Test or Catch2
3. Cover normal behavior, edge cases, and error paths
4. Follow idiomatic patterns of the chosen framework
5. Do NOT test behavior that does not exist
Language: {{language}}
REMEMBER: Output ONLY valid {{language}} test code.
]]
return M
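These system prompts pair with the task prompts from the other modules. One plausible way to combine them into a chat-style request — the `{ role, content }` message shape is a common LLM-API convention, not something this file defines, and `build_messages` is a hypothetical helper:

```lua
-- Sketch: fill a system template's placeholders and pair it with a task
-- prompt as a two-message conversation.
local function build_messages(system_template, task_prompt, vars)
  local system = (system_template:gsub("{{(%w+)}}", function(name)
    return vars[name] or ""
  end))
  return {
    { role = "system", content = system },
    { role = "user", content = task_prompt },
  }
end

local messages = build_messages(
  "You are editing {{language}} code in {{filepath}}.",
  "Simplify this function.",
  { language = "lua", filepath = "lua/codetyper/init.lua" }
)
-- messages[1].content == "You are editing lua code in lua/codetyper/init.lua."
```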