feat: add pink-purple theme, fix image paste race condition, allow @/commands anywhere in input

- Add Pink Purple theme (hot pink/purple/magenta on dark plum background)
- Fix race condition where clearPastedImages() in input-area ran before
  the async message handler could read the images, silently dropping them
- Allow @ file picker and / command menu to trigger at any cursor position,
  not just when the input is empty
- Update CHANGELOG and README with new changes
2026-02-14 06:39:08 -05:00
parent ddbdb5eb3e
commit 6111530c08
84 changed files with 5643 additions and 1574 deletions

README.md

@@ -90,9 +90,11 @@ Full-screen terminal interface with real-time streaming responses.
![CodeTyper Status View](assets/CodetyperView.png)
**Key bindings:**
- `Enter` - Send message
- `Shift+Enter` - New line
- `/` - Open command menu
- `@` - Open file picker (works anywhere in input)
- `/` - Open command menu (works anywhere in input)
- `Ctrl+M` - Toggle interaction mode
- `Ctrl+T` - Toggle todo panel
- `Shift+Up/Down` - Scroll log panel
@@ -103,6 +105,7 @@ Full-screen terminal interface with real-time streaming responses.
Optional vim-style keyboard navigation for power users. Enable in settings.
**Normal Mode:**
- `j/k` - Scroll down/up
- `gg/G` - Jump to top/bottom
- `Ctrl+d/u` - Half page scroll
@@ -111,6 +114,7 @@ Optional vim-style keyboard navigation for power users. Enable in settings.
- `:` - Command mode (`:q` quit, `:w` save)
**Configuration:**
```json
{
"vim": {
@@ -128,26 +132,26 @@ Press `/` to access all commands organized by category.
**Available Commands:**
| Category | Command | Description |
|----------|---------|-------------|
| General | `/help` | Show available commands |
| General | `/clear` | Clear conversation history |
| General | `/exit` | Exit the chat |
| Session | `/save` | Save current session |
| Session | `/context` | Show context information |
| Session | `/usage` | Show token usage statistics |
| Session | `/remember` | Save a learning about the project |
| Session | `/learnings` | Show saved learnings |
| Settings | `/model` | Select AI model |
| Settings | `/agent` | Select agent |
| Settings | `/mode` | Switch interaction mode |
| Settings | `/provider` | Switch LLM provider |
| Settings | `/status` | Show provider status |
| Settings | `/theme` | Change color theme |
| Settings | `/mcp` | Manage MCP servers |
| Account | `/whoami` | Show logged in account |
| Account | `/login` | Authenticate with provider |
| Account | `/logout` | Sign out from provider |
| Category | Command | Description |
| -------- | ------------ | --------------------------------- |
| General | `/help` | Show available commands |
| General | `/clear` | Clear conversation history |
| General | `/exit` | Exit the chat |
| Session | `/save` | Save current session |
| Session | `/context` | Show context information |
| Session | `/usage` | Show token usage statistics |
| Session | `/remember` | Save a learning about the project |
| Session | `/learnings` | Show saved learnings |
| Settings | `/model` | Select AI model |
| Settings | `/agent` | Select agent |
| Settings | `/mode` | Switch interaction mode |
| Settings | `/provider` | Switch LLM provider |
| Settings | `/status` | Show provider status |
| Settings | `/theme` | Change color theme |
| Settings | `/mcp` | Manage MCP servers |
| Account | `/whoami` | Show logged in account |
| Account | `/login` | Authenticate with provider |
| Account | `/logout` | Sign out from provider |
### Agent Mode with Diff View
@@ -156,6 +160,7 @@ When CodeTyper modifies files, you see a clear diff view of changes.
![Agent Mode with Diffs](assets/CodetyperAgentMode.png)
**Interaction Modes:**
- **Agent** - Full access, can modify files
- **Ask** - Read-only, answers questions
- **Code Review** - Review PRs and diffs
@@ -167,6 +172,7 @@ Granular control over what CodeTyper can do. Every file operation requires appro
![Permission Modal](assets/CodetyperPermissionView.png)
**Permission Scopes:**
- `[y]` Yes, this once
- `[s]` Yes, for this session
- `[a]` Always allow for this project
@@ -180,6 +186,7 @@ Access to multiple AI models through GitHub Copilot.
![Model Selection](assets/CodetyperCopilotModels.png)
**Available Models:**
- GPT-5, GPT-5-mini (Unlimited)
- GPT-5.2-codex, GPT-5.1-codex
- Grok-code-fast-1
@@ -187,19 +194,19 @@ Access to multiple AI models through GitHub Copilot.
### Theme System
14+ built-in themes to customize your experience.
15+ built-in themes to customize your experience.
![Theme Selection](assets/CodetyperThemes.png)
**Available Themes:**
default, dracula, nord, tokyo-night, gruvbox, monokai, catppuccin, one-dark, solarized-dark, github-dark, rose-pine, kanagawa, ayu-dark, cargdev-cyberpunk
default, dracula, nord, tokyo-night, gruvbox, monokai, catppuccin, one-dark, solarized-dark, github-dark, rose-pine, kanagawa, ayu-dark, cargdev-cyberpunk, pink-purple
## Providers
| Provider | Models | Auth Method | Use Case |
|----------|--------|-------------|----------|
| **GitHub Copilot** | GPT-5, Claude, Gemini | OAuth (device flow) | Cloud-based, high quality |
| **Ollama** | Llama, DeepSeek, Qwen, etc. | Local server | Private, offline, zero-cost |
| Provider | Models | Auth Method | Use Case |
| ------------------ | --------------------------- | ------------------- | --------------------------- |
| **GitHub Copilot** | GPT-5, Claude, Gemini | OAuth (device flow) | Cloud-based, high quality |
| **Ollama** | Llama, DeepSeek, Qwen, etc. | Local server | Private, offline, zero-cost |
### Cascade Mode
@@ -244,6 +251,7 @@ Settings are stored in `~/.config/codetyper/config.json`:
### Project Context
CodeTyper reads project-specific context from:
- `.github/` - GitHub workflows and templates
- `.codetyper/` - Project-specific rules and learnings
- `rules.md` - Custom instructions for the AI
@@ -274,18 +282,18 @@ codetyper --print "Explain this codebase"
CodeTyper has access to these built-in tools:
| Tool | Description |
|------|-------------|
| `bash` | Execute shell commands |
| `read` | Read file contents |
| `write` | Create or overwrite files |
| `edit` | Find and replace in files |
| `glob` | Find files by pattern |
| `grep` | Search file contents |
| `lsp` | Language Server Protocol operations |
| `web_search` | Search the web |
| `todo-read` | Read current todo list |
| `todo-write` | Update todo list |
| Tool | Description |
| ------------ | ----------------------------------- |
| `bash` | Execute shell commands |
| `read` | Read file contents |
| `write` | Create or overwrite files |
| `edit` | Find and replace in files |
| `glob` | Find files by pattern |
| `grep` | Search file contents |
| `lsp` | Language Server Protocol operations |
| `web_search` | Search the web |
| `todo-read` | Read current todo list |
| `todo-write` | Update todo list |
### MCP Integration
@@ -311,6 +319,7 @@ Connect external MCP (Model Context Protocol) servers for extended capabilities:
```
**MCP Browser Features:**
- Search by name, description, or tags
- Filter by category (database, web, AI, etc.)
- View server details and required environment variables
@@ -324,6 +333,7 @@ Connect external MCP (Model Context Protocol) servers for extended capabilities:
Lifecycle hooks for intercepting tool execution and session events.
**Hook Events:**
- `PreToolUse` - Validate/modify before tool execution
- `PostToolUse` - Side effects after tool execution
- `SessionStart` - At session initialization
@@ -332,16 +342,22 @@ Lifecycle hooks for intercepting tool execution and session events.
- `Stop` - When execution stops
**Configuration** (`.codetyper/hooks.json`):
```json
{
"hooks": [
{ "event": "PreToolUse", "script": ".codetyper/hooks/validate.sh", "timeout": 5000 },
{
"event": "PreToolUse",
"script": ".codetyper/hooks/validate.sh",
"timeout": 5000
},
{ "event": "PostToolUse", "script": ".codetyper/hooks/notify.sh" }
]
}
```
**Exit Codes:**
- `0` - Allow (optionally output `{"updatedInput": {...}}` to modify args)
- `1` - Warn but continue
- `2` - Block execution
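A hook communicates its decision purely through these exit codes. As an illustrative sketch (how the tool name reaches the script is an assumption here, not something this commit documents):

```shell
# Hypothetical PreToolUse decision logic. The tool name is assumed to
# arrive as the first argument; the real input contract is not shown
# in this commit.
hook_decision() {
  case "$1" in
    write|edit) return 2 ;;  # block file modifications
    bash)       return 1 ;;  # warn but continue
    *)          return 0 ;;  # allow everything else
  esac
}
```

The script's exit status then drives the allow/warn/block behavior listed above.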
@@ -351,6 +367,7 @@ Lifecycle hooks for intercepting tool execution and session events.
Extend CodeTyper with custom tools, commands, and hooks.
**Plugin Structure:**
```
.codetyper/plugins/{name}/
├── plugin.json # Manifest
@@ -363,6 +380,7 @@ Extend CodeTyper with custom tools, commands, and hooks.
```
**Manifest** (`plugin.json`):
```json
{
"name": "my-plugin",
@@ -373,13 +391,18 @@ Extend CodeTyper with custom tools, commands, and hooks.
```
**Custom Tool Definition:**
```typescript
import { z } from "zod";
export default {
name: "custom_tool",
description: "Does something",
parameters: z.object({ input: z.string() }),
execute: async (args, ctx) => ({ success: true, title: "Done", output: "..." }),
execute: async (args, ctx) => ({
success: true,
title: "Done",
output: "...",
}),
};
```
@@ -404,12 +427,12 @@ Sessions are stored in `.codetyper/sessions/` with automatic commit message sugg
The next major release focuses on production-ready autonomous agent execution:
| Feature | Issue | Status |
|---------|-------|--------|
| Plan Approval Gate | [#111](https://github.com/CarGDev/codetyper.cli/issues/111) | Planned |
| Diff Preview Before Write | [#112](https://github.com/CarGDev/codetyper.cli/issues/112) | Planned |
| Execution Control (Pause/Resume/Abort) | [#113](https://github.com/CarGDev/codetyper.cli/issues/113) | Planned |
| Consistent Model Behavior | [#114](https://github.com/CarGDev/codetyper.cli/issues/114) | Planned |
| Feature | Issue | Status |
| --------------------------------------- | ----------------------------------------------------------- | ------- |
| Plan Approval Gate | [#111](https://github.com/CarGDev/codetyper.cli/issues/111) | Planned |
| Diff Preview Before Write | [#112](https://github.com/CarGDev/codetyper.cli/issues/112) | Planned |
| Execution Control (Pause/Resume/Abort) | [#113](https://github.com/CarGDev/codetyper.cli/issues/113) | Planned |
| Consistent Model Behavior | [#114](https://github.com/CarGDev/codetyper.cli/issues/114) | Planned |
| Quality Gates (TypeScript, Lint, Tests) | [#115](https://github.com/CarGDev/codetyper.cli/issues/115) | Planned |
### Known Issues
@@ -439,11 +462,14 @@ bun test
bun run lint
```
## Recent Changes (v0.3.0)
## Recent Changes (v0.4.2)
- **System Prompt Builder**: New modular prompt system with modes, tiers, and providers
- **Module Restructure**: Consistent internal organization with improved imports
- **Solid.js TUI**: Fully migrated to Solid.js + OpenTUI (removed legacy React/Ink)
- **Pink Purple Theme**: New built-in color theme
- **Image Paste Fix**: Fixed race condition where pasted images were silently dropped
- **@ and / Anywhere**: File picker and command menu now work at any cursor position
- **Plan Approval Gate**: User confirmation before agent executes plans
- **Execution Control**: Pause, resume, and abort agent execution
- **Text Clipboard Copy/Read**: Cross-platform clipboard operations with mouse selection
See [CHANGELOG](docs/CHANGELOG.md) for complete version history.


@@ -7,6 +7,16 @@ and this project adheres to [Semantic Versioning](https://semver.org/spec/v2.0.0
## [Unreleased]
### Added
- **Pink Purple Theme**: New built-in theme with hot pink primary, purple secondary, and deep magenta accent on a dark plum background
### Fixed
- **Image Paste Race Condition**: Fixed images being silently dropped when pasting via Ctrl+V. The `clearPastedImages()` call in the input area was racing with the async message handler, clearing images before they could be read and attached to the message
- **@ File Picker**: Now works at any cursor position in the input, not just when the input is empty
- **/ Command Menu**: Now works at any cursor position in the input, not just when the input is empty
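The corrected ordering amounts to snapshotting the images synchronously before clearing the store. A minimal sketch — the store and function names here are illustrative, not the actual CodeTyper internals:

```typescript
// Illustrative in-memory pasted-image store (names are hypothetical).
let pastedImages: string[] = [];

const getPastedImages = (): string[] => [...pastedImages];
const clearPastedImages = (): void => {
  pastedImages = [];
};

// The message handler only reads its images after an async hop, so
// clearing the store right after dispatch used to empty it first.
const sendMessage = async (
  text: string,
  images: string[],
): Promise<number> => {
  await Promise.resolve(); // simulate the async hop
  return images.length; // images survive: they were snapshotted
};

const submit = async (text: string): Promise<number> => {
  const images = getPastedImages(); // snapshot synchronously...
  clearPastedImages(); // ...then clearing is safe
  return sendMessage(text, images);
};
```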
### Planned
- **Diff Preview**: Show file changes before writing ([#112](https://github.com/CarGDev/codetyper.cli/issues/112))
@@ -292,12 +302,12 @@ and this project adheres to [Semantic Versioning](https://semver.org/spec/v2.0.0
## Version History Summary
| Version | Date | Highlights |
|---------|------|------------|
| 0.4.0 | 2026-02-06 | Clipboard copy/read, plan approval, execution control, safety features |
| 0.3.0 | 2025-02-04 | System prompt builder, module restructure, legacy TUI removal |
| 0.2.x | 2025-01-28 - 02-01 | Hooks, plugins, session forks, vim motions, MCP browser |
| 0.1.x | 2025-01-16 - 01-27 | Initial release, TUI, agent system, providers, permissions |
| Version | Date | Highlights |
| ------- | ------------------ | ---------------------------------------------------------------------- |
| 0.4.0 | 2026-02-06 | Clipboard copy/read, plan approval, execution control, safety features |
| 0.3.0 | 2025-02-04 | System prompt builder, module restructure, legacy TUI removal |
| 0.2.x | 2025-01-28 - 02-01 | Hooks, plugins, session forks, vim motions, MCP browser |
| 0.1.x | 2025-01-16 - 01-27 | Initial release, TUI, agent system, providers, permissions |
---

package-lock.json generated

File diff suppressed because it is too large


@@ -10,9 +10,6 @@
"scripts": {
"dev": "bun src/index.ts",
"dev:nobump": "bun scripts/build.ts && npm link",
"dev:watch": "bun scripts/dev-watch.ts",
"dev:debug": "bun --inspect=localhost:6499/debug src/index.ts",
"dev:debug-brk": "bun --inspect-brk=localhost:6499/debug src/index.ts",
"build": "bun scripts/build.ts",
"sync-version": "bun scripts/sync-version.ts",
"start": "bun src/index.ts",
@@ -20,7 +17,8 @@
"lint": "bun eslint src/**/*.ts",
"format": "npx prettier --write \"src/**/*.ts\"",
"prettier": "npx prettier --write \"src/**/*.ts\" \"src/**/*.tsx\" \"scripts/**/*.ts\"",
"typecheck": "bun tsc --noEmit"
"typecheck": "bun tsc --noEmit",
"version": "bun scripts/sync-version.ts && git add src/version.json"
},
"keywords": [
"ai",
@@ -78,13 +76,13 @@
},
"devDependencies": {
"@eslint/eslintrc": "^3.3.3",
"@eslint/js": "^9.39.2",
"@eslint/js": "^10.0.1",
"@types/inquirer": "^9.0.7",
"@types/node": "^25.0.10",
"@types/uuid": "^10.0.0",
"@typescript-eslint/eslint-plugin": "^8.53.1",
"@typescript-eslint/parser": "^8.53.1",
"eslint": "^9.39.2",
"eslint": "^10.0.0",
"eslint-config-standard": "^17.1.0",
"eslint-config-standard-with-typescript": "^43.0.1",
"eslint-plugin-import": "^2.32.0",


@@ -1,138 +0,0 @@
#!/usr/bin/env bun
/**
* Development watch script for codetyper-cli
*
* Watches for file changes and restarts the TUI application properly,
* handling terminal cleanup between restarts.
*/
import { spawn, type Subprocess } from "bun";
import { watch } from "chokidar";
import { join, dirname } from "path";
import { fileURLToPath } from "url";
const __filename = fileURLToPath(import.meta.url);
const __dirname = dirname(__filename);
const ROOT_DIR = join(__dirname, "..");
const WATCH_PATHS = ["src/**/*.ts", "src/**/*.tsx"];
const IGNORE_PATTERNS = [
"**/node_modules/**",
"**/dist/**",
"**/*.test.ts",
"**/*.spec.ts",
];
const DEBOUNCE_MS = 300;
let currentProcess: Subprocess | null = null;
let restartTimeout: ReturnType<typeof setTimeout> | null = null;
const clearTerminal = (): void => {
// Reset terminal and clear screen
process.stdout.write("\x1b[2J\x1b[H\x1b[3J");
};
const killProcess = async (): Promise<void> => {
if (currentProcess) {
try {
currentProcess.kill("SIGTERM");
// Wait a bit for graceful shutdown
await new Promise((resolve) => setTimeout(resolve, 100));
if (currentProcess.exitCode === null) {
currentProcess.kill("SIGKILL");
}
} catch {
// Process might already be dead
}
currentProcess = null;
}
};
const startProcess = (): void => {
clearTerminal();
console.log("\x1b[36m[dev-watch]\x1b[0m Starting codetyper...");
console.log("\x1b[90m─────────────────────────────────────\x1b[0m\n");
currentProcess = spawn({
cmd: ["bun", "src/index.ts"],
cwd: ROOT_DIR,
stdio: ["inherit", "inherit", "inherit"],
env: {
...process.env,
NODE_ENV: "development",
},
});
currentProcess.exited.then((code) => {
if (code !== 0 && code !== null) {
console.log(
`\n\x1b[33m[dev-watch]\x1b[0m Process exited with code ${code}`,
);
}
});
};
const scheduleRestart = (path: string): void => {
if (restartTimeout) {
clearTimeout(restartTimeout);
}
restartTimeout = setTimeout(async () => {
console.log(`\n\x1b[33m[dev-watch]\x1b[0m Change detected: ${path}`);
console.log("\x1b[33m[dev-watch]\x1b[0m Restarting...\n");
await killProcess();
startProcess();
}, DEBOUNCE_MS);
};
const main = async (): Promise<void> => {
console.log("\x1b[36m[dev-watch]\x1b[0m Watching for changes...");
console.log(`\x1b[90mRoot: ${ROOT_DIR}\x1b[0m`);
console.log(`\x1b[90mPaths: ${WATCH_PATHS.join(", ")}\x1b[0m`);
const watcher = watch(WATCH_PATHS, {
ignored: IGNORE_PATTERNS,
persistent: true,
ignoreInitial: true,
cwd: ROOT_DIR,
usePolling: false,
awaitWriteFinish: {
stabilityThreshold: 100,
pollInterval: 100,
},
});
watcher.on("ready", () => {
console.log("\x1b[32m[dev-watch]\x1b[0m Watcher ready\n");
});
watcher.on("error", (error) => {
console.error("\x1b[31m[dev-watch]\x1b[0m Watcher error:", error);
});
watcher.on("change", scheduleRestart);
watcher.on("add", scheduleRestart);
watcher.on("unlink", scheduleRestart);
// Handle exit signals
const cleanup = async (): Promise<void> => {
console.log("\n\x1b[36m[dev-watch]\x1b[0m Shutting down...");
await watcher.close();
await killProcess();
process.exit(0);
};
process.on("SIGINT", cleanup);
process.on("SIGTERM", cleanup);
// Start the initial process
startProcess();
};
main().catch((err) => {
console.error("Error:", err);
process.exit(1);
});


@@ -1,9 +1,3 @@
/**
* Copilot Authentication API
*
* Low-level API calls for GitHub OAuth device flow
*/
import got from "got";
import {
GITHUB_CLIENT_ID,


@@ -1,16 +1,7 @@
/**
* Copilot Token API
*
* Low-level API calls for Copilot token management
*/
import got from "got";
import { COPILOT_AUTH_URL } from "@constants/copilot";
import type { CopilotToken } from "@/types/copilot";
/**
* Refresh Copilot access token using OAuth token
*/
export const fetchCopilotToken = async (
oauthToken: string,
): Promise<CopilotToken> => {
@@ -30,9 +21,6 @@ export const fetchCopilotToken = async (
return response;
};
/**
* Build standard headers for Copilot API requests
*/
export const buildCopilotHeaders = (
token: CopilotToken,
): Record<string, string> => ({


@@ -1,48 +1,17 @@
/**
* Copilot Chat API
*
* Low-level API calls for chat completions
*/
import got from "got";
import type { CopilotToken } from "@/types/copilot";
import type {
Message,
ChatCompletionOptions,
ChatCompletionResponse,
StreamChunk,
} from "@/types/providers";
import { buildCopilotHeaders } from "@api/copilot/auth/token";
interface FormattedMessage {
role: string;
content: string;
tool_call_id?: string;
tool_calls?: Message["tool_calls"];
}
interface ChatRequestBody {
model: string;
messages: FormattedMessage[];
max_tokens: number;
temperature: number;
stream: boolean;
tools?: ChatCompletionOptions["tools"];
tool_choice?: string;
}
interface ChatApiResponse {
error?: { message?: string };
choices?: Array<{
message?: { content?: string; tool_calls?: Message["tool_calls"] };
finish_reason?: ChatCompletionResponse["finishReason"];
}>;
usage?: {
prompt_tokens?: number;
completion_tokens?: number;
total_tokens?: number;
};
}
import {
FormattedMessage,
ChatRequestBody,
ChatApiResponse,
} from "@/interfaces/api/copilot/core";
const formatMessages = (messages: Message[]): FormattedMessage[] =>
messages.map((msg) => {
@@ -137,61 +106,3 @@ export const executeChatRequest = async (
return result;
};
/**
* Execute streaming chat request
*/
export const executeStreamRequest = (
endpoint: string,
token: CopilotToken,
body: ChatRequestBody,
onChunk: (chunk: StreamChunk) => void,
): Promise<void> =>
new Promise((resolve, reject) => {
const stream = got.stream.post(endpoint, {
headers: buildCopilotHeaders(token),
json: body,
});
let buffer = "";
stream.on("data", (data: Buffer) => {
buffer += data.toString();
const lines = buffer.split("\n");
buffer = lines.pop() ?? "";
for (const line of lines) {
if (line.startsWith("data: ")) {
const jsonStr = line.slice(6).trim();
if (jsonStr === "[DONE]") {
onChunk({ type: "done" });
return;
}
try {
const parsed = JSON.parse(jsonStr);
const delta = parsed.choices?.[0]?.delta;
if (delta?.content) {
onChunk({ type: "content", content: delta.content });
}
if (delta?.tool_calls) {
for (const tc of delta.tool_calls) {
onChunk({ type: "tool_call", toolCall: tc });
}
}
} catch {
// Ignore parse errors in stream
}
}
}
});
stream.on("error", (error: Error) => {
onChunk({ type: "error", error: error.message });
reject(error);
});
stream.on("end", resolve);
});


@@ -0,0 +1,60 @@
import got from "got";
import type { CopilotToken } from "@/types/copilot";
import type { StreamChunk } from "@/types/providers";
import { buildCopilotHeaders } from "@api/copilot/auth/token";
import { ChatRequestBody } from "@/interfaces/api/copilot/core";
export const executeStreamRequest = (
endpoint: string,
token: CopilotToken,
body: ChatRequestBody,
onChunk: (chunk: StreamChunk) => void,
): Promise<void> =>
new Promise((resolve, reject) => {
const stream = got.stream.post(endpoint, {
headers: buildCopilotHeaders(token),
json: body,
});
let buffer = "";
stream.on("data", (data: Buffer) => {
buffer += data.toString();
const lines = buffer.split("\n");
buffer = lines.pop() ?? "";
for (const line of lines) {
if (line.startsWith("data: ")) {
const jsonStr = line.slice(6).trim();
if (jsonStr === "[DONE]") {
onChunk({ type: "done" });
return;
}
try {
const parsed = JSON.parse(jsonStr);
const delta = parsed.choices?.[0]?.delta;
if (delta?.content) {
onChunk({ type: "content", content: delta.content });
}
if (delta?.tool_calls) {
for (const tc of delta.tool_calls) {
onChunk({ type: "tool_call", toolCall: tc });
}
}
} catch {
// Ignore parse errors in stream
}
}
}
});
stream.on("error", (error: Error) => {
onChunk({ type: "error", error: error.message });
reject(error);
});
stream.on("end", resolve);
});


@@ -106,6 +106,17 @@ const COMMAND_REGISTRY: Map<string, CommandHandler> = new Map<
},
],
["mcp", async (ctx: CommandContext) => handleMCP(ctx.args)],
[
"mode",
() => {
appStore.toggleInteractionMode();
const { interactionMode } = appStore.getState();
appStore.addLog({
type: "system",
content: `Switched to ${interactionMode} mode`,
});
},
],
[
"logs",
() => {


@@ -1,10 +1,11 @@
import chalk from "chalk";
import { getMessageText } from "@/types/providers";
import type { ChatState } from "@commands/components/chat/state";
export const showContext = (state: ChatState): void => {
const messageCount = state.messages.length - 1;
const totalChars = state.messages.reduce(
(acc, m) => acc + m.content.length,
(acc, m) => acc + getMessageText(m.content).length,
0,
);
const estimatedTokens = Math.round(totalChars / 4);


@@ -1,4 +1,5 @@
import chalk from "chalk";
import { getMessageText } from "@/types/providers";
import type { ChatState } from "@commands/components/chat/state";
export const showHistory = (state: ChatState): void => {
@@ -8,9 +9,10 @@ export const showHistory = (state: ChatState): void => {
const msg = state.messages[i];
const role =
msg.role === "user" ? chalk.cyan("You") : chalk.green("Assistant");
const preview = msg.content.slice(0, 100).replace(/\n/g, " ");
const text = getMessageText(msg.content);
const preview = text.slice(0, 100).replace(/\n/g, " ");
console.log(
` ${i}. ${role}: ${preview}${msg.content.length > 100 ? "..." : ""}`,
` ${i}. ${role}: ${preview}${text.length > 100 ? "..." : ""}`,
);
}


@@ -7,7 +7,10 @@ export const handleInput = async (
state: ChatState,
handleCommand: (command: string, state: ChatState) => Promise<void>,
): Promise<void> => {
if (input.startsWith("/")) {
// Only treat as a slash-command when it looks like one (e.g. /help, /model-gpt4)
// This prevents pasting/debugging content that starts with "/" from invoking command parsing.
const slashCommandMatch = input.match(/^\/([\w-]+)(?:\s|$)/);
if (slashCommandMatch) {
await handleCommand(input, state);
return;
}
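The guard's effect can be checked directly; the inputs below are illustrative, not taken from the codebase:

```typescript
// The guard from handleInput: only "/word" (letters, digits, "_" or
// "-") followed by whitespace or end-of-string counts as a command.
const isSlashCommand = (input: string): boolean =>
  /^\/([\w-]+)(?:\s|$)/.test(input);

isSlashCommand("/help"); // a real command
isSlashCommand("/model gpt-5"); // a command with arguments
isSlashCommand("/usr/bin/env"); // a pasted path, NOT a command
```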


@@ -29,6 +29,7 @@ import {
processMemoryCommand,
buildRelevantMemoryPrompt,
} from "@services/memory-service";
import { getMessageText } from "@/types/providers";
import type { ChatState } from "@commands/components/chat/state";
export const sendMessage = async (
@@ -57,7 +58,7 @@ export const sendMessage = async (
// Inject debugging system message before user message if not already present
const hasDebuggingPrompt = state.messages.some(
(msg) => msg.role === "system" && msg.content.includes("debugging mode"),
(msg) => msg.role === "system" && getMessageText(msg.content).includes("debugging mode"),
);
if (!hasDebuggingPrompt) {
@@ -83,7 +84,7 @@ export const sendMessage = async (
// Inject code review system message before user message if not already present
const hasReviewPrompt = state.messages.some(
(msg) =>
msg.role === "system" && msg.content.includes("code review mode"),
msg.role === "system" && getMessageText(msg.content).includes("code review mode"),
);
if (!hasReviewPrompt) {
@@ -109,7 +110,7 @@ export const sendMessage = async (
// Inject refactoring system message before user message if not already present
const hasRefactoringPrompt = state.messages.some(
(msg) =>
msg.role === "system" && msg.content.includes("refactoring mode"),
msg.role === "system" && getMessageText(msg.content).includes("refactoring mode"),
);
if (!hasRefactoringPrompt) {


@@ -99,24 +99,32 @@ const parseArgs = (argsString: string): string[] | undefined => {
};
const defaultHandleMCPAdd = async (data: MCPAddFormData): Promise<void> => {
const serverArgs = parseArgs(data.args);
// Build config based on transport type
const config: Omit<import("@/types/mcp").MCPServerConfig, "name"> =
data.type === "stdio"
? {
type: "stdio",
command: data.command!,
args: parseArgs(data.args ?? "") ?? undefined,
enabled: true,
}
: {
type: data.type,
url: data.url!,
enabled: true,
};
await addServer(
data.name,
{
command: data.command,
args: serverArgs,
enabled: true,
},
data.isGlobal,
);
await addServer(data.name, config, data.isGlobal);
// Add to store with "connecting" status
const description =
data.type === "stdio" ? data.command! : data.url!;
// Add to store with "disconnected" status
appStore.addMcpServer({
id: data.name,
name: data.name,
status: "disconnected",
description: data.command,
description,
});
try {


@@ -77,11 +77,14 @@ const handleList = async (_args: string[]): Promise<void> => {
const enabled =
server.enabled !== false ? chalk.green("✓") : chalk.gray("○");
console.log(` ${enabled} ${chalk.cyan(name)}`);
console.log(
` Command: ${server.command} ${(server.args || []).join(" ")}`,
);
if (server.transport && server.transport !== "stdio") {
console.log(` Transport: ${server.transport}`);
const t = server.type ?? "stdio";
if (t === "stdio") {
console.log(
` Command: ${server.command ?? ""} ${(server.args || []).join(" ")}`,
);
} else {
console.log(` Type: ${t}`);
console.log(` URL: ${server.url ?? "(none)"}`);
}
console.log();
}
@@ -95,13 +98,18 @@ const handleAdd = async (args: string[]): Promise<void> => {
if (!name) {
errorMessage("Server name required");
infoMessage(
"Usage: codetyper mcp add <name> --command <cmd> [--args <args>]",
"Usage: codetyper mcp add <name> --url <url> [--type http|sse]",
);
infoMessage(
" codetyper mcp add <name> --command <cmd> [--args <args>]",
);
return;
}
// Parse options
let command = "";
let url = "";
let type: "stdio" | "http" | "sse" = "stdio";
const serverArgs: string[] = [];
let isGlobal = false;
@@ -109,8 +117,12 @@ const handleAdd = async (args: string[]): Promise<void> => {
const arg = args[i];
if (arg === "--command" || arg === "-c") {
command = args[++i] || "";
} else if (arg === "--url" || arg === "-u") {
url = args[++i] || "";
if (!type || type === "stdio") type = "http";
} else if (arg === "--type" || arg === "-t") {
type = (args[++i] || "stdio") as "stdio" | "http" | "sse";
} else if (arg === "--args" || arg === "-a") {
// Collect remaining args
while (args[i + 1] && !args[i + 1].startsWith("--")) {
serverArgs.push(args[++i]);
}
@@ -119,26 +131,24 @@ const handleAdd = async (args: string[]): Promise<void> => {
}
}
if (!command) {
// Interactive mode - ask for command
infoMessage("Adding MCP server interactively...");
infoMessage("Example: npx @modelcontextprotocol/server-sqlite");
// For now, require command flag
errorMessage("Command required. Use --command <cmd>");
if (!command && !url) {
errorMessage(
"Either --command (stdio) or --url (http/sse) is required.",
);
return;
}
try {
await addServer(
name,
{
command,
args: serverArgs.length > 0 ? serverArgs : undefined,
enabled: true,
},
isGlobal,
);
const config: Omit<import("@/types/mcp").MCPServerConfig, "name"> = url
? { type, url, enabled: true }
: {
type: "stdio",
command,
args: serverArgs.length > 0 ? serverArgs : undefined,
enabled: true,
};
await addServer(name, config, isGlobal);
successMessage(`Added MCP server: ${name}`);
infoMessage(`Connect with: codetyper mcp connect ${name}`);


@@ -1,14 +1,3 @@
/**
* Brain API Constants
*
* Configuration constants for the CodeTyper Brain service
*/
/**
* Feature flag to disable all Brain functionality.
* Set to true to hide Brain menu, disable Brain API calls,
* and remove Brain-related UI elements.
*/
export const BRAIN_DISABLED = true;
export const BRAIN_PROVIDER_NAME = "brain" as const;


@@ -20,6 +20,21 @@ export const COPILOT_MODELS_CACHE_TTL = 5 * 60 * 1000; // 5 minutes
export const COPILOT_MAX_RETRIES = 3;
export const COPILOT_INITIAL_RETRY_DELAY = 1000; // 1 second
// Streaming timeout and connection retry
export const COPILOT_STREAM_TIMEOUT = 120_000; // 2 minutes
export const COPILOT_CONNECTION_RETRY_DELAY = 2000; // 2 seconds
// Connection error patterns for retry logic
export const CONNECTION_ERROR_PATTERNS = [
/socket.*closed/i,
/ECONNRESET/i,
/ECONNREFUSED/i,
/ETIMEDOUT/i,
/network.*error/i,
/fetch.*failed/i,
/aborted/i,
] as const;
// Default model
export const COPILOT_DEFAULT_MODEL = "gpt-5-mini";
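The consumer of `CONNECTION_ERROR_PATTERNS` is not shown in this commit; a plausible retry predicate built on them might look like this sketch:

```typescript
// Patterns copied verbatim from src/constants/copilot.ts above.
const CONNECTION_ERROR_PATTERNS = [
  /socket.*closed/i,
  /ECONNRESET/i,
  /ECONNREFUSED/i,
  /ETIMEDOUT/i,
  /network.*error/i,
  /fetch.*failed/i,
  /aborted/i,
] as const;

// Hypothetical helper: decide whether a failed request is worth
// retrying after COPILOT_CONNECTION_RETRY_DELAY.
const isConnectionError = (err: Error): boolean =>
  CONNECTION_ERROR_PATTERNS.some((pattern) => pattern.test(err.message));
```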

src/constants/keybinds.ts

@@ -0,0 +1,180 @@
/**
* Keybind Configuration
*
* Defines all configurable keybindings with defaults.
* Modeled after OpenCode's keybind system with leader key support,
* comma-separated alternatives, and `<leader>` prefix expansion.
*
* Format: "mod+key" or "mod+key,mod+key" for alternatives
* Special: "<leader>key" expands to "${leader}+key"
* "none" disables the binding
*/
// ============================================================================
// Keybind Action IDs
// ============================================================================
/**
* All possible keybind action identifiers.
* These map 1:1 to the defaults below.
*/
export type KeybindAction =
// Application
| "app_exit"
| "app_interrupt"
// Session / execution
| "session_interrupt"
| "session_abort_rollback"
| "session_pause_resume"
| "session_step_toggle"
| "session_step_advance"
// Navigation & scrolling
| "messages_page_up"
| "messages_page_down"
// Mode switching
| "mode_toggle"
// Input area
| "input_submit"
| "input_newline"
| "input_clear"
| "input_paste"
// Menus & pickers
| "command_menu"
| "file_picker"
| "model_list"
| "theme_list"
| "agent_list"
| "help_menu"
// Clipboard
| "clipboard_copy"
// Sidebar / panels
| "sidebar_toggle"
| "activity_toggle";
// ============================================================================
// Default Keybinds
// ============================================================================
/**
* Default leader key prefix (similar to vim leader or OpenCode's ctrl+x).
*/
export const DEFAULT_LEADER = "ctrl+x";
/**
* Default keybindings for all actions.
* Format: comma-separated list of key combos.
* - `ctrl+c` — modifier + key
* - `escape` — single key
* - `<leader>q` — leader prefix + key (expands to e.g. `ctrl+x,q`)
* - `none` — binding disabled
*/
export const DEFAULT_KEYBINDS: Readonly<Record<KeybindAction, string>> = {
// Application
app_exit: "ctrl+c",
app_interrupt: "ctrl+c",
// Session / execution control
session_interrupt: "escape",
session_abort_rollback: "ctrl+z",
session_pause_resume: "ctrl+p",
session_step_toggle: "ctrl+shift+s",
session_step_advance: "return",
// Navigation
messages_page_up: "pageup",
messages_page_down: "pagedown",
// Mode switching
mode_toggle: "ctrl+e",
// Input area
input_submit: "return",
input_newline: "shift+return,ctrl+return",
input_clear: "ctrl+c",
input_paste: "ctrl+v",
// Menus & pickers
command_menu: "/",
file_picker: "@",
model_list: "<leader>m",
theme_list: "<leader>t",
agent_list: "<leader>a",
help_menu: "<leader>h",
// Clipboard
clipboard_copy: "ctrl+y",
// Sidebar / panels
sidebar_toggle: "<leader>b",
activity_toggle: "<leader>s",
} as const;
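The `<leader>` prefix and comma-alternative rules above can be sketched as follows (a minimal illustration; `expandKeybind` is a hypothetical helper, not the actual resolver in `keybind-resolver`):

```typescript
// Expand a raw binding string into the concrete combos a matcher compares
// against: split comma-separated alternatives, replace "<leader>" with the
// configured leader key, and treat "none" as a disabled binding.
const expandKeybind = (binding: string, leader: string): string[] => {
  if (binding === "none") return [];
  return binding
    .split(",")
    .map((combo) => combo.trim())
    .map((combo) =>
      combo.startsWith("<leader>")
        ? `${leader}+${combo.slice("<leader>".length)}`
        : combo,
    );
};
```

With the default leader, `"<leader>m"` expands to `"ctrl+x+m"`, and `"shift+return,ctrl+return"` yields two alternative combos.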
/**
* Descriptions for each keybind action (used in help menus)
*/
export const KEYBIND_DESCRIPTIONS: Readonly<Record<KeybindAction, string>> = {
app_exit: "Exit the application",
app_interrupt: "Interrupt / abort current action",
session_interrupt: "Cancel current operation",
session_abort_rollback: "Abort with rollback",
session_pause_resume: "Toggle pause/resume execution",
session_step_toggle: "Toggle step-by-step mode",
session_step_advance: "Advance one step",
messages_page_up: "Scroll messages up by one page",
messages_page_down: "Scroll messages down by one page",
mode_toggle: "Toggle interaction mode (agent/ask/code-review)",
input_submit: "Submit input",
input_newline: "Insert newline in input",
input_clear: "Clear input field",
input_paste: "Paste from clipboard",
command_menu: "Open command menu",
file_picker: "Open file picker",
model_list: "List available models",
theme_list: "List available themes",
agent_list: "List agents",
help_menu: "Show help",
clipboard_copy: "Copy selection to clipboard",
sidebar_toggle: "Toggle sidebar panel",
activity_toggle: "Toggle activity/status panel",
} as const;
/**
* Categories for grouping keybinds in the help menu
*/
export const KEYBIND_CATEGORIES: Readonly<
Record<string, readonly KeybindAction[]>
> = {
Application: ["app_exit", "app_interrupt"],
"Execution Control": [
"session_interrupt",
"session_abort_rollback",
"session_pause_resume",
"session_step_toggle",
"session_step_advance",
],
Navigation: ["messages_page_up", "messages_page_down"],
"Mode & Input": [
"mode_toggle",
"input_submit",
"input_newline",
"input_clear",
"input_paste",
],
"Menus & Pickers": [
"command_menu",
"file_picker",
"model_list",
"theme_list",
"agent_list",
"help_menu",
],
Panels: ["sidebar_toggle", "activity_toggle"],
Clipboard: ["clipboard_copy"],
} as const;


@@ -25,6 +25,36 @@ export const SKILL_DIRS = {
PROJECT: ".codetyper/skills",
} as const;
/**
* External agent directories (relative to project root)
* These directories may contain agent definition files from
* various tools (Claude, GitHub Copilot, CodeTyper, etc.)
*/
export const EXTERNAL_AGENT_DIRS = {
CLAUDE: ".claude",
GITHUB: ".github",
CODETYPER: ".codetyper",
} as const;
/**
* Recognized external agent file patterns
*/
export const EXTERNAL_AGENT_FILES = {
/** File extensions recognized as agent definitions */
EXTENSIONS: [".md", ".yaml", ".yml"],
/** Known agent file names (case-insensitive) */
KNOWN_FILES: [
"AGENTS.md",
"agents.md",
"AGENT.md",
"agent.md",
"SKILL.md",
"copilot-instructions.md",
],
/** Subdirectories to scan for agents */
SUBDIRS: ["agents", "skills", "prompts"],
} as const;
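One plausible way these constants combine during a scan — the actual scanner is not part of this diff, so `isAgentDefinition` below is a hypothetical sketch, not the shipped implementation:

```typescript
// A path counts as an agent definition if its file name is one of the
// known names, or if it sits under a recognized subdirectory
// (agents/skills/prompts) with a recognized extension.
const KNOWN = ["agents.md", "agent.md", "skill.md", "copilot-instructions.md"];
const SUBDIRS = ["agents/", "skills/", "prompts/"];
const EXTENSIONS = [".md", ".yaml", ".yml"];

const isAgentDefinition = (relPath: string): boolean => {
  const name = relPath.split("/").pop()?.toLowerCase() ?? "";
  if (KNOWN.includes(name)) return true;
  const inSubdir = SUBDIRS.some((dir) => relPath.toLowerCase().includes(dir));
  return inSubdir && EXTENSIONS.some((ext) => name.endsWith(ext));
};
```

Under this reading, `.claude/AGENTS.md` matches by name, `.github/prompts/review.yaml` matches by subdirectory plus extension, and a stray `.md` at the top level does not match.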
/**
* Skill loading configuration
*/
@@ -89,8 +119,196 @@ export const BUILTIN_SKILLS = {
REVIEW_PR: "review-pr",
EXPLAIN: "explain",
FEATURE_DEV: "feature-dev",
TYPESCRIPT: "typescript",
REACT: "react",
CSS_SCSS: "css-scss",
SECURITY: "security",
CODE_AUDIT: "code-audit",
RESEARCHER: "researcher",
TESTING: "testing",
PERFORMANCE: "performance",
API_DESIGN: "api-design",
DATABASE: "database",
DEVOPS: "devops",
ACCESSIBILITY: "accessibility",
DOCUMENTATION: "documentation",
REFACTORING: "refactoring",
GIT_WORKFLOW: "git-workflow",
NODE_BACKEND: "node-backend",
} as const;
/**
* Skill auto-detection keyword map.
* Maps keywords found in user prompts to skill IDs.
* Each entry: [keyword, skillId, category, weight]
*/
export const SKILL_DETECTION_KEYWORDS: ReadonlyArray<
readonly [string, string, string, number]
> = [
// TypeScript
["typescript", "typescript", "language", 0.9],
["type error", "typescript", "language", 0.85],
["ts error", "typescript", "language", 0.85],
["generics", "typescript", "language", 0.8],
["type system", "typescript", "language", 0.85],
["interface", "typescript", "language", 0.5],
["type alias", "typescript", "language", 0.8],
[".ts", "typescript", "language", 0.4],
[".tsx", "typescript", "language", 0.5],
// React
["react", "react", "framework", 0.9],
["component", "react", "framework", 0.5],
["hooks", "react", "framework", 0.7],
["usestate", "react", "framework", 0.9],
["useeffect", "react", "framework", 0.9],
["jsx", "react", "framework", 0.8],
["tsx", "react", "framework", 0.7],
["react component", "react", "framework", 0.95],
  ["props", "react", "framework", 0.5],
// CSS/SCSS
["css", "css-scss", "styling", 0.8],
["scss", "css-scss", "styling", 0.9],
["sass", "css-scss", "styling", 0.9],
["styling", "css-scss", "styling", 0.6],
["flexbox", "css-scss", "styling", 0.9],
["grid layout", "css-scss", "styling", 0.85],
["responsive", "css-scss", "styling", 0.6],
["animation", "css-scss", "styling", 0.5],
["tailwind", "css-scss", "styling", 0.7],
// Security
["security", "security", "domain", 0.9],
["vulnerability", "security", "domain", 0.95],
["xss", "security", "domain", 0.95],
["sql injection", "security", "domain", 0.95],
["csrf", "security", "domain", 0.95],
["authentication", "security", "domain", 0.6],
["authorization", "security", "domain", 0.6],
["owasp", "security", "domain", 0.95],
["cve", "security", "domain", 0.9],
["penetration", "security", "domain", 0.85],
// Code Audit
["audit", "code-audit", "domain", 0.85],
["code quality", "code-audit", "domain", 0.9],
["tech debt", "code-audit", "domain", 0.9],
["dead code", "code-audit", "domain", 0.9],
["complexity", "code-audit", "domain", 0.6],
["code smell", "code-audit", "domain", 0.9],
["code review", "code-audit", "domain", 0.5],
// Research
["research", "researcher", "workflow", 0.8],
["find out", "researcher", "workflow", 0.5],
["look up", "researcher", "workflow", 0.5],
["documentation", "researcher", "workflow", 0.5],
["best practice", "researcher", "workflow", 0.6],
["compare", "researcher", "workflow", 0.4],
// Testing
["test", "testing", "workflow", 0.5],
["testing", "testing", "workflow", 0.8],
["unit test", "testing", "workflow", 0.9],
["integration test", "testing", "workflow", 0.9],
["e2e", "testing", "workflow", 0.85],
["tdd", "testing", "workflow", 0.9],
["jest", "testing", "workflow", 0.85],
["vitest", "testing", "workflow", 0.9],
["playwright", "testing", "workflow", 0.9],
["coverage", "testing", "workflow", 0.6],
// Performance
["performance", "performance", "domain", 0.8],
["optimization", "performance", "domain", 0.7],
["optimize", "performance", "domain", 0.7],
["slow", "performance", "domain", 0.5],
["bundle size", "performance", "domain", 0.9],
["memory leak", "performance", "domain", 0.9],
["latency", "performance", "domain", 0.7],
["profiling", "performance", "domain", 0.85],
// API Design
["api", "api-design", "domain", 0.5],
["endpoint", "api-design", "domain", 0.6],
["rest", "api-design", "domain", 0.7],
["graphql", "api-design", "domain", 0.9],
["openapi", "api-design", "domain", 0.9],
["swagger", "api-design", "domain", 0.9],
// Database
["database", "database", "domain", 0.9],
["sql", "database", "domain", 0.8],
["query", "database", "domain", 0.4],
["migration", "database", "domain", 0.7],
["schema", "database", "domain", 0.7],
["orm", "database", "domain", 0.85],
["prisma", "database", "domain", 0.9],
["drizzle", "database", "domain", 0.9],
["postgres", "database", "domain", 0.9],
["mysql", "database", "domain", 0.9],
["mongodb", "database", "domain", 0.9],
// DevOps
["devops", "devops", "domain", 0.9],
["docker", "devops", "domain", 0.9],
["ci/cd", "devops", "domain", 0.9],
["pipeline", "devops", "domain", 0.7],
["deploy", "devops", "domain", 0.7],
["kubernetes", "devops", "domain", 0.95],
["k8s", "devops", "domain", 0.95],
["github actions", "devops", "domain", 0.9],
// Accessibility
["accessibility", "accessibility", "domain", 0.95],
["a11y", "accessibility", "domain", 0.95],
["wcag", "accessibility", "domain", 0.95],
["aria", "accessibility", "domain", 0.85],
["screen reader", "accessibility", "domain", 0.9],
// Documentation
["documentation", "documentation", "workflow", 0.7],
["readme", "documentation", "workflow", 0.8],
["jsdoc", "documentation", "workflow", 0.9],
["document this", "documentation", "workflow", 0.7],
// Refactoring
["refactor", "refactoring", "workflow", 0.9],
["refactoring", "refactoring", "workflow", 0.9],
["clean up", "refactoring", "workflow", 0.6],
["restructure", "refactoring", "workflow", 0.7],
["simplify", "refactoring", "workflow", 0.5],
["solid principles", "refactoring", "workflow", 0.85],
["design pattern", "refactoring", "workflow", 0.7],
// Git
["git", "git-workflow", "tool", 0.5],
["branch", "git-workflow", "tool", 0.4],
["merge conflict", "git-workflow", "tool", 0.9],
["rebase", "git-workflow", "tool", 0.85],
["cherry-pick", "git-workflow", "tool", 0.9],
// Node.js Backend
["express", "node-backend", "framework", 0.85],
["fastify", "node-backend", "framework", 0.9],
["middleware", "node-backend", "framework", 0.6],
["api server", "node-backend", "framework", 0.8],
["backend", "node-backend", "framework", 0.5],
["server", "node-backend", "framework", 0.4],
] as const;
/**
* Minimum confidence for auto-detection to trigger
*/
export const SKILL_AUTO_DETECT_THRESHOLD = 0.6;
/**
* Maximum number of skills to auto-activate per prompt
*/
export const SKILL_AUTO_DETECT_MAX = 3;
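Taken together, the keyword table and the two constants above suggest a scoring loop along these lines (a sketch under those assumptions; the real detection function is not shown in this diff):

```typescript
type KeywordEntry = readonly [string, string, string, number];

// Score each skill by its strongest matching keyword in the prompt, then
// keep skills at or above the threshold, strongest first, capped at `max`.
const detectSkills = (
  prompt: string,
  keywords: ReadonlyArray<KeywordEntry>,
  threshold = 0.6, // SKILL_AUTO_DETECT_THRESHOLD
  max = 3, // SKILL_AUTO_DETECT_MAX
): string[] => {
  const lower = prompt.toLowerCase();
  const scores = new Map<string, number>();
  for (const [keyword, skillId, , weight] of keywords) {
    if (lower.includes(keyword.toLowerCase())) {
      scores.set(skillId, Math.max(scores.get(skillId) ?? 0, weight));
    }
  }
  return [...scores.entries()]
    .filter(([, score]) => score >= threshold)
    .sort((a, b) => b[1] - a[1])
    .slice(0, max)
    .map(([skillId]) => skillId);
};
```

Note how low-weight entries like `["interface", …, 0.5]` never trigger on their own — they only matter when combined scoring or a stronger keyword pushes the skill over the threshold.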
/**
* Skill trigger patterns for common commands
*/


@@ -790,6 +790,62 @@ const CARGDEV_CYBERPUNK_COLORS: ThemeColors = {
headerGradient: ["#ff79c6", "#bd93f9", "#8be9fd"],
};
const PINK_PURPLE_COLORS: ThemeColors = {
primary: "#ff69b4",
secondary: "#b47ee5",
accent: "#e84393",
success: "#a3e048",
error: "#ff4757",
warning: "#ffa502",
info: "#cf6fef",
text: "#f5e6f0",
textDim: "#9a7aa0",
textMuted: "#4a3050",
background: "#1a0a20",
backgroundPanel: "#120818",
backgroundElement: "#2a1535",
border: "#3d1f4e",
borderFocus: "#ff69b4",
borderWarning: "#ffa502",
borderModal: "#b47ee5",
bgHighlight: "#2a1535",
bgCursor: "#e84393",
bgAdded: "#a3e048",
bgRemoved: "#ff4757",
diffAdded: "#a3e048",
diffRemoved: "#ff4757",
diffContext: "#9a7aa0",
diffHeader: "#f5e6f0",
diffHunk: "#cf6fef",
diffLineBgAdded: "#1a2d1a",
diffLineBgRemoved: "#2d1a1a",
diffLineText: "#f5e6f0",
roleUser: "#ff69b4",
roleAssistant: "#b47ee5",
roleSystem: "#ffa502",
roleTool: "#cf6fef",
modeIdle: "#b47ee5",
modeEditing: "#ff69b4",
modeThinking: "#e84393",
modeToolExecution: "#ffa502",
modePermission: "#cf6fef",
toolPending: "#9a7aa0",
toolRunning: "#ffa502",
toolSuccess: "#a3e048",
toolError: "#ff4757",
headerGradient: ["#ff69b4", "#e84393", "#b47ee5"],
};
export const THEMES: Record<string, Theme> = {
default: {
name: "default",
@@ -861,6 +917,11 @@ export const THEMES: Record<string, Theme> = {
displayName: "Cargdev Cyberpunk",
colors: CARGDEV_CYBERPUNK_COLORS,
},
"pink-purple": {
name: "pink-purple",
displayName: "Pink Purple",
colors: PINK_PURPLE_COLORS,
},
};
export const THEME_NAMES = Object.keys(THEMES);


@@ -13,6 +13,12 @@ export type SchemaSkipKey = (typeof SCHEMA_SKIP_KEYS)[number];
export const TOOL_NAMES = ["read", "glob", "grep"];
/**
* Tools that can modify files
* Tools that can modify files — used for tracking modified files in the TUI
*/
export const FILE_MODIFYING_TOOLS = ["write", "edit"] as const;
export const FILE_MODIFYING_TOOLS = [
"write",
"edit",
"multi_edit",
"apply_patch",
"bash",
] as const;


@@ -4,9 +4,21 @@
import type { ToolCall, ToolResult } from "@/types/tools";
/** Why the agent loop stopped */
export type AgentStopReason =
| "completed" // LLM returned a final response (no more tool calls)
| "max_iterations" // Hit the iteration limit
| "consecutive_errors" // Repeated tool failures
| "aborted" // User abort
| "error" // Unrecoverable error
| "plan_approval" // Stopped to request plan approval
;
export interface AgentResult {
success: boolean;
finalResponse: string;
iterations: number;
toolCalls: { call: ToolCall; result: ToolResult }[];
/** Why the agent stopped — helps the user understand what happened */
stopReason?: AgentStopReason;
}


@@ -7,4 +7,5 @@ import type { StreamCallbacks } from "@/types/streaming";
export interface StreamCallbacksWithState {
callbacks: StreamCallbacks;
hasReceivedContent: () => boolean;
hasReceivedUsage: () => boolean;
}


@@ -0,0 +1,36 @@
import type {
Message,
MessageContent,
ChatCompletionOptions,
ChatCompletionResponse,
} from "@/types/providers";
export interface FormattedMessage {
role: string;
content: MessageContent;
tool_call_id?: string;
tool_calls?: Message["tool_calls"];
}
export interface ChatRequestBody {
model: string;
messages: FormattedMessage[];
max_tokens: number;
temperature: number;
stream: boolean;
tools?: ChatCompletionOptions["tools"];
tool_choice?: string;
}
export interface ChatApiResponse {
error?: { message?: string };
choices?: Array<{
message?: { content?: string; tool_calls?: Message["tool_calls"] };
finish_reason?: ChatCompletionResponse["finishReason"];
}>;
usage?: {
prompt_tokens?: number;
completion_tokens?: number;
total_tokens?: number;
};
}


@@ -20,16 +20,6 @@ Before using ANY tools, think through:
- What files might be involved?
- What's my initial approach?
Output your thinking in a <thinking> block:
\`\`\`
<thinking>
Task: [what the user wants]
Need to find: [what information I need]
Likely files: [patterns to search for]
Approach: [my plan]
</thinking>
\`\`\`
## Step 2: EXPLORE - Gather Context
Use tools to understand the codebase:
- **glob** - Find relevant files by pattern


@@ -7,6 +7,8 @@ import got from "got";
import {
COPILOT_MAX_RETRIES,
COPILOT_UNLIMITED_MODEL,
COPILOT_STREAM_TIMEOUT,
COPILOT_CONNECTION_RETRY_DELAY,
} from "@constants/copilot";
import { refreshToken, buildHeaders } from "@providers/copilot/auth/token";
import {
@@ -16,12 +18,14 @@ import {
import {
sleep,
isRateLimitError,
isConnectionError,
getRetryDelay,
isQuotaExceededError,
} from "@providers/copilot/utils";
import type { CopilotToken } from "@/types/copilot";
import type {
Message,
MessageContent,
ChatCompletionOptions,
ChatCompletionResponse,
StreamChunk,
@@ -30,7 +34,7 @@ import { addDebugLog } from "@tui-solid/components/logs/debug-log-panel";
interface FormattedMessage {
role: string;
content: string;
content: MessageContent;
tool_call_id?: string;
tool_calls?: Message["tool_calls"];
}
@@ -39,7 +43,7 @@ const formatMessages = (messages: Message[]): FormattedMessage[] =>
messages.map((msg) => {
const formatted: FormattedMessage = {
role: msg.role,
content: msg.content,
content: msg.content, // Already string or ContentPart[] — pass through
};
if (msg.tool_call_id) {
@@ -61,6 +65,8 @@ interface ChatRequestBody {
stream: boolean;
tools?: ChatCompletionOptions["tools"];
tool_choice?: string;
/** Request usage data in stream responses (OpenAI-compatible) */
stream_options?: { include_usage: boolean };
}
// Default max tokens for requests without tools
@@ -95,6 +101,11 @@ const buildRequestBody = (
stream,
};
// Request usage data when streaming
if (stream) {
body.stream_options = { include_usage: true };
}
if (hasTools) {
body.tools = options.tools;
body.tool_choice = "auto";
@@ -255,6 +266,16 @@ const processStreamLine = (
}
}
// Capture usage data (OpenAI sends it in the final chunk before [DONE])
if (parsed.usage) {
const promptTokens = parsed.usage.prompt_tokens ?? 0;
const completionTokens = parsed.usage.completion_tokens ?? 0;
onChunk({
type: "usage",
usage: { promptTokens, completionTokens },
});
}
// Handle truncation: if finish_reason is "length", content was cut off
if (finishReason === "length") {
addDebugLog("api", "Stream truncated due to max_tokens limit");
@@ -270,23 +291,43 @@ const processStreamLine = (
return false;
};
const executeStream = (
const executeStream = async (
endpoint: string,
token: CopilotToken,
body: ChatRequestBody,
onChunk: (chunk: StreamChunk) => void,
): Promise<void> =>
new Promise((resolve, reject) => {
const stream = got.stream.post(endpoint, {
headers: buildHeaders(token),
json: body,
});
): Promise<void> => {
const response = await fetch(endpoint, {
method: "POST",
headers: {
...buildHeaders(token),
Accept: "text/event-stream",
},
body: JSON.stringify(body),
signal: AbortSignal.timeout(COPILOT_STREAM_TIMEOUT),
});
let buffer = "";
let doneReceived = false;
if (!response.ok) {
throw new Error(
`Copilot API error: ${response.status} ${response.statusText}`,
);
}
stream.on("data", (data: Buffer) => {
buffer += data.toString();
if (!response.body) {
throw new Error("No response body from Copilot stream");
}
const reader = response.body.getReader();
const decoder = new TextDecoder();
let buffer = "";
let doneReceived = false;
try {
while (true) {
const { done, value } = await reader.read();
if (done) break;
buffer += decoder.decode(value, { stream: true });
const lines = buffer.split("\n");
buffer = lines.pop() ?? "";
@@ -296,31 +337,24 @@ const executeStream = (
return;
}
}
});
}
} finally {
reader.releaseLock();
}
stream.on("error", (error: Error) => {
onChunk({ type: "error", error: error.message });
reject(error);
});
// Process remaining buffer
if (buffer.trim()) {
processStreamLine(buffer, onChunk);
}
stream.on("end", () => {
// Process any remaining data in buffer that didn't have trailing newline
if (buffer.trim()) {
processStreamLine(buffer, onChunk);
}
// Ensure done is sent even if stream ended without [DONE] message
if (!doneReceived) {
addDebugLog(
"api",
"Stream ended without [DONE] message, sending done chunk",
);
onChunk({ type: "done" });
}
resolve();
});
});
if (!doneReceived) {
addDebugLog(
"api",
"Stream ended without [DONE] message, sending done chunk",
);
onChunk({ type: "done" });
}
};
export const chatStream = async (
messages: Message[],
@@ -371,6 +405,16 @@ export const chatStream = async (
continue;
}
if (isConnectionError(error) && attempt < COPILOT_MAX_RETRIES - 1) {
const delay = COPILOT_CONNECTION_RETRY_DELAY * Math.pow(2, attempt);
addDebugLog(
"api",
`Connection error, retrying in ${delay}ms (attempt ${attempt + 1})`,
);
await sleep(delay);
continue;
}
if (isRateLimitError(error) && attempt < COPILOT_MAX_RETRIES - 1) {
const delay = getRetryDelay(error, attempt);
await sleep(delay);


@@ -2,11 +2,19 @@
* Copilot provider utility functions
*/
import { COPILOT_INITIAL_RETRY_DELAY } from "@constants/copilot";
import {
COPILOT_INITIAL_RETRY_DELAY,
CONNECTION_ERROR_PATTERNS,
} from "@constants/copilot";
export const sleep = (ms: number): Promise<void> =>
new Promise((resolve) => setTimeout(resolve, ms));
export const isConnectionError = (error: unknown): boolean => {
const message = error instanceof Error ? error.message : String(error);
return CONNECTION_ERROR_PATTERNS.some((pattern) => pattern.test(message));
};
export const isRateLimitError = (error: unknown): boolean => {
if (error && typeof error === "object" && "response" in error) {
const response = (error as { response?: { statusCode?: number } }).response;


@@ -16,6 +16,7 @@ import type {
ChatCompletionOptions,
ChatCompletionResponse,
ToolCall,
ContentPart,
} from "@/types/providers";
import type {
OllamaChatRequest,
@@ -25,17 +26,53 @@ import type {
OllamaMessage,
} from "@/types/ollama";
/**
* Extract text and images from multimodal content.
* Ollama uses a separate `images` array of base64 strings rather than
* inline content parts.
*/
const extractContentParts = (
content: string | ContentPart[],
): { text: string; images: string[] } => {
if (typeof content === "string") {
return { text: content, images: [] };
}
const textParts: string[] = [];
const images: string[] = [];
for (const part of content) {
if (part.type === "text") {
textParts.push(part.text);
} else if (part.type === "image_url") {
// Strip the data:image/xxx;base64, prefix if present
const url = part.image_url.url;
const base64Match = url.match(/^data:[^;]+;base64,(.+)$/);
images.push(base64Match ? base64Match[1] : url);
}
}
return { text: textParts.join("\n"), images };
};
/**
* Format messages for Ollama API
* Handles regular messages, assistant messages with tool_calls, and tool response messages
* Handles regular messages, assistant messages with tool_calls, and tool response messages.
* Multimodal content (images) is converted to Ollama's `images` array format.
*/
const formatMessages = (messages: Message[]): OllamaMessage[] =>
messages.map((msg) => {
const { text, images } = extractContentParts(msg.content);
const formatted: OllamaMessage = {
role: msg.role,
content: msg.content,
content: text,
};
if (images.length > 0) {
formatted.images = images;
}
// Include tool_calls for assistant messages that made tool calls
if (msg.tool_calls && msg.tool_calls.length > 0) {
formatted.tool_calls = msg.tool_calls.map((tc) => ({


@@ -2,8 +2,6 @@
* Ollama provider streaming
*/
import got from "got";
import { OLLAMA_ENDPOINTS, OLLAMA_TIMEOUTS } from "@constants/ollama";
import { getOllamaBaseUrl } from "@providers/ollama/state";
import { buildChatRequest, mapToolCall } from "@providers/ollama/core/chat";
@@ -37,6 +35,17 @@ const parseStreamLine = (
}
}
// Capture token usage from Ollama response (sent with done=true)
if (parsed.done && (parsed.prompt_eval_count || parsed.eval_count)) {
onChunk({
type: "usage",
usage: {
promptTokens: parsed.prompt_eval_count ?? 0,
completionTokens: parsed.eval_count ?? 0,
},
});
}
if (parsed.done) {
onChunk({ type: "done" });
}
@@ -45,22 +54,6 @@ const parseStreamLine = (
}
};
const processStreamData = (
data: Buffer,
buffer: string,
onChunk: (chunk: StreamChunk) => void,
): string => {
const combined = buffer + data.toString();
const lines = combined.split("\n");
const remaining = lines.pop() || "";
for (const line of lines) {
parseStreamLine(line, onChunk);
}
return remaining;
};
export const ollamaChatStream = async (
messages: Message[],
options: ChatCompletionOptions | undefined,
@@ -73,50 +66,65 @@ export const ollamaChatStream = async (
`Ollama stream request: ${messages.length} msgs, model=${body.model}`,
);
const stream = got.stream.post(`${baseUrl}${OLLAMA_ENDPOINTS.CHAT}`, {
json: body,
timeout: { request: OLLAMA_TIMEOUTS.CHAT },
const response = await fetch(`${baseUrl}${OLLAMA_ENDPOINTS.CHAT}`, {
method: "POST",
headers: { "Content-Type": "application/json" },
body: JSON.stringify(body),
signal: AbortSignal.timeout(OLLAMA_TIMEOUTS.CHAT),
});
if (!response.ok) {
throw new Error(
`Ollama API error: ${response.status} ${response.statusText}`,
);
}
if (!response.body) {
throw new Error("No response body from Ollama stream");
}
const reader = response.body.getReader();
const decoder = new TextDecoder();
let buffer = "";
let doneReceived = false;
stream.on("data", (data: Buffer) => {
buffer = processStreamData(data, buffer, (chunk) => {
if (chunk.type === "done") {
doneReceived = true;
}
onChunk(chunk);
});
});
try {
while (true) {
const { done, value } = await reader.read();
if (done) break;
stream.on("error", (error: Error) => {
onChunk({ type: "error", error: error.message });
});
buffer += decoder.decode(value, { stream: true });
const lines = buffer.split("\n");
buffer = lines.pop() ?? "";
return new Promise((resolve, reject) => {
stream.on("end", () => {
// Process any remaining data in buffer that didn't have trailing newline
if (buffer.trim()) {
parseStreamLine(buffer, (chunk) => {
for (const line of lines) {
parseStreamLine(line, (chunk) => {
if (chunk.type === "done") {
doneReceived = true;
}
onChunk(chunk);
});
}
}
} finally {
reader.releaseLock();
}
// Ensure done is sent even if stream ended without done message
if (!doneReceived) {
addDebugLog(
"api",
"Ollama stream ended without done, sending done chunk",
);
onChunk({ type: "done" });
// Process remaining buffer
if (buffer.trim()) {
parseStreamLine(buffer, (chunk) => {
if (chunk.type === "done") {
doneReceived = true;
}
resolve();
onChunk(chunk);
});
stream.on("error", reject);
});
}
if (!doneReceived) {
addDebugLog(
"api",
"Ollama stream ended without done, sending done chunk",
);
onChunk({ type: "done" });
}
};


@@ -195,6 +195,12 @@ const processStreamChunk = (
}
},
usage: () => {
if (chunk.usage) {
callbacks.onUsage?.(chunk.usage);
}
},
done: () => {
// Finalize all pending tool calls
for (const partial of accumulator.toolCalls.values()) {
@@ -657,6 +663,7 @@ export const runAgentLoopStream = async (
finalResponse: "Execution aborted by user",
iterations,
toolCalls: allToolCalls,
stopReason: "aborted",
};
}
@@ -726,13 +733,14 @@ export const runAgentLoopStream = async (
if (allFailed) {
consecutiveErrors++;
if (consecutiveErrors >= MAX_CONSECUTIVE_ERRORS) {
const errorMsg = `Stopping: ${consecutiveErrors} consecutive tool errors. Check model compatibility with tool calling.`;
const errorMsg = `Stopping after ${consecutiveErrors} consecutive tool errors. Check model compatibility with tool calling.`;
state.options.onError?.(errorMsg);
return {
success: false,
finalResponse: errorMsg,
iterations,
toolCalls: allToolCalls,
stopReason: "consecutive_errors",
};
}
}
@@ -751,12 +759,18 @@ export const runAgentLoopStream = async (
finalResponse: `Error: ${errorMessage}`,
iterations,
toolCalls: allToolCalls,
stopReason: "error",
};
}
}
if (iterations >= maxIterations) {
state.options.onWarning?.(`Reached max iterations (${maxIterations})`);
const hitMaxIterations = iterations >= maxIterations;
if (hitMaxIterations) {
const warnMsg = `Agent reached max iterations (${maxIterations}). ` +
`Completed ${allToolCalls.length} tool call(s) across ${iterations} iteration(s). ` +
`The task may be incomplete — you can send another message to continue.`;
state.options.onWarning?.(warnMsg);
}
return {
@@ -764,6 +778,7 @@ export const runAgentLoopStream = async (
finalResponse,
iterations,
toolCalls: allToolCalls,
stopReason: hitMaxIterations ? "max_iterations" : "completed",
};
};


@@ -4,6 +4,7 @@
import { saveSession as saveSessionSession } from "@services/core/session";
import { appStore } from "@tui-solid/context/app";
import { getMessageText } from "@/types/providers";
import { CHAT_MESSAGES, type CommandName } from "@constants/chat-service";
import { handleLogin, handleLogout, showWhoami } from "@services/chat-tui/auth";
import {
@@ -44,7 +45,7 @@ const saveSession: CommandHandler = async (_, callbacks) => {
const showContext: CommandHandler = (state, callbacks) => {
const tokenEstimate = state.messages.reduce(
(sum, msg) => sum + Math.ceil(msg.content.length / 4),
(sum, msg) => sum + Math.ceil(getMessageText(msg.content).length / 4),
0,
);
callbacks.onLog(


@@ -19,6 +19,8 @@ import {
buildCompletePrompt,
} from "@services/prompt-builder";
import { initSuggestionService } from "@services/command-suggestion-service";
import { initializeRegistry as initializeSkillRegistry } from "@services/skill-registry";
import { initializeKeybinds } from "@services/keybind-resolver";
import * as brainService from "@services/brain";
import { BRAIN_DISABLED } from "@constants/brain";
import { addContextFile } from "@services/chat-tui/files";
@@ -27,6 +29,8 @@ import type { ChatSession } from "@/types/common";
import type { ChatTUIOptions } from "@interfaces/ChatTUIOptions";
import type { ChatServiceState } from "@/types/chat-service";
import type { InteractionMode } from "@/types/tui";
import { getModelContextSize } from "@constants/copilot";
import { getDefaultModel } from "@providers/core/chat";
const createInitialState = async (
options: ChatTUIOptions,
@@ -223,6 +227,10 @@ export const initializeChatService = async (
const session = await initializeSession(state, options);
// Set context max tokens based on the resolved provider + model
const effectiveModel = state.model ?? getDefaultModel(state.provider);
appStore.setContextMaxTokens(getModelContextSize(effectiveModel).input);
if (state.messages.length === 0) {
state.messages.push({ role: "system", content: state.systemPrompt });
}
@@ -231,6 +239,8 @@ export const initializeChatService = async (
await Promise.all([
addInitialContextFiles(state, options.files),
initializePermissions(),
initializeSkillRegistry(),
initializeKeybinds(),
]);
initSuggestionService(process.cwd());


@@ -7,6 +7,7 @@ import {
LEARNING_CONFIDENCE_THRESHOLD,
MAX_LEARNINGS_DISPLAY,
} from "@constants/chat-service";
import { getMessageText } from "@/types/providers";
import {
detectLearnings,
saveLearning,
@@ -35,8 +36,8 @@ export const handleRememberCommand = async (
}
const candidates = detectLearnings(
lastUserMsg.content,
lastAssistantMsg.content,
getMessageText(lastUserMsg.content),
getMessageText(lastAssistantMsg.content),
);
if (candidates.length === 0) {


@@ -2,6 +2,7 @@
* Chat TUI message handling
*/
import { v4 as uuidv4 } from "uuid";
import { addMessage, saveSession } from "@services/core/session";
import {
createStreamingAgent,
@@ -56,6 +57,7 @@ import { PROVIDER_IDS } from "@constants/provider-quality";
import { appStore } from "@tui-solid/context/app";
import type { StreamCallbacks } from "@/types/streaming";
import type { TaskType } from "@/types/provider-quality";
import type { ContentPart, MessageContent } from "@/types/providers";
import type {
ChatServiceState,
ChatServiceCallbacks,
@@ -69,6 +71,12 @@ import {
executeDetectedCommand,
} from "@services/command-detection";
import { detectSkillCommand, executeSkill } from "@services/skill-service";
import {
buildSkillInjectionForPrompt,
getDetectedSkillsSummary,
} from "@services/skill-registry";
import { stripMarkdown } from "@/utils/markdown/strip";
import { createThinkingParser } from "@services/reasoning/thinking-parser";
import {
getActivePlans,
isApprovalMessage,
@@ -105,7 +113,9 @@ export const abortCurrentOperation = async (
appStore.setMode("idle");
addDebugLog(
"state",
rollback ? "Operation aborted with rollback" : "Operation aborted by user",
rollback
? "Operation aborted with rollback"
: "Operation aborted by user",
);
return true;
}
@@ -213,6 +223,48 @@ export const getExecutionState = (): {
};
};
/**
* Extract file path(s) from a tool call's arguments.
*
* Different tools store the path in different places:
* - write / edit / delete : `args.filePath` or `args.path`
* - multi_edit : `args.edits[].file_path`
* - apply_patch : `args.targetFile` (or parsed from patch header)
* - bash : no reliable path, skip
*/
const extractToolPaths = (
toolName: string,
args?: Record<string, unknown>,
): { primary?: string; all: string[] } => {
if (!args) return { all: [] };
// Standard single-file tools
const singlePath =
(args.filePath as string) ??
(args.file_path as string) ??
(args.path as string);
if (singlePath && toolName !== "multi_edit") {
return { primary: String(singlePath), all: [String(singlePath)] };
}
// multi_edit: array of edits with file_path
if (toolName === "multi_edit" && Array.isArray(args.edits)) {
const paths = (args.edits as Array<{ file_path?: string }>)
.map((e) => e.file_path)
.filter((p): p is string => Boolean(p));
const unique = [...new Set(paths)];
return { primary: unique[0], all: unique };
}
// apply_patch: targetFile override or embedded in patch content
if (toolName === "apply_patch" && args.targetFile) {
return { primary: String(args.targetFile), all: [String(args.targetFile)] };
}
return { all: [] };
};
const createToolCallHandler =
(
callbacks: ChatServiceCallbacks,
@@ -220,11 +272,13 @@ const createToolCallHandler =
) =>
(call: { id: string; name: string; arguments?: Record<string, unknown> }) => {
const args = call.arguments;
const isModifying = (FILE_MODIFYING_TOOLS as readonly string[]).includes(
call.name,
);
if (isModifying) {
const { primary, all } = extractToolPaths(call.name, args);
toolCallRef.current = { name: call.name, path: primary, paths: all };
} else {
toolCallRef.current = { name: call.name };
}
@@ -238,6 +292,28 @@ const createToolCallHandler =
});
};
/**
* Estimate additions/deletions from tool output text
*/
const estimateChanges = (
output: string,
): { additions: number; deletions: number } => {
let additions = 0;
let deletions = 0;
for (const line of output.split("\n")) {
if (line.startsWith("+") && !line.startsWith("+++")) additions++;
else if (line.startsWith("-") && !line.startsWith("---")) deletions++;
}
// Fallback estimate when no diff markers are found
if (additions === 0 && deletions === 0 && output.length > 0) {
additions = output.split("\n").length;
}
return { additions, deletions };
};
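The heuristic is easy to verify in isolation. This is a standalone copy of the counting logic above, including the fallback for output that carries no diff markers:

```typescript
// Standalone copy of the diff-stat heuristic above: count "+"/"-" lines,
// skipping the "+++"/"---" file headers of a unified diff.
const estimate = (output: string): { additions: number; deletions: number } => {
  let additions = 0;
  let deletions = 0;
  for (const line of output.split("\n")) {
    if (line.startsWith("+") && !line.startsWith("+++")) additions++;
    else if (line.startsWith("-") && !line.startsWith("---")) deletions++;
  }
  // No diff markers at all: treat every output line as an addition.
  if (additions === 0 && deletions === 0 && output.length > 0) {
    additions = output.split("\n").length;
  }
  return { additions, deletions };
};

const diff = ["--- a/x.ts", "+++ b/x.ts", "+added line", "-removed line"].join("\n");
console.log(estimate(diff));
// → { additions: 1, deletions: 1 }
```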
const createToolResultHandler =
(
callbacks: ChatServiceCallbacks,
@@ -252,8 +328,33 @@ const createToolResultHandler =
error?: string;
},
) => {
const ref = toolCallRef.current;
if (result.success && ref) {
const output = result.output ?? "";
const paths = ref.paths?.length ? ref.paths : ref.path ? [ref.path] : [];
if (paths.length > 0) {
const { additions, deletions } = estimateChanges(output);
// Distribute changes across paths (or assign all to the single path)
const perFile = paths.length > 1
? {
additions: Math.max(1, Math.ceil(additions / paths.length)),
deletions: Math.ceil(deletions / paths.length),
}
: { additions, deletions };
for (const filePath of paths) {
analyzeFileChange(filePath);
appStore.addModifiedFile({
filePath,
additions: perFile.additions,
deletions: perFile.deletions,
lastModified: Date.now(),
});
}
}
}
callbacks.onToolResult(
@@ -270,6 +371,34 @@ const createToolResultHandler =
*/
const createStreamCallbacks = (): StreamCallbacksWithState => {
let chunkCount = 0;
let currentSegmentHasContent = false;
let receivedUsage = false;
const thinkingParser = createThinkingParser();
const emitThinking = (thinking: string | null): void => {
if (!thinking) return;
appStore.addLog({ type: "thinking", content: thinking });
};
/**
* Finalize the current streaming segment (if it has content) so that
* tool logs appear below the pre-tool text and a new streaming segment
* can be started afterward for post-tool text (e.g. summary).
*/
const finalizeCurrentSegment = (): void => {
if (!currentSegmentHasContent) return;
// Flush thinking parser before finalizing the segment
const flushed = thinkingParser.flush();
if (flushed.visible) {
appStore.appendStreamContent(flushed.visible);
}
emitThinking(flushed.thinking);
appStore.completeStreaming();
currentSegmentHasContent = false;
addDebugLog("stream", "Finalized streaming segment before tool call");
};
const callbacks: StreamCallbacks = {
onContentChunk: (content: string) => {
@@ -278,11 +407,30 @@ const createStreamCallbacks = (): StreamCallbacksWithState => {
"stream",
`Chunk #${chunkCount}: "${content.substring(0, 30)}${content.length > 30 ? "..." : ""}"`,
);
// Feed through the thinking parser — only append visible content.
// <thinking>…</thinking> blocks are stripped and emitted separately.
const result = thinkingParser.feed(content);
if (result.visible) {
// If the previous streaming segment was finalized (e.g. before a tool call),
// start a new one so post-tool text appears after tool output logs.
if (!currentSegmentHasContent && !appStore.getState().streamingLog.isStreaming) {
appStore.startStreaming();
addDebugLog("stream", "Started new streaming segment for post-tool content");
}
appStore.appendStreamContent(result.visible);
currentSegmentHasContent = true;
}
emitThinking(result.thinking);
},
onToolCallStart: (toolCall) => {
addDebugLog("tool", `Tool start: ${toolCall.name} (${toolCall.id})`);
// Finalize accumulated streaming text so it stays above tool output
// and the post-tool summary will appear below.
finalizeCurrentSegment();
appStore.setCurrentToolCall({
id: toolCall.id,
name: toolCall.name,
@@ -308,7 +456,28 @@ const createStreamCallbacks = (): StreamCallbacksWithState => {
});
},
onUsage: (usage) => {
receivedUsage = true;
addDebugLog(
"api",
`Token usage: prompt=${usage.promptTokens}, completion=${usage.completionTokens}`,
);
appStore.addTokens(usage.promptTokens, usage.completionTokens);
},
onComplete: () => {
// Flush any remaining buffered content from the thinking parser
const flushed = thinkingParser.flush();
if (flushed.visible) {
// Ensure a streaming log exists if we're flushing post-tool content
if (!currentSegmentHasContent && !appStore.getState().streamingLog.isStreaming) {
appStore.startStreaming();
}
appStore.appendStreamContent(flushed.visible);
currentSegmentHasContent = true;
}
emitThinking(flushed.thinking);
// Note: Don't call completeStreaming() here!
// The agent loop may have multiple iterations (tool calls + final response)
// Streaming will be completed manually after the entire agent finishes
@@ -320,6 +489,7 @@ const createStreamCallbacks = (): StreamCallbacksWithState => {
onError: (error: string) => {
addDebugLog("error", `Stream error: ${error}`);
thinkingParser.reset();
appStore.cancelStreaming();
appStore.addLog({
type: "error",
@@ -331,6 +501,7 @@ const createStreamCallbacks = (): StreamCallbacksWithState => {
return {
callbacks,
hasReceivedContent: () => chunkCount > 0,
hasReceivedUsage: () => receivedUsage,
};
};
@@ -426,7 +597,10 @@ export const handleMessage = async (
if (isApprovalMessage(message)) {
approvePlan(plan.id, message);
startPlanExecution(plan.id);
callbacks.onLog(
"system",
`Plan "${plan.title}" approved. Proceeding with implementation.`,
);
addDebugLog("state", `Plan ${plan.id} approved by user`);
// Continue with agent execution - the agent will see the approved status
@@ -438,7 +612,10 @@ export const handleMessage = async (
// Fall through to normal agent processing
} else if (isRejectionMessage(message)) {
rejectPlan(plan.id, message);
callbacks.onLog(
"system",
`Plan "${plan.title}" rejected. Please provide feedback or a new approach.`,
);
addDebugLog("state", `Plan ${plan.id} rejected by user`);
// Add rejection to messages so agent can respond
@@ -449,7 +626,10 @@ export const handleMessage = async (
// Fall through to normal agent processing to get revised plan
} else {
// Neither approval nor rejection - treat as feedback/modification request
callbacks.onLog(
"system",
`Plan "${plan.title}" awaiting approval. Reply 'yes' to approve or 'no' to reject.`,
);
// Show the plan again with the feedback
const planDisplay = formatPlanForDisplay(plan);
@@ -611,6 +791,90 @@ export const handleMessage = async (
const { enrichedMessage, issues } =
await enrichMessageWithIssues(processedMessage);
// Inline @mention subagent invocation (e.g. "Find all API endpoints @explore")
try {
const mentionRegex = /@([a-zA-Z_]+)/g;
const mentionMap: Record<string, string> = {
explore: "explore",
general: "implement",
plan: "plan",
};
const mentions: string[] = [];
let m: RegExpExecArray | null;
while ((m = mentionRegex.exec(message))) {
const key = m[1]?.toLowerCase();
if (key && mentionMap[key]) mentions.push(key);
}
if (mentions.length > 0) {
// Clean message to use as task prompt (remove mentions)
const cleaned = enrichedMessage.replace(/@[a-zA-Z_]+/g, "").trim();
// Lazy import task agent helpers (avoid circular deps)
const { executeTaskAgent, getBackgroundAgentStatus } =
await import("@/tools/task-agent/execute");
const { v4: uuidv4 } = await import("uuid");
// Minimal tool context for invoking the task agent
const toolCtx = {
sessionId: uuidv4(),
messageId: uuidv4(),
workingDir: process.cwd(),
abort: new AbortController(),
autoApprove: true,
onMetadata: () => {},
} as any;
for (const key of mentions) {
const agentType = mentionMap[key];
try {
const params = {
agent_type: agentType,
task: cleaned || message,
run_in_background: true,
} as any;
const startResult = await executeTaskAgent(params, toolCtx);
// Show started message in UI
appStore.addLog({
type: "system",
content: `Started subagent @${key} (ID: ${startResult.metadata?.agentId ?? "?"}).`,
});
// Poll briefly for completion and attach result if ready
const agentId = startResult.metadata?.agentId as string | undefined;
if (agentId) {
const maxAttempts = 10;
const interval = 300;
for (let i = 0; i < maxAttempts; i++) {
// eslint-disable-next-line no-await-in-loop
const status = await getBackgroundAgentStatus(agentId);
if (status && status.success && status.output) {
// Attach assistant result to conversation
appStore.addLog({ type: "assistant", content: status.output });
addMessage("assistant", status.output);
await saveSession();
break;
}
// eslint-disable-next-line no-await-in-loop
await new Promise((res) => setTimeout(res, interval));
}
}
} catch (err) {
appStore.addLog({
type: "error",
content: `Subagent @${key} failed to start: ${String(err)}`,
});
}
}
}
} catch (err) {
// Non-fatal - don't block main flow on subagent helpers
addDebugLog("error", `Subagent invocation error: ${String(err)}`);
}
if (issues.length > 0) {
callbacks.onLog(
"system",
@@ -623,7 +887,35 @@ export const handleMessage = async (
const userMessage = buildContextMessage(state, enrichedMessage);
// Build multimodal content if there are pasted images
const { pastedImages } = appStore.getState();
let messageContent: MessageContent = userMessage;
if (pastedImages.length > 0) {
const parts: ContentPart[] = [
{ type: "text", text: userMessage },
];
for (const img of pastedImages) {
parts.push({
type: "image_url",
image_url: {
url: `data:${img.mediaType};base64,${img.data}`,
detail: "auto",
},
});
}
messageContent = parts;
addDebugLog(
"info",
`[images] Attached ${pastedImages.length} image(s) to user message`,
);
// Images are consumed; clear from store
appStore.clearPastedImages();
}
state.messages.push({ role: "user", content: messageContent });
clearSuggestions();
@@ -707,6 +999,37 @@ export const handleMessage = async (
? state.model
: getDefaultModel(effectiveProvider);
// Auto-detect and inject relevant skills based on the user prompt.
// Skills are activated transparently and their instructions are injected
// into the conversation as a system message so the agent benefits from
// specialized knowledge (e.g., TypeScript, React, Security, etc.).
try {
const { injection, detected } =
await buildSkillInjectionForPrompt(message);
if (detected.length > 0 && injection) {
const summary = getDetectedSkillsSummary(detected);
addDebugLog("info", `[skills] ${summary}`);
callbacks.onLog("system", summary);
// Inject skill context as a system message right before the user message
// so the agent has specialized knowledge for this prompt.
const insertIdx = Math.max(0, state.messages.length - 1);
state.messages.splice(insertIdx, 0, {
role: "system" as const,
content: injection,
});
addDebugLog(
"info",
`[skills] Injected ${detected.length} skill(s) as system context`,
);
}
} catch (error) {
addDebugLog(
"error",
`Skill detection failed: ${error instanceof Error ? error.message : String(error)}`,
);
}
// Start streaming UI
addDebugLog(
"state",
@@ -731,8 +1054,10 @@ export const handleMessage = async (
autoApprove: state.autoApprove,
chatMode: isReadOnlyMode,
onText: (text: string) => {
// Note: Do NOT call appStore.appendStreamContent() here.
// Streaming content is already handled by onContentChunk in streamState.callbacks.
// Calling appendStreamContent from both onText and onContentChunk causes double content.
addDebugLog("info", `onText callback: "${text.substring(0, 50)}..."`);
},
onToolCall: createToolCallHandler(callbacks, toolCallRef),
onToolResult: createToolResultHandler(callbacks, toolCallRef),
@@ -758,7 +1083,10 @@ export const handleMessage = async (
onStepModeDisabled: () => {
addDebugLog("state", "Step mode disabled");
},
onWaitingForStep: (
toolName: string,
_toolArgs: Record<string, unknown>,
) => {
appStore.addLog({
type: "system",
content: `⏳ Step mode: Ready to execute ${toolName}. Press Enter to continue.`,
@@ -766,14 +1094,20 @@ export const handleMessage = async (
addDebugLog("state", `Waiting for step: ${toolName}`);
},
onAbort: (rollbackCount: number) => {
addDebugLog(
"state",
`Abort initiated, ${rollbackCount} actions to rollback`,
);
},
onRollback: (action: { type: string; description: string }) => {
appStore.addLog({
type: "system",
content: `↩ Rolling back: ${action.description}`,
});
addDebugLog(
"state",
`Rollback: ${action.type} - ${action.description}`,
);
},
onRollbackComplete: (actionsRolledBack: number) => {
appStore.addLog({
@@ -788,20 +1122,33 @@ export const handleMessage = async (
// Store agent reference for abort capability
currentAgent = agent;
/**
* Process the result of an agent run: finalize streaming, show stop reason,
* persist to session.
*/
const processAgentResult = async (
result: Awaited<ReturnType<typeof agent.run>>,
userMessage: string,
): Promise<void> => {
// Stop thinking timer
appStore.stopThinking();
// If the stream didn't deliver API-reported usage data, estimate tokens
// from message lengths so the context counter never stays stuck at 0.
if (!streamState.hasReceivedUsage()) {
const inputEstimate = Math.ceil(userMessage.length / 4);
const outputEstimate = Math.ceil((result.finalResponse?.length ?? 0) / 4);
// Add tool I/O overhead: each tool call/result adds tokens
const toolOverhead = result.toolCalls.length * 150; // ~150 tokens per tool exchange
if (inputEstimate > 0 || outputEstimate > 0) {
appStore.addTokens(inputEstimate + toolOverhead, outputEstimate + toolOverhead);
addDebugLog(
"info",
`Token estimate (no API usage): ~${inputEstimate + toolOverhead} in, ~${outputEstimate + toolOverhead} out`,
);
}
}
if (result.finalResponse) {
addDebugLog(
"info",
@@ -812,7 +1159,7 @@ export const handleMessage = async (
// Run audit if cascade mode with Ollama
if (shouldAudit && effectiveProvider === "ollama") {
const auditResult = await runAudit(
userMessage,
result.finalResponse,
callbacks,
);
@@ -844,30 +1191,36 @@ export const handleMessage = async (
content: finalResponse,
});
// Single source of truth: decide based on whether the provider
// actually streamed visible content, not whether we asked for streaming.
const streamedContent = streamState.hasReceivedContent();
if (streamedContent) {
// Streaming delivered content — finalize the last streaming segment.
addDebugLog("info", "Completing streaming with received content");
if (appStore.getState().streamingLog.isStreaming) {
appStore.completeStreaming();
}
} else if (finalResponse) {
addDebugLog(
"info",
"No streaming content received, adding fallback log",
);
// Streaming didn't receive content, manually add the response
if (appStore.getState().streamingLog.isStreaming) {
appStore.cancelStreaming();
}
appStore.addLog({
type: "assistant",
content: stripMarkdown(finalResponse),
});
}
// Persist to session
addMessage("user", userMessage);
addMessage("assistant", finalResponse);
await saveSession();
await processLearningsFromExchange(userMessage, finalResponse, callbacks);
const suggestions = getPendingSuggestions();
if (suggestions.length > 0) {
@@ -875,6 +1228,130 @@ export const handleMessage = async (
callbacks.onLog("system", formatted);
}
}
// Show agent stop reason to the user so they know why it ended
const stopReason = result.stopReason ?? "completed";
const toolCount = result.toolCalls.length;
const iters = result.iterations;
if (stopReason === "max_iterations") {
appStore.addLog({
type: "system",
content: `Agent stopped: reached max iterations (${iters}). ` +
`${toolCount} tool call(s) completed. ` +
`Send another message to continue where it left off.`,
});
} else if (stopReason === "consecutive_errors") {
appStore.addLog({
type: "error",
content: `Agent stopped: repeated tool failures. ${toolCount} tool call(s) attempted across ${iters} iteration(s).`,
});
} else if (stopReason === "aborted") {
appStore.addLog({
type: "system",
content: `Agent aborted by user after ${iters} iteration(s) and ${toolCount} tool call(s).`,
});
} else if (stopReason === "error") {
appStore.addLog({
type: "error",
content: `Agent encountered an error after ${iters} iteration(s) and ${toolCount} tool call(s).`,
});
} else if (stopReason === "completed" && toolCount > 0) {
// Only show a summary for non-trivial agent runs (with tool calls)
appStore.addLog({
type: "system",
content: `Agent completed: ${toolCount} tool call(s) in ${iters} iteration(s).`,
});
}
};
try {
addDebugLog(
"api",
`Agent.run() started with ${state.messages.length} messages`,
);
let result = await agent.run(state.messages);
addDebugLog(
"api",
`Agent.run() completed: success=${result.success}, iterations=${result.iterations}, stopReason=${result.stopReason}`,
);
await processAgentResult(result, message);
// After agent finishes, check for pending plans and auto-continue on approval
let continueAfterPlan = true;
while (continueAfterPlan) {
continueAfterPlan = false;
const newPendingPlans = getActivePlans().filter(
(p) => p.status === "pending",
);
if (newPendingPlans.length === 0) break;
const plan = newPendingPlans[0];
const planContent = formatPlanForDisplay(plan);
addDebugLog("state", `Showing plan approval modal: ${plan.id}`);
const approved = await new Promise<boolean>((resolve) => {
appStore.setMode("plan_approval");
appStore.setPlanApprovalPrompt({
id: uuidv4(),
planTitle: plan.title,
planSummary: plan.summary,
planContent,
resolve: (response) => {
appStore.setPlanApprovalPrompt(null);
if (response.approved) {
approvePlan(plan.id, response.editMode);
startPlanExecution(plan.id);
addDebugLog("state", `Plan ${plan.id} approved via modal`);
appStore.addLog({
type: "system",
content: `Plan "${plan.title}" approved. Continuing implementation...`,
});
state.messages.push({
role: "user",
content: `The user approved the plan "${plan.title}". ` +
`Proceed with the full implementation — complete ALL steps in the plan. ` +
`Do not stop until every step is done or you need further user input.`,
});
} else {
rejectPlan(plan.id, response.feedback ?? "User cancelled");
addDebugLog("state", `Plan ${plan.id} rejected via modal`);
appStore.addLog({
type: "system",
content: `Plan "${plan.title}" cancelled.`,
});
}
resolve(response.approved);
},
});
});
// If the plan was approved, re-run the agent loop so it continues working
if (approved) {
addDebugLog("api", "Re-running agent after plan approval");
appStore.setMode("thinking");
appStore.startThinking();
appStore.startStreaming();
result = await agent.run(state.messages);
addDebugLog(
"api",
`Agent.run() (post-plan) completed: success=${result.success}, iterations=${result.iterations}, stopReason=${result.stopReason}`,
);
await processAgentResult(result, message);
// Loop again to check for new pending plans from this agent run
continueAfterPlan = true;
} else {
appStore.setMode("idle");
}
}
} catch (error) {
appStore.cancelStreaming();
appStore.stopThinking();


@@ -3,6 +3,7 @@
*/
import { MODEL_MESSAGES } from "@constants/chat-service";
import { getModelContextSize } from "@constants/copilot";
import { getConfig } from "@services/core/config";
import { getProvider } from "@providers/core/registry";
import {
@@ -35,6 +36,19 @@ export const loadModels = async (
}
};
/**
* Resolve the context window size for a given provider + model.
* Uses the Copilot context-size table when available, otherwise
* falls back to DEFAULT_CONTEXT_SIZE.
*/
const resolveContextMaxTokens = (
provider: ProviderName,
modelId: string | undefined,
): number => {
const effectiveModel = modelId ?? getDefaultModel(provider);
return getModelContextSize(effectiveModel).input;
};
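A toy version of this lookup, with illustrative model names and sizes rather than the real Copilot table, shows the default-model substitution and the fallback behavior:

```typescript
// Illustrative context-size table with a default fallback, mirroring the
// shape of resolveContextMaxTokens above. Model names and sizes are
// assumptions for the sketch, not the real values.
const CONTEXT_SIZES: Record<string, { input: number }> = {
  "gpt-4o": { input: 128_000 },
  "o3-mini": { input: 200_000 },
};
const DEFAULT_CONTEXT_SIZE = { input: 8_192 };

const contextSize = (
  modelId: string | undefined,
  defaultModel: string,
): number => {
  // No explicit model selected: fall back to the provider's default model.
  const effective = modelId ?? defaultModel;
  // Unknown models get the default context size.
  return (CONTEXT_SIZES[effective] ?? DEFAULT_CONTEXT_SIZE).input;
};

console.log(contextSize(undefined, "gpt-4o")); // → 128000
console.log(contextSize("unknown-model", "gpt-4o")); // → 8192
```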
export const handleModelSelect = async (
state: ChatServiceState,
model: string,
@@ -49,6 +63,12 @@ export const handleModelSelect = async (
}
appStore.setModel(model);
// Update context max tokens for the newly selected model
const effectiveModel = model === "auto" ? undefined : model;
appStore.setContextMaxTokens(
resolveContextMaxTokens(state.provider, effectiveModel),
);
const config = await getConfig();
config.set("model", model === "auto" ? undefined : model);
await config.save();


@@ -11,6 +11,7 @@ import { v4 as uuidv4 } from "uuid";
import type { PlanApprovalPromptResponse } from "@/types/tui";
import type { ImplementationPlan } from "@/types/plan-mode";
import { appStore } from "@tui-solid/context/app";
import { formatPlanForDisplay } from "@services/plan-mode/plan-service";
export interface PlanApprovalHandlerRequest {
plan: ImplementationPlan;
@@ -43,6 +44,7 @@ export const createPlanApprovalHandler = (): PlanApprovalHandler => {
id: uuidv4(),
planTitle: request.plan.title,
planSummary: request.plan.summary,
planContent: formatPlanForDisplay(request.plan),
planFilePath: request.planFilePath,
resolve: (response) => {
appStore.setPlanApprovalPrompt(null);


@@ -1,289 +0,0 @@
/**
* Streaming Chat TUI Integration
*
* Connects the streaming agent loop to the TUI store for real-time updates.
*/
import type { Message } from "@/types/providers";
import type { AgentOptions } from "@interfaces/AgentOptions";
import type { AgentResult } from "@interfaces/AgentResult";
import type { StreamingChatOptions } from "@interfaces/StreamingChatOptions";
import type {
StreamCallbacks,
PartialToolCall,
ModelSwitchInfo,
} from "@/types/streaming";
import type { ToolCall, ToolResult } from "@/types/tools";
import { createStreamingAgent } from "@services/agent-stream";
import { createThinkingParser } from "@services/reasoning/thinking-parser";
import { appStore } from "@tui-solid/context/app";
// Re-export for convenience
export type { StreamingChatOptions } from "@interfaces/StreamingChatOptions";
// =============================================================================
// TUI Streaming Callbacks
// =============================================================================
const createTUIStreamCallbacks = (
options?: Partial<StreamingChatOptions>,
): { callbacks: StreamCallbacks; resetParser: () => void } => {
const parser = createThinkingParser();
const emitThinking = (thinking: string | null): void => {
if (!thinking) return;
appStore.addLog({
type: "thinking",
content: thinking,
});
};
const callbacks: StreamCallbacks = {
onContentChunk: (content: string) => {
const result = parser.feed(content);
if (result.visible) {
appStore.appendStreamContent(result.visible);
}
emitThinking(result.thinking);
},
onToolCallStart: (toolCall: PartialToolCall) => {
appStore.setCurrentToolCall({
id: toolCall.id,
name: toolCall.name,
description: `Calling ${toolCall.name}...`,
status: "pending",
});
},
onToolCallComplete: (toolCall: ToolCall) => {
appStore.updateToolCall({
id: toolCall.id,
name: toolCall.name,
status: "running",
});
},
onModelSwitch: (info: ModelSwitchInfo) => {
appStore.addLog({
type: "system",
content: `Model switched: ${info.from} → ${info.to} (${info.reason})`,
});
options?.onModelSwitch?.(info);
},
onComplete: () => {
const flushed = parser.flush();
if (flushed.visible) {
appStore.appendStreamContent(flushed.visible);
}
emitThinking(flushed.thinking);
appStore.completeStreaming();
},
onError: (error: string) => {
parser.reset();
appStore.cancelStreaming();
appStore.addLog({
type: "error",
content: error,
});
},
};
return { callbacks, resetParser: () => parser.reset() };
};
// =============================================================================
// Agent Options with TUI Integration
// =============================================================================
const createAgentOptionsWithTUI = (
options: StreamingChatOptions,
): AgentOptions => ({
...options,
onText: (text: string) => {
// Text is handled by streaming callbacks, but we may want to notify
options.onText?.(text);
},
onToolCall: (toolCall: ToolCall) => {
appStore.setMode("tool_execution");
appStore.setCurrentToolCall({
id: toolCall.id,
name: toolCall.name,
description: `Executing ${toolCall.name}...`,
status: "running",
});
appStore.addLog({
type: "tool",
content: `${toolCall.name}`,
metadata: {
toolName: toolCall.name,
toolStatus: "running",
toolDescription: `Executing ${toolCall.name}`,
toolArgs: toolCall.arguments,
},
});
options.onToolCall?.(toolCall);
},
onToolResult: (toolCallId: string, result: ToolResult) => {
appStore.updateToolCall({
status: result.success ? "success" : "error",
result: result.output,
error: result.error,
});
appStore.addLog({
type: "tool",
content: result.output || result.error || "",
metadata: {
toolName: appStore.getState().currentToolCall?.name,
toolStatus: result.success ? "success" : "error",
toolDescription: result.title,
},
});
appStore.setCurrentToolCall(null);
appStore.setMode("thinking");
options.onToolResult?.(toolCallId, result);
},
onError: (error: string) => {
appStore.setMode("idle");
appStore.addLog({
type: "error",
content: error,
});
options.onError?.(error);
},
onWarning: (warning: string) => {
appStore.addLog({
type: "system",
content: warning,
});
options.onWarning?.(warning);
},
});
// =============================================================================
// Main API
// =============================================================================
/**
* Run a streaming chat session with TUI integration
*/
export const runStreamingChat = async (
messages: Message[],
options: StreamingChatOptions,
): Promise<AgentResult> => {
// Set up TUI state
appStore.setMode("thinking");
appStore.startThinking();
appStore.startStreaming();
// Create callbacks that update the TUI
const { callbacks: streamCallbacks, resetParser } =
createTUIStreamCallbacks(options);
const agentOptions = createAgentOptionsWithTUI(options);
// Reset parser for fresh session
resetParser();
// Create and run the streaming agent
const agent = createStreamingAgent(
process.cwd(),
agentOptions,
streamCallbacks,
);
try {
const result = await agent.run(messages);
appStore.stopThinking();
appStore.setMode("idle");
return result;
} catch (error) {
appStore.cancelStreaming();
appStore.stopThinking();
appStore.setMode("idle");
const errorMessage = error instanceof Error ? error.message : String(error);
appStore.addLog({
type: "error",
content: errorMessage,
});
return {
success: false,
finalResponse: errorMessage,
iterations: 0,
toolCalls: [],
};
}
};
/**
* Create a streaming chat instance with stop capability
*/
export const createStreamingChat = (
options: StreamingChatOptions,
): {
run: (messages: Message[]) => Promise<AgentResult>;
stop: () => void;
} => {
const { callbacks: streamCallbacks, resetParser } =
createTUIStreamCallbacks(options);
const agentOptions = createAgentOptionsWithTUI(options);
const agent = createStreamingAgent(
process.cwd(),
agentOptions,
streamCallbacks,
);
return {
run: async (messages: Message[]) => {
resetParser();
appStore.setMode("thinking");
appStore.startThinking();
appStore.startStreaming();
try {
const result = await agent.run(messages);
appStore.stopThinking();
appStore.setMode("idle");
return result;
} catch (error) {
appStore.cancelStreaming();
appStore.stopThinking();
appStore.setMode("idle");
const errorMessage =
error instanceof Error ? error.message : String(error);
return {
success: false,
finalResponse: errorMessage,
iterations: 0,
toolCalls: [],
};
}
},
stop: () => {
agent.stop();
appStore.cancelStreaming();
appStore.stopThinking();
appStore.setMode("idle");
},
};
};


@@ -204,47 +204,86 @@ export const matchesPathPattern = (
};
/**
* Split a shell command into individual sub-commands on chaining operators.
* Handles &&, ||, ;, and | (pipe).
* This prevents a pattern like Bash(cd:*) from silently approving
* "cd /safe && rm -rf /dangerous".
*/
const splitChainedCommands = (command: string): string[] => {
// Split on shell chaining operators (&&, ||, ;, and a single |),
// but never inside single- or double-quoted strings.
const parts: string[] = [];
let current = "";
let inSingle = false;
let inDouble = false;
for (let i = 0; i < command.length; i++) {
const ch = command[i];
const next = command[i + 1];
// Track quoting
if (ch === "'" && !inDouble) { inSingle = !inSingle; current += ch; continue; }
if (ch === '"' && !inSingle) { inDouble = !inDouble; current += ch; continue; }
if (inSingle || inDouble) { current += ch; continue; }
// Check for operators
if (ch === "&" && next === "&") { parts.push(current); current = ""; i++; continue; }
if (ch === "|" && next === "|") { parts.push(current); current = ""; i++; continue; }
if (ch === ";") { parts.push(current); current = ""; continue; }
if (ch === "|") { parts.push(current); current = ""; continue; }
current += ch;
}
if (current.trim()) parts.push(current);
return parts.map((p) => p.trim()).filter(Boolean);
};
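A standalone copy of the splitter makes the security property easy to check: `&&` between commands separates sub-commands for individual permission checks, while the same operator inside quotes is preserved. This sketch reproduces the logic above:

```typescript
// Minimal quote-aware splitter matching the behavior described above:
// "&&", "||", ";" and a single "|" separate sub-commands; operators
// inside single or double quotes are left alone.
const splitChained = (command: string): string[] => {
  const parts: string[] = [];
  let current = "";
  let inSingle = false;
  let inDouble = false;
  for (let i = 0; i < command.length; i++) {
    const ch = command[i];
    const next = command[i + 1];
    // Track quoting state so quoted operators are not treated as separators.
    if (ch === "'" && !inDouble) { inSingle = !inSingle; current += ch; continue; }
    if (ch === '"' && !inSingle) { inDouble = !inDouble; current += ch; continue; }
    if (inSingle || inDouble) { current += ch; continue; }
    // Two-character operators first, then single-character ones.
    if (ch === "&" && next === "&") { parts.push(current); current = ""; i++; continue; }
    if (ch === "|" && next === "|") { parts.push(current); current = ""; i++; continue; }
    if (ch === ";" || ch === "|") { parts.push(current); current = ""; continue; }
    current += ch;
  }
  if (current.trim()) parts.push(current);
  return parts.map((p) => p.trim()).filter(Boolean);
};

console.log(splitChained("cd /safe && rm -rf /dangerous"));
// → ["cd /safe", "rm -rf /dangerous"]
console.log(splitChained('echo "a && b" | grep a'));
// → ['echo "a && b"', "grep a"]
```

Because every sub-command must be allowed individually, a pattern like `Bash(cd:*)` no longer blanket-approves whatever is chained after the `cd`.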
/**
* Check if a Bash command is allowed.
* For chained commands (&&, ||, ;, |), EVERY sub-command must be allowed.
*/
export const isBashAllowed = (command: string): boolean => {
const subCommands = splitChainedCommands(command);
const allPatterns = [
...sessionAllowPatterns,
...localAllowPatterns,
...globalAllowPatterns,
];
// Every sub-command must match at least one allow pattern
return subCommands.every((subCmd) =>
allPatterns.some((patternStr) => {
const pattern = parsePattern(patternStr);
return (
pattern &&
pattern.tool === "Bash" &&
matchesBashPattern(subCmd, pattern)
);
}),
);
};
/**
* Check if a Bash command is denied.
* For chained commands, if ANY sub-command is denied, the whole command is denied.
*/
export const isBashDenied = (command: string): boolean => {
const subCommands = splitChainedCommands(command);
const denyPatterns = [...localDenyPatterns, ...globalDenyPatterns];
// If any sub-command matches a deny pattern, deny the whole command
return subCommands.some((subCmd) =>
denyPatterns.some((patternStr) => {
const pattern = parsePattern(patternStr);
return (
pattern &&
pattern.tool === "Bash" &&
matchesBashPattern(subCmd, pattern)
);
}),
);
};
/**
@@ -273,9 +312,9 @@ export const isFileOpAllowed = (
};
/**
* Generate a pattern for the given command
* Generate a pattern for a single (non-chained) command
*/
const generateSingleBashPattern = (command: string): string => {
const parts = command.trim().split(/\s+/);
if (parts.length === 0) return `Bash(${command}:*)`;
@@ -290,6 +329,33 @@ export const generateBashPattern = (command: string): string => {
return `Bash(${firstWord}:*)`;
};
/**
* Generate patterns for the given command.
* For chained commands (&&, ||, ;, |), returns one pattern per sub-command.
* This prevents "Bash(cd:*)" from blanket-approving everything chained after cd.
*/
export const generateBashPattern = (command: string): string => {
const subCommands = splitChainedCommands(command);
if (subCommands.length <= 1) {
return generateSingleBashPattern(command);
}
// For chained commands, return all unique patterns joined so the user can see them
const patterns = [
...new Set(subCommands.map(generateSingleBashPattern)),
];
return patterns.join(", ");
};
/**
* Generate individual patterns for a command (used for storing)
*/
export const generateBashPatterns = (command: string): string[] => {
const subCommands = splitChainedCommands(command);
return [...new Set(subCommands.map(generateSingleBashPattern))];
};
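The per-sub-command generation behaves roughly like this sketch. It is simplified to only the bare first-word fallback; the real `generateSingleBashPattern` has additional cases before reaching that fallback:

```typescript
// Simplified stand-in: always "Bash(<first-word>:*)".
const patternFor = (cmd: string): string =>
  `Bash(${cmd.trim().split(/\s+/)[0]}:*)`;

// One pattern per sub-command, deduplicated.
const patternsFor = (subCommands: string[]): string[] =>
  [...new Set(subCommands.map(patternFor))];

console.log(patternsFor(["cd /tmp", "npm test", "cd /tmp"]));
// [ 'Bash(cd:*)', 'Bash(npm:*)' ]
```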
/**
* Add a pattern to session allow list
*/
@@ -385,21 +451,23 @@ export const clearSessionPatterns = (): void => {
};
/**
* Handle permission scope — stores one or more patterns
*/
const handlePermissionScope = async (
scope: string,
patterns: string[],
): Promise<void> => {
for (const pattern of patterns) {
const scopeHandlers: Record<string, () => Promise<void> | void> = {
session: () => addSessionPattern(pattern),
local: () => addLocalPattern(pattern),
global: () => addGlobalPattern(pattern),
};
const handler = scopeHandlers[scope];
if (handler) {
await handler();
}
}
};
@@ -419,6 +487,7 @@ export const promptBashPermission = async (
}
const suggestedPattern = generateBashPattern(command);
const patterns = generateBashPatterns(command);
// Use custom handler if set (TUI mode)
if (permissionHandler) {
@@ -430,7 +499,7 @@ export const promptBashPermission = async (
});
if (response.allowed && response.scope) {
await handlePermissionScope(response.scope, patterns);
}
return {
@@ -468,55 +537,61 @@ export const promptBashPermission = async (
process.stdin.removeListener("data", handleInput);
process.stdin.setRawMode?.(false);
const addAllPatterns = async (
addFn: (p: string) => void | Promise<void>,
): Promise<void> => {
for (const p of patterns) await addFn(p);
};
const responseMap: Record<string, () => Promise<void>> = {
y: async () => resolve({ allowed: true }),
yes: async () => resolve({ allowed: true }),
s: async () => {
await addAllPatterns(addSessionPattern);
console.log(
chalk.blue(`\n✓ Added session patterns: ${suggestedPattern}`),
);
resolve({ allowed: true, remember: "session" });
},
session: async () => {
await addAllPatterns(addSessionPattern);
console.log(
chalk.blue(`\n✓ Added session patterns: ${suggestedPattern}`),
);
resolve({ allowed: true, remember: "session" });
},
l: async () => {
await addAllPatterns(addLocalPattern);
console.log(
chalk.cyan(`\n✓ Added project patterns: ${suggestedPattern}`),
);
resolve({ allowed: true, remember: "local" });
},
local: async () => {
await addAllPatterns(addLocalPattern);
console.log(
chalk.cyan(`\n✓ Added project patterns: ${suggestedPattern}`),
);
resolve({ allowed: true, remember: "local" });
},
project: async () => {
await addAllPatterns(addLocalPattern);
console.log(
chalk.cyan(`\n✓ Added project patterns: ${suggestedPattern}`),
);
resolve({ allowed: true, remember: "local" });
},
g: async () => {
await addAllPatterns(addGlobalPattern);
console.log(
chalk.magenta(`\n✓ Added global patterns: ${suggestedPattern}`),
);
resolve({ allowed: true, remember: "global" });
},
global: async () => {
await addAllPatterns(addGlobalPattern);
console.log(
chalk.magenta(`\n✓ Added global patterns: ${suggestedPattern}`),
);
resolve({ allowed: true, remember: "global" });
},
@@ -562,7 +637,7 @@ export const promptFilePermission = async (
});
if (response.allowed && response.scope) {
await handlePermissionScope(response.scope, [suggestedPattern]);
}
return {

View File

@@ -22,12 +22,58 @@ export interface DangerCheckResult {
}
/**
* Split a shell command into individual sub-commands on chaining operators.
* Handles &&, ||, ;, and | (pipe). Respects quoted strings.
*/
const splitChainedCommands = (command: string): string[] => {
const parts: string[] = [];
let current = "";
let inSingle = false;
let inDouble = false;
for (let i = 0; i < command.length; i++) {
const ch = command[i];
const next = command[i + 1];
if (ch === "'" && !inDouble) { inSingle = !inSingle; current += ch; continue; }
if (ch === '"' && !inSingle) { inDouble = !inDouble; current += ch; continue; }
if (inSingle || inDouble) { current += ch; continue; }
if (ch === "&" && next === "&") { parts.push(current); current = ""; i++; continue; }
if (ch === "|" && next === "|") { parts.push(current); current = ""; i++; continue; }
if (ch === ";") { parts.push(current); current = ""; continue; }
if (ch === "|") { parts.push(current); current = ""; continue; }
current += ch;
}
if (current.trim()) parts.push(current);
return parts.map((p) => p.trim()).filter(Boolean);
};
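In isolation, the quote-aware splitter behaves like this sketch (same logic as `splitChainedCommands` above, copied so it runs standalone):

```typescript
// Split on &&, ||, ; and |, but not inside quoted strings.
const splitChained = (command: string): string[] => {
  const parts: string[] = [];
  let current = "";
  let inSingle = false;
  let inDouble = false;
  for (let i = 0; i < command.length; i++) {
    const ch = command[i];
    const next = command[i + 1];
    if (ch === "'" && !inDouble) { inSingle = !inSingle; current += ch; continue; }
    if (ch === '"' && !inSingle) { inDouble = !inDouble; current += ch; continue; }
    if (inSingle || inDouble) { current += ch; continue; }
    if (ch === "&" && next === "&") { parts.push(current); current = ""; i++; continue; }
    if (ch === "|" && next === "|") { parts.push(current); current = ""; i++; continue; }
    if (ch === ";" || ch === "|") { parts.push(current); current = ""; continue; }
    current += ch;
  }
  if (current.trim()) parts.push(current);
  return parts.map((p) => p.trim()).filter(Boolean);
};

console.log(splitChained(`grep "a && b" log.txt && wc -l; ls`));
// [ 'grep "a && b" log.txt', 'wc -l', 'ls' ]
```

Note the `&&` inside the double-quoted argument is preserved, while the bare `&&` and `;` split the command.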
/**
* Check if a command matches any blocked pattern.
* For chained commands (&&, ||, ;, |), each sub-command is checked individually
* to prevent dangerous commands hidden behind benign ones (e.g. cd /safe && rm -rf /).
*/
export const checkDangerousCommand = (command: string): DangerCheckResult => {
const subCommands = splitChainedCommands(command);
for (const subCmd of subCommands) {
const normalized = subCmd.trim();
for (const pattern of BLOCKED_PATTERNS) {
if (pattern.pattern.test(normalized)) {
return {
blocked: true,
pattern,
message: formatBlockedMessage(pattern),
};
}
}
}
// Also check the full command in case a pattern targets the chaining itself
const normalizedCommand = command.trim();
for (const pattern of BLOCKED_PATTERNS) {
if (pattern.pattern.test(normalizedCommand)) {
return {

View File

@@ -0,0 +1,364 @@
/**
* External Agent Loader
*
* Loads agent definitions from .claude/, .github/, .codetyper/
* directories in the project root. These agents are parsed from
* their respective frontmatter+markdown format and converted to
* SkillDefinition for unified handling.
*/
import fs from "fs/promises";
import { join, basename, extname } from "path";
import {
EXTERNAL_AGENT_DIRS,
EXTERNAL_AGENT_FILES,
SKILL_DEFAULTS,
} from "@constants/skills";
import type {
SkillDefinition,
SkillSource,
ExternalAgentFile,
ParsedExternalAgent,
} from "@/types/skills";
// ============================================================================
// File Discovery
// ============================================================================
/**
* Check if a file is a recognized agent definition
*/
const isAgentFile = (filename: string): boolean => {
const lower = filename.toLowerCase();
const ext = extname(lower);
// Check known filenames
if (EXTERNAL_AGENT_FILES.KNOWN_FILES.some((f) => lower === f.toLowerCase())) {
return true;
}
// Check extensions for files in agent subdirectories
return (EXTERNAL_AGENT_FILES.EXTENSIONS as readonly string[]).includes(ext);
};
/**
* Scan a directory for agent files (non-recursive for top level)
*/
const scanDirectory = async (
dir: string,
source: SkillSource,
): Promise<ExternalAgentFile[]> => {
const files: ExternalAgentFile[] = [];
try {
await fs.access(dir);
} catch {
return files; // Directory doesn't exist
}
try {
const entries = await fs.readdir(dir, { withFileTypes: true });
for (const entry of entries) {
const fullPath = join(dir, entry.name);
if (entry.isFile() && isAgentFile(entry.name)) {
try {
const content = await fs.readFile(fullPath, "utf-8");
files.push({
relativePath: entry.name,
absolutePath: fullPath,
source,
content,
});
} catch {
// Skip unreadable files
}
} else if (
entry.isDirectory() &&
(EXTERNAL_AGENT_FILES.SUBDIRS as readonly string[]).includes(entry.name.toLowerCase())
) {
// Scan recognized subdirectories
const subFiles = await scanSubdirectory(fullPath, source);
files.push(...subFiles);
}
}
} catch {
// Directory not accessible
}
return files;
};
/**
* Scan a subdirectory for agent files
*/
const scanSubdirectory = async (
dir: string,
source: SkillSource,
): Promise<ExternalAgentFile[]> => {
const files: ExternalAgentFile[] = [];
try {
const entries = await fs.readdir(dir, { withFileTypes: true });
for (const entry of entries) {
if (!entry.isFile()) continue;
const ext = extname(entry.name).toLowerCase();
if (!(EXTERNAL_AGENT_FILES.EXTENSIONS as readonly string[]).includes(ext)) continue;
const fullPath = join(dir, entry.name);
try {
const content = await fs.readFile(fullPath, "utf-8");
files.push({
relativePath: join(basename(dir), entry.name),
absolutePath: fullPath,
source,
content,
});
} catch {
// Skip unreadable files
}
}
} catch {
// Subdirectory not accessible
}
return files;
};
// ============================================================================
// Parsing
// ============================================================================
/**
* Parse the frontmatter from an external agent file.
* Supports the standard --- delimited YAML-like frontmatter.
*/
const parseFrontmatter = (
content: string,
): { frontmatter: Record<string, unknown>; body: string } => {
const lines = content.split("\n");
if (lines[0]?.trim() !== "---") {
// No frontmatter — the entire content is the body
return { frontmatter: {}, body: content.trim() };
}
let endIndex = -1;
for (let i = 1; i < lines.length; i++) {
if (lines[i]?.trim() === "---") {
endIndex = i;
break;
}
}
if (endIndex === -1) {
return { frontmatter: {}, body: content.trim() };
}
const fmLines = lines.slice(1, endIndex);
const body = lines
.slice(endIndex + 1)
.join("\n")
.trim();
// Simple YAML-like parsing
const fm: Record<string, unknown> = {};
let currentKey: string | null = null;
let currentArray: string[] | null = null;
for (const line of fmLines) {
const trimmed = line.trim();
if (!trimmed || trimmed.startsWith("#")) continue;
// Array item
if (trimmed.startsWith("- ") && currentKey) {
if (!currentArray) currentArray = [];
const value = trimmed.slice(2).trim().replace(/^["']|["']$/g, "");
currentArray.push(value);
fm[currentKey] = currentArray;
continue;
}
// Key-value pair
const colonIdx = trimmed.indexOf(":");
if (colonIdx > 0) {
if (currentArray && currentKey) {
fm[currentKey] = currentArray;
}
currentArray = null;
currentKey = trimmed.slice(0, colonIdx).trim();
const rawValue = trimmed.slice(colonIdx + 1).trim();
if (!rawValue) continue; // Empty → might be array header
// Inline array: [a, b, c]
if (rawValue.startsWith("[") && rawValue.endsWith("]")) {
const items = rawValue
.slice(1, -1)
.split(",")
.map((s) => s.trim().replace(/^["']|["']$/g, ""))
.filter(Boolean);
fm[currentKey] = items;
} else {
fm[currentKey] = rawValue.replace(/^["']|["']$/g, "");
}
}
}
return { frontmatter: fm, body };
};
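The frontmatter split can be approximated with a single regex. This is a minimal sketch assuming well-formed `---` fences; the line-by-line parser above also handles the no-frontmatter and unterminated cases, plus the YAML-like key/value and array parsing:

```typescript
// Split "---\n...\n---\nbody" into frontmatter text and body.
const splitFrontmatter = (content: string): { fm: string; body: string } => {
  const m = content.match(/^---\n([\s\S]*?)\n---\n?([\s\S]*)$/);
  return m
    ? { fm: m[1], body: m[2].trim() }
    : { fm: "", body: content.trim() };
};

const doc = "---\ndescription: Reviews pull requests\n---\nBe thorough.";
console.log(splitFrontmatter(doc));
// { fm: 'description: Reviews pull requests', body: 'Be thorough.' }
```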
/**
* Parse an external agent file into a structured definition
*/
const parseAgentFile = (file: ExternalAgentFile): ParsedExternalAgent => {
const { frontmatter, body } = parseFrontmatter(file.content);
// Derive ID from filename (strip extension, lowercase, kebab-case)
const nameWithoutExt = basename(file.relativePath, extname(file.relativePath));
const id = `ext-${file.source.replace("external-", "")}-${nameWithoutExt
.toLowerCase()
.replace(/\s+/g, "-")}`;
const description =
typeof frontmatter.description === "string"
? frontmatter.description
: `External agent from ${file.source}: ${nameWithoutExt}`;
const tools = Array.isArray(frontmatter.tools)
? (frontmatter.tools as string[])
: [];
return {
id,
description,
tools,
body,
source: file.source,
filePath: file.absolutePath,
};
};
// ============================================================================
// Conversion
// ============================================================================
/**
* Convert a parsed external agent to a SkillDefinition
* so it can be used uniformly in the skill registry.
*/
const toSkillDefinition = (agent: ParsedExternalAgent): SkillDefinition => {
// Derive a human-readable name from the ID
const name = agent.id
.replace(/^ext-[a-z]+-/, "")
.split("-")
.map((w) => w.charAt(0).toUpperCase() + w.slice(1))
.join(" ");
// Extract triggers from the agent body (look for trigger patterns)
const triggers: string[] = [`/${agent.id}`];
return {
id: agent.id,
name,
description: agent.description,
version: SKILL_DEFAULTS.VERSION,
triggers,
triggerType: "explicit",
autoTrigger: false,
requiredTools: agent.tools,
tags: [agent.source, "external"],
source: agent.source,
systemPrompt: "",
instructions: agent.body,
loadedAt: Date.now(),
};
};
// ============================================================================
// Public API
// ============================================================================
/**
* Source-to-directory mapping
*/
const SOURCE_DIRS: ReadonlyArray<readonly [string, SkillSource]> = [
[EXTERNAL_AGENT_DIRS.CLAUDE, "external-claude"],
[EXTERNAL_AGENT_DIRS.GITHUB, "external-github"],
[EXTERNAL_AGENT_DIRS.CODETYPER, "external-codetyper"],
];
/**
* Load all external agents from recognized directories
* in the current project.
*/
export const loadExternalAgents = async (
projectRoot?: string,
): Promise<SkillDefinition[]> => {
const root = projectRoot ?? process.cwd();
const allAgents: SkillDefinition[] = [];
for (const [dirName, source] of SOURCE_DIRS) {
const dir = join(root, dirName);
const files = await scanDirectory(dir, source);
for (const file of files) {
try {
const parsed = parseAgentFile(file);
const skill = toSkillDefinition(parsed);
allAgents.push(skill);
} catch {
// Skip unparseable files
}
}
}
return allAgents;
};
/**
* Load a specific external agent by source and filename
*/
export const loadExternalAgentByPath = async (
filePath: string,
source: SkillSource,
): Promise<SkillDefinition | null> => {
try {
const content = await fs.readFile(filePath, "utf-8");
const file: ExternalAgentFile = {
relativePath: basename(filePath),
absolutePath: filePath,
source,
content,
};
const parsed = parseAgentFile(file);
return toSkillDefinition(parsed);
} catch {
return null;
}
};
/**
* Check if any external agent directories exist
*/
export const hasExternalAgents = async (
projectRoot?: string,
): Promise<boolean> => {
const root = projectRoot ?? process.cwd();
for (const [dirName] of SOURCE_DIRS) {
try {
await fs.access(join(root, dirName));
return true;
} catch {
continue;
}
}
return false;
};

View File

@@ -0,0 +1,353 @@
/**
* Keybind Resolver
*
* Parses keybind strings (e.g., "ctrl+c", "<leader>m", "shift+return,ctrl+return"),
* expands leader-key prefixes, and matches incoming key events against configured bindings.
*
* Keybind string format:
* - "ctrl+c" → single combo
* - "ctrl+c,ctrl+d" → two alternatives (either triggers)
* - "<leader>m" → leader prefix + key (expands based on configured leader)
* - "none" → binding disabled
* - "escape" → single key without modifiers
*/
import fs from "fs/promises";
import { FILES } from "@constants/paths";
import {
DEFAULT_KEYBINDS,
DEFAULT_LEADER,
type KeybindAction,
} from "@constants/keybinds";
// ============================================================================
// Types
// ============================================================================
/** A single parsed key combination */
export interface ParsedCombo {
key: string;
ctrl: boolean;
alt: boolean;
shift: boolean;
meta: boolean;
}
/** A resolved keybinding: one action → one or more alternative combos */
export interface ResolvedKeybind {
action: KeybindAction;
combos: ParsedCombo[];
raw: string;
}
/** The incoming key event from the TUI framework */
export interface KeyEvent {
name: string;
ctrl?: boolean;
alt?: boolean;
shift?: boolean;
meta?: boolean;
}
/** User-provided overrides (partial, only the actions they want to change) */
export type KeybindOverrides = Partial<Record<KeybindAction, string>>;
/** Full resolved keybind map */
export type ResolvedKeybindMap = Map<KeybindAction, ResolvedKeybind>;
// ============================================================================
// Parsing
// ============================================================================
/**
* Expand `<leader>` references in a keybind string.
* E.g., with leader="ctrl+x":
* "<leader>m" → "ctrl+x+m"
* "<leader>q" → "ctrl+x+q"
*/
const expandLeader = (raw: string, leader: string): string => {
return raw.replace(/<leader>/gi, `${leader}+`);
};
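The leader expansion is a plain string substitution, so it composes with comma-separated alternatives. A standalone copy of the same logic:

```typescript
// Replace every "<leader>" token with the configured leader plus "+".
const expand = (raw: string, leader: string): string =>
  raw.replace(/<leader>/gi, `${leader}+`);

console.log(expand("<leader>m", "ctrl+x")); // ctrl+x+m
console.log(expand("<leader>q,<leader>w", "ctrl+x")); // ctrl+x+q,ctrl+x+w
```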
/**
* Parse a single key combo string like "ctrl+shift+s" into a ParsedCombo.
*/
const parseCombo = (combo: string): ParsedCombo => {
const parts = combo
.trim()
.toLowerCase()
.split("+")
.map((p) => p.trim())
.filter(Boolean);
const result: ParsedCombo = {
key: "",
ctrl: false,
alt: false,
shift: false,
meta: false,
};
for (const part of parts) {
switch (part) {
case "ctrl":
case "control":
result.ctrl = true;
break;
case "alt":
case "option":
result.alt = true;
break;
case "shift":
result.shift = true;
break;
case "meta":
case "cmd":
case "super":
case "win":
result.meta = true;
break;
default:
// Last non-modifier part is the key name
result.key = part;
break;
}
}
return result;
};
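A reduced version of the combo parser shows the shape of the result. This sketch handles only `ctrl` and `shift`; the real `parseCombo` also maps `alt`/`option` and `meta`/`cmd`/`super`/`win`:

```typescript
interface MiniCombo { key: string; ctrl: boolean; shift: boolean }

// Split on "+", set modifier flags, keep the last non-modifier as the key.
const parseMini = (combo: string): MiniCombo => {
  const result: MiniCombo = { key: "", ctrl: false, shift: false };
  for (const part of combo.toLowerCase().split("+").map((p) => p.trim())) {
    if (part === "ctrl" || part === "control") result.ctrl = true;
    else if (part === "shift") result.shift = true;
    else if (part) result.key = part;
  }
  return result;
};

console.log(parseMini("Ctrl+Shift+S")); // { key: 's', ctrl: true, shift: true }
```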
/**
* Parse a full keybind string (possibly comma-separated) into an array of combos.
* Returns empty array for "none" (disabled binding).
*/
const parseKeybindString = (
raw: string,
leader: string,
): ParsedCombo[] => {
const trimmed = raw.trim().toLowerCase();
if (trimmed === "none" || trimmed === "") return [];
const expanded = expandLeader(raw, leader);
const alternatives = expanded.split(",");
return alternatives
.map((alt) => parseCombo(alt))
.filter((combo) => combo.key !== "");
};
// ============================================================================
// Matching
// ============================================================================
/**
* Check if a key event matches a parsed combo.
*/
const matchesCombo = (event: KeyEvent, combo: ParsedCombo): boolean => {
const eventKey = event.name?.toLowerCase() ?? "";
if (eventKey !== combo.key) return false;
if (!!event.ctrl !== combo.ctrl) return false;
if (!!event.alt !== combo.alt) return false;
if (!!event.shift !== combo.shift) return false;
if (!!event.meta !== combo.meta) return false;
return true;
};
// ============================================================================
// Resolver State
// ============================================================================
let resolvedMap: ResolvedKeybindMap = new Map();
let currentLeader: string = DEFAULT_LEADER;
let initialized = false;
/**
* Build the resolved keybind map from defaults + overrides.
*/
const buildResolvedMap = (
leader: string,
overrides: KeybindOverrides,
): ResolvedKeybindMap => {
const map = new Map<KeybindAction, ResolvedKeybind>();
const merged = { ...DEFAULT_KEYBINDS, ...overrides };
for (const [action, raw] of Object.entries(merged)) {
const combos = parseKeybindString(raw, leader);
map.set(action as KeybindAction, {
action: action as KeybindAction,
combos,
raw,
});
}
return map;
};
// ============================================================================
// Public API
// ============================================================================
/**
* Initialize the keybind resolver.
* Loads user overrides from keybindings.json if it exists.
*/
export const initializeKeybinds = async (): Promise<void> => {
let overrides: KeybindOverrides = {};
let leader = DEFAULT_LEADER;
try {
const data = await fs.readFile(FILES.keybindings, "utf-8");
const parsed = JSON.parse(data) as Record<string, unknown>;
if (typeof parsed.leader === "string") {
leader = parsed.leader;
}
// Extract keybind overrides (anything that's not "leader")
for (const [key, value] of Object.entries(parsed)) {
if (key === "leader") continue;
if (typeof value === "string") {
overrides[key as KeybindAction] = value;
}
}
} catch {
// File doesn't exist or is invalid — use defaults only
}
currentLeader = leader;
resolvedMap = buildResolvedMap(leader, overrides);
initialized = true;
};
/**
* Re-initialize with explicit overrides (for programmatic use).
*/
export const setKeybindOverrides = (
overrides: KeybindOverrides,
leader?: string,
): void => {
currentLeader = leader ?? currentLeader;
resolvedMap = buildResolvedMap(currentLeader, overrides);
initialized = true;
};
/**
* Check if a key event matches a specific action.
*/
export const matchesAction = (
event: KeyEvent,
action: KeybindAction,
): boolean => {
if (!initialized) {
// Lazy init with defaults if not yet initialized
resolvedMap = buildResolvedMap(DEFAULT_LEADER, {});
initialized = true;
}
const resolved = resolvedMap.get(action);
if (!resolved) return false;
return resolved.combos.some((combo) => matchesCombo(event, combo));
};
/**
* Find which action(s) a key event matches.
* Returns all matching actions (there may be overlaps).
*/
export const findMatchingActions = (event: KeyEvent): KeybindAction[] => {
if (!initialized) {
resolvedMap = buildResolvedMap(DEFAULT_LEADER, {});
initialized = true;
}
const matches: KeybindAction[] = [];
for (const [action, resolved] of resolvedMap) {
if (resolved.combos.some((combo) => matchesCombo(event, combo))) {
matches.push(action);
}
}
return matches;
};
/**
* Get the resolved keybind for an action (for display in help menus).
*/
export const getKeybindDisplay = (action: KeybindAction): string => {
if (!initialized) {
resolvedMap = buildResolvedMap(DEFAULT_LEADER, {});
initialized = true;
}
const resolved = resolvedMap.get(action);
if (!resolved || resolved.combos.length === 0) return "none";
return resolved.combos
.map((combo) => formatCombo(combo))
.join(" / ");
};
/**
* Format a parsed combo back to a human-readable string.
* E.g., { ctrl: true, key: "c" } → "Ctrl+C"
*/
const formatCombo = (combo: ParsedCombo): string => {
const parts: string[] = [];
if (combo.ctrl) parts.push("Ctrl");
if (combo.alt) parts.push("Alt");
if (combo.shift) parts.push("Shift");
if (combo.meta) parts.push("Cmd");
const keyDisplay =
combo.key.length === 1
? combo.key.toUpperCase()
: combo.key === "return"
? "Enter"
: combo.key === "escape"
? "Esc"
: combo.key.charAt(0).toUpperCase() + combo.key.slice(1);
parts.push(keyDisplay);
return parts.join("+");
};
/**
* Get all resolved keybinds (for help display or debugging).
*/
export const getAllKeybinds = (): ResolvedKeybind[] => {
if (!initialized) {
resolvedMap = buildResolvedMap(DEFAULT_LEADER, {});
initialized = true;
}
return Array.from(resolvedMap.values());
};
/**
* Get the current leader key string.
*/
export const getLeader = (): string => currentLeader;
/**
* Save current keybind overrides to keybindings.json.
*/
export const saveKeybindOverrides = async (
overrides: KeybindOverrides,
leader?: string,
): Promise<void> => {
const { dirname } = await import("path");
const filepath = FILES.keybindings;
await fs.mkdir(dirname(filepath), { recursive: true });
const data: Record<string, string> = {};
if (leader) data.leader = leader;
for (const [action, value] of Object.entries(overrides)) {
data[action] = value;
}
await fs.writeFile(filepath, JSON.stringify(data, null, 2), "utf-8");
};

View File

@@ -39,6 +39,10 @@ interface JsonRpcResponse {
export class MCPClient {
private config: MCPServerConfig;
private process: ChildProcess | null = null;
/** Base URL for http / sse transport */
private httpUrl: string | null = null;
/** Session URL returned by the server after SSE handshake (if any) */
private httpSessionUrl: string | null = null;
private state: MCPConnectionState = "disconnected";
private tools: MCPToolDefinition[] = [];
private resources: MCPResourceDefinition[] = [];
@@ -71,6 +75,13 @@ export class MCPClient {
};
}
/**
* Resolve effective transport: `type` takes precedence over legacy `transport`
*/
private get transport(): "stdio" | "sse" | "http" {
return this.config.type ?? "stdio";
}
/**
* Connect to the MCP server
*/
@@ -83,12 +94,13 @@ export class MCPClient {
this.error = undefined;
try {
const t = this.transport;
if (t === "stdio") {
await this.connectStdio();
} else if (t === "http" || t === "sse") {
await this.connectHttp();
} else {
throw new Error(`Transport type '${t}' is not supported`);
}
// Initialize the connection
@@ -109,13 +121,17 @@ export class MCPClient {
* Connect via stdio transport
*/
private async connectStdio(): Promise<void> {
if (!this.config.command) {
throw new Error("Command is required for stdio transport");
}
return new Promise((resolve, reject) => {
const env = {
...process.env,
...this.config.env,
};
this.process = spawn(this.config.command!, this.config.args || [], {
stdio: ["pipe", "pipe", "pipe"],
env,
});
@@ -146,11 +162,38 @@ export class MCPClient {
}
});
// Give the stdio process a moment to start
setTimeout(resolve, 100);
});
}
/**
* Connect via HTTP (Streamable HTTP) transport.
* The server URL is used directly for JSON-RPC over HTTP POST.
*/
private async connectHttp(): Promise<void> {
const url = this.config.url;
if (!url) {
throw new Error("URL is required for http/sse transport");
}
this.httpUrl = url;
// Verify the server is reachable with a lightweight JSON-RPC ping
try {
const res = await fetch(url, {
method: "POST",
headers: { "Content-Type": "application/json" },
body: JSON.stringify({ jsonrpc: "2.0", id: 0, method: "ping" }),
});
// Even a 4xx/5xx means the server is reachable; we'll handle errors in initialize()
if (!res.ok && res.status >= 500) {
throw new Error(`Server returned ${res.status}: ${res.statusText}`);
}
} catch (err) {
if (err instanceof TypeError) {
// Network/fetch error
throw new Error(`Cannot reach MCP server at ${url}: ${(err as Error).message}`);
}
// Other errors (like 400) are OK — the server is reachable
}
}
/**
* Handle incoming data from the server
*/
@@ -189,11 +232,24 @@ export class MCPClient {
}
/**
* Send a JSON-RPC request (dispatches to stdio or http)
*/
private async sendRequest(
method: string,
params?: unknown,
): Promise<unknown> {
if (this.httpUrl) {
return this.sendHttpRequest(method, params);
}
return this.sendStdioRequest(method, params);
}
/**
* Send a JSON-RPC request via stdio
*/
private async sendStdioRequest(
method: string,
params?: unknown,
): Promise<unknown> {
if (!this.process?.stdin) {
throw new Error("Not connected");
@@ -225,6 +281,72 @@ export class MCPClient {
});
}
/**
* Send a JSON-RPC request via HTTP POST
*/
private async sendHttpRequest(
method: string,
params?: unknown,
): Promise<unknown> {
const url = this.httpSessionUrl ?? this.httpUrl!;
const id = ++this.requestId;
const body: JsonRpcRequest = { jsonrpc: "2.0", id, method, params };
const res = await fetch(url, {
method: "POST",
headers: {
"Content-Type": "application/json",
Accept: "application/json, text/event-stream",
},
body: JSON.stringify(body),
signal: AbortSignal.timeout(30000),
});
if (!res.ok) {
const text = await res.text().catch(() => "");
throw new Error(`MCP HTTP error ${res.status}: ${text || res.statusText}`);
}
// Capture session URL from Mcp-Session header if present
const sessionHeader = res.headers.get("mcp-session");
if (sessionHeader && !this.httpSessionUrl) {
// If it's a full URL use it; otherwise it's a session id
this.httpSessionUrl = sessionHeader.startsWith("http")
? sessionHeader
: this.httpUrl!;
}
const contentType = res.headers.get("content-type") ?? "";
// Handle SSE responses (text/event-stream) — collect the last JSON-RPC result
if (contentType.includes("text/event-stream")) {
const text = await res.text();
let lastResult: unknown = undefined;
for (const line of text.split("\n")) {
if (line.startsWith("data: ")) {
const json = line.slice(6).trim();
if (json && json !== "[DONE]") {
let parsed: JsonRpcResponse | undefined;
try {
parsed = JSON.parse(json) as JsonRpcResponse;
} catch {
continue; // skip unparseable lines
}
if (parsed.error) throw new Error(parsed.error.message);
lastResult = parsed.result;
}
}
}
return lastResult;
}
// Standard JSON response
const json = (await res.json()) as JsonRpcResponse;
if (json.error) {
throw new Error(json.error.message);
}
return json.result;
}
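The SSE branch above can be checked standalone: keep the last parsed `result`, skipping non-`data:` lines and `[DONE]` sentinels. A self-contained sketch of that loop:

```typescript
// Collect the last JSON-RPC result from an SSE-style response body.
const lastSseResult = (text: string): unknown => {
  let last: unknown;
  for (const line of text.split("\n")) {
    if (!line.startsWith("data: ")) continue;
    const json = line.slice(6).trim();
    if (!json || json === "[DONE]") continue;
    try {
      last = (JSON.parse(json) as { result?: unknown }).result;
    } catch {
      // skip unparseable lines
    }
  }
  return last;
};

const stream =
  'data: {"jsonrpc":"2.0","id":1,"result":{"tools":[]}}\n\ndata: [DONE]\n';
console.log(lastSseResult(stream)); // { tools: [] }
```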
/**
* Initialize the MCP connection
*/
@@ -242,7 +364,18 @@ export class MCPClient {
});
// Send initialized notification
if (this.httpUrl) {
// For HTTP transport, send as a JSON-RPC notification (no id)
const url = this.httpSessionUrl ?? this.httpUrl;
await fetch(url, {
method: "POST",
headers: { "Content-Type": "application/json" },
body: JSON.stringify({
jsonrpc: "2.0",
method: "notifications/initialized",
}),
}).catch(() => { /* ignore notification failures */ });
} else if (this.process?.stdin) {
this.process.stdin.write(
JSON.stringify({
jsonrpc: "2.0",
@@ -344,6 +477,8 @@ export class MCPClient {
this.process.kill();
this.process = null;
}
this.httpUrl = null;
this.httpSessionUrl = null;
this.state = "disconnected";
this.tools = [];
this.resources = [];

View File

@@ -37,7 +37,7 @@ interface MCPManagerState {
*/
const state: MCPManagerState = {
clients: new Map(),
config: { inputs: [], servers: {} },
initialized: false,
};
@@ -53,17 +53,49 @@ const loadConfigFile = async (filePath: string): Promise<MCPConfig | null> => {
}
};
/**
* Inject the runtime `name` field from the config key into each server entry.
* Also normalises legacy `transport` field → `type`.
*/
const hydrateServerNames = (
servers: Record<string, MCPServerConfig>,
): Record<string, MCPServerConfig> => {
const hydrated: Record<string, MCPServerConfig> = {};
for (const [key, cfg] of Object.entries(servers)) {
// Normalise legacy `transport` → `type`
const type =
cfg.type ??
((cfg as Record<string, unknown>).transport as MCPServerConfig["type"]);
hydrated[key] = { ...cfg, name: key, type };
}
return hydrated;
};
/**
* Build a clean server config object for disk persistence.
* Strips the runtime-only `name` field so the JSON matches:
* { "servers": { "<name>": { "type": "http", "url": "..." } } }
*/
const toStorableConfig = (config: MCPServerConfig): Omit<MCPServerConfig, "name"> => {
const { name: _name, ...rest } = config;
// Remove undefined fields to keep JSON clean
return Object.fromEntries(
Object.entries(rest).filter(([, v]) => v !== undefined),
) as Omit<MCPServerConfig, "name">;
};
/**
* Load MCP configuration (merges global + local)
*/
export const loadMCPConfig = async (): Promise<MCPConfig> => {
const globalConfig =
(await loadConfigFile(CONFIG_LOCATIONS.global)) || { inputs: [], servers: {} };
const localConfig =
(await loadConfigFile(CONFIG_LOCATIONS.local)) || { inputs: [], servers: {} };
const merged: MCPConfig = {
inputs: [...(globalConfig?.inputs || []), ...(localConfig?.inputs || [])],
servers: {
...(globalConfig?.servers || {}),
...(localConfig?.servers || {}),
...hydrateServerNames(globalConfig?.servers || {}),
...hydrateServerNames(localConfig?.servers || {}),
},
};
@@ -71,7 +103,8 @@ export const loadMCPConfig = async (): Promise<MCPConfig> => {
};
/**
* Save MCP configuration
* Save MCP configuration.
* Strips runtime-only `name` fields from server entries before writing.
*/
export const saveMCPConfig = async (
config: MCPConfig,
@@ -80,8 +113,19 @@ export const saveMCPConfig = async (
const filePath = global ? CONFIG_LOCATIONS.global : CONFIG_LOCATIONS.local;
const dir = path.dirname(filePath);
// Strip runtime `name` from each server entry before persisting
const cleanServers: Record<string, Omit<MCPServerConfig, "name">> = {};
for (const [key, srv] of Object.entries(config.servers)) {
cleanServers[key] = toStorableConfig(srv);
}
const output: MCPConfig = {
inputs: config.inputs ?? [],
servers: cleanServers as Record<string, MCPServerConfig>,
};
await fs.mkdir(dir, { recursive: true });
await fs.writeFile(filePath, JSON.stringify(config, null, 2), "utf-8");
await fs.writeFile(filePath, JSON.stringify(output, null, 2), "utf-8");
};
/**
@@ -250,14 +294,24 @@ export const addServer = async (
await initializeMCP();
const targetConfig = global
? (await loadConfigFile(CONFIG_LOCATIONS.global)) || { servers: {} }
: (await loadConfigFile(CONFIG_LOCATIONS.local)) || { servers: {} };
? (await loadConfigFile(CONFIG_LOCATIONS.global)) || { inputs: [], servers: {} }
: (await loadConfigFile(CONFIG_LOCATIONS.local)) || { inputs: [], servers: {} };
targetConfig.servers[name] = { ...config, name };
if (targetConfig.servers[name]) {
throw new Error(`Server '${name}' already exists`);
}
// Also check in-memory merged config for duplicates across scopes
if (state.config.servers[name]) {
throw new Error(`Server '${name}' already exists`);
}
// Store without the `name` field — the key is the name
targetConfig.servers[name] = toStorableConfig(config as MCPServerConfig);
await saveMCPConfig(targetConfig, global);
// Update in-memory config
// Update in-memory config with runtime name injected
state.config.servers[name] = { ...config, name };
};
@@ -275,6 +329,7 @@ export const removeServer = async (
if (config?.servers[name]) {
delete config.servers[name];
config.inputs = config.inputs || [];
await saveMCPConfig(config, global);
}


@@ -309,7 +309,7 @@ export const isServerInstalled = (serverId: string): boolean => {
return Array.from(instances.values()).some(
(instance) =>
instance.config.name === serverId ||
instance.config.name.toLowerCase() === serverId.toLowerCase(),
(instance.config.name ?? "").toLowerCase() === serverId.toLowerCase(),
);
};
@@ -338,16 +338,22 @@ export const installServer = async (
try {
// Add server to configuration
await addServer(
server.id,
{
command: server.command,
args: customArgs || server.args,
transport: server.transport,
enabled: true,
},
global,
);
const serverType = server.transport ?? "stdio";
const config: Omit<import("@/types/mcp").MCPServerConfig, "name"> =
serverType === "stdio"
? {
type: "stdio",
command: server.command,
args: customArgs || server.args,
enabled: true,
}
: {
type: serverType,
url: server.url,
enabled: true,
};
await addServer(server.id, config, global);
let connected = false;


@@ -50,26 +50,59 @@ export const matchesBashPattern = (
return cmdArgs === patternArgs;
};
// =============================================================================
// Command Chaining
// =============================================================================
/**
* Split a shell command on chaining operators (&&, ||, ;, |).
* Respects quoted strings. Prevents pattern bypass via
* "cd /safe && rm -rf /dangerous".
*/
const splitChainedCommands = (command: string): string[] => {
const parts: string[] = [];
let current = "";
let inSingle = false;
let inDouble = false;
for (let i = 0; i < command.length; i++) {
const ch = command[i];
const next = command[i + 1];
if (ch === "'" && !inDouble) { inSingle = !inSingle; current += ch; continue; }
if (ch === '"' && !inSingle) { inDouble = !inDouble; current += ch; continue; }
if (inSingle || inDouble) { current += ch; continue; }
if (ch === "&" && next === "&") { parts.push(current); current = ""; i++; continue; }
if (ch === "|" && next === "|") { parts.push(current); current = ""; i++; continue; }
if (ch === ";") { parts.push(current); current = ""; continue; }
if (ch === "|") { parts.push(current); current = ""; continue; }
current += ch;
}
if (current.trim()) parts.push(current);
return parts.map((p) => p.trim()).filter(Boolean);
};
// =============================================================================
// Index-Based Matching
// =============================================================================
/**
* Check if a command is allowed by any pattern in the index
* Check if a command is allowed by any pattern in the index.
* For chained commands (&&, ||, ;, |), EVERY sub-command must be allowed.
*/
export const isBashAllowedByIndex = (
command: string,
index: PatternIndex,
): boolean => {
const subCommands = splitChainedCommands(command);
const bashPatterns = getPatternsForTool(index, "Bash");
for (const entry of bashPatterns) {
if (matchesBashPattern(command, entry.parsed)) {
return true;
}
}
return false;
return subCommands.every((subCmd) =>
bashPatterns.some((entry) => matchesBashPattern(subCmd, entry.parsed)),
);
};
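For illustration, a standalone sketch of this policy: the `split` helper mirrors `splitChainedCommands` above, and `allowed` is a hypothetical allowlist standing in for the pattern index.

```typescript
// Standalone sketch mirroring splitChainedCommands above (quotes respected).
const split = (command: string): string[] => {
  const parts: string[] = [];
  let current = "";
  let inSingle = false;
  let inDouble = false;
  for (let i = 0; i < command.length; i++) {
    const ch = command[i];
    const next = command[i + 1];
    if (ch === "'" && !inDouble) { inSingle = !inSingle; current += ch; continue; }
    if (ch === '"' && !inSingle) { inDouble = !inDouble; current += ch; continue; }
    if (inSingle || inDouble) { current += ch; continue; }
    if ((ch === "&" && next === "&") || (ch === "|" && next === "|")) {
      parts.push(current); current = ""; i++; continue;
    }
    if (ch === ";" || ch === "|") { parts.push(current); current = ""; continue; }
    current += ch;
  }
  if (current.trim()) parts.push(current);
  return parts.map((p) => p.trim()).filter(Boolean);
};

// A chained command is only allowed if EVERY sub-command is allowed.
const allowed = new Set(["cd /safe", "ls"]);
const isAllowed = (cmd: string) => split(cmd).every((c) => allowed.has(c));

console.log(split("cd /safe && rm -rf /danger")); // → ["cd /safe", "rm -rf /danger"]
console.log(isAllowed("cd /safe && rm -rf /danger")); // false: rm is not allowlisted
```

Operators inside quotes are left alone, so `echo 'a && b'` stays a single sub-command.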
/**


@@ -405,67 +405,77 @@ export const getActivePlans = (): ImplementationPlan[] => {
};
/**
* Format a plan for display
* Risk level display icons
*/
const RISK_ICONS: Record<string, string> = {
high: "!",
medium: "~",
low: " ",
};
/**
* Format a plan for display (terminal-friendly, no markdown)
*/
export const formatPlanForDisplay = (plan: ImplementationPlan): string => {
const lines: string[] = [];
lines.push(`# Implementation Plan: ${plan.title}`);
lines.push(`Plan to implement`);
lines.push("");
lines.push(plan.title);
lines.push("");
lines.push(`## Summary`);
lines.push(plan.summary);
lines.push("");
if (plan.context.filesAnalyzed.length > 0) {
lines.push(`## Files Analyzed`);
plan.context.filesAnalyzed.forEach(f => lines.push(`- ${f}`));
lines.push("Files Analyzed");
plan.context.filesAnalyzed.forEach(f => lines.push(` ${f}`));
lines.push("");
}
if (plan.context.currentArchitecture) {
lines.push(`## Current Architecture`);
lines.push(plan.context.currentArchitecture);
lines.push("Current Architecture");
lines.push(` ${plan.context.currentArchitecture}`);
lines.push("");
}
lines.push(`## Implementation Steps`);
plan.steps.forEach((step, i) => {
const riskIcon = step.riskLevel === "high" ? "⚠️" : step.riskLevel === "medium" ? "⚡" : "✓";
lines.push(`${i + 1}. ${riskIcon} **${step.title}**`);
lines.push(` ${step.description}`);
if (step.filesAffected.length > 0) {
lines.push(` Files: ${step.filesAffected.join(", ")}`);
}
});
lines.push("");
if (plan.risks.length > 0) {
lines.push(`## Risks`);
plan.risks.forEach(risk => {
lines.push(`- **${risk.impact.toUpperCase()}**: ${risk.description}`);
lines.push(` Mitigation: ${risk.mitigation}`);
if (plan.steps.length > 0) {
lines.push("Implementation Steps");
plan.steps.forEach((step, i) => {
const icon = RISK_ICONS[step.riskLevel] ?? " ";
lines.push(` ${i + 1}. [${icon}] ${step.title}`);
lines.push(` ${step.description}`);
if (step.filesAffected.length > 0) {
lines.push(` Files: ${step.filesAffected.join(", ")}`);
}
});
lines.push("");
}
lines.push(`## Testing Strategy`);
lines.push(plan.testingStrategy || "TBD");
lines.push("");
if (plan.risks.length > 0) {
lines.push("Risks");
plan.risks.forEach(risk => {
lines.push(` [${risk.impact.toUpperCase()}] ${risk.description}`);
lines.push(` Mitigation: ${risk.mitigation}`);
});
lines.push("");
}
lines.push(`## Rollback Plan`);
lines.push(plan.rollbackPlan || "TBD");
lines.push("");
if (plan.testingStrategy) {
lines.push("Testing Strategy");
lines.push(` ${plan.testingStrategy}`);
lines.push("");
}
lines.push(`## Estimated Changes`);
lines.push(`- Files to create: ${plan.estimatedChanges.filesCreated}`);
lines.push(`- Files to modify: ${plan.estimatedChanges.filesModified}`);
lines.push(`- Files to delete: ${plan.estimatedChanges.filesDeleted}`);
lines.push("");
if (plan.rollbackPlan) {
lines.push("Rollback Plan");
lines.push(` ${plan.rollbackPlan}`);
lines.push("");
}
lines.push("---");
lines.push("**Awaiting approval to proceed with implementation.**");
lines.push("Reply with 'proceed', 'approve', or 'go ahead' to start execution.");
lines.push("Reply with 'stop', 'cancel', or provide feedback to modify the plan.");
lines.push("Estimated Changes");
lines.push(` Files to create: ${plan.estimatedChanges.filesCreated}`);
lines.push(` Files to modify: ${plan.estimatedChanges.filesModified}`);
lines.push(` Files to delete: ${plan.estimatedChanges.filesDeleted}`);
return lines.join("\n");
};


@@ -14,6 +14,7 @@
import { v4 as uuidv4 } from "uuid";
import type { Message } from "@/types/providers";
import { getMessageText } from "@/types/providers";
import type { AgentOptions } from "@interfaces/AgentOptions";
import type { AgentResult } from "@interfaces/AgentResult";
import type {
@@ -245,13 +246,13 @@ const convertToCompressibleMessages = (
if ("tool_calls" in msg) {
role = "assistant";
content = msg.content || JSON.stringify(msg.tool_calls);
content = (typeof msg.content === "string" ? msg.content : getMessageText(msg.content ?? "")) || JSON.stringify(msg.tool_calls);
} else if ("tool_call_id" in msg) {
role = "tool";
content = msg.content;
content = typeof msg.content === "string" ? msg.content : getMessageText(msg.content);
} else {
role = msg.role as "user" | "assistant" | "system";
content = msg.content;
content = typeof msg.content === "string" ? msg.content : getMessageText(msg.content);
}
return {
@@ -322,7 +323,8 @@ export const runReasoningAgentLoop = async (
await refreshMCPTools();
let agentMessages: AgentMessage[] = [...messages];
const originalQuery = messages.find((m) => m.role === "user")?.content || "";
const originalQueryContent = messages.find((m) => m.role === "user")?.content;
const originalQuery = originalQueryContent ? getMessageText(originalQueryContent) : "";
const previousAttempts: AttemptRecord[] = [];
while (iterations < maxIterations) {


@@ -0,0 +1,244 @@
/**
* Skill Auto-Detector
*
* Analyzes user prompts to automatically detect and activate
* relevant skills based on keywords, file extensions, and context.
* Skills are selected AFTER plans are approved and before agent execution.
*/
import {
SKILL_DETECTION_KEYWORDS,
SKILL_AUTO_DETECT_THRESHOLD,
SKILL_AUTO_DETECT_MAX,
} from "@constants/skills";
import type { SkillDefinition, AutoDetectedSkill } from "@/types/skills";
// ============================================================================
// Keyword Matching
// ============================================================================
/**
* Score a prompt against the keyword detection table.
* Returns a map of skillId → { totalScore, matchedKeywords, category }.
*/
const scorePromptKeywords = (
prompt: string,
): Map<
string,
{ totalScore: number; matchedKeywords: string[]; category: string }
> => {
const lower = prompt.toLowerCase();
const scores = new Map<
string,
{ totalScore: number; matchedKeywords: string[]; category: string }
>();
for (const [keyword, skillId, category, weight] of SKILL_DETECTION_KEYWORDS) {
const keyLower = keyword.toLowerCase();
// Check for whole-word or phrase match
const hasMatch = matchKeyword(lower, keyLower);
if (!hasMatch) continue;
const existing = scores.get(skillId);
if (existing) {
existing.totalScore = Math.min(1, existing.totalScore + weight * 0.3);
existing.matchedKeywords.push(keyword);
} else {
scores.set(skillId, {
totalScore: weight,
matchedKeywords: [keyword],
category,
});
}
}
return scores;
};
/**
* Check if a keyword appears in text (word-boundary aware)
*/
const matchKeyword = (text: string, keyword: string): boolean => {
// For short keywords (1-3 chars), require word boundaries
if (keyword.length <= 3) {
const regex = new RegExp(`\\b${escapeRegex(keyword)}\\b`, "i");
return regex.test(text);
}
// For file extensions, match exactly
if (keyword.startsWith(".")) {
return text.includes(keyword);
}
// For longer keywords/phrases, simple includes is fine
return text.includes(keyword);
};
/**
* Escape special regex characters
*/
const escapeRegex = (str: string): string => {
return str.replace(/[.*+?^${}()|[\]\\]/g, "\\$&");
};
// ============================================================================
// Context Analysis
// ============================================================================
/**
* Analyze file references in the prompt for additional skill signals
*/
const analyzeFileReferences = (
prompt: string,
): Map<string, number> => {
const signals = new Map<string, number>();
// TypeScript/JavaScript files
if (/\.(ts|tsx)\b/.test(prompt)) {
signals.set("typescript", (signals.get("typescript") ?? 0) + 0.3);
}
if (/\.(jsx)\b/.test(prompt)) {
signals.set("react", (signals.get("react") ?? 0) + 0.4);
}
// Style files
if (/\.(css|scss|sass|less)\b/.test(prompt)) {
signals.set("css-scss", (signals.get("css-scss") ?? 0) + 0.4);
}
// Config files
if (/docker(file|-compose)|\.dockerfile/i.test(prompt)) {
signals.set("devops", (signals.get("devops") ?? 0) + 0.5);
}
if (/\.github\/workflows/i.test(prompt)) {
signals.set("devops", (signals.get("devops") ?? 0) + 0.5);
}
// Test files
if (/\.(test|spec)\.(ts|tsx|js|jsx)\b/.test(prompt)) {
signals.set("testing", (signals.get("testing") ?? 0) + 0.5);
}
// Database-related files
if (/\.(sql|prisma)\b/.test(prompt) || /migration/i.test(prompt)) {
signals.set("database", (signals.get("database") ?? 0) + 0.4);
}
return signals;
};
// ============================================================================
// Public API
// ============================================================================
/**
* Detect which skills should be activated for a given user prompt.
* Returns up to SKILL_AUTO_DETECT_MAX skills sorted by confidence.
*
* @param prompt - The user's message
* @param availableSkills - All registered skills to match against
* @returns Detected skills with confidence scores
*/
export const detectSkillsForPrompt = (
prompt: string,
availableSkills: SkillDefinition[],
): AutoDetectedSkill[] => {
// Step 1: Score keywords
const keywordScores = scorePromptKeywords(prompt);
// Step 2: Analyze file references for bonus signals
const fileSignals = analyzeFileReferences(prompt);
// Step 3: Merge file signals into keyword scores
for (const [skillId, bonus] of fileSignals) {
const existing = keywordScores.get(skillId);
if (existing) {
existing.totalScore = Math.min(1, existing.totalScore + bonus);
} else {
keywordScores.set(skillId, {
totalScore: bonus,
matchedKeywords: [`(file pattern)`],
category: "file",
});
}
}
// Step 4: Match against available skills and filter by threshold
const detected: AutoDetectedSkill[] = [];
for (const [skillId, score] of keywordScores) {
if (score.totalScore < SKILL_AUTO_DETECT_THRESHOLD) continue;
// Find the matching skill definition
const skill = availableSkills.find(
(s) => s.id === skillId && s.autoTrigger !== false,
);
if (!skill) continue;
detected.push({
skill,
confidence: Math.min(1, score.totalScore),
matchedKeywords: score.matchedKeywords,
category: score.category,
});
}
// Step 5: Sort by confidence and limit
detected.sort((a, b) => b.confidence - a.confidence);
return detected.slice(0, SKILL_AUTO_DETECT_MAX);
};
/**
* Build a skill injection prompt from detected skills.
* This is appended to the system prompt to give the agent
* specialized knowledge for the current task.
*/
export const buildSkillInjection = (
detectedSkills: AutoDetectedSkill[],
): string => {
if (detectedSkills.length === 0) return "";
const parts: string[] = [
"# Activated Skills",
"",
"The following specialized skills have been activated for this task. " +
"Use their guidelines and best practices when applicable:",
"",
];
for (const { skill, confidence, matchedKeywords } of detectedSkills) {
parts.push(`## Skill: ${skill.name} (confidence: ${(confidence * 100).toFixed(0)}%)`);
parts.push(`Matched: ${matchedKeywords.join(", ")}`);
parts.push("");
if (skill.systemPrompt) {
parts.push(skill.systemPrompt);
parts.push("");
}
if (skill.instructions) {
parts.push(skill.instructions);
parts.push("");
}
parts.push("---");
parts.push("");
}
return parts.join("\n");
};
/**
* Format detected skills for logging/display
*/
export const formatDetectedSkills = (
detectedSkills: AutoDetectedSkill[],
): string => {
if (detectedSkills.length === 0) return "No skills auto-detected.";
const names = detectedSkills.map(
(d) => `${d.skill.name} (${(d.confidence * 100).toFixed(0)}%)`,
);
return `Skills activated: ${names.join(", ")}`;
};


@@ -3,16 +3,24 @@
*
* Manages skill registration, matching, and invocation.
* Uses progressive disclosure to load skills on demand.
* Merges built-in skills with external agents from .claude/, .github/, .codetyper/.
*/
import { SKILL_MATCHING, SKILL_LOADING, SKILL_ERRORS } from "@constants/skills";
import { loadAllSkills, loadSkillById } from "@services/skill-loader";
import { loadExternalAgents } from "@services/external-agent-loader";
import {
detectSkillsForPrompt,
buildSkillInjection,
formatDetectedSkills,
} from "@services/skill-detector";
import type {
SkillDefinition,
SkillMatch,
SkillContext,
SkillExecutionResult,
SkillRegistryState,
AutoDetectedSkill,
} from "@/types/skills";
// ============================================================================
@@ -21,6 +29,7 @@ import type {
let registryState: SkillRegistryState = {
skills: new Map(),
externalAgents: new Map(),
lastLoadedAt: null,
loadErrors: [],
};
@@ -30,6 +39,7 @@ let registryState: SkillRegistryState = {
*/
export const getRegistryState = (): SkillRegistryState => ({
skills: new Map(registryState.skills),
externalAgents: new Map(registryState.externalAgents),
lastLoadedAt: registryState.lastLoadedAt,
loadErrors: [...registryState.loadErrors],
});
@@ -48,17 +58,33 @@ const isCacheStale = (): boolean => {
/**
* Initialize skill registry with all available skills
* (built-in + user + project + external agents)
*/
export const initializeRegistry = async (): Promise<void> => {
try {
const skills = await loadAllSkills("metadata");
// Load built-in and user/project skills
const skills = await loadAllSkills("full");
registryState.skills.clear();
registryState.externalAgents.clear();
registryState.loadErrors = [];
for (const skill of skills) {
registryState.skills.set(skill.id, skill);
}
// Load external agents from .claude/, .github/, .codetyper/
try {
const externalAgents = await loadExternalAgents();
for (const agent of externalAgents) {
registryState.externalAgents.set(agent.id, agent);
// Also register external agents as regular skills for unified matching
registryState.skills.set(agent.id, agent);
}
} catch (error) {
const msg = error instanceof Error ? error.message : String(error);
registryState.loadErrors.push(`External agents: ${msg}`);
}
registryState.lastLoadedAt = Date.now();
} catch (error) {
const message = error instanceof Error ? error.message : String(error);
@@ -362,6 +388,67 @@ export const executeFromInput = async (
});
};
// ============================================================================
// Auto-Detection
// ============================================================================
/**
* Auto-detect skills relevant to a user prompt.
* Analyzes the prompt content and returns matching skills
* sorted by confidence.
*/
export const autoDetectSkills = async (
prompt: string,
): Promise<AutoDetectedSkill[]> => {
await refreshIfNeeded();
const allSkills = getAllSkills();
return detectSkillsForPrompt(prompt, allSkills);
};
/**
* Build a skill injection prompt for detected skills.
* This should be appended to the system prompt or inserted
* as a system message before the agent processes the prompt.
*/
export const buildSkillInjectionForPrompt = async (
prompt: string,
): Promise<{ injection: string; detected: AutoDetectedSkill[] }> => {
const detected = await autoDetectSkills(prompt);
const injection = buildSkillInjection(detected);
return { injection, detected };
};
/**
* Get a human-readable summary of detected skills for logging
*/
export const getDetectedSkillsSummary = (
detected: AutoDetectedSkill[],
): string => {
return formatDetectedSkills(detected);
};
// ============================================================================
// External Agent Access
// ============================================================================
/**
* Get all loaded external agents
*/
export const getExternalAgents = (): SkillDefinition[] => {
return Array.from(registryState.externalAgents.values());
};
/**
* Get external agents by source
*/
export const getExternalAgentsBySource = (
source: string,
): SkillDefinition[] => {
return Array.from(registryState.externalAgents.values()).filter(
(agent) => agent.source === source,
);
};
// ============================================================================
// Utility Functions
// ============================================================================


@@ -0,0 +1,58 @@
---
id: accessibility
name: Accessibility Expert
description: 'Expert in web accessibility (a11y), WCAG compliance, ARIA patterns, and assistive technology support.'
version: 1.0.0
triggers:
- accessibility
- a11y
- wcag
- aria
- screen reader
- keyboard navigation
triggerType: auto
autoTrigger: true
requiredTools:
- read
- edit
- grep
- glob
- bash
tags:
- accessibility
- a11y
- frontend
---
## System Prompt
You are an accessibility specialist who ensures web applications are usable by everyone, including people with disabilities. You follow WCAG 2.1 AA guidelines and test with assistive technologies.
## Instructions
### WCAG 2.1 AA Checklist
- **Perceivable**: Text alternatives for images, captions for video, sufficient color contrast (4.5:1 for text)
- **Operable**: Keyboard-accessible, no timing traps, no seizure-inducing content
- **Understandable**: Readable text, predictable navigation, input assistance
- **Robust**: Valid HTML, ARIA used correctly, works across assistive technologies
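The 4.5:1 requirement is checkable in code. A minimal sketch of the WCAG relative-luminance and contrast-ratio formulas (sRGB, per WCAG 2.1):

```typescript
// WCAG relative luminance: linearize each sRGB channel, then weight.
const channel = (v: number): number => {
  const s = v / 255;
  return s <= 0.03928 ? s / 12.92 : Math.pow((s + 0.055) / 1.055, 2.4);
};
const luminance = (r: number, g: number, b: number): number =>
  0.2126 * channel(r) + 0.7152 * channel(g) + 0.0722 * channel(b);

// Contrast ratio = (L_lighter + 0.05) / (L_darker + 0.05), range 1..21.
const contrastRatio = (
  fg: [number, number, number],
  bg: [number, number, number],
): number => {
  const l1 = luminance(...fg);
  const l2 = luminance(...bg);
  const [hi, lo] = l1 >= l2 ? [l1, l2] : [l2, l1];
  return (hi + 0.05) / (lo + 0.05);
};

// Black on white is the maximum contrast, 21:1.
console.log(contrastRatio([0, 0, 0], [255, 255, 255]).toFixed(1)); // "21.0"
```

Anything below 4.5 for body text (3.0 for large text) fails AA.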
### Semantic HTML
- Use `<button>` for actions, `<a>` for navigation (never `<div onClick>`)
- Use heading hierarchy (`h1` → `h2` → `h3`) without skipping levels
- Use `<nav>`, `<main>`, `<aside>`, `<footer>` for landmarks
- Use `<table>` with `<th>` and `scope` for data tables
- Use `<fieldset>` and `<legend>` for form groups
### ARIA Guidelines
- **First rule of ARIA**: Don't use ARIA if native HTML works
- Use `aria-label` or `aria-labelledby` for elements without visible text
- Use `aria-live` regions for dynamic content updates
- Use `role` only when semantic HTML isn't possible
- Never use `aria-hidden="true"` on focusable elements
### Keyboard Navigation
- All interactive elements must be reachable via Tab
- Implement focus trapping in modals and dialogs
- Visible focus indicators (never `outline: none` without replacement)
- Support Escape to close overlays
- Implement arrow key navigation in composite widgets


@@ -0,0 +1,66 @@
---
id: api-design
name: API Designer
description: 'Expert in REST/GraphQL API design, endpoint conventions, error handling, and documentation.'
version: 1.0.0
triggers:
- api
- endpoint
- rest
- graphql
- openapi
- swagger
- api design
triggerType: auto
autoTrigger: true
requiredTools:
- read
- write
- edit
- grep
- web_search
tags:
- api
- backend
- design
---
## System Prompt
You are an API design specialist who creates intuitive, consistent, and well-documented APIs. You follow REST conventions, design for extensibility, and ensure proper error handling.
## Instructions
### REST Conventions
- Use nouns for resources: `/users`, `/orders/{id}/items`
- HTTP methods: GET (read), POST (create), PUT (replace), PATCH (update), DELETE
- Plural nouns for collections: `/users` not `/user`
- Nest for relationships: `/users/{id}/posts`
- Use query params for filtering: `/users?role=admin&active=true`
- Version in URL or header: `/v1/users` or `Accept: application/vnd.api.v1+json`
### Response Format
```json
{
"data": { ... },
"meta": { "page": 1, "total": 100 },
"errors": null
}
```
### Error Handling
- Use appropriate HTTP status codes (400, 401, 403, 404, 409, 422, 500)
- Return structured error objects with code, message, and details
- Include request ID for debugging
- Never expose stack traces in production
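A sketch of one way to shape such an error body, following the response format above (the field names here are illustrative, not a standard):

```typescript
// Structured API error: machine-readable code, human-readable message,
// and a request ID for correlating with server logs.
interface ApiError {
  code: string;       // e.g. "VALIDATION_FAILED"
  message: string;    // human-readable summary
  requestId: string;  // correlate with server logs
  details?: unknown;  // field-level errors, etc.
}

const errorResponse = (
  status: number,
  code: string,
  message: string,
  requestId: string,
  details?: unknown,
): { status: number; body: { data: null; errors: ApiError[] } } => ({
  status,
  body: {
    data: null,
    errors: [{ code, message, requestId, ...(details !== undefined ? { details } : {}) }],
  },
});

const res = errorResponse(422, "VALIDATION_FAILED", "email is invalid", "req_123", {
  email: "must be a valid address",
});
console.log(res.status); // 422
```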
### Pagination
- Cursor-based for real-time data: `?cursor=abc123&limit=20`
- Offset-based for static data: `?page=2&per_page=20`
- Always include total count and next/prev links
### Authentication
- JWT for stateless auth (short-lived access + refresh tokens)
- API keys for service-to-service
- OAuth 2.0 for third-party integrations
- Always use HTTPS


@@ -0,0 +1,79 @@
---
id: code-audit
name: Code Auditor
description: 'Expert in code quality auditing — dead code, complexity, duplication, and architecture smells.'
version: 1.0.0
triggers:
- audit
- code audit
- code quality
- tech debt
- dead code
- complexity
- code smell
triggerType: auto
autoTrigger: true
requiredTools:
- read
- grep
- glob
- bash
tags:
- audit
- quality
- refactoring
---
## System Prompt
You are a code quality auditor. You systematically analyze codebases for dead code, excessive complexity, duplication, architectural smells, and technical debt. You provide prioritized, actionable recommendations.
## Instructions
### Audit Process
1. **Scan structure**: Map the project layout, module boundaries, and dependency graph
2. **Identify dead code**: Unused exports, unreachable branches, commented-out code
3. **Measure complexity**: Cyclomatic complexity, nesting depth, function length
4. **Detect duplication**: Similar code blocks across files
5. **Assess architecture**: Circular dependencies, layer violations, god objects
6. **Prioritize findings**: By impact, effort, and risk
### Dead Code Detection
- Unused exports: `grep` for exports not imported elsewhere
- Unreachable code: After early returns, always-true/false conditions
- Feature flags: Dead branches behind permanently-off flags
- Commented code: Remove or restore — never leave commented blocks
- Unused dependencies: Check `package.json` against actual imports
### Complexity Metrics
- **Function length**: Flag functions > 50 lines
- **Cyclomatic complexity**: Flag > 10 decision points
- **Nesting depth**: Flag > 4 levels deep
- **Parameter count**: Flag functions with > 4 parameters
- **File size**: Flag files > 500 lines (likely needs splitting)
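A rough text-based heuristic for the cyclomatic metric (a keyword count, not a real parser; it over- and under-counts in edge cases):

```typescript
// Naive cyclomatic-complexity estimate: 1 + number of decision points.
const DECISION = /\b(if|for|while|case|catch)\b|&&|\|\||\?/g;

const estimateComplexity = (source: string): number => {
  const matches = source.match(DECISION);
  return 1 + (matches ? matches.length : 0);
};

const fn = `
function f(a, b) {
  if (a > 0 && b > 0) return a;
  for (let i = 0; i < b; i++) a += i;
  return b ? a : 0;
}`;
console.log(estimateComplexity(fn)); // 1 + if + && + for + ? = 5
```

Anything above the flag threshold of 10 is a refactoring candidate.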
### Architecture Smells
- **God objects**: Classes/modules doing too many things (> 10 methods)
- **Circular dependencies**: A → B → A patterns
- **Shotgun surgery**: One change requires editing many files
- **Feature envy**: Code that uses another module's data more than its own
- **Primitive obsession**: Using raw strings/numbers instead of domain types
### Report Format
```
## Code Audit Report
### Summary
- Total files scanned: N
- Issues found: N (Critical: X, Warning: Y, Info: Z)
- Estimated tech debt: X hours
### Critical Issues
[Issues that block reliability or maintainability]
### Warnings
[Issues that degrade quality over time]
### Recommendations
[Prioritized action items with effort estimates]
```


@@ -0,0 +1,77 @@
---
id: css-scss
name: CSS & SCSS Expert
description: 'Expert in CSS architecture, SCSS patterns, responsive design, animations, and modern layout techniques.'
version: 1.0.0
triggers:
- css
- scss
- sass
- styling
- responsive
- flexbox
- grid
- animation
- layout
triggerType: auto
autoTrigger: true
requiredTools:
- read
- write
- edit
- grep
- glob
tags:
- css
- scss
- styling
- frontend
---
## System Prompt
You are a CSS and SCSS specialist with expertise in modern layout techniques, responsive design, animation, and scalable styling architecture. You write clean, performant, and maintainable stylesheets.
## Instructions
### Architecture Patterns
- Use **BEM naming** (Block__Element--Modifier) for class names
- Follow **ITCSS** layer ordering: Settings → Tools → Generic → Elements → Objects → Components → Utilities
- Keep specificity low — avoid nesting deeper than 3 levels in SCSS
- Use CSS custom properties (variables) for theming and design tokens
- Prefer utility-first for common patterns, component classes for complex ones
### Modern Layout
- **Flexbox** for one-dimensional layouts (navbars, card rows, centering)
- **CSS Grid** for two-dimensional layouts (page layouts, dashboards, galleries)
- **Container queries** for component-level responsiveness
- **Subgrid** for aligning nested grid children
- Avoid `float` for layout (it's for text wrapping only)
### SCSS Best Practices
- Use `@use` and `@forward` instead of `@import` (deprecated)
- Keep mixins focused and parameterized
- Use `%placeholder` selectors for shared styles that shouldn't output alone
- Leverage `@each`, `@for` for generating utility classes
- Use maps for design tokens: `$colors: (primary: #3b82f6, ...)`
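The `@each`-over-a-map pattern can also live in a build script; a TypeScript sketch that emits the same kind of utility classes from a token map:

```typescript
// Generate text-color utility classes from a design-token map,
// mirroring what an SCSS @each loop over $colors would emit.
const colors: Record<string, string> = {
  primary: "#3b82f6",
  danger: "#ef4444",
};

const textUtilities = (tokens: Record<string, string>): string =>
  Object.entries(tokens)
    .map(([name, value]) => `.text-${name} { color: ${value}; }`)
    .join("\n");

console.log(textUtilities(colors));
// .text-primary { color: #3b82f6; }
// .text-danger { color: #ef4444; }
```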
### Responsive Design
- Mobile-first approach: base styles for mobile, `@media` for larger screens
- Use `rem`/`em` for typography, `%` or viewport units for layout
- Define breakpoints as SCSS variables or custom properties
- Use `clamp()` for fluid typography: `font-size: clamp(1rem, 2.5vw, 2rem)`
- Test on real devices, not just browser resize
### Performance
- Minimize reflows: batch DOM reads and writes
- Use `will-change` sparingly and only when animation is imminent
- Prefer `transform` and `opacity` for animations (GPU-accelerated)
- Use `contain: layout style paint` for isolated components
- Avoid `@import` in CSS (blocks parallel downloading)
### Animations
- Use CSS transitions for simple state changes
- Use `@keyframes` for complex multi-step animations
- Respect `prefers-reduced-motion` media query
- Keep animations under 300ms for UI feedback, 500ms for transitions
- Use cubic-bezier for natural-feeling easing


@@ -0,0 +1,66 @@
---
id: database
name: Database Expert
description: 'Expert in database design, SQL optimization, ORM patterns, migrations, and data modeling.'
version: 1.0.0
triggers:
- database
- sql
- query
- migration
- schema
- orm
- prisma
- drizzle
- postgres
- mysql
- mongodb
triggerType: auto
autoTrigger: true
requiredTools:
- read
- write
- edit
- bash
- grep
tags:
- database
- sql
- backend
---
## System Prompt
You are a database specialist with expertise in relational and NoSQL databases, query optimization, schema design, and migration strategies. You design schemas for correctness, performance, and maintainability.
## Instructions
### Schema Design
- Normalize to 3NF, then selectively denormalize for performance
- Use appropriate data types (don't store dates as strings)
- Add indexes for columns used in WHERE, JOIN, ORDER BY
- Use foreign keys for referential integrity
- Add `created_at` and `updated_at` timestamps to all tables
- Use UUIDs for public-facing IDs, auto-increment for internal PKs
### Query Optimization
- Use `EXPLAIN ANALYZE` to understand query plans
- Avoid `SELECT *` — specify needed columns
- Use covering indexes for frequent queries
- Prefer JOINs over subqueries (usually)
- Use CTEs for readable complex queries
- Batch inserts/updates for bulk operations
### Migration Best Practices
- Every migration must be reversible (up/down)
- Never delete columns in production without a deprecation period
- Add new nullable columns, then backfill, then add NOT NULL
- Use zero-downtime migration patterns for large tables
- Test migrations against production-sized data
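The reversibility rule can be sketched against an in-memory schema map, a stand-in for whatever migration tool the project uses. The column names and shape of `Schema` are illustrative only; the example also shows step one of the "nullable, then backfill, then NOT NULL" pattern.

```typescript
// A migration is a pair of pure functions over a toy in-memory schema.
type Schema = Record<string, { type: string; nullable: boolean }>;

interface Migration {
  up(s: Schema): Schema;
  down(s: Schema): Schema;
}

// Step 1 of the zero-downtime pattern: add the new column as nullable.
const addEmailColumn: Migration = {
  up: (s) => ({ ...s, email: { type: "text", nullable: true } }),
  down: ({ email, ...rest }) => rest,
};

// Reversibility check: down(up(s)) must give back the original schema.
function roundTrip(m: Migration, s: Schema): Schema {
  return m.down(m.up(s));
}
```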
### ORM Patterns
- Use query builders for complex dynamic queries
- Eager-load relationships to avoid N+1 queries
- Use transactions for multi-step operations
- Define model validations at the ORM level AND database level
- Use database-level defaults and constraints
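The N+1 problem above can be made concrete with a toy data layer that counts "queries". The records and function names are invented for illustration; real eager loading would translate to a single `WHERE author_id IN (...)` query via the ORM.

```typescript
// Fake data layer that counts how many "queries" each access pattern issues.
let queryCount = 0;
const authors = [{ id: 1 }, { id: 2 }, { id: 3 }];
const books = [
  { authorId: 1, title: "A" },
  { authorId: 2, title: "B" },
];

// Lazy: one query per author — the N+1 trap.
function booksFor(authorId: number) {
  queryCount++;
  return books.filter((b) => b.authorId === authorId);
}

// Eager: one query for all authors at once.
function booksForAll(ids: number[]) {
  queryCount++;
  return books.filter((b) => ids.includes(b.authorId));
}
```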


@@ -0,0 +1,67 @@
---
id: devops
name: DevOps Engineer
description: 'Expert in CI/CD pipelines, Docker, deployment strategies, infrastructure, and monitoring.'
version: 1.0.0
triggers:
- devops
- docker
- ci/cd
- pipeline
- deploy
- deployment
- kubernetes
- k8s
- github actions
- infrastructure
triggerType: auto
autoTrigger: true
requiredTools:
- read
- write
- edit
- bash
- grep
- glob
tags:
- devops
- deployment
- infrastructure
---
## System Prompt
You are a DevOps engineer who builds reliable CI/CD pipelines, containerized deployments, and monitoring systems. You follow infrastructure-as-code principles and optimize for reproducibility and speed.
## Instructions
### Docker Best Practices
- Use multi-stage builds to minimize image size
- Pin base image versions (never use `:latest` in production)
- Order Dockerfile layers from least to most frequently changing
- Use `.dockerignore` to exclude unnecessary files
- Run as non-root user
- Use health checks
- Minimize layer count by combining RUN commands
### CI/CD Pipeline Design
- **Lint** → **Test** → **Build** → **Deploy**
- Fail fast: run fastest checks first
- Cache dependencies between runs
- Use parallel jobs where possible
- Pin action versions in GitHub Actions
- Use environment-specific secrets
- Implement rollback mechanisms
### Deployment Strategies
- **Blue/Green**: Zero-downtime with instant rollback
- **Canary**: Gradual rollout to percentage of traffic
- **Rolling**: Replace instances one at a time
- Choose based on risk tolerance and infrastructure
### Monitoring
- Implement health check endpoints (`/health`, `/ready`)
- Use structured logging (JSON) for log aggregation
- Set up alerts for error rates, latency, and resource usage
- Monitor both infrastructure and application metrics
- Implement distributed tracing for microservices
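The structured-logging point can be sketched as a one-function JSON log line builder. The field names (`ts`, `level`, `message`) are illustrative conventions, not a specific aggregator's schema.

```typescript
// Emit one JSON object per log line so aggregators can index fields.
function logLine(
  level: "info" | "warn" | "error",
  message: string,
  fields: Record<string, unknown> = {},
): string {
  return JSON.stringify({
    ts: new Date().toISOString(),
    level,
    message,
    ...fields,
  });
}
```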


@@ -0,0 +1,76 @@
---
id: documentation
name: Documentation Writer
description: 'Expert in technical writing, API docs, README creation, JSDoc, and architectural decision records.'
version: 1.0.0
triggers:
- documentation
- docs
- readme
- jsdoc
- document
- adr
triggerType: auto
autoTrigger: true
requiredTools:
- read
- write
- edit
- grep
- glob
tags:
- documentation
- writing
---
## System Prompt
You are a technical writer who creates clear, comprehensive documentation. You write for the reader — developers who need to understand, use, or maintain code quickly.
## Instructions
### Documentation Types
- **README**: Project overview, quick start, prerequisites
- **API docs**: Endpoints, parameters, responses, examples
- **Architecture docs**: System design, data flow, decision records
- **Code comments**: JSDoc/TSDoc for public APIs
- **Guides**: How-to tutorials for common tasks
### README Template
```markdown
# Project Name
One-line description.
## Quick Start
\`\`\`bash
npm install && npm start
\`\`\`
## Prerequisites
- Node.js >= 18
- ...
## Usage
[Core usage examples]
## Configuration
[Environment variables, config files]
## Development
[Setup, testing, contributing]
## License
```
### JSDoc/TSDoc
- Document ALL exported functions, classes, and types
- Include `@param`, `@returns`, `@throws`, `@example`
- Use `@deprecated` with migration guidance
- Don't document the obvious — explain "why", not "what"
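The tags above can be shown on a small worked example. The function itself is invented for illustration, not part of any specific API; the point is the shape of the doc comment.

```typescript
/**
 * Converts a byte count into a human-readable string.
 *
 * @param bytes - Non-negative byte count.
 * @returns A string such as `"1.5 KB"`.
 * @throws {RangeError} If `bytes` is negative.
 * @example
 * formatBytes(1536); // "1.5 KB"
 */
function formatBytes(bytes: number): string {
  if (bytes < 0) throw new RangeError("bytes must be non-negative");
  if (bytes < 1024) return `${bytes} B`;
  const kb = bytes / 1024;
  return kb < 1024 ? `${round1(kb)} KB` : `${round1(kb / 1024)} MB`;
}

function round1(n: number): number {
  return Math.round(n * 10) / 10;
}
```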
### Writing Style
- Use active voice: "The function returns" not "A value is returned"
- Use present tense: "Runs the tests" not "Will run the tests"
- Include code examples for every public API
- Keep sentences short (< 25 words)
- Use consistent terminology throughout


@@ -0,0 +1,62 @@
---
id: git-workflow
name: Git Workflow Expert
description: 'Expert in Git workflows, branching strategies, merge conflict resolution, and history management.'
version: 1.0.0
triggers:
- git
- branch
- merge
- rebase
- conflict
- git flow
- cherry-pick
triggerType: auto
autoTrigger: true
requiredTools:
- bash
- read
- grep
tags:
- git
- workflow
- version-control
---
## System Prompt
You are a Git expert who helps teams manage code history effectively. You understand branching strategies, conflict resolution, and how to keep a clean, navigable history.
## Instructions
### Branching Strategy
- **main/master**: Always deployable, protected
- **develop**: Integration branch (if using Git Flow)
- **feature/xxx**: Short-lived branches from develop/main
- **fix/xxx**: Bug fix branches
- **release/x.x**: Release preparation
### Commit Guidelines
- Use conventional commits: `type(scope): message`
- Keep commits atomic (one logical change per commit)
- Write commit messages that explain "why", not "what"
- Never commit generated files, secrets, or large binaries
### Merge vs Rebase
- **Merge**: Preserves full history, creates merge commits (for shared branches)
- **Rebase**: Linear history, cleaner log (for local/feature branches)
- **Squash merge**: One commit per feature (for PRs into main)
- Rule: Never rebase shared/public branches
### Conflict Resolution
1. Understand BOTH sides of the conflict
2. Talk to the other developer if unsure
3. Run tests after resolving
4. Use `git rerere` for recurring conflicts
### Useful Operations
- `git stash` for temporary work parking
- `git bisect` for finding bug-introducing commits
- `git reflog` for recovering lost commits
- `git cherry-pick` for bringing specific commits across branches
- `git revert` for undoing commits safely (creates reverse commit)


@@ -0,0 +1,70 @@
---
id: node-backend
name: Node.js Backend Expert
description: 'Expert in Node.js backend development, Express/Fastify, middleware patterns, and server architecture.'
version: 1.0.0
triggers:
- node
- nodejs
- express
- fastify
- backend
- server
- middleware
- api server
triggerType: auto
autoTrigger: true
requiredTools:
- read
- write
- edit
- bash
- grep
- glob
tags:
- node
- backend
- server
---
## System Prompt
You are a Node.js backend specialist who builds scalable, secure, and maintainable server applications. You follow best practices for error handling, middleware composition, and production-readiness.
## Instructions
### Project Structure
```
src/
controllers/ # Request handlers (thin, delegate to services)
services/ # Business logic
models/ # Data models and validation
middleware/ # Express/Fastify middleware
routes/ # Route definitions
utils/ # Shared utilities
config/ # Configuration management
types/ # TypeScript type definitions
```
### Error Handling
- Use async error middleware: `app.use((err, req, res, next) => ...)`
- Create custom error classes with status codes
- Never expose stack traces in production
- Log errors with context (request ID, user, action)
- Use `process.on('unhandledRejection')` as safety net
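The custom-error-class rule can be sketched as follows. The class and mapping function are illustrative; in a real Express app `toErrorResponse` would live inside the four-argument error middleware.

```typescript
// Error class that carries an HTTP status code.
class HttpError extends Error {
  constructor(public status: number, message: string) {
    super(message);
    this.name = "HttpError";
  }
}

// Map any thrown value to a safe response: known errors keep their status
// and message; unknown errors get a generic body with no stack trace.
function toErrorResponse(err: unknown): { status: number; body: { error: string } } {
  if (err instanceof HttpError) {
    return { status: err.status, body: { error: err.message } };
  }
  return { status: 500, body: { error: "Internal Server Error" } };
}
```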
### Middleware Patterns
- Order matters: auth → validation → business logic → response
- Keep middleware focused (single responsibility)
- Use dependency injection for testability
- Implement request ID tracking across middleware
### Production Checklist
- Graceful shutdown handling (SIGTERM, SIGINT)
- Health check endpoint
- Rate limiting on public endpoints
- Request validation (Zod, Joi, class-validator)
- CORS configured for allowed origins only
- Helmet.js for security headers
- Compression middleware for responses
- Structured JSON logging


@@ -0,0 +1,61 @@
---
id: performance
name: Performance Engineer
description: 'Expert in performance optimization, profiling, bundle size reduction, and runtime efficiency.'
version: 1.0.0
triggers:
- performance
- optimization
- optimize
- slow
- profiling
- bundle size
- memory leak
- latency
triggerType: auto
autoTrigger: true
requiredTools:
- read
- bash
- grep
- glob
- edit
tags:
- performance
- optimization
---
## System Prompt
You are a performance engineer who identifies bottlenecks and implements optimizations. You profile before optimizing, measure impact, and ensure changes don't sacrifice readability for marginal gains.
## Instructions
### Performance Analysis Process
1. **Measure first**: Never optimize without profiling data
2. **Identify bottlenecks**: Find the top 3 hotspots (80/20 rule)
3. **Propose solutions**: Rank by impact/effort ratio
4. **Implement**: Make targeted changes
5. **Verify**: Re-measure to confirm improvement
### Frontend Performance
- **Bundle analysis**: Use `webpack-bundle-analyzer` or `source-map-explorer`
- **Code splitting**: Dynamic `import()` for routes and heavy components
- **Tree shaking**: Ensure ESM imports, avoid barrel files that prevent shaking
- **Image optimization**: WebP/AVIF, lazy loading, srcset for responsive images
- **Critical CSS**: Inline above-the-fold styles, defer the rest
- **Web Vitals**: Target LCP < 2.5s, INP < 200ms (INP replaced FID in 2024), CLS < 0.1
### Runtime Performance
- **Algorithmic**: Replace O(n²) with O(n log n) or O(n) where possible
- **Caching**: Memoize expensive computations, HTTP cache headers
- **Debouncing**: Rate-limit frequent events (scroll, resize, input)
- **Virtualization**: Render only visible items in long lists
- **Web Workers**: Offload CPU-intensive work from the main thread
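The caching bullet can be sketched as a generic memoizer. A minimal version for illustration: it keys on JSON-serialized arguments and never evicts, so a production cache would add an LRU bound.

```typescript
// Memoize an expensive pure function. Keys on JSON-serialized args;
// real caches need eviction (e.g. LRU) to bound memory.
function memoize<A extends unknown[], R>(fn: (...args: A) => R): (...args: A) => R {
  const cache = new Map<string, R>();
  return (...args: A): R => {
    const key = JSON.stringify(args);
    if (cache.has(key)) return cache.get(key) as R;
    const result = fn(...args);
    cache.set(key, result);
    return result;
  };
}

let calls = 0;
const slowSquare = memoize((n: number) => {
  calls++; // counts how often the underlying work actually runs
  return n * n;
});
```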
### Node.js / Backend
- **Connection pooling**: Reuse database connections
- **Streaming**: Use streams for large file/data processing
- **Async patterns**: Avoid blocking the event loop; use `Promise.all` for parallel I/O
- **Memory**: Watch for retained references, closures holding large objects
- **Profiling**: Use `--prof` flag or clinic.js for flame graphs
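The `Promise.all` point above, sketched with a fake async lookup standing in for real I/O: N sequential awaits cost roughly N times the latency, while the parallel form costs roughly one.

```typescript
// `fakeFetch` simulates an I/O call with ~10ms latency.
const fakeFetch = (id: string): Promise<string> =>
  new Promise((resolve) => setTimeout(() => resolve(`data:${id}`), 10));

// Independent lookups run in parallel; total time is ~one latency,
// not ids.length latencies as with sequential awaits.
async function loadAll(ids: string[]): Promise<string[]> {
  return Promise.all(ids.map((id) => fakeFetch(id)));
}
```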

src/skills/react/SKILL.md (new file, 67 lines)

@@ -0,0 +1,67 @@
---
id: react
name: React Expert
description: 'Expert in React component architecture, hooks, state management, and performance optimization.'
version: 1.0.0
triggers:
- react
- component
- hooks
- useState
- useEffect
- jsx
- tsx
- react component
triggerType: auto
autoTrigger: true
requiredTools:
- read
- write
- edit
- grep
- glob
tags:
- react
- frontend
- ui
---
## System Prompt
You are a React specialist with deep expertise in modern React patterns including hooks, server components, suspense, and concurrent rendering. You write performant, accessible, and maintainable components.
## Instructions
### Component Architecture
- Follow **atomic design**: atoms → molecules → organisms → templates → pages
- Keep components small and focused (< 150 lines)
- Extract custom hooks for reusable logic
- Use composition over inheritance — prefer children/render props over deep hierarchies
- Co-locate tests, styles, and types with components
### Hooks Best Practices
- **useState**: Use for local UI state; initialize with a function for expensive computations
- **useEffect**: Prefer react-query/SWR for data fetching; effects are for synchronization only
- **useMemo/useCallback**: Only when there's a measurable performance benefit (profiler)
- **useRef**: For DOM refs and mutable values that shouldn't trigger re-renders
- **Custom hooks**: Extract when logic is reused OR when a component does too many things
### Performance Patterns
- Use `React.memo()` only when profiling shows unnecessary re-renders
- Prefer `useMemo` for expensive derived values
- Implement virtualization for large lists (react-window, tanstack-virtual)
- Split code with `React.lazy()` and `Suspense`
- Avoid creating objects/arrays inline in JSX props (causes re-renders)
### State Management
- Local state → `useState`
- Shared UI state → Context + `useReducer`
- Server state → react-query / SWR / tanstack-query
- Complex global state → Zustand / Jotai (avoid Redux for new projects)
### Accessibility
- All interactive elements must be keyboard-accessible
- Use semantic HTML (`button`, `nav`, `main`, `article`)
- Add `aria-label` for icon-only buttons
- Implement focus management in modals and dialogs
- Test with screen readers and keyboard navigation


@@ -0,0 +1,69 @@
---
id: refactoring
name: Refactoring Expert
description: 'Expert in code refactoring patterns, SOLID principles, design patterns, and incremental improvement.'
version: 1.0.0
triggers:
- refactor
- refactoring
- clean up
- restructure
- simplify
- extract
- solid
- design pattern
triggerType: auto
autoTrigger: true
requiredTools:
- read
- write
- edit
- grep
- glob
- bash
tags:
- refactoring
- clean-code
- patterns
---
## System Prompt
You are a refactoring expert who improves code incrementally without changing behavior. You apply SOLID principles, extract meaningful abstractions, and reduce complexity while preserving readability.
## Instructions
### Refactoring Process
1. **Ensure tests exist** for the code being refactored
2. **Make one small change** at a time
3. **Run tests** after each change
4. **Commit** working states frequently
5. **Never mix refactoring with feature changes**
### SOLID Principles
- **S**ingle Responsibility: Each module/class does one thing
- **O**pen/Closed: Extend behavior without modifying existing code
- **L**iskov Substitution: Subtypes must be substitutable for base types
- **I**nterface Segregation: Many specific interfaces > one general
- **D**ependency Inversion: Depend on abstractions, not concretions
### Common Refactorings
- **Extract Function**: Pull out a block with a descriptive name
- **Extract Interface**: Define contracts between modules
- **Replace Conditional with Polymorphism**: Eliminate complex switch/if chains
- **Introduce Parameter Object**: Group related function params into an object
- **Move to Module**: Relocate code to where it belongs
- **Replace Magic Values**: Use named constants
- **Simplify Boolean Expressions**: Break complex conditions into named variables
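Two of the refactorings above, Introduce Parameter Object and Replace Magic Values, can be shown in one small before/after sketch. All names are invented for illustration.

```typescript
// Before: createUser(name: string, email: string, admin: boolean, locale: string)
// — four positional params that are easy to pass in the wrong order.

// After: one parameter object with named, defaulted fields.
interface NewUser {
  name: string;
  email: string;
  admin?: boolean;
  locale?: string;
}

function createUser({ name, email, admin = false, locale = "en" }: NewUser): string {
  // The defaults are named in the signature instead of scattered as
  // magic literals through the body.
  return `${name} <${email}> admin=${admin} locale=${locale}`;
}
```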
### When to Refactor
- Before adding a feature (make the change easy, then make the easy change)
- When fixing a bug (improve the code you touch)
- During code review (suggest specific improvements)
- When you notice duplication across 3+ places (Rule of Three)
### When NOT to Refactor
- Code that works and won't be touched again
- Right before a deadline (risk is too high)
- Without tests (add tests first)
- The whole codebase at once (incremental is safer)


@@ -0,0 +1,65 @@
---
id: researcher
name: Technical Researcher
description: 'Expert in researching technical topics, documentation, APIs, libraries, and best practices.'
version: 1.0.0
triggers:
- research
- find out
- look up
- documentation
- what is
- how to
- best practice
- compare
triggerType: auto
autoTrigger: true
requiredTools:
- web_search
- web_fetch
- read
- grep
tags:
- research
- documentation
- learning
---
## System Prompt
You are a technical researcher who finds, synthesizes, and presents information clearly. You search documentation, compare approaches, and provide evidence-based recommendations.
## Instructions
### Research Process
1. **Clarify the question**: Identify exactly what needs to be answered
2. **Search broadly**: Use web_search to find relevant sources
3. **Verify sources**: Cross-reference across official docs, well-known blogs, and Stack Overflow
4. **Synthesize findings**: Combine information into a clear, actionable answer
5. **Cite sources**: Always link to where you found the information
### Source Priority
1. **Official documentation** (highest trust)
2. **RFC/Specification documents**
3. **Reputable technical blogs** (Martin Fowler, Dan Abramov, etc.)
4. **GitHub issues/discussions** (real-world usage)
5. **Stack Overflow** (verify answers are recent and highly voted)
6. **Blog posts** (verify with other sources)
### Comparison Format
When comparing libraries/approaches, use:
```
| Criteria | Option A | Option B |
|-----------------|-------------|-------------|
| Performance | ⭐⭐⭐⭐⭐ | ⭐⭐⭐ |
| Bundle size | 12KB | 45KB |
| TypeScript | Native | @types |
| Active | Yes | Maintenance |
| Community | 50K stars | 12K stars |
```
### Output Format
- Start with a **TL;DR** (1-2 sentences)
- Then **detailed findings** with code examples where relevant
- End with **recommendation** and **sources**
- Flag if information might be outdated (check dates)


@@ -0,0 +1,80 @@
---
id: security
name: Security Analyst
description: 'Expert in application security, vulnerability detection, secure coding practices, and threat modeling.'
version: 1.0.0
triggers:
- security
- vulnerability
- xss
- sql injection
- auth
- authentication
- authorization
- owasp
- cve
- security audit
- penetration
triggerType: auto
autoTrigger: true
requiredTools:
- read
- grep
- glob
- bash
- web_search
tags:
- security
- audit
- owasp
---
## System Prompt
You are a security specialist with expertise in application security, OWASP Top 10, secure coding practices, and vulnerability detection. You analyze code for security flaws and provide actionable remediation guidance.
## Instructions
### Security Analysis Process
1. **Identify attack surface**: Map all user inputs, API endpoints, file operations, and external integrations
2. **Classify threats**: Use STRIDE (Spoofing, Tampering, Repudiation, Info Disclosure, DoS, Elevation of Privilege)
3. **Assess severity**: CVSS scoring — consider exploitability, impact, and scope
4. **Recommend fixes**: Provide specific code changes, not just descriptions
### OWASP Top 10 Checks
- **A01:2021 Broken Access Control** — Verify authorization on every endpoint and resource
- **A02:2021 Cryptographic Failures** — Check for weak algorithms, hardcoded keys, plaintext secrets
- **A03:2021 Injection** — SQL, NoSQL, OS command, LDAP, XPath injection vectors
- **A04:2021 Insecure Design** — Missing threat modeling, business logic flaws
- **A05:2021 Security Misconfiguration** — Default configs, verbose errors, unnecessary features
- **A06:2021 Vulnerable Components** — Outdated dependencies with known CVEs
- **A07:2021 Auth Failures** — Weak passwords, missing MFA, session management issues
- **A08:2021 Software Integrity** — Unsigned updates, untrusted CI/CD pipelines
- **A09:2021 Logging Failures** — Missing audit trails, log injection
- **A10:2021 SSRF** — Server-Side Request Forgery via unvalidated URLs
### Secure Coding Rules
- Validate and sanitize ALL user input at the boundary
- Use parameterized queries for ALL database operations
- Implement Content Security Policy (CSP) headers
- Use `httpOnly`, `secure`, `sameSite` flags on cookies
- Hash passwords with bcrypt/argon2, never fast hashes like MD5 or plain SHA-256
- Implement rate limiting on authentication endpoints
- Use CSRF tokens for state-changing operations
- Escape output for the context (HTML, JS, URL, CSS)
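The per-context escaping rule, sketched for the HTML context. A minimal illustration of the idea; production code should use a vetted templating engine or sanitization library rather than hand-rolled escaping.

```typescript
// Escape untrusted text for interpolation into HTML content or
// double-quoted attribute values.
function escapeHtml(input: string): string {
  return input
    .replace(/&/g, "&amp;")   // must run first so later entities survive
    .replace(/</g, "&lt;")
    .replace(/>/g, "&gt;")
    .replace(/"/g, "&quot;")
    .replace(/'/g, "&#39;");
}
```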
### Dependency Audit
Run `npm audit` or `yarn audit` and:
1. List all critical/high vulnerabilities
2. Check if updates are available
3. Assess if the vulnerability is reachable in your code
4. Suggest migration path for deprecated packages
### Report Format
For each finding:
- **Severity**: Critical / High / Medium / Low / Informational
- **Location**: File and line number
- **Description**: What the vulnerability is
- **Impact**: What an attacker could do
- **Remediation**: Specific code fix
- **References**: CWE ID, OWASP reference


@@ -0,0 +1,78 @@
---
id: testing
name: Testing Expert
description: 'Expert in test architecture, TDD, unit/integration/e2e testing, mocking, and test patterns.'
version: 1.0.0
triggers:
- test
- testing
- unit test
- integration test
- e2e
- tdd
- jest
- vitest
- playwright
- coverage
triggerType: auto
autoTrigger: true
requiredTools:
- read
- write
- edit
- bash
- grep
- glob
tags:
- testing
- quality
- tdd
---
## System Prompt
You are a testing expert who writes comprehensive, maintainable tests. You follow the testing pyramid, write tests that test behavior (not implementation), and ensure tests serve as living documentation.
## Instructions
### Testing Pyramid
- **Unit tests** (70%): Fast, isolated, test single functions/components
- **Integration tests** (20%): Test module interactions, API contracts
- **E2E tests** (10%): Critical user flows only
### Test Naming Convention
Use descriptive names: `describe("ModuleName") > it("should behavior when condition")`
### Test Structure (AAA Pattern)
```typescript
it("should return filtered items when filter is applied", () => {
// Arrange - set up test data and conditions
const items = [{ id: 1, active: true }, { id: 2, active: false }];
// Act - perform the action
const result = filterActive(items);
// Assert - verify the outcome
expect(result).toHaveLength(1);
expect(result[0].id).toBe(1);
});
```
### What to Test
- **Happy path**: Expected behavior with valid inputs
- **Edge cases**: Empty arrays, null, undefined, boundary values
- **Error cases**: Invalid inputs, network failures, timeouts
- **State transitions**: Before/after operations
### What NOT to Test
- Implementation details (internal state, private methods)
- Third-party library internals
- Trivial code (getters, simple assignments)
- Framework behavior
### Mocking Rules
- Mock at the boundary (network, filesystem, time)
- Never mock the thing you're testing
- Prefer dependency injection over module mocking
- Use `jest.spyOn` over `jest.fn` when possible (preserves types)
- Reset mocks between tests (`afterEach`)
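The dependency-injection rule above can be sketched with time, a classic boundary: inject a clock instead of mocking the `Date` module, and the test controls time directly. The interface and names are illustrative.

```typescript
// The boundary is expressed as a tiny interface.
interface Clock {
  now(): number;
}

// Logic under test depends on the abstraction, not on Date.now().
function isExpired(createdAtMs: number, ttlMs: number, clock: Clock): boolean {
  return clock.now() - createdAtMs > ttlMs;
}

// Production passes { now: () => Date.now() }; tests pass a fixed clock.
const fixedClock: Clock = { now: () => 10_000 };
```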


@@ -0,0 +1,63 @@
---
id: typescript
name: TypeScript Expert
description: 'Expert in TypeScript type system, generics, utility types, and strict typing patterns.'
version: 1.0.0
triggers:
- typescript
- type system
- generics
- utility types
- type error
- ts error
triggerType: auto
autoTrigger: true
requiredTools:
- read
- write
- edit
- grep
- glob
- bash
tags:
- typescript
- types
- language
---
## System Prompt
You are a TypeScript specialist with deep expertise in the type system. You write strict, well-typed code that leverages advanced TypeScript features for maximum safety and developer experience.
## Instructions
### Core Principles
- Always prefer `strict: true` compiler settings
- Use explicit return types on exported functions
- Prefer `interface` for object shapes, `type` for unions/intersections/mapped types
- Never use `any` — use `unknown` with type guards, or generics instead
- Leverage discriminated unions for state machines and variant types
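The discriminated-union principle can be shown with a request state machine and an exhaustive switch: the `never` assignment in the `default` branch turns a forgotten variant into a compile error. The state names are illustrative.

```typescript
// A state machine as a discriminated union on `kind`.
type RequestState =
  | { kind: "idle" }
  | { kind: "loading" }
  | { kind: "success"; data: string }
  | { kind: "error"; message: string };

function describeState(state: RequestState): string {
  switch (state.kind) {
    case "idle":
      return "waiting";
    case "loading":
      return "loading";
    case "success":
      return `got ${state.data}`;
    case "error":
      return `failed: ${state.message}`;
    default: {
      // Exhaustiveness check: fails to compile if a variant is unhandled.
      const exhaustive: never = state;
      return exhaustive;
    }
  }
}
```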
### Type Patterns to Apply
- **Branded types** for domain primitives (UserId, Email, etc.)
- **Exhaustive switches** with `never` for union exhaustiveness
- **Template literal types** for string patterns
- **Conditional types** for flexible generic utilities
- **Mapped types** with `as` clauses for key remapping
- **Infer** keyword for extracting types from complex structures
- **Satisfies** operator for type validation without widening
- **const assertions** for literal types from object/array values
### When Reviewing Types
1. Check for implicit `any` (function params, catch clauses)
2. Verify generic constraints are tight enough
3. Ensure discriminated unions have proper discriminants
4. Look for places where `Partial`, `Required`, `Pick`, `Omit` could simplify
5. Flag `as` casts — suggest type guards or narrowing instead
### Error Resolution
When fixing TypeScript errors:
1. Read the full error message including the expected vs actual types
2. Trace the type origin to find where the mismatch starts
3. Fix at the source — don't cast at the usage site
4. Add type assertions only as last resort, with a comment explaining why


@@ -16,6 +16,7 @@ import {
advanceStep,
getExecutionState,
} from "@services/chat-tui-service";
import { matchesAction } from "@services/keybind-resolver";
import { TERMINAL_RESET } from "@constants/terminal";
import { formatExitMessage } from "@services/exit-message";
import { copyToClipboard } from "@services/clipboard/text-clipboard";
@@ -185,15 +186,15 @@ function AppContent(props: AppProps) {
}
useKeyboard((evt) => {
// Ctrl+Y copies selected text to clipboard
if (evt.ctrl && evt.name === "y") {
// Clipboard: copy selected text
if (matchesAction(evt, "clipboard_copy")) {
copySelectionToClipboard();
evt.preventDefault();
return;
}
// ESC aborts current operation
if (evt.name === "escape") {
// Session interrupt (ESC) — abort current operation
if (matchesAction(evt, "session_interrupt")) {
abortCurrentOperation(false).then((aborted) => {
if (aborted) {
toast.info("Operation cancelled");
@@ -203,8 +204,8 @@ function AppContent(props: AppProps) {
return;
}
// Ctrl+P toggles pause/resume during execution
if (evt.ctrl && evt.name === "p") {
// Pause/resume execution
if (matchesAction(evt, "session_pause_resume")) {
const toggled = togglePauseResume();
if (toggled) {
const state = getExecutionState();
@@ -218,8 +219,8 @@ function AppContent(props: AppProps) {
}
}
// Ctrl+Z aborts with rollback
if (evt.ctrl && evt.name === "z") {
// Abort with rollback
if (matchesAction(evt, "session_abort_rollback")) {
const state = getExecutionState();
if (state.state !== "idle") {
abortCurrentOperation(true).then((aborted) => {
@@ -234,8 +235,8 @@ function AppContent(props: AppProps) {
}
}
// Ctrl+Shift+S toggles step mode
if (evt.ctrl && evt.shift && evt.name === "s") {
// Toggle step mode
if (matchesAction(evt, "session_step_toggle")) {
const state = getExecutionState();
if (state.state !== "idle") {
const isStepMode = state.state === "stepping";
@@ -248,8 +249,8 @@ function AppContent(props: AppProps) {
}
}
// Enter advances step when waiting for step confirmation
if (evt.name === "return" && !evt.ctrl && !evt.shift) {
// Advance one step when waiting for step confirmation
if (matchesAction(evt, "session_step_advance")) {
const state = getExecutionState();
if (state.waitingForStep) {
advanceStep();
@@ -258,8 +259,8 @@ function AppContent(props: AppProps) {
}
}
// Ctrl+C exits the application (with confirmation)
if (evt.ctrl && evt.name === "c") {
// App exit (Ctrl+C with confirmation)
if (matchesAction(evt, "app_exit")) {
// First try to abort current operation
const state = getExecutionState();
if (state.state !== "idle") {
@@ -285,7 +286,8 @@ function AppContent(props: AppProps) {
return;
}
if (evt.name === "/" && app.mode() === "idle" && !app.inputBuffer()) {
// Command menu trigger from "/" when input is empty
if (matchesAction(evt, "command_menu") && app.mode() === "idle" && !app.inputBuffer()) {
app.openCommandMenu();
evt.preventDefault();
return;
@@ -378,8 +380,11 @@ function AppContent(props: AppProps) {
const handlePlanApprovalResponse = (
response: PlanApprovalPromptResponse,
): void => {
// Don't set mode here - the resolve callback in plan-approval.ts
// handles the mode transition
// Resolve the blocking promise stored on the prompt
const prompt = app.planApprovalPrompt();
if (prompt?.resolve) {
prompt.resolve(response);
}
props.onPlanApprovalResponse(response);
};


@@ -1,8 +1,14 @@
import { createMemo, Show, onMount, onCleanup } from "solid-js";
import { createMemo, Show, For, onMount, onCleanup } from "solid-js";
import { useKeyboard } from "@opentui/solid";
import { TextareaRenderable, type PasteEvent } from "@opentui/core";
import { useTheme } from "@tui-solid/context/theme";
import { useAppStore } from "@tui-solid/context/app";
import { matchesAction } from "@services/keybind-resolver";
import {
readClipboardImage,
formatImageSize,
getImageSizeFromBase64,
} from "@services/clipboard-service";
/** Minimum lines to trigger paste summary */
const MIN_PASTE_LINES = 3;
@@ -110,12 +116,40 @@ export function InputArea(props: InputAreaProps) {
return theme.colors.border;
});
// Handle "/" to open command menu when input is empty
// Handle Enter to submit (backup in case onSubmit doesn't fire)
// Handle Ctrl+M to toggle interaction mode (Ctrl+Tab doesn't work in most terminals)
/**
* Try to read an image from the clipboard and add it as a pasted image.
* Called on Ctrl+V — if an image is found it gets inserted as a placeholder;
* otherwise the default text-paste behavior takes over.
*/
const handleImagePaste = async (): Promise<boolean> => {
try {
const image = await readClipboardImage();
if (!image) return false;
// Store the image in app state
app.addPastedImage(image);
// Insert a visual placeholder into the input
const size = formatImageSize(getImageSizeFromBase64(image.data));
const placeholder = `[Image: ${image.mediaType.split("/")[1].toUpperCase()} ${size}]`;
if (inputRef) {
inputRef.insertText(placeholder + " ");
app.setInputBuffer(inputRef.plainText);
}
return true;
} catch {
return false;
}
};
// Keyboard handler using configurable keybinds from keybind-resolver.
// Keybinds are loaded from ~/.config/codetyper/keybindings.json on startup.
// See DEFAULT_KEYBINDS in constants/keybinds.ts for all available actions.
useKeyboard((evt) => {
// Ctrl+M works even when locked or menus are open
if (evt.ctrl && evt.name === "m") {
// Mode toggle — works even when locked or menus are open
if (matchesAction(evt, "mode_toggle")) {
app.toggleInteractionMode();
evt.preventDefault();
evt.stopPropagation();
@@ -126,21 +160,46 @@ export function InputArea(props: InputAreaProps) {
// Don't capture keys when any menu/modal is open
if (isMenuOpen()) return;
if (evt.name === "/" && !app.inputBuffer()) {
// Ctrl+V: attempt clipboard image paste first, then fall through to text paste
if (matchesAction(evt, "input_paste")) {
handleImagePaste().then((handled) => {
// If an image was found, the placeholder is already inserted.
// If not, the default terminal paste (text) has already fired.
if (handled) {
// Image was pasted — nothing else to do
}
});
// Don't preventDefault — let the terminal's native text paste
// fire in parallel. If the clipboard has an image, handleImagePaste
// will insert its own placeholder; the text paste will be empty/no-op.
return;
}
// Command menu from "/" — works at any point in the input
if (matchesAction(evt, "command_menu")) {
app.insertText("/");
app.openCommandMenu();
evt.preventDefault();
evt.stopPropagation();
return;
}
if (evt.name === "@") {
// File picker from "@" — works at any point in the input
if (matchesAction(evt, "file_picker")) {
app.insertText("@");
app.setMode("file_picker");
evt.preventDefault();
evt.stopPropagation();
return;
}
if (evt.name === "return" && !evt.shift && !evt.ctrl && !evt.meta) {
// Submit input
if (
matchesAction(evt, "input_submit") &&
!evt.shift &&
!evt.ctrl &&
!evt.meta
) {
handleSubmit();
evt.preventDefault();
evt.stopPropagation();
@@ -157,9 +216,14 @@ export function InputArea(props: InputAreaProps) {
if (inputRef) inputRef.clear();
app.setInputBuffer("");
clearPastedBlocks();
// NOTE: Do NOT clear pasted images here — the message handler reads them
// asynchronously and clears them after consuming. Clearing here would race
// and cause images to be silently dropped.
}
};
const imageCount = createMemo(() => app.pastedImages().length);
/**
* Handle paste events - summarize large pastes
*/
@@ -238,21 +302,42 @@ export function InputArea(props: InputAreaProps) {
onKeyDown={(evt) => {
// Don't capture keys when any menu/modal is open
if (isMenuOpen()) return;
if (evt.name === "return" && !evt.shift && !evt.ctrl && !evt.meta) {
if (
matchesAction(evt, "input_submit") &&
!evt.shift &&
!evt.ctrl &&
!evt.meta
) {
handleSubmit();
evt.preventDefault();
}
if (evt.name === "/" && !app.inputBuffer()) {
if (matchesAction(evt, "command_menu")) {
app.insertText("/");
app.openCommandMenu();
evt.preventDefault();
}
if (evt.name === "@") {
if (matchesAction(evt, "file_picker")) {
app.insertText("@");
app.setMode("file_picker");
evt.preventDefault();
}
}}
onSubmit={handleSubmit}
/>
<Show when={imageCount() > 0}>
<box flexDirection="row" paddingTop={0}>
<For each={app.pastedImages()}>
{(img) => (
<text fg={theme.colors.accent}>
{` [${img.mediaType.split("/")[1].toUpperCase()} ${formatImageSize(getImageSizeFromBase64(img.data))}]`}
</text>
)}
</For>
<text fg={theme.colors.textDim}>
{` (${imageCount()} image${imageCount() > 1 ? "s" : ""} attached)`}
</text>
</box>
</Show>
</Show>
</box>
);


@@ -1,8 +1,8 @@
import { createSignal, Show } from "solid-js";
import { createSignal, createMemo, Show } from "solid-js";
import { useKeyboard } from "@opentui/solid";
import { TextAttributes } from "@opentui/core";
import { useTheme } from "@tui-solid/context/theme";
import type { MCPAddFormData } from "@/types/mcp";
import type { MCPAddFormData, MCPTransportType } from "@/types/mcp";
interface MCPAddFormProps {
onSubmit: (data: MCPAddFormData) => void;
@@ -10,21 +10,26 @@ interface MCPAddFormProps {
isActive?: boolean;
}
type FormField = "name" | "command" | "args" | "scope";
/** All fields in order. The visible set depends on the selected transport. */
type FormField = "name" | "type" | "command" | "args" | "url" | "scope";
const FIELD_ORDER: FormField[] = ["name", "command", "args", "scope"];
const TRANSPORT_OPTIONS: MCPTransportType[] = ["stdio", "http", "sse"];
const FIELD_LABELS: Record<FormField, string> = {
name: "Server Name",
type: "Transport",
command: "Command",
args: "Arguments (use quotes for paths with spaces)",
args: "Arguments",
url: "URL",
scope: "Scope",
};
const FIELD_PLACEHOLDERS: Record<FormField, string> = {
name: "e.g., filesystem",
name: "e.g., figma",
type: "",
command: "e.g., npx",
args: 'e.g., -y @modelcontextprotocol/server-filesystem "/path/to/dir"',
args: 'e.g., -y @modelcontextprotocol/server-filesystem "/path"',
url: "e.g., https://mcp.figma.com/mcp",
scope: "",
};
@@ -34,65 +39,109 @@ export function MCPAddForm(props: MCPAddFormProps) {
const [currentField, setCurrentField] = createSignal<FormField>("name");
const [name, setName] = createSignal("");
const [transport, setTransport] = createSignal<MCPTransportType>("http");
const [command, setCommand] = createSignal("");
const [args, setArgs] = createSignal("");
const [url, setUrl] = createSignal("");
const [isGlobal, setIsGlobal] = createSignal(false);
const [error, setError] = createSignal<string | null>(null);
/** Visible fields depend on the selected transport */
const fieldOrder = createMemo((): FormField[] => {
if (transport() === "stdio") {
return ["name", "type", "command", "args", "scope"];
}
return ["name", "type", "url", "scope"];
});
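The transport-dependent field set above can be sketched as a plain function (types re-declared inline here; in the codebase they come from the MCP types module in this diff):

```typescript
// stdio servers need a command line; http/sse servers need a URL,
// so the visible form fields depend on the selected transport.
type MCPTransportType = "stdio" | "http" | "sse";
type FormField = "name" | "type" | "command" | "args" | "url" | "scope";

function fieldOrder(transport: MCPTransportType): FormField[] {
  return transport === "stdio"
    ? ["name", "type", "command", "args", "scope"]
    : ["name", "type", "url", "scope"];
}
```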
const getFieldValue = (field: FormField): string => {
const fieldGetters: Record<FormField, () => string> = {
name: name,
command: command,
args: args,
const getters: Record<FormField, () => string> = {
name,
type: transport,
command,
args,
url,
scope: () => (isGlobal() ? "global" : "local"),
};
return fieldGetters[field]();
return getters[field]();
};
const setFieldValue = (field: FormField, value: string): void => {
const fieldSetters: Record<FormField, (v: string) => void> = {
const setters: Record<FormField, (v: string) => void> = {
name: setName,
type: (v) => setTransport(v as MCPTransportType),
command: setCommand,
args: setArgs,
url: setUrl,
scope: () => setIsGlobal(value === "global"),
};
fieldSetters[field](value);
setters[field](value);
};
const handleSubmit = (): void => {
setError(null);
const n = name().trim();
if (!name().trim()) {
if (!n) {
setError("Server name is required");
setCurrentField("name");
return;
}
if (!command().trim()) {
setError("Command is required");
setCurrentField("command");
return;
if (transport() === "stdio") {
if (!command().trim()) {
setError("Command is required for stdio transport");
setCurrentField("command");
return;
}
props.onSubmit({
name: n,
type: "stdio",
command: command().trim(),
args: args().trim() || undefined,
isGlobal: isGlobal(),
});
} else {
if (!url().trim()) {
setError("URL is required for http/sse transport");
setCurrentField("url");
return;
}
props.onSubmit({
name: n,
type: transport(),
url: url().trim(),
isGlobal: isGlobal(),
});
}
props.onSubmit({
name: name().trim(),
command: command().trim(),
args: args().trim(),
isGlobal: isGlobal(),
});
};
const moveToNextField = (): void => {
const currentIndex = FIELD_ORDER.indexOf(currentField());
if (currentIndex < FIELD_ORDER.length - 1) {
setCurrentField(FIELD_ORDER[currentIndex + 1]);
const order = fieldOrder();
const idx = order.indexOf(currentField());
if (idx < order.length - 1) {
setCurrentField(order[idx + 1]);
}
};
const moveToPrevField = (): void => {
const currentIndex = FIELD_ORDER.indexOf(currentField());
if (currentIndex > 0) {
setCurrentField(FIELD_ORDER[currentIndex - 1]);
const order = fieldOrder();
const idx = order.indexOf(currentField());
if (idx > 0) {
setCurrentField(order[idx - 1]);
}
};
/** Cycle through transport options */
const cycleTransport = (direction: 1 | -1): void => {
const idx = TRANSPORT_OPTIONS.indexOf(transport());
const next =
(idx + direction + TRANSPORT_OPTIONS.length) % TRANSPORT_OPTIONS.length;
setTransport(TRANSPORT_OPTIONS[next]);
// If the current field is no longer visible after switching, jump to next valid
if (!fieldOrder().includes(currentField())) {
setCurrentField("type");
}
};
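The wrap-around arithmetic in `cycleTransport` can be exercised in isolation:

```typescript
// Standalone version of the cycling used by cycleTransport above.
const TRANSPORT_OPTIONS = ["stdio", "http", "sse"] as const;
type Transport = (typeof TRANSPORT_OPTIONS)[number];

function nextTransport(current: Transport, direction: 1 | -1): Transport {
  const idx = TRANSPORT_OPTIONS.indexOf(current);
  // Adding the length before the modulo keeps the index non-negative
  // when stepping backwards from the first option.
  const next =
    (idx + direction + TRANSPORT_OPTIONS.length) % TRANSPORT_OPTIONS.length;
  return TRANSPORT_OPTIONS[next];
}
```

Without the `+ TRANSPORT_OPTIONS.length` term, stepping left from `"stdio"` would produce `-1 % 3 === -1` in JavaScript and index out of range.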
@@ -108,6 +157,7 @@ export function MCPAddForm(props: MCPAddFormProps) {
const field = currentField();
// Enter — submit on last field, otherwise advance
if (evt.name === "return") {
if (field === "scope") {
handleSubmit();
@@ -118,54 +168,64 @@ export function MCPAddForm(props: MCPAddFormProps) {
return;
}
// Tab navigation
if (evt.name === "tab") {
if (evt.shift) {
moveToPrevField();
} else {
moveToNextField();
}
if (evt.shift) moveToPrevField();
else moveToNextField();
evt.preventDefault();
return;
}
// Up / Down
if (evt.name === "up") {
if (field === "scope") {
setIsGlobal(!isGlobal());
} else {
moveToPrevField();
}
if (field === "scope") setIsGlobal(!isGlobal());
else if (field === "type") cycleTransport(-1);
else moveToPrevField();
evt.preventDefault();
return;
}
if (evt.name === "down") {
if (field === "scope") setIsGlobal(!isGlobal());
else if (field === "type") cycleTransport(1);
else moveToNextField();
evt.preventDefault();
return;
}
if (evt.name === "down") {
if (field === "scope") {
setIsGlobal(!isGlobal());
} else {
moveToNextField();
// Left / Right / Space on selector fields
if (field === "type") {
if (
evt.name === "space" ||
evt.name === "left" ||
evt.name === "right"
) {
cycleTransport(evt.name === "left" ? -1 : 1);
evt.preventDefault();
}
evt.preventDefault();
return;
}
if (field === "scope") {
if (evt.name === "space" || evt.name === "left" || evt.name === "right") {
if (
evt.name === "space" ||
evt.name === "left" ||
evt.name === "right"
) {
setIsGlobal(!isGlobal());
evt.preventDefault();
}
return;
}
// Backspace
if (evt.name === "backspace" || evt.name === "delete") {
const currentValue = getFieldValue(field);
if (currentValue.length > 0) {
setFieldValue(field, currentValue.slice(0, -1));
}
const val = getFieldValue(field);
if (val.length > 0) setFieldValue(field, val.slice(0, -1));
evt.preventDefault();
return;
}
// Handle space key
// Space
if (evt.name === "space") {
setFieldValue(field, getFieldValue(field) + " ");
setError(null);
@@ -173,14 +233,10 @@ export function MCPAddForm(props: MCPAddFormProps) {
return;
}
// Handle paste (Ctrl+V) - terminal paste usually comes as sequence of characters
// but some terminals send the full pasted text as a single event
if (evt.ctrl && evt.name === "v") {
// Let the terminal handle paste - don't prevent default
return;
}
// Ctrl+V
if (evt.ctrl && evt.name === "v") return;
// Handle regular character input
// Single char
if (evt.name.length === 1 && !evt.ctrl && !evt.meta) {
setFieldValue(field, getFieldValue(field) + evt.name);
setError(null);
@@ -188,7 +244,7 @@ export function MCPAddForm(props: MCPAddFormProps) {
return;
}
// Handle multi-character input (e.g., pasted text from terminal)
// Pasted text
if (evt.sequence && evt.sequence.length > 1 && !evt.ctrl && !evt.meta) {
setFieldValue(field, getFieldValue(field) + evt.sequence);
setError(null);
@@ -196,57 +252,20 @@ export function MCPAddForm(props: MCPAddFormProps) {
}
});
const renderField = (field: FormField) => {
const isCurrentField = currentField() === field;
// ──────────── Renderers ────────────
const renderTextField = (field: FormField) => {
const isCurrent = currentField() === field;
const value = getFieldValue(field);
const placeholder = FIELD_PLACEHOLDERS[field];
if (field === "scope") {
return (
<box flexDirection="row" marginBottom={1}>
<text
fg={isCurrentField ? theme.colors.primary : theme.colors.text}
attributes={
isCurrentField ? TextAttributes.BOLD : TextAttributes.NONE
}
>
{isCurrentField ? "> " : " "}
{FIELD_LABELS[field]}:{" "}
</text>
<text
fg={!isGlobal() ? theme.colors.success : theme.colors.textDim}
attributes={
!isGlobal() && isCurrentField
? TextAttributes.BOLD
: TextAttributes.NONE
}
>
[Local]
</text>
<text fg={theme.colors.textDim}> / </text>
<text
fg={isGlobal() ? theme.colors.warning : theme.colors.textDim}
attributes={
isGlobal() && isCurrentField
? TextAttributes.BOLD
: TextAttributes.NONE
}
>
[Global]
</text>
</box>
);
}
return (
<box flexDirection="row" marginBottom={1}>
<text
fg={isCurrentField ? theme.colors.primary : theme.colors.text}
attributes={
isCurrentField ? TextAttributes.BOLD : TextAttributes.NONE
}
fg={isCurrent ? theme.colors.primary : theme.colors.text}
attributes={isCurrent ? TextAttributes.BOLD : TextAttributes.NONE}
>
{isCurrentField ? "> " : " "}
{isCurrent ? "> " : " "}
{FIELD_LABELS[field]}:{" "}
</text>
<Show
@@ -254,19 +273,93 @@ export function MCPAddForm(props: MCPAddFormProps) {
fallback={
<text fg={theme.colors.textDim}>
{placeholder}
{isCurrentField ? "_" : ""}
{isCurrent ? "_" : ""}
</text>
}
>
<text fg={theme.colors.text}>
{value}
{isCurrentField ? "_" : ""}
{isCurrent ? "_" : ""}
</text>
</Show>
</box>
);
};
const renderTransportField = () => {
const isCurrent = currentField() === "type";
return (
<box flexDirection="row" marginBottom={1}>
<text
fg={isCurrent ? theme.colors.primary : theme.colors.text}
attributes={isCurrent ? TextAttributes.BOLD : TextAttributes.NONE}
>
{isCurrent ? "> " : " "}
{FIELD_LABELS.type}:{" "}
</text>
{TRANSPORT_OPTIONS.map((opt) => (
<>
<text
fg={
transport() === opt
? theme.colors.success
: theme.colors.textDim
}
attributes={
transport() === opt && isCurrent
? TextAttributes.BOLD
: TextAttributes.NONE
}
>
[{opt}]
</text>
<text fg={theme.colors.textDim}> </text>
</>
))}
</box>
);
};
const renderScopeField = () => {
const isCurrent = currentField() === "scope";
return (
<box flexDirection="row" marginBottom={1}>
<text
fg={isCurrent ? theme.colors.primary : theme.colors.text}
attributes={isCurrent ? TextAttributes.BOLD : TextAttributes.NONE}
>
{isCurrent ? "> " : " "}
{FIELD_LABELS.scope}:{" "}
</text>
<text
fg={!isGlobal() ? theme.colors.success : theme.colors.textDim}
attributes={
!isGlobal() && isCurrent ? TextAttributes.BOLD : TextAttributes.NONE
}
>
[Local]
</text>
<text fg={theme.colors.textDim}> / </text>
<text
fg={isGlobal() ? theme.colors.warning : theme.colors.textDim}
attributes={
isGlobal() && isCurrent ? TextAttributes.BOLD : TextAttributes.NONE
}
>
[Global]
</text>
</box>
);
};
const renderField = (field: FormField) => {
if (field === "type") return renderTransportField();
if (field === "scope") return renderScopeField();
return renderTextField(field);
};
return (
<box
flexDirection="column"
@@ -290,14 +383,11 @@ export function MCPAddForm(props: MCPAddFormProps) {
</box>
</Show>
{renderField("name")}
{renderField("command")}
{renderField("args")}
{renderField("scope")}
{fieldOrder().map((f) => renderField(f))}
<box marginTop={1} flexDirection="column">
<text fg={theme.colors.textDim}>
Tab/Enter next | Shift+Tab prev | navigate | Esc cancel
Tab/Enter next | Shift+Tab prev | switch option | Esc cancel
</text>
<text fg={theme.colors.textDim}>Enter on Scope to submit</text>
</box>

View File

@@ -25,9 +25,16 @@ const PERMISSION_OPTIONS: PermissionOption[] = [
{ key: "n", label: "No, deny this request", scope: "deny", allowed: false },
];
/**
 * Default to "Yes, for this session" (index 1) instead of "Yes, this once" (index 0).
 * This saves the user from re-approving the same pattern every time
 * (e.g. for curl, web_search, etc.), since session-wide approval is the
 * most common intent.
const DEFAULT_SELECTION = 1;
export function PermissionModal(props: PermissionModalProps) {
const theme = useTheme();
const [selectedIndex, setSelectedIndex] = createSignal(0);
const [selectedIndex, setSelectedIndex] = createSignal(DEFAULT_SELECTION);
const isActive = () => props.isActive ?? true;
const handleResponse = (allowed: boolean, scope?: PermissionScope): void => {

View File

@@ -1,4 +1,4 @@
import { createSignal, createMemo, For, Show } from "solid-js";
import { createSignal, For, Show } from "solid-js";
import { useKeyboard } from "@opentui/solid";
import { TextAttributes } from "@opentui/core";
import { useTheme } from "@tui-solid/context/theme";
@@ -7,7 +7,7 @@ import {
PLAN_APPROVAL_OPTIONS,
PLAN_APPROVAL_FOOTER_TEXT,
} from "@constants/plan-approval";
import type { PlanApprovalOption, PlanEditMode } from "@constants/plan-approval";
import type { PlanApprovalOption } from "@constants/plan-approval";
interface PlanApprovalModalProps {
prompt: PlanApprovalPrompt;
@@ -63,33 +63,54 @@ export function PlanApprovalModal(props: PlanApprovalModalProps) {
useKeyboard((evt) => {
if (!isActive()) return;
evt.stopPropagation();
// Feedback mode: handle text input
if (feedbackMode()) {
if (evt.name === "return") {
handleFeedbackSubmit();
evt.preventDefault();
evt.stopPropagation();
return;
}
if (evt.name === "escape") {
handleCancel();
evt.preventDefault();
evt.stopPropagation();
return;
}
if (evt.name === "backspace") {
if (evt.name === "backspace" || evt.name === "delete") {
setFeedbackText((prev) => prev.slice(0, -1));
evt.preventDefault();
return;
}
// Handle space key (name is "space", not " ")
if (evt.name === "space") {
setFeedbackText((prev) => prev + " ");
evt.preventDefault();
return;
}
// Handle regular character input
if (evt.name.length === 1 && !evt.ctrl && !evt.meta) {
setFeedbackText((prev) => prev + evt.name);
evt.preventDefault();
return;
}
// Handle multi-character input (e.g., pasted text)
if (evt.sequence && evt.sequence.length > 1 && !evt.ctrl && !evt.meta) {
setFeedbackText((prev) => prev + evt.sequence);
evt.preventDefault();
return;
}
return;
}
// Normal mode: navigate options
if (evt.name === "escape") {
handleCancel();
evt.preventDefault();
evt.stopPropagation();
return;
}
if (evt.name === "up") {
setSelectedIndex((prev) =>
prev > 0 ? prev - 1 : optionCount - 1,
@@ -112,12 +133,6 @@ export function PlanApprovalModal(props: PlanApprovalModalProps) {
return;
}
if (evt.name === "escape") {
handleCancel();
evt.preventDefault();
return;
}
// Shift+Tab shortcut for option 1 (auto-accept clear)
if (evt.name === "tab" && evt.shift) {
handleApproval(PLAN_APPROVAL_OPTIONS[0]);
@@ -139,23 +154,23 @@ export function PlanApprovalModal(props: PlanApprovalModalProps) {
<box
flexDirection="column"
borderColor={theme.colors.borderModal}
border={["top", "bottom", "left", "right"]}
border={["top"]}
backgroundColor={theme.colors.background}
paddingLeft={2}
paddingRight={2}
paddingTop={1}
paddingBottom={1}
width="100%"
>
{/* Header */}
<box marginBottom={1}>
<text fg={theme.colors.primary} attributes={TextAttributes.BOLD}>
CodeTyper has written up a plan and is ready to execute. Would you
like to proceed?
</text>
</box>
{/* Plan content (shrinkable to fit available space) */}
<Show when={props.prompt.planContent}>
<box marginBottom={1} flexDirection="column" flexShrink={1} overflow="hidden">
<text fg={theme.colors.text}>{props.prompt.planContent}</text>
</box>
</Show>
{/* Plan info */}
<Show when={props.prompt.planTitle}>
{/* Fallback: title + summary when no full content */}
<Show when={!props.prompt.planContent && props.prompt.planTitle}>
<box marginBottom={1}>
<text fg={theme.colors.text} attributes={TextAttributes.BOLD}>
{props.prompt.planTitle}
@@ -163,15 +178,22 @@ export function PlanApprovalModal(props: PlanApprovalModalProps) {
</box>
</Show>
<Show when={props.prompt.planSummary}>
<Show when={!props.prompt.planContent && props.prompt.planSummary}>
<box marginBottom={1}>
<text fg={theme.colors.textDim}>{props.prompt.planSummary}</text>
</box>
</Show>
{/* Approval prompt */}
<box marginBottom={1} flexShrink={0}>
<text fg={theme.colors.primary} attributes={TextAttributes.BOLD}>
Would you like to proceed with this plan?
</text>
</box>
{/* Options */}
<Show when={!feedbackMode()}>
<box flexDirection="column" marginTop={1}>
<box flexDirection="column" marginTop={1} flexShrink={0}>
<For each={PLAN_APPROVAL_OPTIONS}>
{(option, index) => {
const isSelected = () => index() === selectedIndex();
@@ -205,7 +227,7 @@ export function PlanApprovalModal(props: PlanApprovalModalProps) {
{/* Feedback input mode */}
<Show when={feedbackMode()}>
<box flexDirection="column" marginTop={1}>
<box flexDirection="column" marginTop={1} flexShrink={0}>
<text fg={theme.colors.text}>
Tell CodeTyper what to change:
</text>
@@ -228,7 +250,7 @@ export function PlanApprovalModal(props: PlanApprovalModalProps) {
{/* Footer */}
<Show when={!feedbackMode()}>
<box marginTop={1} flexDirection="row">
<box marginTop={1} flexDirection="row" flexShrink={0}>
<Show when={props.prompt.planFilePath}>
<text fg={theme.colors.textDim}>
{PLAN_APPROVAL_FOOTER_TEXT} - {props.prompt.planFilePath}

View File

@@ -0,0 +1,182 @@
/**
* Activity Panel
*
* Right sidebar showing session summary: context usage, modified files, etc.
* Inspired by OpenCode's clean summary view.
*/
import { For, Show, createMemo } from "solid-js";
import { TextAttributes } from "@opentui/core";
import { useTheme } from "@tui-solid/context/theme";
import { useAppStore } from "@tui-solid/context/app";
import {
TOKEN_WARNING_THRESHOLD,
TOKEN_CRITICAL_THRESHOLD,
} from "@constants/token";
/** Extract filename from a path without importing node:path */
const getFileName = (filePath: string): string => {
const parts = filePath.split("/");
return parts[parts.length - 1] ?? filePath;
};
const PANEL_WIDTH = 36;
const formatTokenCount = (tokens: number): string => {
if (tokens >= 1000) {
return `${(tokens / 1000).toFixed(1)}K`;
}
return tokens.toString();
};
const formatPercent = (value: number): string =>
`${Math.round(value)}%`;
export function ActivityPanel() {
const theme = useTheme();
const app = useAppStore();
const contextUsage = createMemo(() => {
const stats = app.sessionStats();
const totalTokens = stats.inputTokens + stats.outputTokens;
const maxTokens = stats.contextMaxTokens;
const percent = maxTokens > 0 ? (totalTokens / maxTokens) * 100 : 0;
let status: "normal" | "warning" | "critical" = "normal";
if (percent >= TOKEN_CRITICAL_THRESHOLD * 100) {
status = "critical";
} else if (percent >= TOKEN_WARNING_THRESHOLD * 100) {
status = "warning";
}
return { total: totalTokens, max: maxTokens, percent, status };
});
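The threshold logic in `contextUsage` reduces to a small pure function. A sketch (the threshold values here are assumptions; the real ones come from `@constants/token`):

```typescript
// Map token usage to a status band. Thresholds are fractions of the
// context window; the memo above compares against them as percentages.
const TOKEN_WARNING_THRESHOLD = 0.7; // assumed value
const TOKEN_CRITICAL_THRESHOLD = 0.9; // assumed value

type UsageStatus = "normal" | "warning" | "critical";

function usageStatus(totalTokens: number, maxTokens: number): UsageStatus {
  // Guard against division by zero before the model's limit is known.
  const percent = maxTokens > 0 ? (totalTokens / maxTokens) * 100 : 0;
  if (percent >= TOKEN_CRITICAL_THRESHOLD * 100) return "critical";
  if (percent >= TOKEN_WARNING_THRESHOLD * 100) return "warning";
  return "normal";
}
```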
const tokenColor = createMemo(() => {
const s = contextUsage().status;
if (s === "critical") return theme.colors.error;
if (s === "warning") return theme.colors.warning;
return theme.colors.textDim;
});
const modifiedFiles = createMemo(() => {
return [...app.modifiedFiles()].sort(
(a, b) => b.lastModified - a.lastModified,
);
});
const totalChanges = createMemo(() => {
const files = app.modifiedFiles();
return {
additions: files.reduce((sum, f) => sum + f.additions, 0),
deletions: files.reduce((sum, f) => sum + f.deletions, 0),
};
});
return (
<box
flexDirection="column"
width={PANEL_WIDTH}
border={["left"]}
borderColor={theme.colors.border}
paddingLeft={1}
paddingRight={1}
paddingTop={1}
flexShrink={0}
>
{/* Context Section */}
<box flexDirection="column" marginBottom={1}>
<text
fg={theme.colors.text}
attributes={TextAttributes.BOLD}
>
Context
</text>
<box flexDirection="row" marginTop={1}>
<text fg={tokenColor()}>
{formatTokenCount(contextUsage().total)}
</text>
<text fg={theme.colors.textDim}> / </text>
<text fg={theme.colors.textDim}>
{formatTokenCount(contextUsage().max)}
</text>
<text fg={theme.colors.textDim}> tokens</text>
</box>
<text fg={tokenColor()}>
{formatPercent(contextUsage().percent)} used
</text>
</box>
{/* Separator */}
<box marginBottom={1}>
<text fg={theme.colors.border}>
{"─".repeat(PANEL_WIDTH - 2)}
</text>
</box>
{/* Modified Files Section */}
<box flexDirection="column">
<box flexDirection="row" justifyContent="space-between">
<text
fg={theme.colors.text}
attributes={TextAttributes.BOLD}
>
Modified Files
</text>
<Show when={modifiedFiles().length > 0}>
<text fg={theme.colors.textDim}>
{modifiedFiles().length}
</text>
</Show>
</box>
<Show
when={modifiedFiles().length > 0}
fallback={
<text fg={theme.colors.textDim}>No files modified yet</text>
}
>
<box flexDirection="column" marginTop={1}>
<For each={modifiedFiles()}>
{(file) => (
<box flexDirection="row" marginBottom={0}>
<text fg={theme.colors.text}>
{getFileName(file.filePath)}
</text>
<text fg={theme.colors.textDim}> </text>
<Show when={file.additions > 0}>
<text fg={theme.colors.success}>
+{file.additions}
</text>
</Show>
<Show when={file.deletions > 0}>
<text fg={theme.colors.textDim}> </text>
<text fg={theme.colors.error}>
-{file.deletions}
</text>
</Show>
</box>
)}
</For>
</box>
{/* Totals */}
<box marginTop={1}>
<text fg={theme.colors.textDim}>Total: </text>
<Show when={totalChanges().additions > 0}>
<text fg={theme.colors.success}>
+{totalChanges().additions}
</text>
</Show>
<Show when={totalChanges().deletions > 0}>
<text fg={theme.colors.textDim}> </text>
<text fg={theme.colors.error}>
-{totalChanges().deletions}
</text>
</Show>
</box>
</Show>
</box>
</box>
);
}

View File

@@ -16,9 +16,12 @@ import type {
StreamingLogState,
SuggestionState,
MCPServerDisplay,
ModifiedFileEntry,
} from "@/types/tui";
import type { ProviderModel } from "@/types/providers";
import type { BrainConnectionStatus, BrainUser } from "@/types/brain";
import type { PastedImage } from "@/types/image";
import { stripMarkdown } from "@/utils/markdown/strip";
interface AppStore {
mode: AppMode;
@@ -49,6 +52,8 @@ interface AppStore {
suggestions: SuggestionState;
cascadeEnabled: boolean;
mcpServers: MCPServerDisplay[];
modifiedFiles: ModifiedFileEntry[];
pastedImages: PastedImage[];
brain: {
status: BrainConnectionStatus;
user: BrainUser | null;
@@ -95,6 +100,7 @@ interface AppContextValue {
suggestions: Accessor<SuggestionState>;
cascadeEnabled: Accessor<boolean>;
mcpServers: Accessor<MCPServerDisplay[]>;
modifiedFiles: Accessor<ModifiedFileEntry[]>;
brain: Accessor<{
status: BrainConnectionStatus;
user: BrainUser | null;
@@ -201,6 +207,16 @@ interface AppContextValue {
status: MCPServerDisplay["status"],
) => void;
// Modified file tracking
addModifiedFile: (entry: ModifiedFileEntry) => void;
clearModifiedFiles: () => void;
// Pasted image tracking
pastedImages: Accessor<PastedImage[]>;
addPastedImage: (image: PastedImage) => void;
clearPastedImages: () => void;
removePastedImage: (id: string) => void;
// Computed
isInputLocked: () => boolean;
}
@@ -268,6 +284,8 @@ export const { provider: AppStoreProvider, use: useAppStore } =
suggestions: createInitialSuggestionState(),
cascadeEnabled: true,
mcpServers: [],
modifiedFiles: [],
pastedImages: [],
brain: {
status: "disconnected" as BrainConnectionStatus,
user: null,
@@ -326,6 +344,7 @@ export const { provider: AppStoreProvider, use: useAppStore } =
const suggestions = (): SuggestionState => store.suggestions;
const cascadeEnabled = (): boolean => store.cascadeEnabled;
const mcpServers = (): MCPServerDisplay[] => store.mcpServers;
const modifiedFiles = (): ModifiedFileEntry[] => store.modifiedFiles;
const brain = () => store.brain;
// Mode actions
@@ -590,6 +609,52 @@ export const { provider: AppStoreProvider, use: useAppStore } =
);
};
// Modified file tracking
const addModifiedFile = (entry: ModifiedFileEntry): void => {
setStore(
produce((s) => {
const existing = s.modifiedFiles.find(
(f) => f.filePath === entry.filePath,
);
if (existing) {
existing.additions += entry.additions;
existing.deletions += entry.deletions;
existing.lastModified = entry.lastModified;
} else {
s.modifiedFiles.push({ ...entry });
}
}),
);
};
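The merge behavior of `addModifiedFile` — repeated edits to the same path accumulate into one entry rather than duplicating — can be sketched without the Solid store wrapper:

```typescript
// Per-file aggregation: one entry per path, counters accumulate.
interface ModifiedFileEntry {
  filePath: string;
  additions: number;
  deletions: number;
  lastModified: number;
}

function addModifiedFile(
  files: ModifiedFileEntry[],
  entry: ModifiedFileEntry,
): void {
  const existing = files.find((f) => f.filePath === entry.filePath);
  if (existing) {
    // Same file edited again: add to the running totals and bump the
    // timestamp so the panel can sort by recency.
    existing.additions += entry.additions;
    existing.deletions += entry.deletions;
    existing.lastModified = entry.lastModified;
  } else {
    files.push({ ...entry });
  }
}
```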
const clearModifiedFiles = (): void => {
setStore("modifiedFiles", []);
};
// Pasted image tracking
const addPastedImage = (image: PastedImage): void => {
setStore(
produce((s) => {
s.pastedImages.push({ ...image });
}),
);
};
const clearPastedImages = (): void => {
setStore("pastedImages", []);
};
const removePastedImage = (id: string): void => {
setStore(
produce((s) => {
const idx = s.pastedImages.findIndex((img) => img.id === id);
if (idx !== -1) {
s.pastedImages.splice(idx, 1);
}
}),
);
};
// Session stats actions
const startThinking = (): void => {
setStore("sessionStats", {
@@ -703,6 +768,12 @@ export const { provider: AppStoreProvider, use: useAppStore } =
const logIndex = store.logs.findIndex((l) => l.id === logId);
batch(() => {
// Strip markdown from the fully accumulated content before finalizing
if (logIndex !== -1) {
const rawContent = store.logs[logIndex].content;
setStore("logs", logIndex, "content", stripMarkdown(rawContent));
}
setStore("streamingLog", createInitialStreamingState());
if (logIndex !== -1) {
const currentMetadata = store.logs[logIndex].metadata ?? {};
@@ -829,6 +900,7 @@ export const { provider: AppStoreProvider, use: useAppStore } =
suggestions,
cascadeEnabled,
mcpServers,
modifiedFiles,
brain,
// Mode actions
@@ -899,6 +971,16 @@ export const { provider: AppStoreProvider, use: useAppStore } =
addMcpServer,
updateMcpServerStatus,
// Modified file tracking
addModifiedFile,
clearModifiedFiles,
// Pasted image tracking
pastedImages: () => store.pastedImages,
addPastedImage,
clearPastedImages,
removePastedImage,
// Session stats actions
startThinking,
stopThinking,
@@ -968,6 +1050,7 @@ const defaultAppState = {
streamingLog: createInitialStreamingState(),
suggestions: createInitialSuggestionState(),
mcpServers: [] as MCPServerDisplay[],
pastedImages: [] as PastedImage[],
brain: {
status: "disconnected" as BrainConnectionStatus,
user: null,
@@ -1009,6 +1092,7 @@ export const appStore = {
streamingLog: storeRef.streamingLog(),
suggestions: storeRef.suggestions(),
mcpServers: storeRef.mcpServers(),
pastedImages: storeRef.pastedImages(),
brain: storeRef.brain(),
};
},
@@ -1240,4 +1324,34 @@ export const appStore = {
if (!storeRef) return;
storeRef.updateMcpServerStatus(id, status);
},
addModifiedFile: (entry: ModifiedFileEntry): void => {
if (!storeRef) return;
storeRef.addModifiedFile(entry);
},
clearModifiedFiles: (): void => {
if (!storeRef) return;
storeRef.clearModifiedFiles();
},
addPastedImage: (image: PastedImage): void => {
if (!storeRef) return;
storeRef.addPastedImage(image);
},
clearPastedImages: (): void => {
if (!storeRef) return;
storeRef.clearPastedImages();
},
removePastedImage: (id: string): void => {
if (!storeRef) return;
storeRef.removePastedImage(id);
},
getPastedImages: (): PastedImage[] => {
if (!storeRef) return [];
return storeRef.pastedImages();
},
};

View File

@@ -29,6 +29,7 @@ import { HelpDetail } from "@tui-solid/components/panels/help-detail";
import { TodoPanel } from "@tui-solid/components/panels/todo-panel";
import { CenteredModal } from "@tui-solid/components/modals/centered-modal";
import { DebugLogPanel } from "@tui-solid/components/logs/debug-log-panel";
import { ActivityPanel } from "@tui-solid/components/panels/activity-panel";
import { BrainMenu } from "@tui-solid/components/menu/brain-menu";
import { BRAIN_DISABLED } from "@constants/brain";
import { initializeMCP, getServerInstances } from "@services/mcp/manager";
@@ -281,6 +282,8 @@ export function Session(props: SessionProps) {
<LogPanel />
</box>
<ActivityPanel />
<Show when={app.todosVisible() && props.plan}>
<TodoPanel plan={props.plan ?? null} visible={app.todosVisible()} />
</Show>

View File

@@ -58,6 +58,8 @@ export interface DiffResult {
export interface ToolCallInfo {
name: string;
path?: string;
/** Additional paths for multi-file tools (e.g. multi_edit) */
paths?: string[];
}
export interface ProviderDisplayInfo {

View File

@@ -21,6 +21,8 @@ export interface MCPRegistryServer {
package: string;
command: string;
args: string[];
/** Server URL for http / sse transport */
url?: string;
category: MCPServerCategory;
tags: string[];
transport: MCPRegistryTransport;

View File

@@ -1,16 +1,41 @@
export type MCPTransportType = "stdio" | "sse" | "http";
/**
* MCP server configuration as stored in mcp.json.
*
* For stdio servers: { command, args?, env?, type?: "stdio" }
* For http/sse servers: { url, type: "http" | "sse" }
*
* The `name` field is injected at runtime from the config key — it is
* NOT persisted inside the server object.
*/
export interface MCPServerConfig {
name: string;
command: string;
/** Runtime-only: injected from the config key, never written to disk */
name?: string;
/** Transport type (defaults to "stdio" when absent) */
type?: MCPTransportType;
/** Command to spawn (stdio transport) */
command?: string;
/** Arguments for the command (stdio transport) */
args?: string[];
/** Extra environment variables (stdio transport) */
env?: Record<string, string>;
transport?: MCPTransportType;
/** Server URL (http / sse transport) */
url?: string;
/** Whether this server is enabled */
enabled?: boolean;
}
export type MCPTransportType = "stdio" | "sse" | "http";
export interface MCPConfig {
/**
* Reserved for MCP client runtime input wiring.
* Keep for compatibility with MCP config schema.
*/
inputs: unknown[];
/**
* Map of server name to server config.
*/
servers: Record<string, MCPServerConfig>;
}
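The two config shapes described in the doc comment, plus the runtime name injection, can be sketched as follows (types re-declared inline to keep the example self-contained; server names and URLs are illustrative):

```typescript
// The two documented mcp.json shapes: stdio (command) and http/sse (url).
type MCPTransportType = "stdio" | "sse" | "http";

interface MCPServerConfig {
  name?: string; // runtime-only, injected from the config key
  type?: MCPTransportType; // defaults to "stdio" when absent
  command?: string;
  args?: string[];
  env?: Record<string, string>;
  url?: string;
  enabled?: boolean;
}

interface MCPConfig {
  inputs: unknown[];
  servers: Record<string, MCPServerConfig>;
}

const config: MCPConfig = {
  inputs: [],
  servers: {
    // stdio server: spawned as a subprocess, no explicit type needed
    filesystem: {
      command: "npx",
      args: ["-y", "@modelcontextprotocol/server-filesystem", "/path"],
    },
    // http server: reached over a URL
    figma: { type: "http", url: "https://mcp.figma.com/mcp" },
  },
};

// At load time the key becomes the `name` field, as the doc comment
// describes; it is never written back into mcp.json.
const withNames = Object.entries(config.servers).map(([name, s]) => ({
  ...s,
  name,
}));
```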
@@ -65,7 +90,12 @@ export interface MCPManagerState {
export interface MCPAddFormData {
name: string;
command: string;
args: string;
type: MCPTransportType;
/** Command (stdio) */
command?: string;
/** Arguments string (stdio) */
args?: string;
/** Server URL (http / sse) */
url?: string;
isGlobal: boolean;
}

View File

@@ -29,6 +29,8 @@ export interface OllamaMessage {
role: string;
content: string;
tool_calls?: OllamaToolCall[];
/** Base64-encoded images for multimodal models (e.g. llava) */
images?: string[];
}
export interface OllamaChatOptions {

View File

@@ -4,13 +4,45 @@
export type ProviderName = "copilot" | "ollama";
/** A text content part in a multimodal message */
export interface TextContentPart {
type: "text";
text: string;
}
/** An image content part in a multimodal message (OpenAI-compatible) */
export interface ImageContentPart {
type: "image_url";
image_url: {
url: string; // data:image/png;base64,... or a URL
detail?: "auto" | "low" | "high";
};
}
/** A single part of multimodal message content */
export type ContentPart = TextContentPart | ImageContentPart;
/** Message content can be a simple string or an array of content parts */
export type MessageContent = string | ContentPart[];
export interface Message {
role: "system" | "user" | "assistant" | "tool";
content: string;
content: MessageContent;
tool_call_id?: string;
tool_calls?: ToolCall[];
}
/**
* Helper: extract plain text from message content regardless of format
*/
export const getMessageText = (content: MessageContent): string => {
if (typeof content === "string") return content;
return content
.filter((p): p is TextContentPart => p.type === "text")
.map((p) => p.text)
.join("");
};
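`getMessageText` handles both content shapes — a plain string and a multimodal part array. A standalone check (helper and types reproduced here so the example runs on its own):

```typescript
// Reduce either content shape to its concatenated text parts.
type TextContentPart = { type: "text"; text: string };
type ImageContentPart = {
  type: "image_url";
  image_url: { url: string; detail?: "auto" | "low" | "high" };
};
type ContentPart = TextContentPart | ImageContentPart;
type MessageContent = string | ContentPart[];

const getMessageText = (content: MessageContent): string => {
  if (typeof content === "string") return content;
  return content
    .filter((p): p is TextContentPart => p.type === "text")
    .map((p) => p.text)
    .join("");
};

// A plain-string message passes through; a multimodal message drops
// image parts and joins the remaining text.
const plain = getMessageText("hello");
const mixed = getMessageText([
  { type: "text", text: "describe " },
  { type: "image_url", image_url: { url: "data:image/png;base64,..." } },
  { type: "text", text: "this image" },
]);
```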
export interface ToolCall {
id: string;
type: "function";
@@ -53,10 +85,14 @@ export interface ChatCompletionResponse {
}
export interface StreamChunk {
type: "content" | "tool_call" | "done" | "error" | "model_switched";
type: "content" | "tool_call" | "done" | "error" | "model_switched" | "usage";
content?: string;
toolCall?: Partial<ToolCall>;
error?: string;
usage?: {
promptTokens: number;
completionTokens: number;
};
modelSwitch?: {
from: string;
to: string;

View File

@@ -3,6 +3,8 @@
*
* Types for the progressive disclosure skill system.
* Skills are loaded in 3 levels: metadata → body → external references.
* Supports built-in skills, project skills, and external agents
* from .claude/, .github/, .codetyper/ directories.
*/
/**
@@ -19,6 +21,17 @@ export type SkillTriggerType =
| "auto" // Automatically triggered based on context
| "explicit"; // Only when explicitly invoked
/**
* Source of a skill definition
*/
export type SkillSource =
| "builtin" // Ships with codetyper (src/skills/)
| "user" // User-level (~/.config/codetyper/skills/)
| "project" // Project-level (.codetyper/skills/)
| "external-claude" // External from .claude/
| "external-github" // External from .github/
| "external-codetyper"; // External from .codetyper/ root agents
/**
* Example for a skill
*/
@@ -41,6 +54,7 @@ export interface SkillMetadata {
autoTrigger: boolean;
requiredTools: string[];
tags?: string[];
source?: SkillSource;
}
/**
@@ -71,6 +85,18 @@ export interface SkillMatch {
matchType: SkillTriggerType;
}
/**
* Auto-detected skill match (from prompt analysis)
*/
export interface AutoDetectedSkill {
skill: SkillDefinition;
confidence: number;
/** Keywords in the prompt that triggered detection */
matchedKeywords: string[];
/** Category of the match (e.g., "language", "tool", "domain") */
category: string;
}
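A small sketch of how an `AutoDetectedSkill`-style match might be produced from prompt keywords. The scoring scheme (fraction of keywords hit) is an assumption for illustration, not the detector the diff ships.

```typescript
// Sketch: keyword-based auto-detection producing confidence plus the
// matched keywords, mirroring the AutoDetectedSkill fields above.
// The confidence formula here is an assumption, not the real detector.
const detect = (
  prompt: string,
  keywords: string[]
): { confidence: number; matchedKeywords: string[] } => {
  const lower = prompt.toLowerCase();
  const matchedKeywords = keywords.filter((k) => lower.includes(k.toLowerCase()));
  return { confidence: matchedKeywords.length / keywords.length, matchedKeywords };
};
```

Keeping `matchedKeywords` alongside the score makes the match explainable: the UI can show *why* a skill was auto-triggered, not just that it was.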
/**
* Skill execution context
*/
@@ -97,6 +123,7 @@ export interface SkillExecutionResult {
*/
export interface SkillRegistryState {
skills: Map<string, SkillDefinition>;
externalAgents: Map<string, SkillDefinition>;
lastLoadedAt: number | null;
loadErrors: string[];
}
@@ -125,3 +152,35 @@ export interface ParsedSkillFile {
examples?: SkillExample[];
filePath: string;
}
/**
* External agent file (from .claude/, .github/, .codetyper/)
*/
export interface ExternalAgentFile {
/** Relative path within the source directory */
relativePath: string;
/** Absolute path to the file */
absolutePath: string;
/** Source directory type */
source: SkillSource;
/** Raw file content */
content: string;
}
/**
* Parsed external agent definition
*/
export interface ParsedExternalAgent {
/** Derived ID from filename/path */
id: string;
/** Description from frontmatter */
description: string;
/** Tools specified in frontmatter */
tools: string[];
/** Body content (instructions) */
body: string;
/** Source of this agent */
source: SkillSource;
/** Original file path */
filePath: string;
}
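One way to derive a `ParsedExternalAgent` from raw file content. The YAML-ish frontmatter layout (`description:` and comma-separated `tools:` keys between `---` fences) is an assumption about how `.claude/`-style agent files look, not something this diff confirms.

```typescript
// Sketch: splitting an external agent file into frontmatter + body.
// The frontmatter key names (description, tools) are assumptions.
const parseAgent = (
  content: string,
  id: string
): { id: string; description: string; tools: string[]; body: string } => {
  const m = content.match(/^---\n([\s\S]*?)\n---\n?([\s\S]*)$/);
  if (!m) return { id, description: "", tools: [], body: content };
  const [, front, body] = m;
  const get = (key: string): string =>
    front.match(new RegExp(`^${key}:\\s*(.*)$`, "m"))?.[1]?.trim() ?? "";
  return {
    id,
    description: get("description"),
    tools: get("tools").split(",").map((t) => t.trim()).filter(Boolean),
    body: body.trim(),
  };
};
```

Files without a frontmatter block degrade gracefully: the whole content becomes the instruction body with empty metadata.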


@@ -65,6 +65,7 @@ export interface StreamCallbacks {
onToolCallStart: (toolCall: PartialToolCall) => void;
onToolCallComplete: (toolCall: ToolCall) => void;
onModelSwitch: (info: ModelSwitchInfo) => void;
onUsage: (usage: { promptTokens: number; completionTokens: number }) => void;
onComplete: () => void;
onError: (error: string) => void;
}
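A sketch of how a stream consumer might route the new `usage` chunk to `onUsage`. The dispatch shape is an assumption; only the chunk and callback field names come from the diff.

```typescript
// Sketch: routing StreamChunk variants to the matching callback,
// including the new "usage" chunk type. Simplified to the fields
// this hunk touches; the real consumer handles tool calls too.
type Usage = { promptTokens: number; completionTokens: number };

interface StreamChunk {
  type: "content" | "tool_call" | "done" | "error" | "model_switched" | "usage";
  content?: string;
  error?: string;
  usage?: Usage;
}

const dispatch = (
  chunk: StreamChunk,
  cb: {
    onContent: (s: string) => void;
    onUsage: (u: Usage) => void;
    onComplete: () => void;
    onError: (e: string) => void;
  }
): void => {
  switch (chunk.type) {
    case "content":
      if (chunk.content) cb.onContent(chunk.content);
      break;
    case "usage":
      if (chunk.usage) cb.onUsage(chunk.usage);
      break;
    case "done":
      cb.onComplete();
      break;
    case "error":
      cb.onError(chunk.error ?? "unknown error");
      break;
  }
};
```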


@@ -93,4 +93,5 @@ export type ThemeName =
| "kanagawa"
| "ayu-dark"
| "cargdev-cyberpunk"
| "pink-purple"
| "custom";


@@ -212,6 +212,7 @@ export interface PlanApprovalPrompt {
id: string;
planTitle: string;
planSummary: string;
planContent?: string;
planFilePath?: string;
resolve: (response: PlanApprovalPromptResponse) => void;
}
@@ -255,6 +256,21 @@ export interface SessionStats {
contextMaxTokens: number;
}
// ============================================================================
// Modified File Tracking
// ============================================================================
export interface ModifiedFileEntry {
/** Relative or absolute file path */
filePath: string;
/** Net lines added */
additions: number;
/** Net lines deleted */
deletions: number;
/** Timestamp of the last modification */
lastModified: number;
}
// ============================================================================
// Streaming Types
// ============================================================================


@@ -0,0 +1,69 @@
/**
* Markdown → plain text transform for TUI rendering.
*
* Not a full parser — strips the common constructs so the output
* reads cleanly in a terminal that does not render rich markdown.
*/
/**
* Strip markdown syntax from complete text, keeping readable content.
*
* Rules applied (in order):
* - Fenced code blocks ```lang\nCODE\n``` → CODE
* - Inline code `x` → x
* - Images ![alt](url) → alt
* - Links [label](url) → label (url)
* - Bold / italic **x** *x* __x__ _x_ → x
* - Strikethrough ~~x~~ → x
* - Headings ### Heading → Heading
* - Blockquotes > text → text
* - Horizontal rules --- *** ___ → (removed)
* - Unordered lists - item / * item → • item
* - Ordered lists 1. item → • item
*/
export const stripMarkdown = (text: string): string => {
if (!text) return text;
let result = text;
// 1. Fenced code blocks — keep inner code, drop the fences + optional lang
result = result.replace(/```[^\n]*\n([\s\S]*?)```/g, "$1");
// 2. Inline code — keep content
result = result.replace(/`([^`]+)`/g, "$1");
// 3. Images — keep alt text
result = result.replace(/!\[([^\]]*)\]\([^)]+\)/g, "$1");
// 4. Links — keep label, append URL in parens
result = result.replace(/\[([^\]]+)\]\(([^)]+)\)/g, "$1 ($2)");
// 5. Bold + italic combos first, then bold, then italic
// Order matters: longest delimiters first to avoid partial matches.
result = result.replace(/\*\*\*(.+?)\*\*\*/g, "$1");
result = result.replace(/___(.+?)___/g, "$1");
result = result.replace(/\*\*(.+?)\*\*/g, "$1");
result = result.replace(/__(.+?)__/g, "$1");
result = result.replace(/\*(.+?)\*/g, "$1");
result = result.replace(/_(.+?)_/g, "$1");
// 6. Strikethrough
result = result.replace(/~~(.+?)~~/g, "$1");
// 7. Headings — strip leading #'s and one optional space
  // 7. Headings — strip leading #'s and the whitespace that follows them
  result = result.replace(/^#{1,6}\s+/gm, "");
// 8. Blockquotes — strip leading > (possibly nested)
result = result.replace(/^(>\s*)+/gm, "");
// 9. Horizontal rules (standalone lines of ---, ***, ___)
result = result.replace(/^[\s]*([-*_]){3,}\s*$/gm, "");
// 10. Unordered list markers → bullet
result = result.replace(/^(\s*)[-*+]\s+/gm, "$1• ");
// 11. Ordered list markers → bullet
result = result.replace(/^(\s*)\d+\.\s+/gm, "$1• ");
return result;
};
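The "longest delimiters first" ordering in step 5 can be isolated into a tiny sketch to show why it matters: running the two-star rule before the three-star rule would leave stray asterisks behind.

```typescript
// Sketch: the emphasis-stripping subset of stripMarkdown, applied in
// the same order as above (***, then **, then *). Reversing the first
// two rules would partially match ***x*** and leave stray asterisks.
const stripEmphasis = (s: string): string =>
  s
    .replace(/\*\*\*(.+?)\*\*\*/g, "$1")
    .replace(/\*\*(.+?)\*\*/g, "$1")
    .replace(/\*(.+?)\*/g, "$1");

stripEmphasis("***both*** **bold** *italic*");
// → "both bold italic"
```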