Merge branch 'main' of https://gitea.haugesenspil.dk/jonas/dotfiles

.gitignore (vendored) +3
@@ -5,3 +5,6 @@ pi/.pi/agent/sessions
 pi/.pi/agent/auth.json
 pi/.pi/agent/settings.json
 pi/.pi/agent/usage-cache.json
+pi/.pi/agent/mcp-cache.json
+pi/.pi/agent/auth.json.current
+pi/.pi/agent/run-history.jsonl
@@ -9,18 +9,40 @@ defaultProgress: true
 
 You are an explorer. Thoroughly investigate a codebase or knowledge base and return structured, actionable findings.
 
-Prefer semantic tools first:
-1. Use qmd_query / qmd_get / qmd_multi_get for semantic and hybrid search of indexed docs and code
-2. Use opty MCP tools for HDC-indexed context retrieval
-3. Fall back to bash (grep/find) only when semantic tools don't surface what you need
-4. Read key sections of files — not entire files unless necessary
+## Available MCP Tools
 
-Thoroughness (infer from task, default thorough):
+### opty — Semantic code search (source files)
+
+- **opty_query** — Semantic search via Hyperdimensional Computing. Finds functions, types, imports by meaning. Returns TOON-format (token-optimized) results.
+  `opty_query({ query: "camera projection" })` — find related code
+  `opty_query({ query: "error handling", top_k: 10 })` — limit results
+- **opty_ast** — Depth-aware AST extraction. Returns functions, types, imports, fields, variants with nesting depth and line numbers. Essential for understanding structure before diving into code.
+  - Whole project: `opty_ast({})`
+  - Single file: `opty_ast({ file: "src/main.rs" })`
+  - Multiple files: `opty_ast({ file: ["src/world.rs", "src/level.rs"] })`
+  - By glob: `opty_ast({ pattern: "src/editor/**/*.rs" })`
+- **opty_reindex** — Force re-index after major file changes.
+- **opty_status** — Check index health (file count, code units, memory).
+
+### qmd — Markdown/doc search (indexed collections)
+
+- **qmd_query** — Hybrid search with typed sub-queries (lex/vec/hyde). Use `collections` to filter.
+  `qmd_query({ searches: [{ type: "vec", query: "how does rendering work" }] })`
+- **qmd_get** — Retrieve a document by path or docid from search results.
+- **qmd_multi_get** — Batch fetch by glob or comma-separated paths.
+- **qmd_status** — Check index health, list collections.
+
+## Workflow
+
+1. **Orientation:** Run `qmd_status` to discover collections, `opty_status` to check index health
+2. **Structure:** Use `opty_ast` (project-wide or by pattern) to map the codebase
+3. **Semantic search:** Use `opty_query` for code, `qmd_query` for docs/markdown
+4. **Read specifics:** Use `read` for exact file sections once you have line numbers from AST/search
+5. **Fallback:** Use `bash` (grep/find) only when semantic tools don't surface what you need
+
+## Thoroughness (infer from task, default thorough)
 - Quick: targeted lookups, answer from search results alone
 - Medium: follow the most important cross-references, read critical sections
 - Thorough: trace all dependencies, check related files, synthesize a full picture
 
-Your output format (context.md):
+## Output format (context.md)
 
 # Exploration Context
 
@@ -29,7 +51,7 @@ What was explored and why.
 
 ## Files & Docs Retrieved
 List with exact line ranges or doc IDs:
-1. `path/to/file.ts` (lines 10-50) — Description
+1. `path/to/file` (lines 10-50) — Description
 2. `#docid` — Description
 
 ## Key Findings
@@ -1 +0,0 @@
-work
@@ -6,7 +6,7 @@
  * have complex union-type parameters represented as `{"description": "..."}` with
  * no `type`, which causes llama-server to return a 400 error.
  *
- * This extension starts a tiny local HTTP proxy on port 8081 that:
+ * This extension provides an optional tiny local HTTP proxy on port 8081 that:
  * 1. Intercepts outgoing OpenAI-compatible API calls
  * 2. Walks tool schemas and adds `"type": "string"` to any schema node
  *    that is missing a type declaration
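The schema-walking step described in the header comment above can be sketched as a pure function. This is a hedged sketch, not the extension's actual implementation: the name `addMissingTypes` and the heuristic of keying on a `description` field (so plain container objects like `properties` maps are left alone) are assumptions drawn from the comment, not from the proxy's source.

```typescript
type SchemaNode = { [key: string]: unknown };

// Recursively walk a JSON Schema and default untyped leaf nodes to "string".
// Assumption: a node that documents itself (has "description") but declares
// no "type" is the shape llama-server rejects with a 400.
function addMissingTypes(node: unknown): unknown {
  if (Array.isArray(node)) return node.map(addMissingTypes);
  if (node === null || typeof node !== "object") return node;
  const obj: SchemaNode = { ...(node as SchemaNode) };
  if ("description" in obj && !("type" in obj)) obj.type = "string";
  for (const key of Object.keys(obj)) obj[key] = addMissingTypes(obj[key]);
  return obj;
}
```

The walk copies each object before mutating it, so the original request body is left untouched.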
@@ -15,10 +15,13 @@
  *
  * It also overrides the `llama-cpp` provider's baseUrl to point at the proxy,
  * so no changes to models.json are needed (beyond what's already there).
+ *
+ * Use `/llama-proxy` command to toggle the proxy on/off. Off by default.
  */
 
 import type { ExtensionAPI } from "@mariozechner/pi-coding-agent";
 import * as http from "http";
+import { execSync } from "child_process";
 
 const PROXY_PORT = 8081;
 const TARGET_HOST = "127.0.0.1";
@@ -97,6 +100,33 @@ function sanitizeRequestBody(body: Record<string, unknown>): Record<string, unkn
   };
 }
 
+// ---------------------------------------------------------------------------
+// Process management
+// ---------------------------------------------------------------------------
+
+/**
+ * Kill any existing processes using the proxy port.
+ */
+function killExistingProxy(): void {
+  try {
+    // Use lsof to find processes on the port and kill them
+    const output = execSync(`lsof -ti:${PROXY_PORT} 2>/dev/null || true`, {
+      encoding: "utf-8",
+    });
+    const pids = output.trim().split("\n").filter(Boolean);
+    for (const pid of pids) {
+      try {
+        process.kill(Number(pid), "SIGTERM");
+        console.log(`[llama-proxy] Terminated old instance (PID: ${pid})`);
+      } catch {
+        // Process may have already exited
+      }
+    }
+  } catch {
+    // lsof not available or other error — continue anyway
+  }
+}
+
 // ---------------------------------------------------------------------------
 // Proxy server
 // ---------------------------------------------------------------------------
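The `killExistingProxy` addition shells out to `lsof -ti:<port>`, which prints one PID per line. The parsing step it relies on can be isolated as a pure function for testing; `parsePids` is an illustrative name, not a helper from the extension itself.

```typescript
// Parse `lsof -t` output into a list of PIDs. Blank lines, stray whitespace,
// and anything that is not a positive integer are discarded, matching the
// trim/split/filter chain in killExistingProxy.
function parsePids(lsofOutput: string): number[] {
  return lsofOutput
    .trim()
    .split("\n")
    .filter(Boolean)
    .map(Number)
    .filter((pid) => Number.isInteger(pid) && pid > 0);
}
```

Because `"".trim().split("\n")` yields `[""]`, the `filter(Boolean)` step is what makes empty `lsof` output produce an empty PID list rather than a spurious `NaN`.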
@@ -165,15 +195,16 @@ function startProxy(): http.Server {
   });
 
   server.listen(PROXY_PORT, "127.0.0.1", () => {
-    // Server is up
+    console.log(`[llama-proxy] Proxy started on port ${PROXY_PORT}`);
   });
 
   server.on("error", (err: NodeJS.ErrnoException) => {
     if (err.code === "EADDRINUSE") {
-      console.warn(
-        `[llama-proxy] Port ${PROXY_PORT} already in use — proxy not started. ` +
-          `If a previous pi session left it running, kill it and reload.`,
-      );
+      console.error(
+        `[llama-proxy] Port ${PROXY_PORT} already in use. ` +
+          `Killing old instances and retrying...`,
+      );
+      killExistingProxy();
     } else {
       console.error("[llama-proxy] Server error:", err);
     }
@@ -187,7 +218,20 @@ function startProxy(): http.Server {
 // ---------------------------------------------------------------------------
 
 export default function (pi: ExtensionAPI) {
-  const server = startProxy();
+  let server: http.Server | null = null;
+  let proxyEnabled = false;
+
+  /**
+   * Start the proxy and register the provider override.
+   */
+  function enableProxy(): void {
+    if (proxyEnabled) {
+      console.log("[llama-proxy] Proxy already enabled");
+      return;
+    }
+
+    killExistingProxy();
+    server = startProxy();
 
   // Override the llama-cpp provider's baseUrl to route through our proxy.
   // models.json model definitions are preserved; only the endpoint changes.
@@ -195,7 +239,55 @@ export default function (pi: ExtensionAPI) {
     baseUrl: `http://127.0.0.1:${PROXY_PORT}/v1`,
   });
 
-  pi.on("session_end", async () => {
+    proxyEnabled = true;
+    console.log("[llama-proxy] Proxy enabled");
+  }
+
+  /**
+   * Disable the proxy and restore default provider.
+   */
+  function disableProxy(): void {
+    if (!proxyEnabled) {
+      console.log("[llama-proxy] Proxy already disabled");
+      return;
+    }
+
+    if (server) {
       server.close();
+      server = null;
+    }
+
+    // Reset provider to default (no baseUrl override)
+    pi.registerProvider("llama-cpp", {});
+
+    proxyEnabled = false;
+    console.log("[llama-proxy] Proxy disabled");
+  }
+
+  // Register the /llama-proxy command to toggle the proxy
+  pi.registerCommand("llama-proxy", async (args) => {
+    const action = args[0]?.toLowerCase() || "";
+
+    if (action === "on") {
+      enableProxy();
+    } else if (action === "off") {
+      disableProxy();
+    } else if (action === "status") {
+      console.log(`[llama-proxy] Status: ${proxyEnabled ? "enabled" : "disabled"}`);
+    } else {
+      // Toggle if no argument
+      if (proxyEnabled) {
+        disableProxy();
+      } else {
+        enableProxy();
+      }
+    }
+  });
+
+  // Clean up on session end
+  pi.on("session_end", async () => {
+    if (server) {
+      server.close();
+    }
   });
 }
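The `/llama-proxy` command handler in this hunk reduces to a small pure decision function: explicit `on`/`off`/`status` arguments win, and a bare invocation toggles the current state. A sketch for clarity (the function and type names are illustrative, not part of the extension):

```typescript
type ProxyAction = "enable" | "disable" | "status";

// Map a command argument plus the current enabled flag to the action taken.
// Mirrors the if/else chain in the registered command handler.
function resolveAction(arg: string | undefined, enabled: boolean): ProxyAction {
  const action = arg?.toLowerCase() ?? "";
  if (action === "on") return "enable";
  if (action === "off") return "disable";
  if (action === "status") return "status";
  return enabled ? "disable" : "enable"; // bare /llama-proxy toggles
}
```

Separating the decision from the side effects (`enableProxy`/`disableProxy`) is what makes the toggle behaviour easy to test without starting a real HTTP server.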
@@ -20,6 +20,7 @@ import {
   clampPercent,
   colorForPercent,
   detectProvider,
+  ensureFreshAuthForProviders,
   fetchAllUsages,
   fetchClaudeUsage,
   fetchCodexUsage,
@@ -30,6 +31,7 @@ import {
   readUsageCache,
   resolveUsageEndpoints,
   writeUsageCache,
+  type OAuthProviderId,
   type ProviderKey,
   type UsageByProvider,
   type UsageData,
@@ -443,34 +445,60 @@ export default function (pi: ExtensionAPI) {
     return;
   }
 
-  // --- Actually hit the API ---
-  // Skip independent token refresh — pi manages OAuth tokens and refreshes
-  // them in memory. A parallel refresh here would cause token rotation
-  // conflicts (Anthropic invalidates the old refresh token on use).
+  // --- Proactive token refresh ---
+  // Before hitting the API, check whether the stored access token is expired.
+  // This is the main cause of HTTP 401 errors: switching accounts via
+  // /switch-claude restores a profile whose access token has since expired
+  // (the refresh token is still valid). We use pi's own OAuth resolver so
+  // the new tokens are written back to auth.json and the profile stays in
+  // sync. This is safe at turn_start / session_start because pi hasn't made
+  // any Claude API calls yet, so there's no parallel refresh to conflict with.
+  const oauthId = providerToOAuthProviderId(active);
+  let effectiveAuth = auth;
+  if (oauthId && active !== "zai") {
+    const creds = auth[oauthId as keyof typeof auth] as
+      | { access?: string; refresh?: string; expires?: number }
+      | undefined;
+    const expires = typeof creds?.expires === "number" ? creds.expires : 0;
+    const tokenExpiredOrMissing =
+      !creds?.access || (expires > 0 && Date.now() + 60_000 >= expires);
+    if (tokenExpiredOrMissing && creds?.refresh) {
+      try {
+        const refreshed = await ensureFreshAuthForProviders([oauthId as OAuthProviderId], {
+          auth,
+          persist: true,
+        });
+        if (refreshed.auth) effectiveAuth = refreshed.auth;
+      } catch {
+        // Ignore refresh errors — fall through with existing auth
+      }
+    }
+  }
 
   let result: UsageData;
 
   if (active === "codex") {
-    const access = auth["openai-codex"]?.access;
+    const access = effectiveAuth["openai-codex"]?.access;
     result = access
       ? await fetchCodexUsage(access)
       : { session: 0, weekly: 0, error: "missing access token (try /login again)" };
   } else if (active === "claude") {
-    const access = auth.anthropic?.access;
+    const access = effectiveAuth.anthropic?.access;
     result = access
       ? await fetchClaudeUsage(access)
       : { session: 0, weekly: 0, error: "missing access token (try /login again)" };
   } else if (active === "zai") {
-    const token = auth.zai?.access || auth.zai?.key;
+    const token = effectiveAuth.zai?.access || effectiveAuth.zai?.key;
     result = token
       ? await fetchZaiUsage(token, { endpoints })
       : { session: 0, weekly: 0, error: "missing token (try /login again)" };
   } else if (active === "gemini") {
-    const creds = auth["google-gemini-cli"];
+    const creds = effectiveAuth["google-gemini-cli"];
     result = creds?.access
       ? await fetchGoogleUsage(creds.access, endpoints.gemini, creds.projectId, "gemini", { endpoints })
       : { session: 0, weekly: 0, error: "missing access token (try /login again)" };
   } else {
-    const creds = auth["google-antigravity"];
+    const creds = effectiveAuth["google-antigravity"];
     result = creds?.access
       ? await fetchGoogleUsage(creds.access, endpoints.antigravity, creds.projectId, "antigravity", { endpoints })
       : { session: 0, weekly: 0, error: "missing access token (try /login again)" };
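The expiry predicate this extension computes inline (`!creds?.access || (expires > 0 && Date.now() + 60_000 >= expires)`) can be lifted into a standalone function so the 60-second safety margin is explicit and testable. This is a restatement of the inline check, not a helper the extension actually exports:

```typescript
// Shape mirrors the inline cast in the extension.
interface OAuthCreds {
  access?: string;
  refresh?: string;
  expires?: number;
}

// A token needs refreshing when it is absent, or when a known expiry time
// is less than one minute away (so it can't expire mid-request).
// expires <= 0 means "no expiry recorded" and skips the time check.
function tokenExpiredOrMissing(creds: OAuthCreds | undefined, now: number): boolean {
  const expires = typeof creds?.expires === "number" ? creds.expires : 0;
  return !creds?.access || (expires > 0 && now + 60_000 >= expires);
}
```

Taking `now` as a parameter instead of calling `Date.now()` directly is what makes the margin behaviour checkable at fixed timestamps.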
@@ -479,18 +507,35 @@ export default function (pi: ExtensionAPI) {
   state[active] = result;
 
   // Write result + rate-limit state to shared cache so other sessions
-  // (and our own next timer tick) don't need to re-hit the API.
+  // don't need to re-hit the API within CACHE_TTL_MS.
+  //
+  // Error results (other than 429) are NOT cached: they should be retried
+  // on the next input instead of being replayed from cache for 15 minutes.
+  // The most common error is HTTP 401 (expired token after an account switch)
+  // which resolves on the very next poll once the token is refreshed above.
+  if (result.error) {
+    if (result.error === "HTTP 429") {
+      // Write rate-limit backoff but preserve the last good data in cache.
+      const nextCache: import("./core").UsageCache = {
+        timestamp: cache?.timestamp ?? now,
+        data: { ...(cache?.data ?? {}) },
+        rateLimitedUntil: {
+          ...(cache?.rateLimitedUntil ?? {}),
+          [active]: now + RATE_LIMITED_BACKOFF_MS,
+        },
+      };
+      writeUsageCache(nextCache);
+    }
+    // All other errors: don't update cache — next turn will retry from scratch.
+  } else {
     const nextCache: import("./core").UsageCache = {
       timestamp: now,
       data: { ...(cache?.data ?? {}), [active]: result },
       rateLimitedUntil: { ...(cache?.rateLimitedUntil ?? {}) },
     };
-    if (result.error === "HTTP 429") {
-      nextCache.rateLimitedUntil![active] = now + RATE_LIMITED_BACKOFF_MS;
-    } else {
-      delete nextCache.rateLimitedUntil![active];
-    }
+    delete nextCache.rateLimitedUntil![active];
     writeUsageCache(nextCache);
+  }
 
   state.lastPoll = now;
   updateStatus();
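The caching policy this change implements has three branches: successes are cached and clear any backoff marker, HTTP 429 records a backoff deadline while preserving the last good data, and every other error leaves the cache untouched so the next poll retries. A sketch as one pure function; `nextCacheFor` and the backoff constant's value here are illustrative, not the extension's own names:

```typescript
interface UsageCache {
  timestamp: number;
  data: Record<string, unknown>;
  rateLimitedUntil: Record<string, number>;
}

const RATE_LIMITED_BACKOFF_MS = 5 * 60_000; // illustrative backoff window

// Returns the cache to write, or null when the cache should be left alone.
function nextCacheFor(
  cache: UsageCache | undefined,
  provider: string,
  result: { error?: string },
  now: number,
): UsageCache | null {
  if (result.error === "HTTP 429") {
    // Preserve last good data; just record the backoff deadline.
    return {
      timestamp: cache?.timestamp ?? now,
      data: { ...(cache?.data ?? {}) },
      rateLimitedUntil: {
        ...(cache?.rateLimitedUntil ?? {}),
        [provider]: now + RATE_LIMITED_BACKOFF_MS,
      },
    };
  }
  if (result.error) return null; // don't cache; retry on next poll
  const next: UsageCache = {
    timestamp: now,
    data: { ...(cache?.data ?? {}), [provider]: result },
    rateLimitedUntil: { ...(cache?.rateLimitedUntil ?? {}) },
  };
  delete next.rateLimitedUntil[provider];
  return next;
}
```

Returning `null` for non-429 errors is the property the comment in the hunk calls out: a transient 401 never gets replayed from cache for the full TTL.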
@@ -1,140 +0,0 @@
-{
-  "version": 1,
-  "servers": {
-    "qmd": {
-      "configHash": "fd16eaf87d17a4ce5efee10dc65237dbbe1403353bbbfc4a7de196abe21ab5f9",
-      "tools": [
-        {
-          "name": "query",
-          "description": "Search the knowledge base using a query document — one or more typed sub-queries combined for best recall.\n\n## Query Types\n\n**lex** — BM25 keyword search. Fast, exact, no LLM needed.\nFull lex syntax:\n- `term` — prefix match (\"perf\" matches \"performance\")\n- `\"exact phrase\"` — phrase must appear verbatim\n- `-term` or `-\"phrase\"` — exclude documents containing this\n\nGood lex examples:\n- `\"connection pool\" timeout -redis`\n- `\"machine learning\" -sports -athlete`\n- `handleError async typescript`\n\n**vec** — Semantic vector search. Write a natural language question. Finds documents by meaning, not exact words.\n- `how does the rate limiter handle burst traffic?`\n- `what is the tradeoff between consistency and availability?`\n\n**hyde** — Hypothetical document. Write 50-100 words that look like the answer. Often the most powerful for nuanced topics.\n- `The rate limiter uses a token bucket algorithm. When a client exceeds 100 req/min, subsequent requests return 429 until the window resets.`\n\n## Strategy\n\nCombine types for best results. First sub-query gets 2× weight — put your strongest signal first.\n\n| Goal | Approach |\n|------|----------|\n| Know exact term/name | `lex` only |\n| Concept search | `vec` only |\n| Best recall | `lex` + `vec` |\n| Complex/nuanced | `lex` + `vec` + `hyde` |\n| Unknown vocabulary | Use a standalone natural-language query (no typed lines) so the server can auto-expand it |\n\n## Examples\n\nSimple lookup:\n```json\n[{ \"type\": \"lex\", \"query\": \"CAP theorem\" }]\n```\n\nBest recall on a technical topic:\n```json\n[\n { \"type\": \"lex\", \"query\": \"\\\"connection pool\\\" timeout -redis\" },\n { \"type\": \"vec\", \"query\": \"why do database connections time out under load\" },\n { \"type\": \"hyde\", \"query\": \"Connection pool exhaustion occurs when all connections are in use and new requests must wait. This typically happens under high concurrency when queries run longer than expected.\" }\n]\n```\n\nIntent-aware lex (C++ performance, not sports):\n```json\n[\n { \"type\": \"lex\", \"query\": \"\\\"C++ performance\\\" optimization -sports -athlete\" },\n { \"type\": \"vec\", \"query\": \"how to optimize C++ program performance\" }\n]\n```",
-          "inputSchema": {
-            "type": "object",
-            "properties": {
-              "searches": {
-                "minItems": 1,
-                "maxItems": 10,
-                "type": "array",
-                "items": {
-                  "type": "object",
-                  "properties": {
-                    "type": {
-                      "type": "string",
-                      "enum": [
-                        "lex",
-                        "vec",
-                        "hyde"
-                      ],
-                      "description": "lex = BM25 keywords (supports \"phrase\" and -negation); vec = semantic question; hyde = hypothetical answer passage"
-                    },
-                    "query": {
-                      "type": "string",
-                      "description": "The query text. For lex: use keywords, \"quoted phrases\", and -negation. For vec: natural language question. For hyde: 50-100 word answer passage."
-                    }
-                  },
-                  "required": [
-                    "type",
-                    "query"
-                  ]
-                },
-                "description": "Typed sub-queries to execute (lex/vec/hyde). First gets 2x weight."
-              },
-              "limit": {
-                "default": 10,
-                "description": "Max results (default: 10)",
-                "type": "number"
-              },
-              "minScore": {
-                "default": 0,
-                "description": "Min relevance 0-1 (default: 0)",
-                "type": "number"
-              },
-              "collections": {
-                "description": "Filter to collections (OR match)",
-                "type": "array",
-                "items": {
-                  "type": "string"
-                }
-              }
-            },
-            "required": [
-              "searches"
-            ],
-            "$schema": "http://json-schema.org/draft-07/schema#"
-          }
-        },
-        {
-          "name": "get",
-          "description": "Retrieve the full content of a document by its file path or docid. Use paths or docids (#abc123) from search results. Suggests similar files if not found.",
-          "inputSchema": {
-            "type": "object",
-            "properties": {
-              "file": {
-                "type": "string",
-                "description": "File path or docid from search results (e.g., 'pages/meeting.md', '#abc123', or 'pages/meeting.md:100' to start at line 100)"
-              },
-              "fromLine": {
-                "description": "Start from this line number (1-indexed)",
-                "type": "number"
-              },
-              "maxLines": {
-                "description": "Maximum number of lines to return",
-                "type": "number"
-              },
-              "lineNumbers": {
-                "default": false,
-                "description": "Add line numbers to output (format: 'N: content')",
-                "type": "boolean"
-              }
-            },
-            "required": [
-              "file"
-            ],
-            "$schema": "http://json-schema.org/draft-07/schema#"
-          }
-        },
-        {
-          "name": "multi_get",
-          "description": "Retrieve multiple documents by glob pattern (e.g., 'journals/2025-05*.md') or comma-separated list. Skips files larger than maxBytes.",
-          "inputSchema": {
-            "type": "object",
-            "properties": {
-              "pattern": {
-                "type": "string",
-                "description": "Glob pattern or comma-separated list of file paths"
-              },
-              "maxLines": {
-                "description": "Maximum lines per file",
-                "type": "number"
-              },
-              "maxBytes": {
-                "default": 10240,
-                "description": "Skip files larger than this (default: 10240 = 10KB)",
-                "type": "number"
-              },
-              "lineNumbers": {
-                "default": false,
-                "description": "Add line numbers to output (format: 'N: content')",
-                "type": "boolean"
-              }
-            },
-            "required": [
-              "pattern"
-            ],
-            "$schema": "http://json-schema.org/draft-07/schema#"
-          }
-        },
-        {
-          "name": "status",
-          "description": "Show the status of the QMD index: collections, document counts, and health information.",
-          "inputSchema": {
-            "type": "object",
-            "properties": {},
-            "$schema": "http://json-schema.org/draft-07/schema#"
-          }
-        }
-      ],
-      "resources": [],
-      "cachedAt": 1772656106222
-    }
-  }
-}
@@ -11,7 +11,9 @@
       "command": "opty",
       "args": [
         "mcp"
-      ]
+      ],
+      "directTools": true,
+      "lifecycle": "eager"
     }
   }
 }
@@ -1,8 +0,0 @@
-{"agent":"scout","task":"I need to understand the editor and rendering architecture for implementing entity picking. Please find and summarize:\n\n1. How editor mode is toggled (look for key 'I' handling, editor mode state)\n2. ","ts":1772639557,"status":"ok","duration":90399}
-{"agent":"scout","task":"I need to understand the editor and rendering architecture for implementing entity picking. Please find and summarize:\n\n1. How editor mode is toggled (look for key 'I' handling, editor mode flag)\n2. T","ts":1772654920,"status":"ok","duration":1217167}
-{"agent":"scout","task":"I need to understand the codebase to plan entity picking in the editor. Find and summarize:\n\n1. How the editor mode works (toggled with 'I' key) - find the editor module/system, what it currently does","ts":1772655378,"status":"ok","duration":282743}
-{"agent":"scout","task":"I need to understand the codebase to plan entity picking in the editor. Find and summarize:\n\n1. How editor mode works - look for editor-related code, the \"I\" key toggle, inspector UI\n2. How entities a","ts":1772655580,"status":"ok","duration":174603}
-{"agent":"scout","task":"Explore the codebase structure. Find: 1) the main game loop and input handling, 2) the ECS system and component definitions, 3) any existing UI/rendering systems, 4) entity types (player, trees, etc),","ts":1772656483,"status":"ok","duration":451373}
-{"agent":"scout","task":"I need to understand the rendering pipeline, scene loading, and transform usage for implementing entity picking with a visual selection indicator. Find and read:\n\n1. `src/render/mod.rs` - the full ren","ts":1772656502,"status":"ok","duration":82458}
-{"agent":"scout","task":"Explore the codebase at /home/jonas/projects/snow_trail_sdl and give me a detailed summary of:\n1. src/loaders/mesh.rs - full content, especially the Mesh struct and all constructor functions\n2. src/pi","ts":1772658265,"status":"ok","duration":330549}
-{"agent":"scout","task":"Find and summarize: 1) how the camera follow/update logic works, 2) how editor mode is tracked/detected. Look for camera systems, editor state, and any existing checks for editor mode in camera code.","ts":1772659168,"status":"ok","duration":5992}
@@ -0,0 +1,360 @@
+"$schema" = 'https://starship.rs/config-schema.json'
+
+add_newline = true
+
+command_timeout = 2000
+
+format = """
+$os\
+$username\
+$directory\
+$git_branch\
+$git_commit\
+$git_status\
+$git_metrics\
+$git_state\
+$c\
+$rust\
+$golang\
+$nodejs\
+$php\
+$java\
+$kotlin\
+$haskell\
+$python\
+$package\
+$docker_context\
+$kubernetes\
+$shell\
+$container\
+$jobs\
+${custom.memory_usage}\
+${custom.battery}\
+${custom.keyboard_layout}\
+$time\
+$cmd_duration\
+$status\
+$line_break\
+$character\
+"""
+
+palette = 'bearded-arc'
+
+[palettes.bearded-arc]
+
+color_ok = '#3CEC85'
+color_danger = '#FF738A'
+color_caution = '#EACD61'
+
+color_os = '#FF738A'
+color_username = '#FF738A'
+color_directory = '#EACD61'
+color_git = '#22ECDB'
+color_git_added = '#3CEC85'
+color_git_deleted = '#FF738A'
+color_env = '#69C3FF'
+color_kubernetes = '#bd93ff'
+color_docker = '#69C3FF'
+color_shell = '#ABB7C1'
+color_container = '#FF955C'
+color_other = '#ABB7C1'
+color_time = '#c3cfd9'
+color_duration = '#c3cfd9'
+
+color_vimcmd_ok = '#9bdead'
+color_vimcmd_replace = '#bd93ff'
+color_vimcmd_visual = '#EACD61'
+
+[os]
+disabled = false
+style = "fg:color_os"
+format = '[$symbol]($style)'
+
+[os.symbols]
+Windows = ""
+Ubuntu = ""
+SUSE = ""
+Raspbian = ""
+Mint = ""
+Macos = ""
+Manjaro = ""
+Linux = ""
+Gentoo = ""
+Fedora = ""
+Alpine = ""
+Amazon = ""
+Android = ""
+Arch = ""
+Artix = ""
+EndeavourOS = ""
+CentOS = ""
+Debian = ""
+Redhat = ""
+RedHatEnterprise = ""
+Pop = ""
+
+[username]
+show_always = true
+style_user = "fg:color_username"
+style_root = "bold fg:color_danger"
+format = '[ $user ]($style)'
+
+[directory]
+style = "fg:color_directory"
+read_only_style = "fg:color_directory"
+repo_root_style = "bold fg:color_directory"
+format = "[ $path ]($style)"
+read_only = " "
+home_symbol = "~"
+truncation_symbol = "…/"
+truncation_length = 0
+truncate_to_repo = true
+fish_style_pwd_dir_length = 0
+use_logical_path = true
+
+[git_branch]
+symbol = ""
+style = "fg:color_git"
+format = '( [$symbol $branch]($style) )'
+only_attached = true
+ignore_branches = []
+truncation_length = 25
+truncation_symbol = "..."
+always_show_remote = false
+disabled = false
+
+[git_commit]
+style = "fg:color_git"
+format = "( [($tag)(@$hash)]($style) )"
+commit_hash_length = 7
+only_detached = true
+tag_symbol = " "
+tag_disabled = false
+disabled = false
+
+[git_status]
+style = "fg:color_git"
+format = '([$ahead_behind]($style) )([$all_status]($style) )'
+stashed = "*${count}"
+ahead = "⇡${count}"
+behind = "⇣${count}"
+up_to_date = ""
+diverged = "⇡${ahead_count}⇣${behind_count}"
+conflicted = "=${count}"
+deleted = "×${count}"
+renamed = "»${count}"
+modified = "!${count}"
+staged = "+${count}"
+untracked = "?${count}"
+ignore_submodules = false
+disabled = false
+
+[git_metrics]
+format = '([([+$added]($added_style))([-$deleted]($deleted_style))](fg:color_git) )'
+added_style = "fg:color_git_added"
+deleted_style = "fg:color_git_deleted"
+only_nonzero_diffs = true
+disabled = false
+
+[git_state]
+style = "fg:color_danger"
+format = '([$state( $progress_current/$progress_total)]($style bold) )'
+rebase = "REBASING"
+merge = "MERGING"
+revert = "REVERTING"
+cherry_pick = "CHERRY-PICKING"
+bisect = "BISECTING"
+am = "AM"
+am_or_rebase = "AM/REBASE"
|
disabled = false
|
||||||
|
|
||||||
|
[nodejs]
|
||||||
|
symbol = ""
|
||||||
|
style = "fg:color_env"
|
||||||
|
format = '( [$symbol( $version)]($style) )'
|
||||||
|
|
||||||
|
[c]
|
||||||
|
symbol = ""
|
||||||
|
style = "fg:color_env"
|
||||||
|
format = '( [$symbol( $version)]($style) )'
|
||||||
|
|
||||||
|
[rust]
|
||||||
|
symbol = ""
|
||||||
|
style = "fg:color_env"
|
||||||
|
format = '( [$symbol( $version)]($style) )'
|
||||||
|
|
||||||
|
[golang]
|
||||||
|
symbol = ""
|
||||||
|
style = "fg:color_env"
|
||||||
|
format = '( [$symbol( $version)]($style) )'
|
||||||
|
|
||||||
|
[php]
|
||||||
|
symbol = ""
|
||||||
|
style = "fg:color_env"
|
||||||
|
format = '( [$symbol( $version)]($style) )'
|
||||||
|
|
||||||
|
[java]
|
||||||
|
symbol = ""
|
||||||
|
style = "fg:color_env"
|
||||||
|
format = '( [$symbol( $version)]($style) )'
|
||||||
|
|
||||||
|
[kotlin]
|
||||||
|
symbol = ""
|
||||||
|
style = "fg:color_env"
|
||||||
|
format = '( [$symbol( $version)]($style) )'
|
||||||
|
|
||||||
|
[haskell]
|
||||||
|
symbol = ""
|
||||||
|
style = "fg:color_env"
|
||||||
|
format = '( [$symbol( $version)]($style) )'
|
||||||
|
|
||||||
|
[python]
|
||||||
|
symbol = ""
|
||||||
|
style = "fg:color_env"
|
||||||
|
format = '( [$symbol( $version)( $virtualenv)]($style) )'
|
||||||
|
version_format = '${raw}'
|
||||||
|
|
||||||
|
[package]
|
||||||
|
disabled = false
|
||||||
|
symbol = ""
|
||||||
|
style = "fg:color_env"
|
||||||
|
format = '( [$symbol( $version)]($style) )'
|
||||||
|
|
||||||
|
[docker_context]
|
||||||
|
symbol = ""
|
||||||
|
style = "fg:color_docker"
|
||||||
|
format = '( [$symbol( $context)]($style) )'
|
||||||
|
|
||||||
|
[kubernetes]
|
||||||
|
symbol = ""
|
||||||
|
style = "fg:color_kubernetes"
|
||||||
|
format = '( [($symbol( $cluster))]($style) )'
|
||||||
|
disabled = false
|
||||||
|
|
||||||
|
[shell]
|
||||||
|
disabled = true
|
||||||
|
|
||||||
|
[container]
|
||||||
|
style = "fg:color_container"
|
||||||
|
format = '( [$symbol $name]($style) )'
|
||||||
|
|
||||||
|
[jobs]
|
||||||
|
symbol = ""
|
||||||
|
style = "fg:color_other"
|
||||||
|
format = '( [$symbol( $number)]($style) )'
|
||||||
|
symbol_threshold = 1
|
||||||
|
number_threshold = 1
|
||||||
|
|
||||||
|
[custom.memory_usage]
|
||||||
|
command = "starship module memory_usage"
|
||||||
|
when = '[ "${STARSHIP_COCKPIT_MEMORY_USAGE_ENABLED:-false}" = "true" ]'
|
||||||
|
shell = "sh"
|
||||||
|
format = "( $output )"
|
||||||
|
disabled = false
|
||||||
|
|
||||||
|
[memory_usage]
|
||||||
|
threshold = 0
|
||||||
|
symbol = ""
|
||||||
|
style = "fg:color_other"
|
||||||
|
format = '( [$symbol( ${ram})]($style) )'
|
||||||
|
disabled = false
|
||||||
|
|
||||||
|
[custom.battery]
|
||||||
|
command = """
|
||||||
|
battery_info=$(starship module battery)
|
||||||
|
if [ -n "$battery_info" ]; then
|
||||||
|
percent=$(echo "$battery_info" | grep -o '[0-9]*%' | sed 's/%//')
|
||||||
|
if [ "$percent" -le "${STARSHIP_COCKPIT_BATTERY_THRESHOLD:-0}" ]; then
|
||||||
|
echo "$battery_info" | sed 's/%%/%/'
|
||||||
|
fi
|
||||||
|
fi
|
||||||
|
"""
|
||||||
|
when = '[ "${STARSHIP_COCKPIT_BATTERY_ENABLED:-false}" = "true" ]'
|
||||||
|
shell = "sh"
|
||||||
|
format = "( $output )"
|
||||||
|
disabled = false
|
||||||
|
|
||||||
|
[battery]
|
||||||
|
full_symbol = ""
|
||||||
|
charging_symbol = ""
|
||||||
|
discharging_symbol = ""
|
||||||
|
unknown_symbol = ""
|
||||||
|
empty_symbol = ""
|
||||||
|
format = '( [$symbol( $percentage)]($style) )'
|
||||||
|
disabled = false
|
||||||
|
|
||||||
|
[[battery.display]]
|
||||||
|
threshold = 10
|
||||||
|
style = "bold fg:color_danger"
|
||||||
|
|
||||||
|
[[battery.display]]
|
||||||
|
threshold = 20
|
||||||
|
style = "fg:color_caution"
|
||||||
|
|
||||||
|
[[battery.display]]
|
||||||
|
threshold = 100
|
||||||
|
style = "fg:color_other"
|
||||||
|
|
||||||
|
[time]
|
||||||
|
disabled = false
|
||||||
|
time_format = "%R"
|
||||||
|
style = "fg:color_time"
|
||||||
|
format = '( [ $time]($style) )'
|
||||||
|
|
||||||
|
[cmd_duration]
|
||||||
|
min_time = 2000
|
||||||
|
format = '( [ $duration]($style) )'
|
||||||
|
style = 'fg:color_duration'
|
||||||
|
show_milliseconds = false
|
||||||
|
disabled = false
|
||||||
|
|
||||||
|
[status]
|
||||||
|
disabled = false
|
||||||
|
format = '( [$symbol( $common_meaning)( $signal_name)]($style) )'
|
||||||
|
map_symbol = true
|
||||||
|
pipestatus = true
|
||||||
|
symbol = ''
|
||||||
|
success_symbol = ''
|
||||||
|
not_executable_symbol = ''
|
||||||
|
not_found_symbol = ''
|
||||||
|
sigint_symbol = ''
|
||||||
|
signal_symbol = ''
|
||||||
|
style = 'bold fg:color_danger'
|
||||||
|
recognize_signal_code = true
|
||||||
|
|
||||||
|
[line_break]
|
||||||
|
disabled = false
|
||||||
|
|
||||||
|
[character]
|
||||||
|
disabled = false
|
||||||
|
success_symbol = '[❯](bold fg:color_ok)'
|
||||||
|
error_symbol = '[❯](bold fg:color_danger)'
|
||||||
|
vimcmd_symbol = '[❮](bold fg:color_vimcmd_ok)'
|
||||||
|
vimcmd_replace_one_symbol = '[❮](bold fg:color_vimcmd_replace)'
|
||||||
|
vimcmd_replace_symbol = '[❮](bold fg:color_vimcmd_replace)'
|
||||||
|
vimcmd_visual_symbol = '[❮](bold fg:color_vimcmd_visual)'
|
||||||
|
|
||||||
|
[custom.keyboard_layout]
|
||||||
|
command = """
|
||||||
|
|
||||||
|
# Set env variables if you want to use layout aliases (in uppercase)
|
||||||
|
# export STARSHIP_COCKPIT_KEYBOARD_LAYOUT_ABC=ENG
|
||||||
|
# export STARSHIP_COCKPIT_KEYBOARD_LAYOUT_UKRAINIAN=UKR
|
||||||
|
#
|
||||||
|
# Implementations:
|
||||||
|
# macOS
|
||||||
|
|
||||||
|
if [ "$(uname -s)" = "Darwin" ]; then
|
||||||
|
input_source=$(defaults read ~/Library/Preferences/com.apple.HIToolbox.plist AppleCurrentKeyboardLayoutInputSourceID)
|
||||||
|
layout_id=$(echo "$input_source" | cut -d '.' -f4)
|
||||||
|
layout=$(printenv "STARSHIP_COCKPIT_KEYBOARD_LAYOUT_$(echo "$layout_id" | tr '[:lower:]' '[:upper:]')")
|
||||||
|
echo "$layout" || echo "$layout_id"
|
||||||
|
fi
|
||||||
|
|
||||||
|
"""
|
||||||
|
symbol = ""
|
||||||
|
style = "fg:color_other"
|
||||||
|
format = '( [$symbol $output]($style) )'
|
||||||
|
when = '[ "${STARSHIP_COCKPIT_KEYBOARD_LAYOUT_ENABLED:-false}" = "true" ]'
|
||||||
|
shell = "sh"
|
||||||
|
disabled = false
|
||||||
|
|||||||
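The `custom.*` modules above are gated by `when` checks on `STARSHIP_COCKPIT_*` environment variables, so they stay hidden until opted into. A minimal sketch of enabling them from a shell profile (the variable names come from the config itself; the threshold value is only an example):

```shell
# Opt in to the env-gated custom starship modules.
export STARSHIP_COCKPIT_BATTERY_ENABLED=true
export STARSHIP_COCKPIT_MEMORY_USAGE_ENABLED=true
export STARSHIP_COCKPIT_KEYBOARD_LAYOUT_ENABLED=true
# Only show the battery module at or below this percentage (example value).
export STARSHIP_COCKPIT_BATTERY_THRESHOLD=30
```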
@@ -1,25 +1,109 @@
#!/bin/bash
# Opens a new terminal, using the current terminal's working directory if focused window is a terminal
# Defaults to home directory (~) if no terminal is focused

# Parse file URI to extract path
parse_file_uri() {
    local uri="$1"

    # Remove file:// prefix
    local path="${uri#file://}"

    # Handle localhost or hostname prefix: file://hostname/path -> /path
    if [[ "$path" =~ ^localhost(/.*) ]] || [[ "$path" =~ ^[a-zA-Z0-9.-]+(/.*) ]]; then
        path="${BASH_REMATCH[1]}"
    fi

    echo "$path"
}

# Try to get cwd from focused wezterm window
# Arguments: window_title from Sway's focused window
get_wezterm_cwd() {
    local sway_window_title="$1"

    # Check if wezterm is available
    if ! command -v wezterm &> /dev/null; then
        return 1
    fi

    # Get list of wezterm windows/panes
    local wezterm_data
    wezterm_data=$(wezterm cli list --format json 2>/dev/null) || return 1

    # Return early if no data
    if [ -z "$wezterm_data" ]; then
        return 1
    fi

    local cwd

    # Try to match the Sway window title with wezterm's window_title first
    # (this handles windows with explicit titles set)
    if [ -n "$sway_window_title" ] && [ "$sway_window_title" != "null" ]; then
        cwd=$(echo "$wezterm_data" | jq -r --arg title "$sway_window_title" '.[] | select(.window_title == $title) | .cwd' | head -n 1)
    fi

    # If no match by window_title, try matching by pane title
    # When multiple matches exist, pick the highest window_id (most recent)
    if [ -z "$cwd" ] || [ "$cwd" = "null" ]; then
        cwd=$(echo "$wezterm_data" | jq -r --arg title "$sway_window_title" '[.[] | select(.title == $title)] | sort_by(.window_id) | .[-1] | .cwd' 2>/dev/null)
    fi

    # If the Sway window title looks like an app (nvim, vim, pi, claude, etc),
    # look for the pane with a visible cursor (likely the active app)
    if [ -z "$cwd" ] || [ "$cwd" = "null" ]; then
        local app_pattern="nvim|vim|pi|claude|less|more|man|htop|top|nano|emacs"
        if [[ "$sway_window_title" =~ ^($app_pattern) ]]; then
            # Try to find a pane with visible cursor (most likely the active one)
            cwd=$(echo "$wezterm_data" | jq -r '.[] | select(.cursor_visibility == "Visible") | .cwd' | head -n 1)
        fi
    fi

    # Final fallback: just get most recent pane with valid cwd
    if [ -z "$cwd" ] || [ "$cwd" = "null" ]; then
        cwd=$(echo "$wezterm_data" | jq -r '[.[] | select(.cwd != null and .cwd != "")] | sort_by(.window_id) | .[-1] | .cwd' 2>/dev/null)
    fi

    # If still nothing, fail
    if [ -z "$cwd" ] || [ "$cwd" = "null" ]; then
        return 1
    fi

    # Parse the URI if needed
    if [[ "$cwd" == file://* ]]; then
        cwd=$(parse_file_uri "$cwd")
    fi

    # Verify path exists
    if [ -d "$cwd" ]; then
        echo "$cwd"
        return 0
    fi

    return 1
}

# Main logic
cwd=""
# Get focused window info from Sway
if command -v swaymsg &> /dev/null; then
    focused_window=$(swaymsg -t get_tree 2>/dev/null | jq -r '.. | select(.focused? == true and .app_id? != null) | [.app_id, .name] | @tsv' | head -n 1)

    # Parse tab-separated values
    app_id=$(echo "$focused_window" | cut -f1)
    window_name=$(echo "$focused_window" | cut -f2)

    # Check if focused window is wezterm (app_id contains "wez")
    if [ -n "$app_id" ] && [[ "$app_id" == *"wez"* ]]; then
        cwd=$(get_wezterm_cwd "$window_name")
    fi
fi

# Open terminal with cwd if we found one, otherwise default to home
if [ -n "$cwd" ] && [ -d "$cwd" ]; then
    wezterm start --cwd "$cwd" &
else
    wezterm start --cwd "$HOME" &
fi
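The `file://` URIs that `wezterm cli list` reports are handled by `parse_file_uri` in the script above. A standalone sketch of the same stripping logic, using POSIX parameter expansion instead of bash regex (`strip_file_uri` is a stand-in name, not part of these dotfiles):

```shell
# Sketch of parse_file_uri's behavior: drop the file:// scheme,
# then drop a leading hostname component if one is present.
strip_file_uri() {
    local path="${1#file://}"
    case "$path" in
        /*) ;;                    # already a plain absolute path
        *)  path="/${path#*/}" ;; # file://hostname/path -> /path
    esac
    echo "$path"
}

strip_file_uri "file://myhost/home/jonas"   # -> /home/jonas
strip_file_uri "file:///tmp"                # -> /tmp
```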
@@ -55,3 +55,4 @@ export PYENV_ROOT="$HOME/.pyenv"
eval "$(pyenv init - zsh)"

eval "$(starship init zsh)"
eval "$(zoxide init zsh)"
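Each `eval "$(tool init zsh)"` line in the zshrc hunk follows the same pattern: the tool prints shell code on stdout and `eval` runs it in the current shell, so the hooks land in this session rather than a subshell. A sketch with a stand-in function in place of a real tool:

```shell
# Stand-in for `tool init zsh`: prints shell code meant to be eval'd.
fake_init() {
    printf 'FAKE_TOOL_READY=1\n'
}

# eval runs the printed code in the current shell, like the zshrc lines above.
eval "$(fake_init)"
echo "$FAKE_TOOL_READY"   # -> 1
```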