I Had That Conversation. Somewhere.
Every AI tool stores sessions differently. When you run dozens per day across multiple tools and projects, finding anything becomes the real bottleneck.
I know I had that conversation. The one about the auth refactor — where I walked through the token refresh flow with the AI, step by step, and we landed on an approach that finally made sense.
But where was it?
Was it in Copilot? Claude? OpenCode? Cursor? I’ve been using them all this week. Was it in the API project? The monorepo? The experimental branch I started on Tuesday? I open Copilot, scroll through sessions — nothing obvious. I open Claude, try a few project directories — too many to scan. I try to remember the day, the wording, anything.
Ten minutes later, I still haven’t found it. I give up and re-derive the solution from scratch.
This keeps happening. And every time it does, I feel a little stupid — because I know the answer exists somewhere on my own machine, in a conversation I personally had, probably less than a week ago. I just can’t find it.
That’s the thing about this problem — many people already feel it but can’t quite name it. They scroll, they search, they give up. They re-derive solutions they already have. And they don’t realize there’s a pattern behind the pain.
So let me name it. And then let me show you what I did about it.
The scale you don’t notice until it’s too late
Here’s the math nobody does upfront.
A typical AI coding assistant user runs a few sessions per day. A power user? Easily 15 to 20. Some are quick one-offs. Some are deep multi-hour explorations. Some are continuations of yesterday’s work. Some are entirely new threads.
I generate around 100 new sessions per week. And I frequently go back to earlier ones — sometimes after days. I start a session, get pulled into a different thread, and it’s only three days later that I have time to continue. By then, I’ve opened dozens of new sessions in between. Finding the right one to resume? That’s where it gets painful.
Now multiply across the ecosystem:
- 5 tools (Copilot, Claude, OpenCode, Cursor, Codex — or whichever combination you use)
- 5–10 active projects (repos, branches, experiments)
- 5 working days
In a month, you’re looking at hundreds of conversations scattered across tools and directories with no unified way to search them.
You don’t need to be an AI power user for this to hurt. Once you use two or three tools across a few projects, session recall starts breaking down fast.
This isn’t a problem while each session is clearly tied to one project and one tool. It becomes one when:
- You use multiple tools interchangeably for the same type of work
- Your conversations are exploratory — not tied to a specific repo
- You context-switch between projects several times a day
- You can’t remember when or where a particular conversation happened
The moment you lose track, every search is a manual, tool-by-tool, directory-by-directory hunt. I’ve been there. More times than I’d like to admit — including that time I was running five parallel workstreams and couldn’t tell you which tool I’d used for what by the end of the day.
Every tool speaks a different language
This wouldn’t be as painful if there were a standard. But there isn’t.
Each AI coding assistant invented its own session storage format, its own directory structure, its own internal schema:
| Tool | Format | Location | Structure |
|---|---|---|---|
| GitHub Copilot | JSONL (event stream) | ~/.copilot/session-state/*/events.jsonl | Append-only event log with turns |
| Claude Code | JSON files | ~/.claude/projects/*/ | Project-scoped conversation trees |
| OpenCode | SQLite database | ~/.local/share/opencode/opencode.db | Relational tables with sessions + messages |
| Cursor | JSON transcripts | ~/.cursor/projects/*/agent-transcripts/ | Project-scoped agent conversation logs |
| Codex | JSONL (daily logs) | ~/.codex/sessions/YYYY/MM/DD/*.jsonl | Date-partitioned session files |
Five tools. Five formats. Five locations. Zero interoperability.
Observation: This is the same kind of fragmentation we saw with note-taking apps a decade ago — before search-everything tools like Alfred or Spotlight unified access. The difference is that AI sessions are growing much faster than notes ever did. And with five major players each storing data their own way, it’s not getting simpler anytime soon.
This is completely normal for a young, fast-moving ecosystem. Every tool optimizes for its own workflow. Nobody sat down to write an RFC for “AI session interchange format” — and that’s how it should be. Standardization follows adoption, not the other way around.
But the cost is real right now. You can’t grep across all five. You can’t even write a simple script that reads all of them without understanding each format individually. And the formats change — because these tools are evolving fast, and storage is an implementation detail, not a public API.
The new layer of the productivity stack
Take a step back.
A decade ago, the “productivity problem” was managing notes — Evernote, Notion. Then it was tasks — Todoist, Jira. Then communication — Slack, Teams. Then code — Git, GitHub, CI/CD.
Each time, a new category of work outgrew our ability to navigate it manually. And each time, the solution was a tool that indexed, searched, and organized that category.
AI sessions are the next category.
We’re generating more conversational context per day than we ever did with notes or emails. And unlike notes, these sessions are active — they contain working code, decisions, debugging traces, architectural reasoning. Losing a session isn’t like losing a sticky note. It’s like losing a pair programming partner’s memory.
The productivity toolset is evolving. We used to need tools to manage our information. Now we need tools to manage our work with AI. This isn’t a niche — it’s a new layer of the stack that every heavy AI user will eventually need.
sessfind: one index, all sessions
So I built sessfind.
I was tired of the manual hunt. I wanted something that felt like grep for AI sessions — fast, local, no cloud, no accounts. Something I could point at my machine and say: “find that conversation.” A single binary I could install and forget. Rust was the obvious choice: fast indexing, tiny binary, no runtime dependencies.
sessfind indexes and searches your AI sessions from GitHub Copilot, Claude Code, OpenCode, Cursor, and Codex in one place.
```sh
# Install
cargo install sessfind

# Index all your sessions
sessfind index

# Launch the interactive TUI
sessfind
```
That’s it. Three commands. After indexing, you get a split-pane terminal UI where you can search across all your sessions, preview conversations, and — crucially — resume any session directly. Select a result, press Enter, and sessfind hands off your terminal to the right tool with the right session ID.
Four ways to find what you lost
Not every search is the same. Sometimes you remember exact keywords. Sometimes you remember the concept but not the words. Sometimes you barely remember what you’re looking for. sessfind supports four search modes — cycle between them with Shift+Tab in the TUI.
FTS — Full-Text Search
The default. Powered by tantivy (a Rust search engine library) with BM25 ranking — the same ranking function Elasticsearch and Solr use by default.
Use it when you remember specific keywords: a function name, an error message, a library name.
```
auth refactor          → any of these words (OR)
+auth +refactor        → both words required (AND)
"token refresh flow"   → exact phrase
migrat*                → prefix wildcard
```
It’s fast — results appear as you type. This covers 80% of my searches.
Fuzzy search
Edit-distance matching (Levenshtein) across session content. Each query term is matched against indexed tokens with a tolerance of 1–2 character edits — typos, partial words, and close-enough guesses all work.
Good for approximate queries when FTS is too strict. If you type "authh" instead of "auth", FTS returns nothing — fuzzy finds it. The edit distance is 1 for short words (≤3 chars) and 2 for longer ones, so it tolerates typos without drowning you in false positives.
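The length-dependent tolerance rule can be sketched in a few lines. This is my reconstruction of the behavior described above, not sessfind's actual code; `allowed_edits` and `fuzzy_match` are illustrative names.

```rust
// Tolerance depends on token length: 1 edit for short words, 2 for longer.
fn allowed_edits(term: &str) -> usize {
    if term.chars().count() <= 3 { 1 } else { 2 }
}

// Classic Levenshtein distance over Unicode scalar values.
fn levenshtein(a: &str, b: &str) -> usize {
    let a: Vec<char> = a.chars().collect();
    let b: Vec<char> = b.chars().collect();
    let mut prev: Vec<usize> = (0..=b.len()).collect();
    for (i, ca) in a.iter().enumerate() {
        let mut cur = vec![i + 1];
        for (j, cb) in b.iter().enumerate() {
            let cost = if ca == cb { 0 } else { 1 };
            cur.push((prev[j] + cost).min(prev[j + 1] + 1).min(cur[j] + 1));
        }
        prev = cur;
    }
    prev[b.len()]
}

fn fuzzy_match(query_term: &str, indexed_token: &str) -> bool {
    levenshtein(query_term, indexed_token) <= allowed_edits(query_term)
}

fn main() {
    // "authh" is one edit away from "auth", so it matches.
    println!("{}", fuzzy_match("authh", "auth")); // true
}
```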
LLM search
This is where it gets interesting. You describe what you’re looking for in natural language:
“that conversation where I was fixing the CI pipeline for the Go project”
sessfind detects which AI CLI tools you have installed (claude, opencode, copilot) and each one appears as a separate search mode: LLM (claude), LLM (opencode), etc.
When you trigger the search, the LLM analyzes your intent and generates optimized FTS queries — synonyms, related terms, even translations. sessfind runs each generated query against the tantivy index and merges the results.
It’s not the LLM searching directly. It’s the LLM thinking about how to search — and then letting the search engine do the work.
Observation: This is a pattern worth noting. Instead of throwing the entire corpus at the LLM (expensive, slow, context-limited), sessfind uses the LLM as a query optimizer. The heavy lifting stays with the search engine. The LLM just makes the queries smarter.
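The merge step of this pattern is simple to sketch: run each generated query, then dedupe hits by session, keeping the best score so one session isn't counted multiple times. The `Hit` type and field names below are hypothetical, not sessfind's real types.

```rust
use std::collections::HashMap;

// Hypothetical search-hit type for illustration.
#[derive(Clone, Debug)]
struct Hit { session_id: String, score: f32 }

// Merge hits from several LLM-generated queries: dedupe by session id,
// keep each session's best score, return in descending score order.
fn merge_hits(per_query: Vec<Vec<Hit>>) -> Vec<Hit> {
    let mut best: HashMap<String, f32> = HashMap::new();
    for hits in per_query {
        for h in hits {
            let e = best.entry(h.session_id).or_insert(f32::MIN);
            if h.score > *e { *e = h.score; }
        }
    }
    let mut merged: Vec<Hit> = best.into_iter()
        .map(|(session_id, score)| Hit { session_id, score })
        .collect();
    merged.sort_by(|a, b| b.score.partial_cmp(&a.score).unwrap());
    merged
}

fn main() {
    let q1 = vec![Hit { session_id: "s1".into(), score: 3.2 }];
    let q2 = vec![
        Hit { session_id: "s1".into(), score: 4.1 }, // same session, better score
        Hit { session_id: "s2".into(), score: 1.0 },
    ];
    println!("{:?}", merge_hits(vec![q1, q2]));
}
```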
Semantic search
ML embedding similarity search. Finds conceptually similar sessions even when exact keywords don’t match.
The optional sessfind-semantic plugin uses ML embeddings to represent each session chunk as a vector. At query time, your input is embedded and compared via cosine similarity. (More details on the model and storage in the architecture section below.)
If you search for "database connection pooling" and the session discussed "managing concurrent DB connections with a shared resource pool" — semantic search finds it. FTS wouldn’t.
The tradeoff: it’s slower (runs a local neural network), and the first run downloads the model (~450 MB). But for vague, conceptual queries — it’s the only mode that works.
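The comparison at query time is plain cosine similarity between embedding vectors. A minimal version (the real vectors are 384-dimensional; these toy vectors are just for illustration):

```rust
// Cosine similarity between two embedding vectors:
// dot(a, b) / (|a| * |b|), with a guard against zero-length vectors.
fn cosine_similarity(a: &[f32], b: &[f32]) -> f32 {
    let dot: f32 = a.iter().zip(b).map(|(x, y)| x * y).sum();
    let na: f32 = a.iter().map(|x| x * x).sum::<f32>().sqrt();
    let nb: f32 = b.iter().map(|x| x * x).sum::<f32>().sqrt();
    if na == 0.0 || nb == 0.0 { 0.0 } else { dot / (na * nb) }
}

fn main() {
    // Identical directions score 1.0; orthogonal directions score 0.0.
    println!("{}", cosine_similarity(&[1.0, 2.0], &[2.0, 4.0]));
    println!("{}", cosine_similarity(&[1.0, 0.0], &[0.0, 1.0]));
}
```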
When to use which
| Mode | Speed | Best for | Limitations |
|---|---|---|---|
| FTS | Instant | Exact keywords, function names, errors | Needs exact (or prefix) match |
| Fuzzy | Instant | Typos, approximate words | No semantic understanding |
| LLM | 2–5 sec | Natural language, vague recollections | Requires AI CLI on PATH |
| Semantic | 3–10 sec | Conceptual queries, cross-language | Needs plugin + model download |
Under the hood
For those who want to know how it works — the architecture is intentionally simple.
Indexing pipeline
sessfind has source adapters for each tool. Each adapter knows how to read that tool’s session format and extract user/assistant message pairs. Messages are split into chunks (~6,000 characters each) and fed into a tantivy full-text index.
The chunking is necessary because some sessions are massive — hours-long coding sessions with thousands of lines of context. Searching the full text of such a session would hurt ranking quality. Chunks keep the BM25 scores meaningful.
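The chunking step itself is straightforward. A sketch that splits on character boundaries (sessfind's real splitter may well prefer message boundaries; the 6,000 figure comes from the text above):

```rust
// Split a transcript into ~6,000-character chunks, respecting
// char boundaries so multi-byte characters are never cut in half.
const CHUNK_SIZE: usize = 6_000;

fn chunk_text(text: &str, chunk_size: usize) -> Vec<String> {
    let chars: Vec<char> = text.chars().collect();
    chars.chunks(chunk_size).map(|c| c.iter().collect()).collect()
}

fn main() {
    let session = "x".repeat(13_000);
    let chunks = chunk_text(&session, CHUNK_SIZE);
    println!("{} chunks", chunks.len()); // 3 chunks: 6000 + 6000 + 1000 chars
}
```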
Incremental updates
Nobody wants to wait for a full re-index every time. sessfind tracks each file’s mtime and size in a local SQLite database (state.db). On subsequent runs, only new or modified session files are processed. If nothing changed, the index step takes milliseconds.
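The staleness check reduces to comparing a recorded (mtime, size) pair against the file's current metadata. A sketch of that logic (the struct and function names are mine, not sessfind's):

```rust
use std::fs;
use std::time::SystemTime;

// Fingerprint of a session file, as recorded in a state database.
#[derive(PartialEq, Clone, Copy)]
struct FileState { mtime: SystemTime, size: u64 }

fn current_state(path: &str) -> std::io::Result<FileState> {
    let meta = fs::metadata(path)?;
    Ok(FileState { mtime: meta.modified()?, size: meta.len() })
}

// Re-index if the file is new (no record) or its mtime/size changed.
fn needs_reindex(recorded: Option<FileState>, current: FileState) -> bool {
    recorded != Some(current)
}

fn main() -> std::io::Result<()> {
    fs::write("/tmp/demo-session.jsonl", "{}")?;
    let now = current_state("/tmp/demo-session.jsonl")?;
    println!("new file: {}", needs_reindex(None, now));       // true
    println!("unchanged: {}", needs_reindex(Some(now), now)); // false
    Ok(())
}
```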
Plugin architecture
Semantic search lives in a separate binary (sessfind-semantic). The main binary auto-detects it on PATH — no configuration needed. They communicate via a JSON protocol over stdin/stdout: the main binary sends search parameters, the plugin returns results.
This keeps the core binary small and dependency-free. If you don’t need semantic search, you don’t pay the cost of bundling an ML runtime. The plugin uses the multilingual-e5-small model (384-dimensional embeddings, ~450 MB download on first use, ONNX Runtime inference) and stores vectors in a local sqlite-vec database.
```
crates/
├── sessfind/           # main binary (TUI, CLI, indexer, sources)
├── sessfind-common/    # shared types (Source, SearchResult, SearchParams)
└── sessfind-semantic/  # optional plugin (embedder, sqlite-vec)
```
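The handshake with the plugin is easy to demonstrate. In this sketch, `cat` stands in for the real `sessfind-semantic` binary (which would reply with search results instead of echoing), and the JSON field names are illustrative, not the actual protocol:

```rust
use std::io::{Read, Write};
use std::process::{Command, Stdio};

// Send one JSON request to a plugin over stdin, read its JSON reply
// from stdout. Closing stdin signals EOF so the plugin can respond.
fn query_plugin(plugin: &str, request_json: &str) -> std::io::Result<String> {
    let mut child = Command::new(plugin)
        .stdin(Stdio::piped())
        .stdout(Stdio::piped())
        .spawn()?;
    child.stdin.take().unwrap().write_all(request_json.as_bytes())?; // drop closes stdin
    let mut reply = String::new();
    child.stdout.take().unwrap().read_to_string(&mut reply)?;
    child.wait()?;
    Ok(reply)
}

fn main() -> std::io::Result<()> {
    let req = r#"{"query":"connection pooling","limit":10}"#;
    // `cat` simply echoes the request back, proving the pipe works.
    println!("{}", query_plugin("cat", req)?);
    Ok(())
}
```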
Resume: clean handoff
When you select a session and press Enter, sessfind shows a confirmation dialog: where do you want to resume? The original project directory — or your current working directory? Useful when the original directory no longer exists, or when you want to apply an old session’s approach to a different project.
Once you confirm, sessfind calls exec() — replacing its own process with the AI tool. Your terminal is handed off cleanly. sessfind’s process is gone; the AI tool takes over as if you’d launched it directly.
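On Unix, that handoff is `CommandExt::exec`. A sketch of the shape, with one caveat: the `--resume <id>` flag below is my assumption about the general form of the command line, not sessfind's verified behavior, and `resume_argv` is an illustrative helper.

```rust
// Build the command line for resuming a session with a given tool.
// The exact flag is an assumption for illustration.
fn resume_argv(tool: &str, session_id: &str) -> Vec<String> {
    vec![tool.to_string(), "--resume".to_string(), session_id.to_string()]
}

// exec() replaces the current process image; it only returns on failure,
// which is why its return type is the error itself.
#[cfg(unix)]
#[allow(dead_code)]
fn hand_off(argv: &[String]) -> std::io::Error {
    use std::os::unix::process::CommandExt;
    std::process::Command::new(&argv[0]).args(&argv[1..]).exec()
}

fn main() {
    println!("{:?}", resume_argv("claude", "abc123"));
}
```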
Three ways to use it — and the one that changed how I work
sessfind gives you three access modes, and each fits a different moment.
The TUI — sessfind with no arguments. This is the full interactive experience: split-pane layout, real-time filtering, session preview. Best for exploratory searches when you’re not sure what you’re looking for and want to browse.
The CLI — sessfind search "auth refactor" --method fts. Scriptable, pipeable, composable. Good for quick lookups or integration into shell workflows.
The agent skill — and this is the one that surprised me.
sessfind ships with an agent skill — a set of instructions that teaches AI assistants how to invoke sessfind. Install it once (it’s a single file that goes into ~/.claude/skills/, which is shared by Claude Code, GitHub Copilot CLI, and OpenCode), and suddenly your AI assistant can search your session history during a conversation. The skill doesn’t expose your data anywhere — it only instructs the local AI CLI to call sessfind on your machine.
You can ask Copilot: “Find that conversation I had about database migrations last week.”
And Copilot — without you leaving the terminal — invokes sessfind search, finds the session, shows you a preview, and offers to resume it.
But here’s the side-effect I didn’t anticipate: mid-conversation recall.
Imagine you’re deep in a coding session, working on a new feature, and you realize: “Wait — I solved something similar two weeks ago. I worked through this exact problem with Claude, in a different project. What was the approach we landed on?”
With the agent skill, you don’t have to leave your current session. You don’t have to open another terminal, launch sessfind, find the session, copy-paste the relevant parts. You just say: “Find my conversation about connection pooling from two weeks ago and show me the approach we agreed on.”
The AI finds the session via sessfind, reads the full transcript, extracts the relevant context, and brings it into your current conversation. Your previous session becomes reference material — instantly available, without breaking your flow.
But it goes further than just finding old conversations. You can use it for reflections:
- “A few days ago we worked on X in project Y — let’s do something similar here”
- “Summarize our last month of collaboration”
- “How many sessions did I spend on the payment integration?”
That last kind of query is surprisingly useful — it turns your session history into an informal log of AI utilization. How much time did you actually spend on a topic? Which tool did you use most? Was the collaboration effective? These are questions you can’t easily answer without a searchable index across all your tools.
Observation: This is where it gets recursive: AI tools helping you navigate your history with AI tools, so you can work better with AI tools. It sounds like a joke, but it solves a real problem — the continuity gap between sessions. Each AI conversation starts with a blank slate. The agent skill is one way to bridge that gap. And unlike structured memory systems that lose context through summarization and categorization, this is access to raw, unfiltered conversations — real memories, in real time, with all the nuance intact.
What it doesn’t do
sessfind is local-only and read-only. It doesn’t sync sessions to the cloud. It doesn’t work on Windows (macOS and Linux only). It doesn’t support Windsurf, Aider, or other AI tools — but it covers the five I actually use: Copilot, Claude Code, OpenCode, Cursor, and Codex. Adding new sources isn’t hard (it’s a trait implementation in Rust), but I built what I needed first. The project is open source — if you use a tool that’s not supported, contributions are welcome.
It also doesn’t modify, merge, or organize your sessions. It’s pure search-and-resume. I deliberately kept the scope narrow — the moment a tool starts reorganizing your data, you need to trust it a lot more.
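For a feel of what "a trait implementation" means here, this is roughly the shape a source adapter could take. All names below are hypothetical, not sessfind's actual API:

```rust
use std::path::PathBuf;

// Illustrative data types for a parsed session.
#[allow(dead_code)]
struct Message { role: String, text: String }
#[allow(dead_code)]
struct Session { id: String, messages: Vec<Message> }

// Hypothetical adapter trait: each tool implements how to locate
// and parse its own storage format.
trait SessionSource {
    fn name(&self) -> &'static str;  // e.g. "claude"
    fn root(&self) -> PathBuf;       // where the tool stores sessions
    fn load_sessions(&self) -> std::io::Result<Vec<Session>>;
}

// A do-nothing adapter showing the shape of an implementation.
struct DummySource;

impl SessionSource for DummySource {
    fn name(&self) -> &'static str { "dummy" }
    fn root(&self) -> PathBuf { PathBuf::from("/tmp/dummy-sessions") }
    fn load_sessions(&self) -> std::io::Result<Vec<Session>> { Ok(Vec::new()) }
}

fn main() {
    let src = DummySource;
    println!("{} -> {:?}", src.name(), src.root());
}
```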
Keeping the index fresh
Indexing is fast, but you still need to trigger it. A few options:
The daemon — sessfind watch install sets up a background service (launchd on macOS, systemd on Linux) that monitors session directories and re-indexes automatically when files change. Uses OS-level file watching — essentially zero CPU when idle.
The flag — sessfind --index runs a quick incremental index right before launching the TUI. Good enough if you always search via the interactive UI.
The shell hook — add sessfind index >/dev/null 2>&1 & to your .zshrc. Indexes in the background every time you open a terminal. The incremental index is fast enough that you won’t notice.
That conversation? Found it.
Remember the auth refactor conversation from the beginning of this post?
It was in Claude. In the api-gateway project. Last Tuesday. Found in 2 seconds with a simple FTS query: +auth +refresh +token.
The session had the full reasoning chain — why we chose rotating refresh tokens over long-lived access tokens, the edge cases we considered, the final implementation approach. All of it. Ready to resume.
This is a small tool that solves a narrow problem. But I think the problem itself points at something bigger.
As AI tools mature, the meta-tooling around them becomes just as important as the tools themselves. We’re building workflows, habits, and histories with AI — and those histories have value. Losing them is a real cost. sessfind is one piece of that second layer: the tooling that helps you work with AI tools more effectively.
I showed it to a colleague recently. He started using it, and after a while his reaction surprised me: “I didn’t know I needed this, and yet :D”. He wasn’t looking for a solution — he didn’t even know he had a problem. But once he could search across all his sessions, he realized how much time he’d been wasting scrolling and guessing.
That’s why I’m writing this post. Not to sell you a tool — but to name a problem that many of us already feel but haven’t articulated yet.
GitHub: letsdev-it/sessfind · Documentation
Thanks to Rafał Schmidt for testing sessfind and providing feedback that shaped how it works today.