An MCP server that gives Claude direct read/write access to an Obsidian vault. Built to make years of thinking searchable and traversable from inside any Claude session.
Conversations with Claude are disposable by default. Every session starts from zero. I had hundreds of conversations across Claude.ai, Notion, and Apple Notes that contained real decisions, technical reasoning, and project context. None of it was connected. None of it was searchable from inside a new session.
The MCP protocol lets Claude call tools directly. So I built a server that turns an Obsidian vault into Claude's long-term memory — and then built the pipelines to populate, classify, and link everything in it.
The first pipeline problem: getting conversations out of Claude.ai and into Obsidian. Claude.ai has no public export API. Anthropic offers a bulk JSON download, but it's manual, full-export-only, and gives you a single monolithic file with no incremental capability.
So I opened DevTools. The claude.ai frontend makes authenticated requests to internal endpoints using a sessionKey cookie. Two endpoints do everything: /chat_conversations_v2 returns a paginated index of conversations sorted by updated_at, and /chat_conversations/{uuid} returns the full message tree for a single conversation. Neither is documented. Both could break any time Anthropic changes their frontend.
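Those two calls can be reproduced outside the browser with nothing but the stdlib. A minimal sketch — the base path, pagination parameter names, and cookie name are what DevTools shows today, so treat every literal as an assumption that can break whenever the frontend changes:

```python
from urllib import request, parse

BASE = "https://claude.ai/api"  # assumed base path; these endpoints are undocumented

def index_request(session_key: str, limit: int = 50, offset: int = 0) -> request.Request:
    """Build the paginated conversation-index request (sorted by updated_at)."""
    qs = parse.urlencode({"limit": limit, "offset": offset})  # assumed pagination params
    return request.Request(
        f"{BASE}/chat_conversations_v2?{qs}",
        headers={"Cookie": f"sessionKey={session_key}"},
    )

def conversation_request(session_key: str, uuid: str) -> request.Request:
    """Build the full-message-tree request for a single conversation."""
    qs = parse.urlencode({"tree": "true", "rendering_mode": "messages"})
    return request.Request(
        f"{BASE}/chat_conversations/{uuid}?{qs}",
        headers={"Cookie": f"sessionKey={session_key}"},
    )
```

Sending either request with urllib.request.urlopen returns the JSON the frontend sees — as long as the sessionKey is fresh.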
The session cookie expires every few days and requires re-extraction from the browser — there's a helper script for that, but it's still manual. A cron job runs the sync every 8 hours while WSL is active, and a session-start hook alerts when the key has expired or sync is stale. It's a brittle hack held together with timestamp checks and error handling, and it's synced 274 conversations without losing one.
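The staleness alert reduces to comparing a timestamp against the cron cadence. A sketch, assuming the sync job writes an ISO-8601 timestamp to a stamp file on success (the file name and format are placeholders, not the author's exact scheme):

```python
from datetime import datetime, timedelta, timezone
from pathlib import Path

STALE_AFTER = timedelta(hours=8)  # matches the cron interval

def sync_is_stale(stamp_file: Path, now=None) -> bool:
    """True if the last successful sync is older than the cron interval,
    or if no sync has ever recorded a timestamp."""
    if not stamp_file.exists():
        return True
    last = datetime.fromisoformat(stamp_file.read_text().strip())
    now = now or datetime.now(timezone.utc)
    return now - last > STALE_AFTER
```

A session-start hook can call this and, on True, tell the user to re-extract the cookie or re-run the sync.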
- sessionKey cookie from browser — expires periodically, no refresh mechanism
- updated_at comparison — typical run makes 1 API call
- tree=True&rendering_mode=messages to get the complete message structure
- --overwrite to force update

Every note enters through a typed pipeline: Claude.ai API sync, Notion API sync, Notion export parser, or Apple Notes sync. Each pipeline is different — different APIs, different auth, different edge cases — but they all converge on the same contract: markdown with YAML frontmatter, deduplicated by source_id, tested in dev before touching prod.
The key design decision: idempotency as a shared invariant. Every pipeline can be re-run safely. source_id prevents duplicates. Incremental modes mean a typical sync makes 1 API call. And every pipeline has a --dry-run flag, because the first time you run something that writes 274 files is not the time to discover a bug.
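The invariant can be sketched as a source_id gate in front of every write. The frontmatter scan and file-naming scheme below are illustrative, not the author's exact code:

```python
from pathlib import Path

def existing_source_ids(vault: Path) -> set:
    """Collect source_id values from the YAML frontmatter of every note."""
    ids = set()
    for md in vault.rglob("*.md"):
        for line in md.read_text(encoding="utf-8").splitlines()[:15]:
            if line.startswith("source_id:"):
                ids.add(line.split(":", 1)[1].strip())
                break
    return ids

def write_note(vault: Path, source_id: str, body: str, *,
               seen: set, dry_run: bool = False, overwrite: bool = False) -> bool:
    """Write a note unless its source_id is already in the vault.
    Returns True if the note was (or would be) written."""
    if source_id in seen and not overwrite:
        return False                      # idempotent: re-runs skip known notes
    target = vault / f"{source_id}.md"    # placeholder naming scheme
    if not dry_run:
        target.write_text(f"---\nsource_id: {source_id}\n---\n{body}",
                          encoding="utf-8")
    seen.add(source_id)
    return True
```

With --dry-run mapped to dry_run=True, a full run reports exactly what it would write without touching a single file.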
Flat tags don't scale. When 95% of notes are tagged "ai" and 89% are tagged "career," the tags tell you nothing. The vault needed a classification system that could differentiate a resume optimization conversation from a freight logistics brainstorm from a baby sleep tracker.
The taxonomy uses two orthogonal axes. Domain captures what a note is about — 10 tags from domain/career to domain/meta. Mode captures what the note does — thinking, doing, reference, or reflection. A third axis, lifecycle, tracks where the note stands: seed, active, evergreen, or archived.
Classification uses scored keyword matching with title weighting. Title keywords count 3x. Longer notes require higher scores to qualify for a domain, preventing verbose documents from matching everything. Each note is capped at 3 domains and 2 modes — enough to capture cross-cutting themes, constrained enough to stay meaningful.
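A plausible shape for that scorer, with the 3x title weighting and the domain cap from above; the exact form of the length-scaled threshold is an assumption:

```python
def classify_domains(title: str, body: str, domain_keywords: dict,
                     title_weight: int = 3, max_domains: int = 3) -> list:
    """Score each domain by keyword hits; title hits count 3x.
    Longer notes need a higher score to qualify (assumed linear scaling)."""
    title_words = title.lower().split()
    body_words = body.lower().split()
    threshold = 2 + len(body_words) // 500   # assumed length-scaled threshold
    scores = {}
    for domain, keywords in domain_keywords.items():
        score = sum(title_weight for w in title_words if w in keywords)
        score += sum(1 for w in body_words if w in keywords)
        if score >= threshold:
            scores[domain] = score
    ranked = sorted(scores, key=scores.get, reverse=True)
    return ranked[:max_domains]              # cap at 3 domains per note
```

The same shape, with a cap of 2, handles the mode axis.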
Tags classify notes individually. Links connect them into a navigable graph. The link builder reads all 672 notes, computes pairwise similarity from shared domain tags and keyword overlap, and generates [[wikilinks]] to the top related notes for each document.
Similarity scoring combines tag Jaccard similarity (50%) with keyword Jaccard similarity (50%). Keywords are extracted from the title (weighted 3x) and the first 500 words of the body, with stopword filtering. The result: a baby sleep note links to other parenting notes, a PKM architecture conversation links to other systems-building conversations, and a ServiceTitan interview prep note links to other interview prep notes — across all three source systems.
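The 50/50 blend is small enough to sketch directly; the stopword list and top-N keyword cutoff here are placeholders:

```python
from collections import Counter

STOPWORDS = {"the", "a", "an", "and", "of", "to", "in", "is", "for", "on"}  # abbreviated

def extract_keywords(title: str, body: str, top_n: int = 20) -> set:
    """Title words count 3x; only the first 500 body words are considered."""
    counts = Counter()
    for w in title.lower().split():
        if w not in STOPWORDS:
            counts[w] += 3
    for w in body.lower().split()[:500]:
        if w not in STOPWORDS:
            counts[w] += 1
    return {w for w, _ in counts.most_common(top_n)}

def jaccard(a: set, b: set) -> float:
    return len(a & b) / len(a | b) if (a or b) else 0.0

def similarity(tags_a: set, kw_a: set, tags_b: set, kw_b: set) -> float:
    """50/50 blend of shared-tag overlap and shared-keyword overlap."""
    return 0.5 * jaccard(tags_a, tags_b) + 0.5 * jaccard(kw_a, kw_b)
```

Ranking every pair by this score and emitting [[wikilinks]] to each note's top matches gives the navigable graph.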
Series detection finds notes that belong to a sequence — "Phase 1, Phase 2, Phase 3" or date-stamped daily notes — and adds prev/next navigation on top of the similarity-based links. This means Claude can traverse a chain of related sessions in order, not just find nearby topics.
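Detecting "Phase N" style sequences can be a single regex pass over titles; date-stamped daily notes would need a second pattern, omitted from this sketch:

```python
import re

# Matches titles like "PKM Phase 2" or "Migration Part 1" (case-insensitive).
SERIES_RE = re.compile(r"^(?P<stem>.*?)\s*\b(?:phase|part)\s*(?P<n>\d+)\s*$", re.I)

def detect_series(titles: list) -> dict:
    """Group titles sharing a stem and return each chain in numeric order.
    Chains of length 1 are dropped: a lone 'Phase 1' is not a series."""
    chains = {}
    for title in titles:
        m = SERIES_RE.match(title)
        if m:
            stem = m.group("stem").strip().lower()
            chains.setdefault(stem, []).append((int(m.group("n")), title))
    return {stem: [t for _, t in sorted(members)]
            for stem, members in chains.items() if len(members) > 1}
```

Each chain then gets prev/next wikilinks written into its members, layered on top of the similarity links.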
The most operationally complex pipeline. Apple Notes has no API — not an undocumented one, not a deprecated one. None. The only way to get notes out programmatically is an iPhone Shortcut that iterates through the Notes app, serializes each note as a text file with inline metadata, and writes it to iCloud Drive. iCloud for Windows syncs those files to a local folder. The importer reads from there.
The Shortcut times out after ~50 notes. Covering a 278-note library means running it multiple times with different sort orders (by date created, date modified, title) to ensure full coverage. iCloud doesn't sync files eagerly — they appear as cloud-only placeholders until you open the folder in Windows Explorer to trigger download. The importer detects unreadable placeholders and skips them rather than failing.
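The placeholder skip can be as blunt as "try to read, skip on failure." A minimal sketch — real detection could also inspect Windows file attributes, which this version ignores, and the .txt extension is an assumption about the Shortcut's output:

```python
from pathlib import Path

def readable_notes(folder: Path):
    """Yield (path, text) for notes that are actually downloaded locally,
    skipping cloud-only placeholders and empty files rather than failing."""
    for f in sorted(folder.glob("*.txt")):
        try:
            text = f.read_text(encoding="utf-8")
        except OSError:
            continue            # cloud-only placeholder: not downloaded yet
        if text.strip():
            yield f, text
```

Anything skipped shows up on the next run, after Explorer has been nudged into downloading the files.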
Not everything lives in Claude conversations. Project specs, meeting notes, and venture evaluations were in Notion. Two separate pipelines handle this: an API sync for live pages (104 from my active workspace) and an export parser for a static guest workspace snapshot (77 pages).
The API sync detects updated pages by comparing last_edited_time against local frontmatter. Notion's block-based content model gets flattened into clean markdown. The export parser handles Notion's zip format and nested page hierarchies. Both are idempotent — same source_id pattern as the Claude.ai sync.
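The incremental check reduces to a timestamp comparison between Notion's response and the local note. A sketch, assuming both values are ISO-8601 strings and the frontmatter key mirrors Notion's field name (an assumption):

```python
from datetime import datetime

def needs_update(remote_last_edited: str, local_frontmatter: dict) -> bool:
    """True if Notion's last_edited_time is newer than the timestamp
    stored in the note's frontmatter, or if the note is new."""
    local = local_frontmatter.get("last_edited_time")
    if local is None:
        return True                       # never synced: fetch it
    parse = lambda s: datetime.fromisoformat(s.replace("Z", "+00:00"))
    return parse(remote_last_edited) > parse(local)
```

Notes that fail this check are skipped entirely, which is what keeps a typical sync down to one index call.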
Every change is tested in a dev vault before touching production. The dev vault is a structural mirror of prod with test fixtures. The promote pipeline runs: build, test against dev, test against prod.