FlashQuery is open source · Star on GitHub

Are you DoorDashing for every AI tool you use?

Fifteen minutes re-explaining yourself at the start of every session. Copy-pasting between Claude and Cursor. Ferrying the same three files into four different tools. You asked the AI to do work for you; somehow you're the one running the errands.

FlashQuery is the data layer that stops the errand-running. It connects the models to your stuff: your selected files and memories, exposed through one interface that every MCP-compatible AI tool can read and write.

git clone https://github.com/FlashQuery/flashquery
A delivery person rushing with bags of documents, visualizing the inverted dynamic of running errands for your AI tools

You're not the only one doing this.

People have started calling it context loss. You just want to say, "Hey, remember we talked about this?" This digital amnesia is like trying to accelerate with the parking brake on.

"A glorified human clipboard. Copy, paste, repeat. Copy, paste, cry a little."

— Developer post, builder.io

"My workflow is 70% AI, 20% copy-paste, 10% panic."

Hacker News

"AI coding tools still suck at context."

LogRocket, 2026

Harrison Chase (LangChain): "In order to own your memory, you need to be using an Open Harness." Nate Jones's agent stack analysis and Locked In make the same case from the infrastructure side.

The field has been trying to solve this for a year

Since early 2025, developers have been independently building persistent-memory systems for Claude Code. At least a dozen HN posts. Each solves part of the problem, but none spans all of it.

FlashQuery is what happens when that layer gets built once, in the open, across every data type.

What it is

  • A server process you run locally, configured via YAML.
  • An MCP server exposing one interface over your data to any MCP-compatible tool.
  • A Supabase-backed store for relational data, memory, and vectors.
  • A Markdown vault, a folder of plain text, editable in anything.
  • A Git repo underneath. Every change versioned. git push backs it up.
  • Plugin-extensible. Domains bring their own schemas, skills, and tools.
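
As a concrete sketch, a minimal config might look like the fragment below. The key names (`vault`, `database`, `plugins`) and values are illustrative assumptions, not FlashQuery's actual schema; the real reference lives in the repo.

```yaml
# Hypothetical flashquery.yaml -- key names are assumptions, not the documented spec
vault:
  path: ~/vault                          # plain Markdown folder, ideally a git repo
database:
  supabase_url: http://localhost:54321   # a local Supabase project you control
plugins:
  - crm
  - product-brain
```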

What it isn't

  • Not a SaaS. Not an agent framework. Not memory-only.
  • Not a replacement for Claude, Cursor, or any LLM. It sits between you and whichever model you use.
  • Not a note-taking app. Obsidian is still Obsidian.
FlashQuery architecture: AI clients connect via MCP to FlashQuery, which routes to Supabase (Postgres + pgvector), a Markdown vault, and a Git repo

Built on:

Supabase · Postgres · Markdown · Git · MCP

Nothing proprietary, no vendor-controlled backends, privacy-first, and stable.
If FlashQuery disappeared tomorrow, your data remains yours.

What FlashQuery handles so you don't have to

TL;DR: This is the months of plumbing you'd prefer not to build.

Drop a markdown doc in the vault folder on your filesystem. FlashQuery sees it, watches for changes, makes it semantically searchable. Edit it in TextEdit, Obsidian, VS Code, Vim, Emacs, whatever you use. No import workflow or sync daemon, and no app to open. ls it, grep it, git blame it. Your folders are the system.
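
Because the vault is just a folder, every step above can be exercised with standard tools and nothing FlashQuery-specific. A quick sketch, using a throwaway directory in place of your real vault path:

```shell
# The vault is a plain folder; any standard tool works on it.
vault=$(mktemp -d)                            # stand-in for your real vault path
echo "# Standup notes" > "$vault/standup.md"  # "drop a doc in the folder"
ls "$vault"                                   # it's an ordinary file...
grep -l "Standup" "$vault"/*.md               # ...searchable with grep, no app to open
```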

Memory lives in Postgres, managed by FlashQuery. Claude, Cursor, ChatGPT, Qwen, local models — all of them can read and write memories. Switch tools or models; the context doesn't reset because it was never inside any of them. Scope memory to a conversation, a project, a person. Then query it like the database it is.

If your vault folder happens to be a git repo, every document dragged into the vault, every edit an agent makes, and every decision written down is committed to Git with a descriptive message. git log to see what changed. git diff to see exactly how. git revert to undo anything the AI touched. Your vault is an auditable record of changes.
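
The audit trail is plain Git, so the undo story can be sketched with nothing but git itself. The file name and commit messages below are invented for the demo:

```shell
set -e
repo=$(mktemp -d) && cd "$repo"
git init -q
git config user.email demo@example.com && git config user.name demo
echo "keep markdown in the vault" > decision.md   # a decision the agent wrote down
git add decision.md && git commit -qm "agent: record storage decision"
echo "agent overwrote this" > decision.md         # a later edit you want undone
git add decision.md && git commit -qm "agent: revise storage decision"
git log --oneline                                 # what changed
git diff HEAD~1 HEAD -- decision.md               # exactly how
git revert --no-edit HEAD >/dev/null              # undo the agent's last change
cat decision.md                                   # the original line is back
```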

We built section-aware markdown doc tools. AI can get a synopsis without loading the whole document, insert a paragraph without rewriting the rest. Your token budget goes to reasoning instead of parsing. The MCP tool surface is designed so the model does the least work necessary to answer the question.
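
FlashQuery's actual tool surface is documented in the repo; as a rough illustration of why section-aware reads are cheap, here is the core idea in plain awk against a throwaway document:

```shell
doc=$(mktemp)
printf '# Guide\n\n## Setup\nInstall the deps.\n\n## Usage\nRun the server.\n' > "$doc"
# Read only the body of "## Setup": start after its heading, stop at the next "## "
awk '/^## Setup/{f=1; next} /^## /{f=0} f' "$doc"
```

The model gets the handful of lines it asked about, not the whole file.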

Claude Code, Cursor, ChatGPT Desktop, Cowork, Claude Desktop — one MCP config entry per tool, same vault, same memory, same plugins behind all of them. Add a plugin once and every connected tool sees its new capabilities automatically. No per-tool integration work to maintain.
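
For instance, Claude Desktop registers MCP servers in its claude_desktop_config.json under an mcpServers map. The command and args below are assumptions about how a FlashQuery build might be launched, not the documented invocation:

```json
{
  "mcpServers": {
    "flashquery": {
      "command": "flashquery",
      "args": ["serve", "--config", "~/flashquery.yaml"]
    }
  }
}
```

The other tools take an equivalent one-entry stanza in their own MCP config.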

FlashQuery is an MCP server. Point a Cloudflare Tunnel or Rathole at it and your entire data layer — vault, memory, plugins — is reachable from anywhere over a private connection you control. Your files stay on local disk. Nothing changes about how your AI tools connect to it.

Full architecture, plugin spec, and MCP tool surface: github.com/FlashQuery/flashquery

Create "AI apps" without building the infrastructure.

A plugin is a schema and a set of skills. FlashQuery handles the rest: storage, versioning, search, the MCP surface.

CRM Plugin
shipping

Contacts, businesses, opportunities, interactions, all in FlashQuery's data layer. No separate database, no login, no UI.

"I just had coffee with Sarah Chen,
 VP of Design at Meridian Labs."

Routes into Supabase, the vault, and Git at once. Later: "brief me before my Meridian call" returns everything the system knows.

Product Brain Plugin
shipping

A durable, queryable knowledge layer for the product you're building: definitions, research, decisions, requirements, tracking. Who needs Confluence?

"Why did we decide to store documents
 in Markdown instead of the database?"

Answer sourced to the decision document, because the document and the database share the vault.

Skill Server Plugin
on the roadmap

A skill registry built as a FlashQuery plugin: your .claude/skills/ directory with versioning, semantic discovery, and a telemetry-driven improvement loop.

Designed; implementation is on the roadmap.

The plugin spec is open. Browse the plugins repo · Join #flashquery on AI Product Hive if you're building one.

We run on it.

FlashQuery has been built and tested inside Claude Code and Cowork. The CRM and Product Brain plugins are in daily use by the FlashQuery team. This isn't a lab demo; it's the environment the project lives in.

Run it locally.

FlashQuery is a scoped server process. No cloud dependency, no telemetry, no account required. Connect it as a local MCP server and Claude Desktop, Claude Code, or Cursor can read and write your vault on the $20 Claude subscription, no Cowork needed.

Self-host in the cloud.

Stand up FlashQuery on a VPS or personal Supabase project you control. Same vault, same plugins, reachable from every machine. Nothing hosted by FlashQuery.

What's coming next

PDFs, Word docs, spreadsheets, same drop-in workflow.

Drop a non-Markdown file in the vault. A cheaper model handles the embedding pass so the expensive model gets structured context, not a raw binary. Same folder, more file types, no extra setup.

Plugins from the community.

The CRM and Product Brain plugins are the first two. The spec is designed for anyone to build against. We expect the third plugin won't come from us. It'll come from someone in #flashquery who needed something we hadn't thought of.

Describe what you want to track. FlashQuery builds the schema.

Tell FlashQuery in plain English what you need. It designs the right Postgres schema, creates the tables, wires up the MCP tools. You can export the result as a shareable plugin when you're done.

Document projections: the expensive model reads a summary, not the whole file.

A cheap model preprocesses vault documents into compact, skill-specific views. The expensive model reasons against that view instead of the raw file. Same document, as many projections as there are skills that use it. Token budget spent on reasoning, not parsing.

Stop DoorDashing. FlashQuery is on GitHub.

git clone https://github.com/FlashQuery/flashquery

Full setup, plugin spec, and architecture notes live in the repo.

View on GitHub

Best place to try it today: Claude Code or Cowork.

Find us in #flashquery on AI Product Hive.

Demo video coming soon.