TL;DR: If AI is becoming a primary interface for how we work, then the traditional "app" may be dissolving. You don't need a CRM application if the AI handles the interaction and the data just gets organized underneath. I'm building an open-source, self-hostable data layer that makes that possible, works with any AI, and keeps everything transparent, portable, and yours.

The Friction

I've been thinking a lot about a friction that I suspect many of you feel too, especially if you work with LLMs regularly, whether through a chat interface, a coding tool, or an AI agent.

The workflow tends to go something like this: you bring data to the AI, you do the work (drafting, analyzing, brainstorming, coding, whatever it may be), and then you have to bring data back out of the AI into whatever environment the rest of your work lives in. Some of what was created or discussed during that session matters later, some doesn't, and you're the one making those decisions and manually moving things around. Every session. Both directions.

Memory Is a Start, but It's Too Narrow

And if you switch AI tools, or even just start a new conversation, you're starting from scratch. Context, preferences, prior work — it's all locked inside a session or a provider. The most common solution so far has been "AI memory," which is a start, but I'd argue it's a narrow framing of a broader problem. What is a memory, really? It's stored information. A preference, a fact, a prior decision. But so is a document. So is a contact record. So is a project plan. Memory, as most AI tools implement it, is just one thin slice of all the information that flows through these workflows, and it's usually hidden away in a black box you can barely see, let alone organize.

There are open frameworks working to pull memory outside of the language model and give users more control, which is a step in the right direction. But my inkling is that memory alone is too narrow a frame — it's a subset of a much larger data management problem.

The Agents Are Freer, but the Data Isn't

There's been good progress on multiple fronts. Tools like OpenClaw and Claude's Cowork mode have shown what's possible when AI agents work directly with your files and your system, rather than pulling everything into a proprietary cloud. And we're seeing connectors emerge too — Claude recently added Google Workspace integration so you can access your Gmail, Calendar, and Drive from within a conversation, which is a clear signal that people want their data reachable from inside the AI. That's all meaningful progress. But each of these solutions is still tied to a specific vendor pairing, a specific platform, or a specific AI provider. The data itself — all the information generated and consumed across AI workflows — still doesn't have a managed, vendor-agnostic home. The agents are freer, but the data they work with is still largely ad hoc.

The App Is Dissolving

This is where my thinking took a turn. If AI is increasingly becoming a primary interface for how we work (and I believe it will continue to be), then something interesting starts happening to the concept of a traditional "app."

Consider a CRM. There are three levels of how AI can interact with it (with the third covering where I think things are headed):

The first level is what most of us do now: You might use AI to capture details about a meeting you just had, get some useful summarization output, and then go open your CRM to manually enter the relevant details. The AI helped with the thinking, but the data entry is still on you.

The second level is what many CRMs are moving toward: they have their own AI chat built in, which is convenient for capture tasks like the one above. But you're still operating within the boundary of what that CRM provides. You can't easily traverse across that boundary into your project notes, your meeting history, or anything else that lives outside the app's walls. Or, in a slightly different flavor, people imagine the AI auto-inserting information into the CRM for them, which eliminates the manual step but still treats the CRM as the center of gravity.

The third level (which I think is the more interesting one): What if the CRM as a separate application just... didn't need to exist? What if the AI was the interface, and the data was simply stored in a way that you could access it naturally through the AI, and simultaneously through a set of auto-organized, human-readable documents that let you browse and edit everything in the most natural way possible?

That's the dissolved app. The application as a distinct piece of software goes away. What remains is the data, managed by an infrastructure layer that any AI can talk to. (Yes, I realize this sounds like a concept car. Bear with me.)

What That Actually Looks Like

To make this less abstract, here's what happens under the hood when you tell the AI:

"I just had coffee with Sarah from Acme; they're unhappy with their design agency and budgeting for a rebrand in Q3."

From that single statement, the data layer:

  1. Creates a contact document for Sarah (or updates it if she already exists), written as a plain or templated markdown file you could open and edit in any text editor. The Q3 rebrand opportunity and their dissatisfaction with their current agency are captured as natural prose in the document, not crammed into form fields.
  2. Writes a structured record to a database with Sarah's association to Acme, interaction date, and relevant tags, so you can later run precise queries like "who haven't I spoken to in 30 days?" or "show me all contacts tagged as qualified leads."
  3. Embeds the context semantically, so that weeks later when you ask "who mentioned being unhappy with their current vendor?" or "which companies are budgeting for Q3?", the system surfaces the right results by meaning, not just by keyword or tag match.
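The three-way fan-out above can be sketched in a few lines of Python. Everything here is illustrative — the `contacts` schema, the `capture` function, and the placeholder `embed` (a real system would call an embedding model) are my assumptions, not the project's actual API:

```python
import sqlite3
from datetime import date

def embed(text: str) -> list[float]:
    # Stand-in for a real embedding model: a tiny bag-of-words vector.
    words = text.lower().split()
    vocab = sorted(set(words))
    return [words.count(w) / len(words) for w in vocab]

def capture(name: str, company: str, note: str, db: sqlite3.Connection):
    # 1. Human-readable markdown document you could open in any editor
    doc = f"# {name} ({company})\n\n- {date.today()}: {note}\n"
    # 2. Structured record for precise queries ("who haven't I spoken to?")
    db.execute(
        "INSERT INTO contacts (name, company, last_contact, tags) VALUES (?, ?, ?, ?)",
        (name, company, str(date.today()), "qualified-lead"),
    )
    # 3. Semantic embedding for retrieval by meaning, not keyword
    return doc, embed(note)

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE contacts (name TEXT, company TEXT, last_contact TEXT, tags TEXT)")
doc, vec = capture(
    "Sarah", "Acme",
    "Unhappy with their design agency; budgeting for a rebrand in Q3.", db,
)
```

One utterance, three representations — and the user never sees a form.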

No forms, no fields, no switching apps. These three layers work together — a query like "show me contacts at companies budgeting for Q3" combines semantic search (finding the Acme document where Q3 budgeting is mentioned) with structured records (resolving which contacts are associated with Acme). The data is organized across documents, structured records, and semantic embeddings simultaneously, so you can reach it from whichever angle makes sense at the time: browsing files, querying the database, or just asking the AI.
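That hybrid query can be sketched as a two-step lookup, with toy vectors standing in for real embeddings — the document names, vector values, and `cosine` helper here are all illustrative assumptions:

```python
import math

def cosine(a: list[float], b: list[float]) -> float:
    # Standard cosine similarity between two vectors.
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb) if na and nb else 0.0

# Toy embeddings standing in for a real model's output
doc_vectors = {"acme.md": [0.9, 0.1, 0.4], "globex.md": [0.1, 0.8, 0.2]}
doc_company = {"acme.md": "Acme", "globex.md": "Globex"}
contacts = [("Sarah", "Acme"), ("Hank", "Globex")]

query_vec = [0.85, 0.15, 0.5]  # pretend embedding of "companies budgeting for Q3"

# Step 1: semantic — rank documents by similarity to the query
best_doc = max(doc_vectors, key=lambda d: cosine(doc_vectors[d], query_vec))
# Step 2: structured — resolve which contacts belong to the matching company
hits = [name for name, company in contacts if company == doc_company[best_doc]]
```

The semantic pass finds the right document by meaning; the structured pass turns that document into an exact answer.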

But this is where it gets interesting. You already have connectors into your email, so having the AI parse messages (sent and received) and annotate summaries of key points into the same contact records is straightforward, since those contact documents from earlier are still available to the AI. Before you even read your email, the AI could have reviewed an inbound message from a prospect, summarized it in their contact document, and notified you that there's an opportunity to close. And there's no traditional CRM software in sight.
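As a hedged sketch of that email step — `summarize` stands in for an LLM call, and the keyword-based opportunity flag is a deliberately naive placeholder for model-driven judgment:

```python
def summarize(body: str) -> str:
    # Placeholder: a real system would ask the model for a summary.
    return body.strip().splitlines()[0]

def annotate(contact_doc: str, sender: str, body: str) -> tuple[str, bool]:
    # Append the email summary to the existing contact document...
    summary = summarize(body)
    updated = contact_doc + f"\n- Email from {sender}: {summary}"
    # ...and flag a possible opportunity (naive keyword check for illustration).
    opportunity = any(w in body.lower() for w in ("budget", "proposal", "timeline"))
    return updated, opportunity

doc = "# Sarah (Acme)\n\n- 2025-05-01: Coffee chat; rebrand planned for Q3.\n"
doc, flag = annotate(
    doc, "sarah@acme.example",
    "We approved budget for the rebrand. Can you send over a proposal?",
)
```

The contact document accumulates history, and the "opportunity to close" notification falls out of the same pass — no CRM in the loop.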

What I'm Building

It isn't a memory system, and it isn't a knowledge base. It's the data infrastructure layer that sits underneath both of those concepts and unifies them with structured records, all accessible through a single interface that any LLM can connect to.

Concretely, it's an open-source data layer that runs in a container on your own machine, sits between any AI you choose (completely vendor-agnostic), and manages a structured store underneath. Everything is transparent — your data lives as human-readable documents and in a database that you host, in open formats you own. The AI reads from it and writes to it. But so can you, directly, using whatever tools you already use. No manual bridging between the AI world and the rest of your workflow.

The "app" dissolves, but the data remains, organized and accessible. It's private-first, self-hostable, and not tied to any single AI provider — you can switch tools without losing anything, because the data was never theirs to begin with.

Where I Am and Where This Is Going

I have a working implementation that I'm using daily. It's not a concept deck or a whitepaper — it's running software that manages documents, records, and semantic memory through a unified layer, connected to whichever LLM I happen to be working in on a given day. I'm building toward an open-source release, and I'm sharing this now because I want real feedback before that happens.

A few things I'm genuinely curious about: Does this friction resonate with how you work with LLMs today? Does the idea of the "dissolved app" — where the AI is the interface and the data layer handles the rest — feel like the right direction? And if you had something like this, what's the first problem you'd point it at?

If this is something you'd want to try when the repo drops, or if you think I'm headed down the wrong path entirely, I'd like to hear either way. You can reach me directly or drop a comment.

Matt Genovese
Founder at FlashQuery, giving enterprises sovereign control over their AI control plane.