
Knowledge and memory

The single most misunderstood thing about Jootle is its memory.

People assume it works like a chatbot, where every conversation is fresh and the AI forgets you the moment the window closes. Or they assume the opposite, that it has a perfect transcript of every word you’ve ever said and grep-searches it on demand.

Both are wrong. The truth is more useful.

Your AI has two distinct kinds of memory, and they work together.

Conversational memory. Recent messages, in order. This is what you’d call short-term memory. When you say “the third one”, your AI knows what “the third one” refers to because it has the last few minutes of conversation right there.

The knowledge graph. A structured picture of the people, projects, places, and facts that matter in your life. This is long-term, accumulated over weeks and months. When you say “tell Lily I’ll be late”, your AI knows who Lily is from this graph, regardless of whether her name came up in today’s chat.

You don’t choose which one is used. Your AI picks the right one (or both) for each question. You just notice that it stops re-asking you who Lily is after the first few times.
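To make the split concrete, here is a minimal sketch of the two stores and how a single question can draw on both. The class names, fields, and combining step are illustrative assumptions, not Jootle's actual schema.

```python
from dataclasses import dataclass, field


@dataclass
class ConversationalMemory:
    """Short-term: the recent messages of the current chat, in order."""
    messages: list[str] = field(default_factory=list)

    def recent(self, n: int = 10) -> list[str]:
        return self.messages[-n:]


@dataclass
class KnowledgeGraph:
    """Long-term: entities and facts accumulated over weeks and months."""
    # e.g. {"Lily": {"type": "person", "relationship": "wife"}}
    entities: dict[str, dict] = field(default_factory=dict)

    def lookup(self, name: str) -> dict | None:
        return self.entities.get(name)


def build_context(question: str, chat: ConversationalMemory, graph: KnowledgeGraph) -> dict:
    """Both stores can contribute to the context for a single question."""
    mentioned = [name for name in graph.entities if name.lower() in question.lower()]
    return {
        "recent_messages": chat.recent(),                            # resolves "the third one"
        "known_entities": {n: graph.lookup(n) for n in mentioned},   # resolves "Lily"
    }
```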

Facts get into the graph in two ways. One is invisible, one is explicit.

When you have ordinary conversations with your AI, it quietly notices facts and adds them to the graph. The first time you say “Lily and I are heading out to dinner”, your AI files away that Lily is a person, that you have a relationship with her, and that you do things together. It doesn’t interrupt to confirm. It just remembers.

Over weeks, the picture fills in. Your AI starts to know who works at which company, which kid plays which sport, which clients pay on net-30 and which pay on net-60. None of it is spreadsheet data entry. It’s the residue of normal conversation.

You can inspect the graph at any time. The Knowledge Map in your sidebar shows the entities your AI has formed opinions about, the relationships between them, and (under the hood) the conversations that produced each piece. If something is wrong, you correct it in plain English. (“Lily is my wife, not my partner.”) The fix flows back into the graph and into every future answer.
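As a rough picture of what the Knowledge Map surfaces, here is one hypothetical entry: the entity, its relationships, and the conversations that produced each piece. The field names and dates are illustrative assumptions, not the real storage format.

```python
# One illustrative Knowledge Map entry for "Lily" (shape and field names assumed).
lily = {
    "entity": "Lily",
    "type": "person",
    "relationships": [
        {"to": "Shane", "kind": "partner", "source": "chat 2024-03-02"},
    ],
    "facts": [
        {"text": "Has dinner with Shane regularly", "source": "chat 2024-03-02"},
    ],
}

# A plain-English correction ("Lily is my wife, not my partner") would update
# the stored relationship, and every future answer reads the corrected value.
lily["relationships"][0]["kind"] = "wife"
```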

Sometimes you want to be sure a fact lands. For that, there’s a tag.

When a [REMEMBER:] tag appears in your AI’s response, that line gets saved to long-term memory, regardless of whether it would have been picked up naturally. You don’t usually type this yourself; you ask your AI to remember something and it produces a [REMEMBER:] line:

You: Make a note that I prefer outbound calls to be made between 9 and 11 in the morning, never after lunch.

Your AI: Got it. [REMEMBER: Shane prefers outbound calls between 9 and 11am, not after lunch.]

You can see the [REMEMBER:] line in the response. That’s how you confirm it actually saved. If you don’t see one, ask again, possibly more explicitly.

There’s nothing magical about the tag. It’s a hook the AI uses to write to memory, and putting it in the response makes the save transparent and reversible. If you ever want a memory erased, you tell your AI to forget it, and the row is deleted.
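To show how ordinary that hook is, here is a minimal sketch under the assumption that the system simply scans each response for [REMEMBER: …] lines and erasing a memory deletes the matching row. The store and function names are illustrative, not Jootle's actual implementation.

```python
import re

# Matches the bracketed tag and captures the fact inside it.
REMEMBER_PATTERN = re.compile(r"\[REMEMBER:\s*(.+?)\]")

long_term_memory: list[str] = []


def process_response(response: str) -> str:
    """Save any [REMEMBER:] lines, then return the response unchanged,
    so the tag stays visible -- that visibility is what makes the save confirmable."""
    for fact in REMEMBER_PATTERN.findall(response):
        long_term_memory.append(fact.strip())
    return response


def forget(phrase: str) -> None:
    """Erasing a memory is just deleting the matching row."""
    long_term_memory[:] = [f for f in long_term_memory if phrase.lower() not in f.lower()]


process_response("Got it. [REMEMBER: Shane prefers outbound calls between 9 and 11am, not after lunch.]")
```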

What gets remembered tends to be:

  • Stable facts. Names, relationships, addresses, preferences, recurring constraints. Things that will still be true next month.
  • The shape of ongoing work. That the kitchen project exists, who’s involved, what the budget is. The project’s current status, not a log of every change to it.
  • Standing instructions. Rules you’ve given your AI. “Don’t draft emails over 150 words unless I ask for more.” “Always price in USD.” “I get nervous when invoices go over 90 days.”

What does not get pulled into the graph:

  • Casual chat. “What’s the weather”, “tell me a joke”, and so on. These are in conversational memory while they’re recent and then they age out.
  • Single-use scratch data. A specific value you needed for one task (“the meeting is at 3pm Tuesday”) is in conversation context, not in the long-term graph.
  • Anything sensitive that shouldn’t persist. If you say “this is just between us, don’t write it down”, your AI honors that. (If you’re not sure it understood, ask “did you write that down?” and it’ll tell you. If it shouldn’t have, ask it to forget.)

This split matters. A well-organized knowledge graph stays useful for years. A knowledge graph stuffed with every passing detail becomes noise.

You’ll notice the difference within a week.

Early on. Your AI asks more questions. “Who is Lily?” “What’s A Box of Chaos?” “Should I assume that’s for the kitchen project?” This is fine. It’s filling in the graph.

After a couple of weeks. Those questions stop. You can say “tell Lily I’ll be late” and the email composes itself. You can say “log this against the kitchen project” and your AI knows which project, which contractor, which budget line.

After months. Cross-context recall starts to surprise you. “Why did we go with Mike instead of the other contractor?” produces an answer because your AI remembers the comparison conversation, the contractor’s quote, and the reasoning that led to the decision. You didn’t put that anywhere on purpose. It just accumulated.

Two questions come up:

“Is my data going to OpenAI / Anthropic / DeepSeek?” Your prompts go to whichever AI provider is handling the current task (see AI providers and routing). Those providers process the prompt to produce a response and, per their policies, do not train on your data. They do not see your knowledge graph as a whole, only the relevant slices needed to answer a given question.

“Can someone else read my graph?” No. Your Jootle instance runs on a private VPS we provision and operate for you, single-tenant by design. Jootle the company can support and update your instance, but we don’t browse customer data. Your knowledge graph lives in your instance’s database, isolated from every other customer’s, full stop.

The same patterns work whether the fact is small or large:

  • State it as a fact. “Lily is my wife.” “We bill our clients monthly.” “Forge is the toolkit I use for content production.”
  • Correct an existing fact when it’s wrong. “Actually, Mike is the electrician, not the general contractor.”
  • Set a standing rule. “Never schedule meetings on Fridays after 3pm.” “Always summarize long documents under 300 words unless I ask for the full thing.”
  • Erase a memory. “Forget that I said Mike was the general contractor.”

There’s no API. There’s no form. You just talk. The [REMEMBER:] tag and the corresponding write happen behind the scenes, and you’ll see the tag in the response if you want to confirm.

Where memory shows up in the rest of the handbook


You’ll see references back to this chapter throughout the rest of the handbook. The short version every time:

  • In projects: your AI remembers who is on each project and what’s been decided.
  • In contacts: the “Contacts” view is a slice of the knowledge graph filtered to people.
  • In integrations: when you connect Gmail, your AI starts learning who you correspond with and which threads matter.
  • In standing instructions: the rules you set in one conversation apply across every future conversation.

It’s all the same memory.