Correcting and teaching

Your AI will get things wrong. This isn’t a defect of Jootle; it’s the nature of any assistant that is learning your life, your preferences, and your work in real time. What matters is what happens after the mistake.

This chapter is about the four kinds of corrections, and how to make each one stick.

1. A factual correction. Your AI believed something that isn’t true. (“You thought Mike was the general contractor, but Mike is the electrician. The general contractor is Anna.”)

2. A preference correction. Your AI did something the way it does for most people, and you want it done differently. (“Don’t include the executive summary in weekly reviews. I read the whole thing.”)

3. A judgement correction. Your AI picked the wrong option among reasonable options. (“You suggested the cheaper contractor, but we went with the other one. Don’t always optimize for cost; in this case quality was the right call.”)

4. A behavior correction. Your AI did something it shouldn’t have. (“Don’t send emails on my behalf during weekends. Even if I ask, push back and confirm.”)

The fix for each is different. The mechanism for making the fix durable is the same.

State the correction. Then add the words “from now on” or “remember that” or “going forward”.

That second piece is the part that turns a one-time correction into a durable rule.

Without “from now on”: “No, Anna is the general contractor, not Mike.”

Result: your AI corrects the immediate response, but a week later might make the same mistake.

With “from now on”: “No, Anna is the general contractor, not Mike. Remember that for the kitchen project.”

Result: your AI writes a [REMEMBER:] line, the knowledge graph updates, and a week later it gets Anna right without prompting.

You don’t need a magic word. “Remember that”, “going forward”, “from now on”, “always”, “by default” all work. What matters is that you’re signaling persistence.
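As an illustration only (Jootle’s internals aren’t documented here), the mechanic can be pictured as a check for persistence phrases that decides whether a correction becomes a durable rule. The phrase list and function name below are hypothetical, not Jootle’s actual behavior:

```python
# Hypothetical sketch: deciding whether a correction should persist.
# The signal list and names are illustrative, not Jootle's real API.
PERSISTENCE_SIGNALS = (
    "from now on", "remember that", "going forward", "always", "by default",
)

def is_durable(correction: str) -> bool:
    """Return True if the correction carries a persistence signal."""
    text = correction.lower()
    return any(signal in text for signal in PERSISTENCE_SIGNALS)

one_off = "No, Anna is the general contractor, not Mike."
durable = ("No, Anna is the general contractor, not Mike. "
           "Remember that for the kitchen project.")
```

The one-off form only fixes the current response; the durable form is what gets written down.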

Factual corrections

These are the simplest: plain language, present tense, with the persistence signal.

“Anna is the general contractor on the kitchen project, not Mike. Mike is the electrician. From now on please get this right.”

Your AI updates the relevant entities in the knowledge graph. The next reference to “the contractor” on that project points to Anna.

If the fact was wrong in a specific artifact that’s already been produced, ask for the artifact to be revised too. The knowledge graph fix changes future outputs, not past ones.
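To make that distinction concrete, here is a toy model, with a plain dictionary standing in for the knowledge graph (none of these names are Jootle’s): the correction changes what future lookups return, but an artifact produced earlier keeps its old text.

```python
# Toy model: a plain dict stands in for the knowledge graph (illustrative only).
graph = {
    "kitchen project": {"general contractor": "Mike", "electrician": "Mike"},
}

# An artifact produced before the correction keeps its old text.
old_email = f"Hi {graph['kitchen project']['general contractor']}, about the kitchen."

# The factual correction updates the entity...
graph["kitchen project"]["general contractor"] = "Anna"

# ...so future outputs resolve to Anna, but the old artifact still says Mike.
new_email = f"Hi {graph['kitchen project']['general contractor']}, about the kitchen."
```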

Preference corrections

These are about how your AI presents work to you, not about facts in the world.

“Going forward, default to under 150 words for any draft email unless I specifically ask for a longer one.”

“I prefer status updates in plain prose, not bullet lists, when they’re for an internal audience.”

“Don’t include executive summaries on weekly reviews; I read the whole thing.”

Preference corrections often need scoping. “Short” is meaningless without a number. “Direct” is fuzzy without an example. The more concrete the correction, the more reliably your AI applies it.

Examples of well-scoped preferences:

  • Audience-scoped: “When writing for Lily, drop technical detail.”
  • Length-scoped: “Default replies to clients under 200 words.”
  • Format-scoped: “When I ask for a comparison, default to a table, not a list of paragraphs.”
  • Tone-scoped: “When emailing my mother, warmer and less businesslike.”

You can layer them. The combination “to Lily, under 150 words, no bullet points, friendly tone” is a perfectly valid stack of preferences that all fire together when the audience is Lily.
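One way to picture a preference stack, under invented names (the rule format here is made up for illustration, not Jootle’s storage format): every rule whose scope matches the audience fires, and their settings merge.

```python
# Hypothetical preference store: each rule has a scope and one setting.
preferences = [
    {"audience": "Lily", "max_words": 150},
    {"audience": "Lily", "bullets": False},
    {"audience": "Lily", "tone": "friendly"},
    {"audience": "clients", "max_words": 200},
]

def rules_for(audience: str) -> dict:
    """Merge every preference whose scope matches the audience."""
    merged: dict = {}
    for rule in preferences:
        if rule["audience"] == audience:
            merged.update({k: v for k, v in rule.items() if k != "audience"})
    return merged
```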

Judgement corrections

These are trickier, because the right answer depends on context: you’re trying to teach your AI to recognize context, not to apply a fixed rule.

“This time we went with quality over cost. Going forward, if a decision affects something we’ll live with for years (the kitchen, the office, anything physical), bias toward quality even if it’s more expensive. If it’s something temporary or low-stakes, bias toward cost.”

That gives your AI a principle to apply, not a fixed answer.

Judgement corrections work best when you explain the reason, not just the choice. “We picked the more expensive contractor because the cheap one’s references were thin” is much more useful than “we picked Anna”. The first one generalizes; the second one only applies to that decision.

Behavior corrections

These restrict what your AI is willing to do.

“Don’t draft replies to my therapist. Pass those messages straight to me, and let me write the reply myself.”

“Never send anything to a client between Friday at 5pm and Monday at 9am, even if I ask.”

“If a request involves money over $500, ask me twice before doing it.”

These should be rare. Most customers don’t need many behavior corrections; the approval gates (Approvals and gates) do most of the work. But specific guardrails you want to lock in, especially around emotional or high-stakes communication, are worth stating once and stating clearly.
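A guardrail like the weekend rule above can be pictured as a simple time check that runs before any client send; this is a sketch under assumed semantics (Friday 5pm through Monday 9am, local time), not how Jootle implements gates:

```python
from datetime import datetime

# Hypothetical guardrail: block client sends over the weekend window,
# Friday 17:00 through Monday 09:00, even on explicit request.
def blocked_by_weekend_rule(when: datetime) -> bool:
    day, hour = when.weekday(), when.hour  # Monday == 0 ... Sunday == 6
    if day == 4 and hour >= 17:  # Friday from 5pm
        return True
    if day in (5, 6):            # all of Saturday and Sunday
        return True
    if day == 0 and hour < 9:    # Monday before 9am
        return True
    return False
```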

Sometimes the right correction is “forget that”.

“Forget that I said the launch date was June 5. Don’t reference it anywhere.”

Your AI removes the entry from the knowledge graph and confirms. Afterwards, ask the same question fresh to verify the memory is actually gone (erasure usually works, but if a fact has propagated to several places, you may need to chase the lingering references).

There’s no audit log of erased memories for the customer; they’re gone. (Internal logs may retain the change for technical reasons, but they’re not surfaced.)

Once in a while, your AI will resist a correction. This usually means one of two things:

  1. The correction conflicts with a standing rule you set earlier. “You told me always to bias toward quality on physical projects, and you’re now telling me to use the cheaper option. Which rule should win?” That’s worth answering, because the answer becomes the next rule.

  2. The correction looks like it could cause harm. “You’re asking me to forget that the lawyer’s email is privileged. I want to confirm because that may matter later.” Tell it to forget anyway if you mean it, or confirm the constraint.

Pushback isn’t disobedience. It’s a moment where your AI noticed a conflict you might not have. Use it.

A reasonable daily rhythm with corrections:

  • During the day: correct things in the moment with “from now on” attached.
  • Once a week: scan your activity log for things that almost-but-not-quite landed right. Each one is a candidate for a durable preference.
  • Once a month: ask your AI to summarize what it thinks your top rules are. (“What are the standing instructions you’re operating under right now?”) Read the list, prune the ones that aren’t quite right.

Over a few months this produces an assistant calibrated to your actual preferences, not the default ones. That’s the goal.