When things go wrong
This chapter is for the moments when your AI gets it wrong. Not the moments when it’s slow or its phrasing is awkward — the moments when something real is off.
Three categories cover the great majority of these:
- It said it did something, and it didn’t.
- It did something you didn’t want.
- It can’t seem to do something it should be able to do.
Each has a pattern.
“It said it did, and it didn’t”
This is the most disorienting. You ask for an action, you get a confident reply that it’s done, and reality says otherwise. The email isn’t sent. The calendar event isn’t there. The task isn’t on the list.
The cause is almost always that the action never actually fired. Your AI may have drafted the message, hit an internal gate or error, and then summarized in a way that read like completion when it was really a pause.
Diagnostic steps:
- Check the Activity log (Settings → Activity). Every action your AI takes is logged with a timestamp, action type, and outcome. If the action you expected isn’t there, it didn’t happen. If it’s there with a “failed” status, you have the cause.
- Check the relevant approval gates (Approvals and gates). The action may be sitting in an approval queue waiting for you.
- Re-ask explicitly. “Did you actually send that email? Show me the activity entry.” A direct question forces an honest answer.
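These three checks boil down to one small decision tree. Here is a minimal sketch of that logic in Python; the log entries and field names are invented for illustration and are not Jootle’s actual log schema:

```python
# Hypothetical activity-log entries; field names are illustrative,
# not Jootle's real log schema.
activity_log = [
    {"time": "09:14", "action": "email.send", "status": "failed"},
    {"time": "09:15", "action": "task.create", "status": "queued"},
]

def triage(action_type, log):
    """Classify a missing action: never attempted, queued, failed, or done."""
    entries = [e for e in log if e["action"] == action_type]
    if not entries:
        return "never attempted: re-ask, and say explicitly that you want it sent"
    latest = entries[-1]
    if latest["status"] == "queued":
        return "sitting in an approval gate: approve or revise"
    if latest["status"] == "failed":
        return "attempted and failed: ask for a retry, citing the error"
    return "completed: the log entry is your receipt"

print(triage("email.send", activity_log))   # attempted and failed: ...
print(triage("email.reply", activity_log))  # never attempted: ...
```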
Fix forward:
- If the action is queued: approve or revise.
- If the action failed: ask your AI to retry, with the error in mind. (“Try sending again; last time it failed.”)
- If the action was never attempted: phrase the request more directly. “Send this email to Mike now. Don’t draft; send.”
To prevent this pattern: be wary of “Done!” without a citation. When your AI says it sent something, ask “what time?” or “show me the message”. The Activity log answers these questions cleanly.
“It did something you didn’t want”
The opposite case. An action happened, and you wish it hadn’t.
Steps:
- Identify the action. The Activity log will show what was done and to whom.
- Reverse if possible. Email sent: send a follow-up correcting. Calendar event created: delete it. Task created: remove it. Most actions are reversible, just not free.
- Diagnose the prompt. Look at the message you sent. Was it ambiguous? Did it imply more authorization than you meant? “Reach out to Mike” can be interpreted as “draft an email” or “send an email”; if you wanted the former, the wording was the problem.
- Adjust the gate. If this category of action shouldn’t have fired without confirmation, tighten the relevant gate. “From now on, never send emails to clients without my explicit approval.”
Most “did something I didn’t want” moments are about gates, not the AI getting confused. The fix is at the policy level, not the per-conversation level.
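To make “policy level” concrete, here is a toy sketch of what an approval gate conceptually does. The rule names, audiences, and structure are all assumptions for illustration, not Jootle’s actual configuration format:

```python
# Toy model of approval gates; names and structure are illustrative,
# not Jootle's actual configuration.
GATES = [
    # (action prefix,   audience,  policy)
    ("email.send",      "client",  "require_approval"),
    ("email.send",      "anyone",  "auto"),
    ("calendar.create", "anyone",  "auto"),
]

def policy_for(action, audience):
    """Return the first matching policy; specific rules sit above general ones."""
    for act, aud, policy in GATES:
        if action.startswith(act) and aud in (audience, "anyone"):
            return policy
    return "require_approval"  # safe default: when in doubt, ask

# Tightening a gate is a one-line policy change, and it applies to every
# future conversation, not just the one where the mistake happened.
print(policy_for("email.send", "client"))  # require_approval
print(policy_for("email.send", "friend"))  # auto
```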
“It can’t seem to do something it should”
You ask, and your AI either says it can’t, or it tries and produces something useless.
Possibilities:
- It actually can’t. The capability isn’t built. To get an honest answer, ask “is this a feature you actually have today?” Your AI is supposed to be candid about its limits, but it can be overly optimistic in framing what it can do. Push back.
- Configuration is missing. The integration is disconnected, a toolkit isn’t installed, a credential is expired. Check Settings → Integrations and the connected provider’s health (green / yellow / red dots).
- The phrasing is wrong. The thing exists but your request doesn’t trigger it. Try saying the same thing differently. (“Schedule” vs. “block” vs. “add to my calendar”.)
- An upstream service is down. Sometimes Google’s API is having a bad day, sometimes Anthropic is throttling. The Activity log should show the failure. Try again in a few minutes.
The pattern that helps: when your AI says “I can’t”, ask why. “Can’t because the integration isn’t connected” leads to one fix. “Can’t because this feature doesn’t exist” leads to a different one.
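In the same illustrative spirit, the why-to-fix mapping is essentially a lookup table. The cause labels below are invented for the sketch, not an official taxonomy:

```python
# Illustrative cause-to-fix table; the labels are invented, not official.
FIXES = {
    "feature_missing":    "stop retrying; the capability doesn't exist yet",
    "integration_down":   "reconnect it under Settings → Integrations",
    "credential_expired": "re-authenticate the provider",
    "phrasing_missed":    "rephrase the request ('schedule' vs. 'block')",
    "upstream_outage":    "wait a few minutes, then try again",
}

def next_step(cause):
    """Map a stated cause to its fix; for unknown causes, keep asking why."""
    return FIXES.get(cause, "ask your AI why it can't, then match the answer")

print(next_step("integration_down"))
print(next_step("mystery"))
```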
When your AI just gets a fact wrong
Less dramatic, more common. Your AI thinks Mike is the contractor when Mike is the electrician.
Fix in conversation:
“No, Anna is the contractor. Mike is the electrician. Remember that going forward.”
Your knowledge graph gets updated. See Correcting and teaching.
If the wrong fact also appears in an artifact already produced, ask for that artifact to be revised separately. The knowledge graph fix changes future outputs; it doesn’t rewrite past ones.
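A toy sketch makes the future-versus-past distinction visible. The fact store and render function here are invented for illustration; Jootle’s knowledge graph is richer than a dictionary:

```python
# Toy model: facts feed future outputs; past artifacts are frozen snapshots.
# Invented for illustration; not Jootle's actual data model.
facts = {"contractor": "Mike", "electrician": "Anna"}  # the wrong version

def render_summary(f):
    return f"Contractor: {f['contractor']}. Electrician: {f['electrician']}."

old_artifact = render_summary(facts)  # produced before the correction

# The correction updates the store...
facts["contractor"], facts["electrician"] = "Anna", "Mike"

new_artifact = render_summary(facts)  # future outputs pick up the fix
print(old_artifact)  # still wrong: ask for this artifact to be revised
print(new_artifact)  # correct going forward
```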
When the conversation gets weird
Sometimes a conversation starts behaving oddly. Your AI loses track of what you’re working on, gives evasive answers, or seems “stuck”.
Two things to try:
- Start a new conversation. Open a new chat thread. The previous thread’s accumulated context won’t follow you, but the knowledge graph (the long-term stuff) will.
- Be explicit about what you want. “Forget what we were talking about. Here’s the new request: …”
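One more sketch, for the memory distinction at work here: two kinds of state with different lifetimes. The names are invented and illustrative only:

```python
# Illustrative only: per-thread context dies with the thread;
# the knowledge graph persists across conversations.
knowledge_graph = {"contractor": "Anna"}  # long-term, shared across threads

def new_conversation():
    return {"history": []}  # short-term context starts empty

thread_1 = new_conversation()
thread_1["history"].append("...a long, tangled discussion...")

thread_2 = new_conversation()  # fresh thread: no tangle carried over
assert thread_2["history"] == []
assert knowledge_graph["contractor"] == "Anna"  # long-term facts remain
```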
For more persistent oddness, check:
- The provider’s health (Settings → AI Configuration). A “yellow” or “red” dot means response quality may be degraded.
- Recent system messages. Sometimes a temporary issue is flagged in the activity log or the dashboard.
When you’re stuck
If none of the above resolved it:
- Take a screenshot of what you saw, including any error messages or strange responses.
- Note the timestamp and what you’d asked for.
- Email support@jootle.com with the details. We can investigate from the server side, which often surfaces causes that aren’t visible to you.
We treat this seriously. “It worked weirdly this one time” is one of the most informative signals we get for improving Jootle. Don’t feel like you’re being dramatic by reporting it.
Common patterns that prevent future problems
The habits that compound:
- Tighten gates before loosening them. Better to click “approve” too many times in the first month than to miss a bad action.
- State preferences as you notice them. “From now on, default to X.” Builds your AI’s calibration to your needs.
- Review the Activity log weekly. Five minutes of skimming surfaces patterns you’d otherwise miss.
- Trust but verify. When your AI says it did something, periodically check that it actually did.
- Don’t authorize what you wouldn’t read carefully if a human had drafted it. Approval gates are there for a reason.
Most “things going wrong” stories have a moment earlier where a gate was loosened too eagerly or a preference wasn’t stated. The customers who have the smoothest experience are the ones who corrected aggressively in the first month.