Most AI memory systems are plagued by a common problem: they always provide an answer, even when they have no useful information to offer. When asked about something that was never mentioned, instead of saying ‘I don’t know,’ they confidently provide a wrong answer based on the closest match in their vector store.
This happens because a vector similarity search always returns its nearest matches — there is no built-in ‘nothing found’ state. The AI then treats whatever came back as real context and builds a confident-sounding answer on top of potentially irrelevant data.
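To make this concrete, here is a minimal sketch using a toy in-memory store and plain cosine similarity (the store contents and embeddings are invented for illustration). Even when the query is about something the store has never seen, nearest-neighbor search still hands back a ‘best’ match:

```python
import math

def cosine(a, b):
    """Cosine similarity between two vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

# Toy "memory store": (text, embedding) pairs. Vectors are made up.
memory = [
    ("user lives in Berlin", [0.9, 0.1, 0.0]),
    ("user prefers tea",     [0.1, 0.9, 0.0]),
]

# A query about something never mentioned, e.g. "favorite movie?".
# The search still returns *something* — there is no empty result.
query = [0.0, 0.2, 0.98]
best = max(memory, key=lambda entry: cosine(query, entry[1]))
print(best[0])        # → user prefers tea
print(round(cosine(query, best[1]), 2))  # → 0.2  (a weak match, returned anyway)
```

Note that the winning similarity score is low — that score is exactly the signal a confidence check could use, but a plain top-k search discards it.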
What if AI memory systems had confidence levels? Before feeding context to the language model, the system could check the relevance of the information and provide different instructions based on that confidence level. For instance:
- High confidence: answer normally
- Low confidence: ‘I’m not sure about this, but here’s what I found’
- No confidence: ‘I don’t have that information’
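The three tiers above can be sketched as a small policy function that maps a retrieval similarity score to a prompt instruction. The threshold values here are illustrative assumptions — in practice they would need tuning for the specific embedding model and domain:

```python
def confidence_instruction(similarity: float,
                           high: float = 0.75,
                           low: float = 0.45) -> str:
    """Map a retrieval similarity score to an instruction for the LLM.

    Thresholds are hypothetical; tune them per embedding model.
    """
    if similarity >= high:
        # High confidence: use the context as-is.
        return "Answer normally using the retrieved context."
    if similarity >= low:
        # Low confidence: answer, but flag the uncertainty.
        return ("Preface the answer with: 'I'm not sure about this, "
                "but here's what I found.'")
    # No confidence: refuse rather than hallucinate.
    return "Say: 'I don't have that information.' Ignore the retrieved context."

print(confidence_instruction(0.82))  # high-confidence branch
print(confidence_instruction(0.55))  # low-confidence branch
print(confidence_instruction(0.20))  # no-confidence branch
```

The instruction string would be prepended to the system prompt before the retrieved context, so the model's behavior is conditioned on retrieval quality rather than treating every match as equally trustworthy.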
This may sound like a basic requirement, but most systems skip it: they optimize for retrieval speed and recall without weighing the cost of confidently presenting irrelevant results.
Another interesting aspect is user frustration. When a user says ‘I told you this already,’ it’s a valuable signal that the system has forgotten something important. This feedback can be used to boost the importance of the reminded information, improving the overall performance of the AI.
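One simple way to act on that signal is to boost the importance weight of the memory the user is pointing at, so it ranks higher in future retrievals. This is a hypothetical sketch — the store layout, the substring matching, and the boost factor are all assumptions; a real system would match the reminder against stored embeddings instead:

```python
# Toy memory entries with an importance weight used at ranking time.
memory = [
    {"text": "user is allergic to peanuts", "importance": 1.0},
    {"text": "user prefers window seats",   "importance": 1.0},
]

def handle_frustration(memory, reminded_text, boost=2.0):
    """When the user says 'I told you this already', multiply the
    importance of matching memories so they surface more readily.

    Matching here is naive substring search, purely for illustration.
    """
    for entry in memory:
        if reminded_text.lower() in entry["text"].lower():
            entry["importance"] *= boost
    return memory

handle_frustration(memory, "allergic to peanuts")
print(memory[0]["importance"])  # → 2.0  (boosted)
print(memory[1]["importance"])  # → 1.0  (unchanged)
```

The importance weight would then be combined with the similarity score at retrieval time (for example, by multiplying the two), so frequently forgotten-and-reminded facts become progressively harder to miss.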
The question remains: how should AI handle not knowing something? Should it always try to provide an answer, or is ‘I don’t know’ sometimes the better response?
Photo by Valentin Ivantsov on Pexels