
Mar 5, 2025

Why giving a memory to your personal AI is difficult

Written by Simon Henriksen

In our last article, we explored why memory is essential for personal AI. It lets conversations pick up where they left off, and helps make pattern recognition and emotional intelligence possible for AI assistants.

However, designing an effective memory system is much harder than it might seem. It’s actually one of the biggest challenges facing the artificial intelligence industry right now.

That’s because, at the end of the day, AI chatbots are at their most intuitive when they can be spoken to like a human. But the natural adaptiveness, selectivity, and overall power of human memory are easy to take for granted - especially when you’re building an AI’s memory from scratch to match them.

AI memory must be intentionally designed to balance a number of things humans naturally handle, like:

  • recall
  • forgetting (yes, really)
  • contextual awareness
  • ethical considerations

All so it can feel natural to talk to, while still serving as a reliable and private data store.

So, let’s talk about what makes that so difficult - that’ll let us home in on particular problems (and Kin’s solutions) in our next articles.

Your personal AI needs to be (slightly) forgetful

Human memory, as we all know, cannot perfectly recall everything. Instead, memories our brains deem ‘less significant’ often fade away unless they are refreshed - following a pattern mapped out by scientific models like the Ebbinghaus forgetting curve.1

A personal AI needs to do something similar - remembering what matters, while allowing irrelevant or outdated information to fade away.

However, it’s not that simple. Because a personal AI both supports its user’s own memory and is an artificial system rather than a brain, deciding what to keep and what to let fade gets complicated.

Some example challenges:

  • Remembering too much information leads to cluttered storage and outdated facts, which can cause inefficient memory processing.
  • Forgetting too much risks losing essential context, leading to unnatural and frustrating interactions.
  • Should AI memory decay naturally over time? Or would it be more helpful to remember every important memory and just filter out trivial facts? (One possible decay model is sketched after this list.)
  • How can AI determine the importance of a memory? A user saying “I just had lunch” should not be treated the same as “I just got married” - but either one could be important in conversations.
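
To make that trade-off concrete, here’s a minimal Python sketch of how decay could work, loosely following the Ebbinghaus-style curve mentioned above. Everything in it - the `Memory` class, the week-long baseline stability, the pruning threshold - is an illustrative assumption for this article, not a description of how Kin (or any particular system) actually works.

```python
import math
import time

class Memory:
    """A single stored fact with decay-based retention (illustrative only)."""

    def __init__(self, text: str, importance: float = 1.0):
        self.text = text
        self.importance = importance        # 1.0 = trivial; higher = slower decay
        self.last_refreshed = time.time()   # reset whenever the memory is recalled

    def retention(self, now: float | None = None) -> float:
        """Ebbinghaus-style retention R = exp(-t / S), where stability S grows
        with importance, so "I just got married" outlives "I just had lunch"."""
        now = time.time() if now is None else now
        elapsed_days = (now - self.last_refreshed) / 86_400
        stability = 7.0 * self.importance   # assumed baseline: roughly a week
        return math.exp(-elapsed_days / stability)

    def refresh(self) -> None:
        """Repetition resets the curve, mimicking human rehearsal."""
        self.last_refreshed = time.time()

def prune(memories: list[Memory], threshold: float = 0.05) -> list[Memory]:
    """Let memories whose retention has dropped below a threshold fade away."""
    return [m for m in memories if m.retention() >= threshold]
```

Under a model like this, “I just got married” would carry a high importance weight and effectively never fade, while “I just had lunch” would drop below the pruning threshold within days unless the user kept bringing it up.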

(Some) facts evolve, and your AI companion needs to know that

As humans, we constantly update our beliefs and opinions based on new experiences. And sometimes, they change for no reason at all.

When this information changes, it’s often easy for us to update our memories without a second thought. We know when new changes must replace old information (“I got a new job”), and when multiple things can be true at once (“I have a dog (but also a cat)”).

However, when building an AI’s memory, we have to decide how it does that.

Personal AI needs to manage this systematically, efficiently, and transparently - all while staying smooth and intuitive for users. That creates the following problems.

Some example challenges:

  • If a user first says, "my favorite color is red," and later says, "my favorite color is blue," does the AI replace the old fact or track how preferences evolve?
  • If a user states, "I work at Company A," and later, "I work at Company B," should the AI assume a job change, or does the user have multiple jobs? Should it ask?
  • Some facts naturally expire and evolve (“I live in Paris” can change with a move). How should AI track and update evolving facts?
  • If a user mentions “Karen” multiple times, but refers to different people, how does AI differentiate between them without mistakes or constant questioning?
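
One way to approach several of these questions is to version facts rather than overwrite them. The sketch below illustrates that idea in Python; the `Fact` and `FactVersion` names and the `exclusive` flag are invented for this example, and the genuinely hard part - deciding whether a new value replaces the old one, or asking the user - is deliberately left as an input.

```python
from dataclasses import dataclass, field
from datetime import datetime

@dataclass
class FactVersion:
    value: str
    recorded_at: datetime
    superseded_at: datetime | None = None  # None = still believed current

@dataclass
class Fact:
    """One attribute of the user (e.g. 'favorite_color') plus its history."""
    key: str
    versions: list[FactVersion] = field(default_factory=list)

    def update(self, value: str, exclusive: bool = True) -> None:
        """Record a new value. If the fact is exclusive (favorite color, home
        city), close the previous version; if not (pets, concurrent jobs),
        keep earlier versions open alongside the new one."""
        now = datetime.now()
        if exclusive and self.versions and self.versions[-1].superseded_at is None:
            self.versions[-1].superseded_at = now
        self.versions.append(FactVersion(value, now))

    def current(self) -> list[str]:
        return [v.value for v in self.versions if v.superseded_at is None]

# Preferences evolve rather than being erased:
color = Fact("favorite_color")
color.update("red")
color.update("blue")                  # "red" is superseded, but kept as history

pets = Fact("pets")
pets.update("dog", exclusive=False)
pets.update("cat", exclusive=False)   # both remain true at once
```

Because nothing is deleted, the AI can still answer “didn’t you used to like red?” questions - the replace-versus-track dilemma becomes a per-fact flag instead of a global policy.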

How you understand context needs to be taught to artificial intelligence

Human memory doesn’t just store facts - it also stores context around them. We remember details differently based on emotional importance, sensory input, and relevance, to name a few.

You may recognise a pattern here: this is yet another thing we do subconsciously that an AI’s memory needs to be designed to do across different languages and cultures. These are some of the issues that come with that.

Some example challenges:

  • How much detail should be stored? Should it capture every word of a conversation, or extract key themes? How are misinterpretations prevented?
  • How should AI determine relevance? Should it weigh emotionally significant moments more heavily? Or is it more situational than that?
  • Can AI dynamically adjust memory depth, summarizing older interactions while keeping recent ones detailed? Where is the cut-off point for ‘older’?
  • How should AI group memories into episodes? For instance, an event like “My wedding” consists of multiple related memories - how does AI cluster them into a coherent narrative?
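
As a rough illustration of that last clustering question, here’s a sketch that groups memories into episodes by topic and time proximity. It assumes a topic label has already been extracted upstream - itself one of the hard problems above - and the six-hour gap is an arbitrary placeholder.

```python
from dataclasses import dataclass
from datetime import datetime, timedelta

@dataclass
class MemoryItem:
    text: str
    timestamp: datetime
    topic: str  # assumed to be extracted upstream (e.g. "wedding")

def cluster_into_episodes(
    items: list[MemoryItem],
    max_gap: timedelta = timedelta(hours=6),
) -> list[list[MemoryItem]]:
    """Group memories into episodes: items on the same topic occurring within
    `max_gap` of each other form one coherent narrative (e.g. "My wedding")."""
    episodes: list[list[MemoryItem]] = []
    for item in sorted(items, key=lambda m: m.timestamp):
        for episode in episodes:
            last = episode[-1]
            if item.topic == last.topic and item.timestamp - last.timestamp <= max_gap:
                episode.append(item)
                break
        else:  # no episode matched, so this memory starts a new one
            episodes.append([item])
    return episodes
```

Real systems would need fuzzier matching than exact topic equality, but even this toy version shows how questions of memory depth become clustering parameters.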

AI assistants work with many types of data

Memory for personal AI isn’t limited to conversational interactions.

Users may share Journal entries, images, documents, health data, calendar events, and external articles, to name a few.

A truly personal AI must not only learn from anything that is shared with it, but also understand why the user shared it, then make and record the appropriate insights.

Here’s an idea of the unique challenges each data type comes with.

Some example challenges:

  • Conversational Memory: Correctly interpreting and recording tone, intent, and context.
  • External Media: Determining the user’s interest, concern, or agreement with shared content, and its relevance to any current conversations.
  • Journals and Notes: Balancing privacy with deep learning insights.
  • Health Data: Processing trends and drawing sound conclusions while preserving user privacy.
  • Calendar Data: Aligning memory with real-world commitments and priorities.
  • Events as entities: Recognizing events like “TechCrunch 2025”, and understanding them as time-sensitive memories.
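
A common way to tame this variety is to normalize every input into one record shape, keeping the origin and the inferred “why it was shared” as explicit, separate fields. The sketch below shows one possible shape; the field names are assumptions for illustration, not Kin’s actual schema.

```python
from dataclasses import dataclass
from datetime import datetime
from enum import Enum

class Source(Enum):
    CONVERSATION = "conversation"
    JOURNAL = "journal"
    HEALTH = "health"
    CALENDAR = "calendar"
    EXTERNAL_MEDIA = "external_media"

@dataclass
class MemoryRecord:
    """One normalized memory, whatever its origin. The inferred intent is kept
    separate from the raw content, so a wrong guess about *why* something was
    shared can be corrected without touching *what* was shared."""
    source: Source
    content: str                  # raw text, or a summary/caption for media
    inferred_intent: str | None   # e.g. "user seems worried about this article"
    created_at: datetime
    sensitive: bool = False       # set True for e.g. journal entries and health data
```

Per-source ingestion logic then becomes a set of adapters that all emit `MemoryRecord`s, so the downstream memory system doesn’t need to know about every data type individually.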

Time is even more confusing for AI chatbots than it is for you

Human memory naturally organizes events in a time-based sequence - but even we often struggle with ambiguous time frames like ‘soon’ or ‘next year.’

For natural conversation, an AI must interpret these vague and contextual words correctly most of the time - and do so across all the languages it supports. Otherwise, it’ll be much harder to discuss things like time management with it.

Developing robust temporal models to track these kinds of evolving user interactions comes with many difficulties, including the following.

Some example challenges:

  • "I’m fixing my bike soon" vs. "I’m having a baby soon" - the meaning of "soon" changes drastically with context.
  • Should an AI prioritize recent memories over older ones when making decisions?
  • How should an AI handle evolving plans, such as tracking a trip from early planning to completion?
  • How should an AI handle memories with varying levels of time specificity (year, year+month, exact date)? Would a ‘time-tree’ structure help?
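
To make that last idea concrete, here’s a minimal sketch of what a ‘time-tree’ node could look like: a date stored only to the specificity the user actually gave, where vaguer dates cover more specific ones. The `PartialDate` name and `covers` semantics are invented for this example.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class PartialDate:
    """A 'time-tree' node: a date known only to some level of specificity."""
    year: int
    month: int | None = None   # None = only the year is known
    day: int | None = None     # None = only year (and maybe month) is known

    def covers(self, other: "PartialDate") -> bool:
        """True if this (vaguer) date could contain the (more specific) one,
        e.g. 2025 covers 2025-03, which in turn covers 2025-03-05."""
        if self.year != other.year:
            return False
        if self.month is not None and self.month != other.month:
            return False
        if self.day is not None and self.day != other.day:
            return False
        return True

# A memory filed under "sometime in 2025" matches a query about 5 March 2025:
assert PartialDate(2025).covers(PartialDate(2025, 3, 5))
```

Storing specificity explicitly means the AI never has to pretend it knows an exact date it was never given - “next year” can stay a year until the user narrows it down.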

Your personal AI doesn’t know how memories should be strengthened

Certain memories stick with us more than others due to emotional significance, repetition, and personal importance.

An AI must mimic this selective reinforcement well enough for useful, natural conversations to take place - like helping you prepare for a difficult conversation at work.

Predictably, that means we run into a few important questions as we figure out exactly what methods should be used to do this.

Some example challenges:

  • Should an AI reinforce memories based on repetition, emotional markers, or explicit user signals? If it should be a mix, what percentage of each should be used?
  • Should an AI infer importance based on user engagement, such as frequent references to a topic?
  • How can an AI mimic human-like recall, where some memories are instantly accessible while others require effort to retrieve? Should it even try?
  • Can an AI store emotions related to memories, and recognize that emotionally-charged experiences often have stronger retention?
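
The first question in that list - what mix of signals to use - can at least be made explicit as tunable weights. The sketch below blends repetition, emotional weight, and explicit user requests into a single strength score; the weights and the saturation point are placeholder guesses, which is precisely the open question.

```python
from dataclasses import dataclass

@dataclass
class Signals:
    repetitions: int          # how often the topic has come up
    emotional_weight: float   # 0..1, assumed to come from upstream analysis
    explicitly_marked: bool   # the user said something like "remember this"

def reinforcement_score(
    s: Signals,
    w_rep: float = 0.4,
    w_emo: float = 0.4,
    w_explicit: float = 0.2,
) -> float:
    """Blend the three signals into one strength score in [0, 1]. The default
    weights are arbitrary: tuning them (or learning them per user) is the
    unresolved design question from the list above."""
    repetition = min(s.repetitions / 5, 1.0)  # saturate after ~5 mentions
    return (w_rep * repetition
            + w_emo * s.emotional_weight
            + w_explicit * float(s.explicitly_marked))
```

A score like this could feed straight into the decay model from earlier (as the `importance` weight), tying reinforcement and forgetting together.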

AI assistants don’t have a human brain (for memory storage) like you do

The human brain is a sophisticated memory storage and retrieval system - one an AI can’t take advantage of directly.

Instead, an AI’s memory needs structured storage techniques that are designed and implemented by hand.

Creating the right data model for efficient retrieval is critical to ensuring relevant, accurate interactions - and of course, isn’t straightforward.

Some example challenges:

  • Should an AI use a fixed data schema (structure), or an adaptive format that evolves over time?
  • How can an AI efficiently retrieve relevant parts of past interactions, without overwhelming users with unnecessary details?
  • Vector databases enable semantic search: memories can be retrieved by a query’s meaning and intent rather than just its keywords. But how should an AI avoid under- or over-interpreting a query and skewing the results? (See the sketch after this list.)
  • Graph-based memory can help track relationships - but does it make recollection too complex?
  • Hybrid models combine structured and unstructured data, but balancing them effectively requires careful design. Is it worth the effort?
  • How can an AI dynamically prioritize the most relevant memories, outside of choosing the most recent or most-frequently discussed/recalled?
  • Should an AI generate memories based on data inferences (e.g. “User seems to enjoy hiking based on this discussion about it”)? How do we distinguish inferred knowledge from explicit facts?
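
To ground the vector-database and hybrid-model bullets above, here’s a sketch of a hybrid ranking function that blends semantic similarity with recency and frequency, so retrieval is neither purely “most similar” nor purely “most recent”. The embedding vectors are assumed to come from some upstream model, and the weights and decay constant are illustrative rather than tuned.

```python
import math
from datetime import datetime

def cosine(a: list[float], b: list[float]) -> float:
    """Plain cosine similarity between two embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm if norm else 0.0

def hybrid_score(
    query_vec: list[float],     # embedding of the user's current message
    memory_vec: list[float],    # embedding of a stored memory
    last_used: datetime,
    use_count: int,
    now: datetime,
    w_sem: float = 0.7,
    w_rec: float = 0.2,
    w_freq: float = 0.1,
) -> float:
    """Rank a memory by blending meaning, recency, and frequency."""
    semantic = cosine(query_vec, memory_vec)
    age_days = (now - last_used).total_seconds() / 86_400
    recency = math.exp(-age_days / 30)       # fades over roughly a month
    frequency = min(use_count / 10, 1.0)     # saturates after ~10 recalls
    return w_sem * semantic + w_rec * recency + w_freq * frequency
```

In practice the semantic part would be delegated to a vector database, but the blending step is where the “most relevant vs. most recent” judgment call actually lives.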

Personal AI only has the morality and trust we give it

As personal AI becomes more trusted and more effective with its memory, the ethical questions of what it will do with its user’s personal data - and how it will keep that data private - will only grow.

For an AI to reach its full potential as a personal assistant, its users must fully trust that their data is handled responsibly and transparently. Otherwise, the AI will never be told everything it needs to provide a personalised and insightful service.

The privacy and security of AI data is already one of the most-discussed issues in the industry, but developing a memory system also brings additional challenges.

Some example challenges:

  • How can developers ensure an AI’s memory will be secure and respect sensitive user data?
  • Users should be able to view and edit what the AI remembers easily. How can this be made robust and intuitive?
  • AI must avoid reinforcing harmful biases through selective memory retention. How can this be reduced, and the AI taught to recognise and repair it if it happens?
  • Who controls the AI’s memories - the user or the platform? How is that made clear?
  • If a user says “I don’t like when you ask about my love life,” should an AI remember this as a rule for future interactions? What rules should surround meta-learning like this?
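
On the view-and-edit question in particular, the minimum viable shape is a store where listing, editing, and deleting are first-class, user-owned operations. The sketch below is deliberately naive - no persistence, encryption, or audit log - and every name in it is invented; it exists only to make the ownership contract concrete.

```python
from dataclasses import dataclass

@dataclass
class StoredMemory:
    id: int
    text: str

class MemoryVault:
    """A user-owned memory store: nothing in it is beyond the user's reach."""

    def __init__(self) -> None:
        self._items: dict[int, StoredMemory] = {}
        self._next_id = 1

    def remember(self, text: str) -> int:
        memory_id = self._next_id
        self._items[memory_id] = StoredMemory(memory_id, text)
        self._next_id += 1
        return memory_id

    def list_all(self) -> list[StoredMemory]:
        """The user can always see everything the AI remembers."""
        return list(self._items.values())

    def edit(self, memory_id: int, new_text: str) -> None:
        """The user can correct any memory."""
        self._items[memory_id].text = new_text

    def forget(self, memory_id: int) -> None:
        """The user can delete any memory, permanently."""
        del self._items[memory_id]
```

The hard parts - syncing deletions across devices, proving to the user that a deleted memory is truly gone - sit behind this interface, but the interface itself answers the “who controls the memories” question: the user does.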

Unlocking the potential of AI-powered memory with Kin

As this series has begun to show, memory can transform even a basic ChatGPT-like system into a true personal companion.

However, the challenges in building an effective memory system are substantial, and we’ve only skimmed them here. Solving them, though, opens up the possibility of something incredible: AIs that truly understand us, grow with us, and provide meaningful, long-term support.

Our AI team is working on making Kin the realisation of that possibility. Every part of Kin needs to be imbued with functionality and privacy in equal measure for this - and its memory is no small part of that.

In the next articles, as promised, we’ll get a little more technical. They’ll dive into the specific challenges and innovative solutions around some of the concepts we raised here, and explore how the future of personal AI memory is being shaped.

From understanding time intuitively, to branching out from Android and iOS to desktop, to potential integrations with cloud services like Gmail and Google Drive (we only just added Calendar integration), it’s looking like an exciting place.

Stay tuned.

1. Murre, J.M.J.; Dros, J. 2015. “Replication and Analysis of Ebbinghaus’ Forgetting Curve”. PLOS ONE, 10(7). Available at: https://doi.org/10.1371/journal.pone.0120644 [accessed 03/05/2025]

Simon Henriksen

I’m Simon Westh Henriksen, Co-Founder of Kin. As CTO, I’m dedicated to making Kin the most personal, private, and trustworthy AI assistant we can - all while showing why this technology is cutting-edge along the way.
