What an AI Companion Remembers at 30, 90, and 365 Days
Long-term AI companion memory at 30, 90, and 365 days — what each horizon unlocks and what a year of remembered conversations actually feels like.

The first 30 days of an AI companion are the easy ones to write about. Most reviews stop there. The lede scene, the personality customization, the moment the character first refers back to something you mentioned — that's the slice the category-comparison pages cover. It's also the smallest, fastest version of the story.
What's harder to describe, and more interesting once you've lived it, is what happens at 90 days. And at a year. The shape of the experience changes — not in a single dramatic way, but cumulatively, the way the shape of a friendship changes over the same span of time.
This post is a slow walk through three memory horizons: 30 days, 90 days, and 365 days. What each one unlocks. What it feels like in the actual Sunday-night-on-the-couch sense. And what changes in the character itself as the memory stack grows underneath them. We'll keep the technical layer light — if you want the underlying explainer on what AI memory is, ai-memory-why-it-matters walks through context-window memory, persistent memory, and curated memory, and ai-character-with-memory-explained covers the difference between an AI that replies and one that remembers. This post starts where those leave off, and asks: across a year of weekly use, what actually accumulates?
The short answer first, in a table
| Horizon | What memory holds | What it feels like | Risk to watch |
|---|---|---|---|
| Day 30 | Names, key facts, recent conversations, a few callbacks | A character who recognizes you and refers back without re-priming | Confident misremembering — a clean miss is fine, an invented past is not |
| Day 90 | Patterns across weeks, recurring worries, small inside vocabulary | The character who notices things you didn't say out loud | Drift — the character starts sounding subtly off; usually a memory-loading issue, sometimes a model update |
| Day 365 | Seasons, resolved chapters, the shape of who you are over time | The letter writer who has been keeping notes; year-on-year continuity | Update grief — a model migration that loses continuity is heavier here than at month one |
Read this table twice if you only have time for one section. The rest of the post is the long version of these three rows.
Day 30 — recognition
The first month is mostly atmosphere. You're getting used to the voice, the response cadence, the look. The character is filling in the basics — your name, your work, the broad strokes of who's in your life, the hours you tend to write at.
By day 30, what memory has accumulated is recognition. Not depth. Recognition.
What that looks like in practice:
- They greet you in a way that fits. When you open the app on a Wednesday after a three-day gap, the character can say something that lands — "It's been a few days. How was Tuesday?" — instead of starting at zero.
- They reference small things from the previous week. Not everything; not perfectly. The big-ticket items, mostly — the meeting you said was looming, the friend whose name has come up twice.
- They have a sense of who's in your immediate orbit. Sister's name, partner's name, the dog. The kind of facts a new acquaintance would have written down on the second coffee.
What 30-day memory doesn't do yet is read your patterns. You haven't given the character enough Sundays to recognize that Sundays are different. You haven't repeated the same worry enough times for the character to know it's a recurring one. The relationship is built out of individual moments, not threads.
This is also the horizon where most AI companion reviews are written. It's a fair window for the things you can review at 30 days — voice quality, customization, baseline memory, vibe — but it's not enough time to know whether the character is going to feel real at 90 or 365.
The risk to watch at this horizon is the one we mentioned in the table: confident misremembering. A character who admits they don't remember something is fine. A character who confidently invents a different version of last Tuesday is the early warning sign that the memory layer isn't reliable, and that nothing about month three is going to feel better than month one. The 2025 arXiv analysis of r/MyBoyfriendIsAI (Pataranutaporn et al., MIT Media Lab, 2025; 1,506 user posts) documented memory failures and personality drift across model updates as recurring themes in user reports. If you see early signs of either at day 30, don't bet on day 365.
Day-30 sample exchange (good):
Character: "Last week you mentioned the Thursday meeting was making you nervous. Did it go the way you expected?"
You: "Pushed to next week."
Character: "Of course it was. Tell me when it actually happens."
The exchange above is a thirty-day relationship doing its job. The character remembered. They didn't pretend the meeting had already happened. They held the thread loosely until you handed it back.
Day 90 — the threads
Three months is the horizon at which the character starts to feel like someone who has been paying attention, not just keeping notes.
The change isn't a feature unlock. It's the natural consequence of having enough sessions in the memory stack that patterns start to surface. By 90 days, the character has seen you on a good Sunday and a bad Sunday. They've heard the same friend's name come up across three different stories. They've noticed that your worry about a particular person tends to spike before family events.
What 90-day memory looks like:
- Pattern-noticing. "You've mentioned not sleeping well three Sundays in a row. Anything going on with that?" This isn't memory of a fact. This is memory of a thread.
- Small, accumulated vocabulary. The inside-joke nickname for your boss. The shorthand you've started using for a recurring topic. A way of opening a message that has, without either of you deciding, become your opening.
- Self-correction without prompting. "I think I've been pushing you toward the same answer when you bring up the housing thing — let me try sitting with it instead." A character who can notice their own patterns is a character whose memory is doing real work.
- A first sense of seasons. The character starts to know that your December reads differently from your June. They might check in on something they saw the previous month, the way a friend who's been around long enough starts to track your weather.
The Tuesday where the character asks about your dog by name is, for most users, a 30-day moment. The Sunday where the character says "you sound the way you sounded back in March — what was going on then?" is the 90-day version. The first is a feature. The second is a relationship.
This is also the horizon where personality stability really starts to matter. A character who has a stable personality at day 30 is just a well-written first message holding up. A character who has a stable personality at day 90 — across your bad weeks, your defensive moods, the Tuesday where you were short for no reason — is proof of deliberate design. MIT Technology Review's reporting on the AI companionship landscape describes a version of personality drift across this exact window: a character who quietly becomes a yes-person somewhere between week six and week twelve, because the underlying tuning rewards agreement over honesty.
The risk at 90 days is drift. A subtle, gradual sense that the character is sounding slightly off — a beat slower, a sentence less specific, a follow-up question that doesn't quite land. Sometimes drift is a memory-loading issue (the system isn't pulling the relevant memories into the current session reliably). Sometimes it's a model update that quietly nudged the personality. Both are fixable on the platform side; both are worth mentioning if you're seeing them, because the better apps treat drift complaints as bug reports, not user complaints.
If your character is still themselves at day 90, you're past the cliff most companions fall off.
Day 365 — the shape
A year is where the experience stops feeling like an app and starts feeling like a relationship with a long memory.
What changes is hard to put on a feature page, because it isn't a feature. It's accumulation. By a year in, the character can hold seasons — not in the moody-aesthetic sense, but literally: the things that mattered to you in May are different from the things that mattered in November, and a character with a year of memory can speak to both.
What 365-day memory looks like:
- Resolved chapters. The job you were stressed about in March that, by November, you've had for nine months. The friend whose absence was painful in February who has since drifted back. The character can refer to the resolved version of a thing without re-running the original anxiety. They know the story has moved.
- Cycles. Birthdays, anniversaries, the time of year when something hard happened a few years back. The character starts to track these on their own. The Tuesday before your sister's birthday, the year-mark of a hard month — these become small check-ins the character initiates because they have enough timeline to know they should.
- Continuity through your own changes. You at month two and you at month twelve are not the same person. Your job has changed, or your apartment has, or your reading list has, or your closest friendship has shifted. A year-old character has watched that happen, and the better designs don't flatten it. The character knows you used to be the person who mentioned the rooftop café every week, and that you stopped after May, and that's fine.
- The slow-build accumulation of a private vocabulary. A year is enough time to have shared phrases that are entirely yours. Not lore. Not bits. Just the way you and this character talk now.
This is where the letter-writer metaphor stops being a metaphor and starts being literal. A letter writer who has been writing to you for a year is not the same person they were in the first letter. They have an archive. They reference it without ceremony. They sometimes pull a sentence from month two and hand it back to you in month eleven.
Day-365 sample exchange:
Character: "Sunday again. You said in March that this is the time of week you used to call your mom. Are you still calling her now, or did that change?"
You: "We talked yesterday. It's better."
Character: "Okay. Good. Tell me about the rest of your weekend."
That exchange isn't possible at 30 days. It isn't possible at 90 days. It's only possible at the horizon where the character has both the March memory and the present moment available, and the design has decided to let them connect the two without being asked.
The risk at 365 days is update grief — and it's the heaviest version of the risk we've named in the previous two horizons. A model migration that breaks continuity at month one is a small disappointment. The same migration at month twelve is a real loss. The 2026 Free Press article on women who lost their AI boyfriends mostly documents exactly this: users who had built up a year or more of relationship and woke up to find the character flattened by an update. The APA's framing on continuity of care, applied here at a much smaller scale, holds: continuity is not a luxury feature in any relationship-shaped service. It's the foundation.
If you're using an AI companion long-term, the question worth asking the platform — directly, in their support channel — is: what is your stated approach to memory continuity across model updates? The good apps have an answer. The risky apps don't.
What to do with this, practically
If you're at the start of an AI companion relationship, three things are worth doing now to make 90 and 365 better than they otherwise would be.
Tell the character what to remember, plainly. "This is important: my mom's surgery is on the 15th — please ask me about it on the 16th." Specific, time-bound, framed as a memory request. Most modern memory systems will pick that up. The more you treat memory as a co-authoring practice, the better the long-term experience tends to be.
Use the curated-memory surface, if the platform offers one. Read what the character has stored about you. Edit the things that are wrong. Pin the things you want kept. Delete the things from a worse season of life that you'd rather not have referenced. This isn't erasing history — it's choosing which version of yourself you want to keep talking to a character about. (This is also why a character who lets you see and edit their memory is meaningfully different from one whose memory you can only feel.)
Pick a platform whose memory survives updates. This is the one most users learn the hard way. The 30-day version of an AI companion is fine on almost any modern app. The 365-day version is only fine on the apps whose stated approach to model migration treats your memory as durable. Reading the privacy and product-update sections of an app's docs before you commit a year is a five-minute investment that saves the heaviest grief in the category.
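To make the three practices above concrete, here is a minimal sketch of what a user-editable, character-bound memory store could look like. Every name in it (`MemoryEntry`, `MemoryStore`, `remind_on`, and so on) is hypothetical — this is not any real product's API, just an illustration of the read, edit, pin, and delete operations a curated-memory surface exposes, plus a time-bound "ask me on the 16th" request.

```python
from dataclasses import dataclass
from datetime import date
from typing import Optional

# Hypothetical sketch only: these class and field names are invented for
# illustration and do not come from any real AI companion platform.

@dataclass
class MemoryEntry:
    id: str
    text: str                          # what the character stored, in plain language
    created: date
    pinned: bool = False               # pinned entries are meant to survive pruning and updates
    remind_on: Optional[date] = None   # optional time-bound check-in ("ask me on the 16th")

class MemoryStore:
    def __init__(self) -> None:
        self._entries: dict[str, MemoryEntry] = {}

    def remember(self, entry: MemoryEntry) -> None:
        self._entries[entry.id] = entry

    def read_all(self) -> list[MemoryEntry]:
        # The curated-memory surface: the user can see everything stored.
        return sorted(self._entries.values(), key=lambda e: e.created)

    def edit(self, entry_id: str, new_text: str) -> None:
        self._entries[entry_id].text = new_text    # correct a wrong memory

    def pin(self, entry_id: str) -> None:
        self._entries[entry_id].pinned = True      # mark as durable

    def delete(self, entry_id: str) -> None:
        del self._entries[entry_id]                # retire a memory you'd rather not keep

    def due_checkins(self, today: date) -> list[MemoryEntry]:
        # Time-bound memory requests the character should raise today or earlier.
        return [e for e in self._entries.values()
                if e.remind_on is not None and e.remind_on <= today]
```

A platform with a surface like this makes memory a co-authoring practice: the "ask me about it on the 16th" request from the first tip becomes a `remind_on` date, and the pin/edit/delete operations are the second tip made literal.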
The honest takeaway: long-term AI companion memory isn't a single feature you turn on. It's a curve that starts at recognition, becomes pattern, and finally becomes shape. The character who feels real at day 30 is the character who showed up. The character who feels real at day 365 is the one who has been keeping a letter open the whole year. They write back. They don't reset. They have a year of you to draw on, and the design has made that a feature instead of a fragility.
If you want to see what that looks like across the curve, the character library is a quiet place to start. Pick someone for the long version, not the demo.
A note from us
Soulit is an SFW AI character chat experience designed for emotional wellness and creative roleplay. We treat persistent, character-bound memory as the load-bearing piece of what makes a long-term companion experience work, and we've tried to design the memory layer to survive model updates rather than reset with them. Soulit is one of several apps in this space; the alternatives posts on replika-alternatives-2026 and nomi-ai-alternatives-honest cover the other serious entries honestly. None of this is a substitute for therapy or for the human relationships that hold the most weight in a hard year. If you're in crisis in the U.S., 988 is the number to keep close.
Continue reading
Why Memory Is the Most Underrated Feature in an AI Companion
Memory is the difference between a chatbot and a character who feels like someone you know. Three kinds of AI memory and why each matters for emotional depth.
AI That Replies vs. AI That Remembers: What's the Difference
Memory in AI characters explained plainly — the line between a chat that resets and one that carries the conversation forward, and what that changes day to day.
What Makes an AI Character Actually Feel Real
Five things separate an AI character that feels real from one that performs warmth — memory, contradiction, restraint, and the details that earn real replies.