The Scary Memory of ChatGPT
CEO Sam Altman announced a major upgrade to ChatGPT, emphasizing its significantly enhanced memory capabilities. The chatbot can now recall and reference all previous conversations a user has had with it. In his statement shared on X, Altman described this improvement as “surprisingly great,” underscoring excitement about AI systems becoming increasingly personalized through continuous learning from long-term user interactions. This advancement suggests a future where AI assistants are not only more practically useful but deeply attuned to individual preferences, contexts, and histories.
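OpenAI has not disclosed how this memory actually works under the hood. Purely as an illustration of the general idea, here is a minimal sketch of one common pattern for such features: store snippets from past conversations, retrieve the ones most relevant to a new query, and feed them back into the model’s context. Everything here, the toy embed function, the ConversationMemory class, and the sample snippets, is a hypothetical stand-in, not OpenAI’s design.

```python
# Purely speculative sketch: OpenAI has not revealed ChatGPT's memory design.
# This shows one generic pattern: embed past exchanges, retrieve the most
# similar ones for a new query, and inject them into the model's context.
import math
from collections import Counter


def embed(text: str) -> Counter:
    """Toy stand-in for a real embedding model: a bag-of-words vector."""
    return Counter(text.lower().split())


def cosine(a: Counter, b: Counter) -> float:
    """Cosine similarity between two sparse bag-of-words vectors."""
    dot = sum(count * b[token] for token, count in a.items())
    norm_a = math.sqrt(sum(v * v for v in a.values()))
    norm_b = math.sqrt(sum(v * v for v in b.values()))
    return dot / (norm_a * norm_b) if norm_a and norm_b else 0.0


class ConversationMemory:
    """Hypothetical long-term store of snippets from earlier conversations."""

    def __init__(self) -> None:
        self.store: list[tuple[Counter, str]] = []

    def remember(self, snippet: str) -> None:
        self.store.append((embed(snippet), snippet))

    def recall(self, query: str, k: int = 3) -> list[str]:
        """Return the k stored snippets most similar to the query."""
        q = embed(query)
        ranked = sorted(self.store, key=lambda item: cosine(q, item[0]),
                        reverse=True)
        return [snippet for _, snippet in ranked[:k]]


memory = ConversationMemory()
memory.remember("User prefers concise answers and is writing a philosophy blog.")
memory.remember("User asked last week whether opting out of memory deletes old data.")

# Snippets retrieved here would be prepended to the prompt for the next reply.
print(memory.recall("What did we discuss about privacy?", k=1))
```

Whatever the real mechanism, one thing is certain: for the chatbot to recall a conversation, that conversation must be stored somewhere, which is precisely where the privacy questions below begin.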
Privacy and Data Handling
However, the accumulation of such detailed personal knowledge raises privacy fears. Altman addresses these concerns by clarifying: “You can, of course, opt out of this, or memory altogether. You can also use temporary chat if you prefer conversations that won’t use or affect memory.” It’s unclear, however, whether opting out also ensures that previously stored data is actually deleted.
Initially, Pro subscribers will have early access to this feature, with broader subscriber access to follow. Users in the European Union, the UK, and Switzerland face delays due to stringent data-protection regulations, notably the General Data Protection Regulation (GDPR), which sets rigorous standards for user consent and data handling.
Ontological Implications
Regarding ontology, a conversation with ChatGPT 4.5 offered insights. The chatbot pointed to “the fascinating question of identity and the extended mind hypothesis (Clark and Chalmers).” If ChatGPT acts as a persistent memory archive, philosophers might wonder whether it becomes an extension of the user’s cognitive self. Does one’s identity, consciousness, or personal memory partly exist beyond the biological self?
The realism of generative AI pushes this concept to new limits. We might be crossing a threshold towards creating artificial doubles, simulations capable of convincingly representing individuals even after biological death. Although diaries, social media, and books already extend our presence beyond life, AI-driven simulations take this to a hyperreal extreme. As philosopher Jean Baudrillard might suggest, a convincingly personalized AI risks generating a “hyperreal” relationship, challenging our grasp on reality itself. Or at least challenging our grasp on a certain conception of reality.
Another approach to these questions about memory is a deconstructive one. For a very long time now, we have had a technology that allows us to preserve memories and transmit them to future generations: writing. The philosopher Jacques Derrida explained how writing was long viewed with suspicion—even by influential thinkers (and great writers) such as Plato. In the history of philosophy, it was the spoken word that was preferred, because the living speaker could be present to defend and explain whatever she was saying. Perhaps our current anxieties about AI memory are simply the latest instalment in the long history of this “logocentrism”.
I’m also interested in the phenomenology of our interactions with AI bots and agents. Users might experience an uncanny-valley effect, confronting an AI entity that appears human-like yet remains distinctly non-human, akin to encountering a lifelike zombie. Such interactions might significantly alter our perception of what it means to engage with another “mind,” artificial or otherwise.
Ethical Concerns
Ethically, significant concerns arise around asymmetric power dynamics. OpenAI remains notably secretive about the inner workings of its models (as of April 2025), yet evidently holds extensive knowledge about its users. This imbalance raises ethical issues, such as potential manipulation, erosion of user autonomy, and heightened dependency, leaving users vulnerable to exploitation. The vast accumulation of personalized data might also help explain OpenAI’s high valuation among venture capitalists, despite modest current revenues relative to substantial operating costs.
A less-discussed but crucial ethical risk involves identity formation. AI personal assistants, eager to satisfy user preferences, may reinforce ideological and cultural bubbles, consistently presenting content aligned with existing views. This dynamic risks increasing polarization, diminishing critical thinking skills, and impairing the ability to engage with diverse perspectives.
Despite these substantial concerns, I remain eager to experience this memory-enhanced ChatGPT firsthand. This feature marks a significant evolution toward sophisticated AI agents, which require mastery of language, advanced memory capabilities, emotional simulation, and autonomous behavior. Will such beings make us adapt our “ontological map”?