Inside the Vitalism movement and the privacy risks of Artificial Intelligence memory

A new longevity subculture called Vitalism is pushing the idea that death should be treated as a solvable problem, while experts warn that Artificial Intelligence systems designed to remember users are creating a new frontier of privacy risks.

An emerging movement known as Vitalism is taking longevity research to its most radical edge, treating death not as an inevitability but as a problem to be solved. At the Vitalist Bay Summit in Berkeley, California, held over three days last April as part of a two-month residency, attendees gathered to explore tools ranging from drug regulation to cryonics in the fight against death. The movement was founded by Nathan Cheng and Adam Gries, who frame Vitalism as a hardcore, all-consuming mission in which nothing short of total devotion to defeating death is acceptable.

While interest in longevity has surged in recent years, the article notes that many researchers and investors in the broader field do not share the Vitalists’ goal of actually making death obsolete. Vitalists believe that momentum is growing not only for the science of aging and the development of lifespan-extending therapies, but also for wider acceptance of their philosophy that defeating death should be humanity’s top priority. Their efforts are presented as part of a broader push to make radical life-extension an urgent and legitimate target for scientific and societal focus, rather than a fringe aspiration.

Alongside this exploration of the boundaries of life and death, the newsletter highlights another frontier: what Artificial Intelligence systems “remember” about their users. Personalized, interactive Artificial Intelligence chatbots and agents are increasingly marketed on their ability to remember people and their preferences, maintain context across conversations, and act on users’ behalf in tasks such as booking travel or filing taxes. However, experts from the Center for Democracy & Technology warn that this memory capability brings alarming privacy vulnerabilities that echo long-standing concerns from the “big data” era. As these Artificial Intelligence agents store and retrieve more intimate details over time, they risk undermining safeguards that were meant to constrain data abuses. That prospect raises urgent questions about how developers can redesign systems to protect user privacy even as they push toward more capable, persistent digital assistants.

Research excellence at the UF College of Medicine in 2025

In 2025, the University of Florida College of Medicine expanded its research footprint across cancer, neuromedicine, diabetes, and women’s and children’s health, leveraging artificial intelligence to accelerate discovery and clinical impact.

What EO 14365 means for state artificial intelligence laws and business compliance

Executive Order 14365 signals a push toward a national artificial intelligence policy that could preempt certain state regulations without immediately changing existing compliance obligations. Businesses using artificial intelligence are advised to monitor forthcoming federal actions while continuing to follow current state laws.
