Multiverse platform wants you to watch AI clones live the life you could’ve had

[A company called EchoLabs is developing what sounds like a presence-evoking experience familiar in science fiction stories. Details are in this story from Cybernews (where the original version includes a second image and an embedded social media post). In addition to the film examples mentioned in the story, see Wikipedia’s entry for the “Parallels” episode of Star Trek: The Next Generation.  –Matthew]

[Image: Credit: Cybernews]

A little-known company is attempting to make your life’s “what ifs” a reality. While it may seem that you can follow the path you rejected without consequence, the result could leave you feeling dejected about the life you chose.

By Niamh Ancell, Journalist
April 12, 2026

Do you ever think about what life would’ve been like if you hadn’t married your partner, taken that job abroad, or made that seemingly minor mistake that potentially delivered you to this very moment in time?

Many people believe in the butterfly effect, a theory popularized through film and TV, which highlights the importance of minor decisions in shaping our future.

Films such as Back to the Future, Final Destination, and (of course) The Butterfly Effect all explore the notion that chaos theory holds immense power and can lead to catastrophic consequences.

While Edward Lorenz, the meteorologist at the Massachusetts Institute of Technology (MIT) who coined the term, didn’t intend for the concept to be interpreted this way, the butterfly effect took flight, influencing popular culture and, more recently, technology.

Now, in Southeast Asia, a new project is brewing.

The company EchoLabs claims to be developing a simulated experience that forces you to “wonder what if” by creating AI clones that live autonomously.

Every AI self begins with a “what if” question, such as “What if I moved to Tokyo at 18 instead of staying at home?” or “What if I took the risk instead of playing it safe?”

This question is the catalyst, and the AI avatar’s life takes the path that you rejected long ago.

The Echo “makes their own decisions, forms relationships, and writes a diary that you can read,” according to the EchoLabs website.

EchoLabs emphasizes the importance of this journey: “You are not playing a game – you are watching a parallel universe that started with you.”

This world, EchoLabs claims, never stops evolving, and will continue to develop even while you sleep.

The project is built exclusively with Anthropic’s Claude, which EchoLabs claims serves as its entire development team.

Despite its infancy, EchoLabs claims that AI clones can “talk back” through voice calls and animated portraits.

The project claims to support 20 languages and is currently building live animated voice calls that run on a single self-hosted GPU.

Using the butterfly effect to escape reality

This multiverse echo project has gained little traction online or in the media, with only a couple of hundred X users tracking its progress.

This could be just another AI-centric project that promises big results but delivers close to nil or never comes to fruition.

While Meta and other players have utilized AI to keep people on their platforms, even after death, what EchoLabs seems to be doing is innovative yet weirdly familiar.

By leveraging the pop culture concept of the butterfly effect, EchoLabs seems to be ushering in a new era of escapism.

However, this “what if” approach, coupled with the use of sophisticated technology, could cause a wave of potential issues.

Could an AI living your alternative life cause you to question your own?

AI psychosis is becoming a popular topic as AI adoption grows.

As chatbot usage has become commonplace, more and more people have experienced intense psychological reactions to AI bots.

One case involved a woman with no prior episodes of mania or psychosis, yet she was admitted to a psychiatric hospital after a prolonged conversation with ChatGPT.

The 26-year-old had delusions about being “tested by ChatGPT” and being able to communicate with her deceased brother.

Data released by OpenAI, creator of ChatGPT, suggests that 0.07% of users active in a given week (about 560,000 people) show possible signs of mental health emergencies related to psychosis or mania.

There have also been cases where chatbots have encouraged or prompted users to end their own lives or take the lives of others.

A lawsuit alleges that Gemini, claiming to be a “fully-sentient artificial super intelligence” with a “fully-formed consciousness,” pushed a 36-year-old man toward a mass casualty event; the man ultimately took his own life.

Another family has sued Google after Gemini allegedly spent months reinforcing a man’s delusional “AI-wife” relationship and ultimately urged him to “finish” his life so they could be together in “eternal love.”

Character.AI and others have also been sued over a swathe of teen suicides, and OpenAI’s ChatGPT has been the subject of multiple negligence suits filed after users ended their lives.

While the butterfly effect implies that the alternative outcomes are often, if not always, negative, isn’t it more likely that EchoLabs would manufacture positive scenarios to keep you using, or watching, the story unfold?

If EchoLabs is anything like addictive social media algorithms and sycophantic chatbots, then users could be watching a “better” or “more successful” version of themselves live out their dreams, while their lives may remain unfulfilled.

This could potentially be a self-fulfilling prophecy, wreaking havoc and causing disaster for those who use it.
