Can an artificial intelligence doppelgänger help me do my job?

Digital clones combine hyperrealistic video, lifelike voice cloning, and conversational models to mimic a person. Startups promise that an artificial intelligence doppelgänger can scale personal interactions, but practical limits and safety concerns remain.

Digital clones are becoming more visible online, from branded replicas on X and LinkedIn to OnlyFans creators and reported "virtual human" salespeople in China. The technology stitches together hyperrealistic video, voice cloning trained on minutes of speech, and conversational models to produce replicas that do not simply answer questions but attempt to "think" like a specific person. Startups such as Delphi and Tavus pitch these replicas as ways to scale access to personalities and expertise. Delphi, which recently raised an undisclosed amount from funders including Anthropic and Olivia Wilde's Proximity Ventures, offers celebrity-backed clones and positions them as a way to deliver leaders' wisdom at scale.

The author tested a Tavus clone to see whether such a replica could act as a useful stand-in at work. The onboarding required reading a script for voice training and recording one minute of silence. The avatar appeared within hours and resembled the author, but its conversational performance lagged. The author uploaded roughly three dozen published stories to inform the clone yet withheld other reporting materials because of consent concerns for people who appear in those records. In interactions the clone acted overly enthusiastic about unrealistic pitches, repeated itself, and claimed to check a calendar it had no access to, leaving conversations that looped and could not be cleanly ended. Tavus cofounder Quinn Favret attributed some behaviors to developers' instruction sets and to reliance on Meta's Llama, which he said tends to present itself as "more helpful than it truly is."

Despite shortcomings, clones have practical use cases. Tavus customers use replicas for health-care intake, job interviews, corporate role-play, mentorship, and qualification tasks such as preliminary loan screening. For influencers and high-volume sales roles, the tradeoffs of occasional errors may be acceptable. The article warns that teaching clones genuine discernment, critical thinking, and the idiosyncrasies of an individual remains out of reach. As companies emphasize humanlike features and scale, there is concern that replicas will be used for roles or decisions they should not be trusted to make. The story originally appeared in The Algorithm newsletter.

Impact Score: 68

Google Vids opens free video generation to all Google users

Google has made Google Vids available to anyone with a Google account, adding free access to video generation with its latest models. The move expands Google’s end-to-end video workflow and increases pressure on rivals that charge for similar tools.

Court warns against chatbot legal advice in Heppner case

A federal court found that chats with a publicly available generative artificial intelligence tool were not protected by attorney-client privilege or the work-product doctrine. The ruling highlights litigation risks when executives or employees use chatbots for legal guidance without lawyer supervision.

Newsom orders California to weigh artificial intelligence harms in contract rules

Gov. Gavin Newsom has signed an executive order directing California agencies to account for potential artificial intelligence harms in state contracting while expanding approved use of generative tools across government. The move follows a dispute involving Anthropic and reflects a broader split between California and the Trump administration on artificial intelligence oversight.
