If you were shopping online, and all the clothes were worn not by a model but by you — how would you feel? What if a brand emailed you a new lookbook or campaign with you as the star? Would you buy more? As personalised, photorealistic AI twins enter the mainstream, fashion brands will have to figure out if and how they want to use them — and how customers feel about them.
It’s not a far-off hypothetical. Thanks to improvements in generative artificial intelligence, a host of tech startups are finally enabling fashion retailers to create, then digitally dress, images of consumers that mimic professional e-commerce models.
“This technology is now really good enough to do something that the fashion industry has wanted to do for about 30 years. The Clueless comparison comes up a lot, and so it’s been in the public consciousness for some time, but the tech has never really been there yet,” says Dorian Dargan, co-founder and CEO of Doji, an app that enables people to discover fashion through their digital avatar. He and co-founder Jim Winkens previously worked at Apple creating avatars for the Vision Pro and in generative AI at Google DeepMind, respectively. “Fashion felt like the perfect place for this. It wasn’t like we were using technology in search of an industry.”
A short scene from the 1995 fashion-favourite film has become a point of obsession for tech founders wanting to recreate Cher’s closet. None have cracked it — but new technologies hold promise.

The hope is that the tech is realistic enough to overcome the “uncanny valley” effect of being humanlike but just a bit off, and that processing speeds are fast enough to keep up with our trigger-happy attention spans. It also holds the promise of solving one of online shopping’s most pressing conundrums. “Digital selling has its benefits — convenience, endless choices — and its challenges,” like an inability to try clothing on, or touch and feel the product, “but being able to see oneself in the moment helps address one of those hurdles,” says Marshall Cohen, retail chief industry advisor at Circana (formerly NPD Group).
Now that the tech is here, the next consideration is a psychological shift with looming unknowns: do we actually want to see altered versions of ourselves as fashion models when we shop? And how will this impact what we buy?
Many tech companies and retailers are willing to find out. Last week, DressX unveiled DressX Agent, a membership-based fashion marketplace where shoppers can mix and match outfits from retailers and brands including Farfetch, Mytheresa, Ssense and Diesel. Stitch Fix has been testing a prototype that will digitise its try-before-you-buy styling service, available to clients in beta mode this week. Google has one-upped its previous digital dressing tech, which dressed a host of hired models, with Doppl, an experimental app that digitally dresses one’s doppelgänger. Alta lets people digitise their wardrobes and save daily outfits on their own mini-mes. And designer Sergio Hudson is among the first to test Spree.AI, a white-label try-on provider that counts Naomi Campbell among its board members.
Whether or not people take to their AI likenesses splashed across the internet will determine the next phase of online shopping.
How AI twin features work
Most AI twin services invite users to upload selfies, with some adding a full-body picture into the mix. Users can specify basic measurements, like height and clothing sizes, before generating a realistic-looking digital version of themselves, which can take anywhere from a few seconds to a few minutes. Shopping and styling capabilities vary from there.
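To make that flow concrete, here is a minimal sketch in Python of the kind of inputs these services describe: a few selfies, an optional full-body photo and some basic measurements. Every name and field below is hypothetical and purely illustrative; no specific company’s API is being described.

```python
from dataclasses import dataclass, field

# Hypothetical sketch only: no real service's API or field names are implied.

@dataclass
class TwinRequest:
    selfie_paths: list[str]                   # a handful of face photos
    full_body_path: str | None = None         # some services also ask for a full-body shot
    height_cm: int | None = None              # optional basic measurements
    clothing_sizes: dict[str, str] = field(default_factory=dict)  # e.g. {"tops": "M"}


def generate_twin(request: TwinRequest) -> str:
    """Placeholder for the generation step, which can take anywhere from a few
    seconds to a few minutes. A real service would upload the images and poll
    for a result; here we simply return a fake identifier."""
    return "twin_0001"


if __name__ == "__main__":
    req = TwinRequest(
        selfie_paths=["selfie_front.jpg", "selfie_side.jpg"],
        full_body_path="full_body.jpg",
        height_cm=170,
        clothing_sizes={"tops": "M", "jeans": "30"},
    )
    print(generate_twin(req))
```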
Stitch Fix’s new service, called Vision, styles people’s twins in a range of shoppable outfits shown in matching settings, serving weekly drops through its app via push notifications. The looks are algorithmically informed, based on years of data, says chief technology officer Tony Bacos, but unlike with boxed ‘fixes’ that are physically sent to customers, stylists do not manually assemble each combination. “There’s something magical in seeing an image of yourself that you never posed for, especially when you are looking amazing,” Bacos says. “Part of the reason our clients come to us is they want help understanding what is on-trend and what will be comfortable and in their budget that matches their lifestyle.”
Doji and DressX Agent enable people to search for pieces and mix and match them before clicking out to buy. Alta blends pieces that a consumer already owns with suggested items, based on styling prompts. Google lets you try on any outfit you find on the internet, and generates animated videos in addition to stills. Many enable ChatGPT-style prompts to find or create looks, inviting the sort of experimentation and play that inspires us to keep generating outputs. Altogether, this provides the type of engagement and personalisation that retailers chase, in turn providing data on what people try on.
What this means for our psyches
It’s too soon to know how this will affect shopping, both practically and emotionally. The highest-stakes quandary is what happens when we let a machine paint our portrait.
Rather than recreating an anatomically exact 3D render of the person or the clothes — as is the business model of Bods — generative AI essentially “guesses” at how to fill the gaps in information; this is why generation is faster and easier, but also why proportions aren’t always 100 per cent accurate. One service we tested enlarged some parts while slimming others. (The AI models are only as diverse as the human bodies they were trained on.)
Generally, the more accurate the render, the longer it takes. And people don’t tend to love a completely raw, accurate render of themselves — think of it as the digital version of the sobering effect of harsh lighting and unflattering angles in a department store. In studying the impact of 3D scans on people’s moods, researcher Jessica Ridgway Clayton, assistant dean of graduate studies at Florida State University, found that viewing one’s avatar in 3D decreased body satisfaction and mood, and magnified the discrepancy between one’s ideal self and one’s actual self.
Clayton predicts that as people become more acclimated to viewing themselves in 3D, these negative effects will diminish. What has not changed, however, is consumers’ desire to purchase products depicted on bodies more ideal than their own. Clayton’s research shows that purchase intentions are greater when viewing idealised body types than when viewing bodies considered more realistic or plus-sized. So while we might think we want to see ourselves accurately represented, we are more inclined to purchase if the image is idealised. “People want to purchase products that help them achieve their ideal selves,” she explains.
This raises further questions. If you like how you look, you’re more likely to want to use your avatar, but how far is too far when it comes to filtering? Alta founder and CEO Jenny Wang says Alta aims to be as photorealistic and proportionate as possible. “Our users often share how personalised their avatars are, from highlighting their beaded locs to capturing their detailed tattoos,” she adds.
It’s up to each service just how much it wants to polish someone’s likeness. For Stitch Fix, image quality standards have historically been the main blocker. “When looking at images of yourself, the bar is pretty high,” Bacos says. (For this reason, he doesn’t love the term “VTO”, shorthand for virtual try-on.) “No one is looking their best in the dressing room. I’d rather create an instant reel of you looking amazing vs awkward poses.”
When it comes to “optimising” customer appearances, it’s a delicate line, and not everyone will have the same point of view — the same way people vary in how much they let Zoom enhance their appearance on video calls, Bacos explains. The goal is to present the best version of each client. “But if we overshoot in trying to optimise it, it won’t look like them, and in many cases there is a risk of being offensive. They might think, ‘What was wrong with me? Why did you make me three pounds lighter?’”
Because the experience is so personal, digital twins risk the problem of “looking alive and feeling dead on the inside”, Doji’s Dargan says. Fashion “is in the business of image-making and we need to do a good job of creating images that you feel inspired by, but we don’t want to transform you into someone else… It is a really critical part of the experience, because if you feel disconnected and don’t like the way you look, even with all the best features around it, it doesn’t work.”
His theory is that younger consumers, who are famously invested in the practices of self-expression and authenticity, are more receptive to seeing themselves in place of models. This follows the transition towards more diverse bodies in fashion, the rise of the relatable creator, and a self-saturated reality in which our own personalities have become personal ‘brands’. Through this lens, it makes sense that centring our own image in e-commerce would be the next natural progression.
That was one impetus for Farfetch to partner with DressX Agent. It “allows us to reach a younger, digitally native audience with next-generation AI try-on, creating a more personal and immersive shopping experience that is globally accessible”, says Erwan Jacob, Farfetch’s VP of growth marketing.
There is one glaring omission. Because AI is guesstimating proportions, outfit visualisations are not meant to predict exact size and fit. While smartphones are getting better at estimating dimensions, inconsistencies in clothing sizes, variations in fabric and the nuances of size preferences mean that the massive problem of online returns is unlikely to be solved by AI twins anytime soon.
Brands should offer options, and fun
The most successful services will offer the option to provide feedback on results and — critically — the option to tweak them. Similar to generative AI chatbots, many enable users to rate the results. The next step could be for people to proactively decide how much “touching up” feels appropriate. University of Missouri researcher Song-yi Youn found that perceived control, personalisation and responsiveness can enhance consumers’ experiences with interactive virtual try-on services.
It should also be easy. DressX lets people either upload a full-body image or simply share their sizes to quickly generate body doubles. It is white labelling its tech so that brands can serve customer imagery in their own marketing and VIP styling services. Stitch Fix creates outfits without requiring people to select anything. Alta enables users to upload their closets through purchase receipts and links.
Finally, facilitate a feeling of play, with images that people are eager to share. DressX co-founder and CEO Daria Shapovalova acknowledges that sometimes, people might want to see an aspirational person in place of themselves, so they could create an avatar of Elizabeth Taylor, for example, to try the clothes on instead. “Fashion is about magic,” Shapovalova says. “Sometimes, you would want to see Zendaya in a campaign rather than yourself; or maybe, when there is a star of a campaign, you also add yourself.”
Comments, questions or feedback? Email us at feedback@voguebusiness.com.


