This article on digital fashion and AI is part of our Vogue Business membership package. To enjoy unlimited access to our weekly Technology Edit, which contains Member-only reporting and analysis and our NFT Tracker, sign up for membership here.
Digital fashion is getting an even shorter delivery window. DressX — the digital fashion marketplace that dresses customer-submitted photos in digital items — is now letting people quickly design and wear their own digital clothes. DressX Gen AI, launching today, enables people to submit a photo and be digitally dressed within seconds in an outfit that is generated based on text prompts.
While the use case for the resulting image is generally the same as any other image digitally tailored by DressX — a picture of someone wearing something that doesn’t exist in the flesh — there are a few big differences with this update: the time it takes to be dressed and the ability to generate new designs.
DressX shoppers have historically browsed its website the same way they would any fashion e-commerce site. When they find an outfit they like, they pay for it (anywhere from about $20 to more than $1,000) and submit a photo. They then receive a photorealistic image of themselves dressed in the outfit within around 24 hours. The new capability, which was trained on the DressX fashion library, cuts the delivery time to about 24 seconds.
The first generations of digital tailoring typically required the company to manually fit the item on the pictured figure, and submitted photos generally required people to be wearing minimal, form-fitting garments (like a swimsuit) to look realistic. Dressing people using generative artificial intelligence offers a step up in terms of final output; the tool can detect and “fill in” necessary details, such as adding skin where a person’s original image was covered up by clothes or adding shoes that naturally correspond with the look.
Additionally, the new tool enables people to design their own pieces via text prompts. In typical generative AI fashion, the “prompt and result” process is more akin to a conversation with an unpredictable computer, which might interpret the prompt differently from how the user originally envisioned it. Even the same prompt can yield different results each time it is run. (The DressX Gen AI tool currently doesn’t let people follow up on their prompt with edits.)
How it works
As an early adopter of digital fashion — Instagram is my “metaverse” — I was curious to experience the accuracy of DressX’s “instant fashion” tool. Even though the original delivery window for DressX was fast, the premise of instant gratification has a promising ring. I also enjoy the call-and-response nature of generative art tools, as it’s more of a collaboration with a machine than one-way, traditional design. You never quite know what you’ll get, but that’s part of the fun.
Using the DressX AI chatbot is surprisingly easy, but the results can be unpredictable. I tried prompting it to dress me in a “black and white tuxedo”. The resulting image returned a tuxedo jacket, shirt and bow tie — but paired with tailored black shorts in place of trousers. Another prompt for a “white suit with a large floral floppy tie” created an outfit that included a teal blazer with white shorts and a floral tie.
The tool seemed to have a fixation on shorts, likely because the original image featured them. The prompt “bright floral gown with a long train” returned a knee-length floral dress with a long train. In many looks, I was surprised to see that it could essentially “erase” the shorts I was wearing in the original picture and fill in skin. Another nice surprise? Each look arrived with corresponding shoes.
Perhaps reassuringly, it doesn’t easily recreate designer pieces. I asked it to recreate the iconic Gucci suit (red suit, blue blouse) immortalised by Gwyneth Paltrow in the ’90s with the prompt “red Gucci pantsuit with a blue blouse”, but the result was a teal suit and a red top that wasn’t really reminiscent of the Gucci version. The tool will get better as more people use it, says co-founder Natalia Modenova. She adds that even if people input branded prompts, the tool will not directly replicate existing designer pieces, but rather use them as inspiration to generate something similar, to varying degrees of interpretation.
People can also submit an “inspiration” image for the garment, whether that’s an e-commerce image of an outfit or a floral print; the result will not be a direct replica, but will incorporate some elements of the look. When I tried submitting a 1940s magazine photoshoot as inspiration and added some words, the AI didn’t understand that I was pinpointing the outfit in the image, rather than the entire image, as inspiration. Later, when I submitted a flat-lay image from designer Fia Machado found in the DressX catalogue, the tool seemed to better recognise that it was a garment that I wanted to use for inspiration.
The tool is currently free, and available via dedicated channels on DressX’s Discord server. People are allowed a few prompts and are then invited to submit their email addresses to continue. DressX co-founder Daria Shapovalova says the initial goal is to test and learn how people use the tech, and that the company is considering adding a subscription model similar to DALL-E’s, in which people pay for a certain number of prompts.
Does digital fashion just need more time?
While this makes it even easier for people to don digital fashion, the question remains whether consumers are interested in digital fashion at all.
After a surge of curiosity surrounding the arrival of “the metaverse” in popular culture, brands’ innovation strategies have pivoted to more practical pursuits and to AI, which are often pitched together as a way to shave down time spent on menial tasks while democratising access to creativity.
Digital fashion hasn’t suffered quite the same setback as metaverse worlds or NFTs have more broadly, in part because of the increasing prominence of gaming, but digital fashion still suffers from an awareness gap. And fair enough; the ability to wear clothes that don’t exist physically does require a leap of imagination. That is compounded by the inherent nature of photorealistic clothing and tailoring — even if the wearer says it’s digital, people who aren’t familiar with the concept still might not understand how it works, or how to try it. And Discord is not necessarily as natural to use as a traditional social media platform or website. (DressX will be exploring broadening access to its AI tool beyond Discord, which began as a platform for gamers to socialise.)
In addition to cutting down digital fashion’s delivery time, the hope is that “text-to-fashion” capabilities will inject new energy into digital fashion adoption, says Shapovalova, by pairing DIY design with the option to instantly dress oneself (or any image of someone else). While many creators have begun experimenting with generative art tools to design fashion, the items aren’t easily “wearable”. This connects those dots, piggybacking on the recent momentum behind generative art tools. (It’s also a step up from avatar-style dressing; DressX items are also available for Meta, Roblox and Bitmoji avatars.)
“For us, it’s an opportunity to entertain more people, and to provide an opportunity for everyone to become a creator,” Shapovalova says.
Among digital fashion and metaverse startups, DressX has some of the most significant traction and scale. It has raised at least $15 million in funding and has expanded into business-to-business tools and services. It has partnered with brands including Diesel and Hugo Boss; dressed models on the covers of Vogue Czechoslovakia and Vogue Singapore; and worked with Madonna and EDM duo Sofi Tukker.
It’s now exploring a business-to-business offering for DressX AI that would enable retailers to add “instant dressing” to their own e-commerce sites as a form of virtual try-on. Going forward, it could also let customers dress themselves using prompts trained solely on a brand’s own intellectual property; a brand could input its signature colours, silhouettes and prints, for example. This would work especially well for heritage brands with large catalogues, Modenova says.
Gucci has already explored a version of this, commissioning artists to use generative AI to make artworks out of its IP. More broadly, brands and designers are testing how generative AI can fit into the creative process. Collina Strada’s Hillary Taymour used generative art to develop a collection and to create visuals for a fashion show. Balmain’s Olivier Rousteing experimented with feeding in his past designs to create new ones, he shared during an onstage interview at the SXSW conference, but he ultimately didn’t use them because they looked dated. “I realised that my designers in my team could have done better. It was really good, but not as good as what we could have done on our own.”
This isn’t the first time that DressX has offered the ability to instantly wear digital fashion. It also has a “DressX camera”, available on its app and as an add-on to video conference software (such as Zoom and Microsoft Teams) that enables people to wear digital items on video calls via augmented reality. But while the quality is relatively believable for accessories and cosmetics, it still doesn’t translate well for clothing and fabric.
For now, the mission is to distribute as many digital items as possible, Shapovalova says, especially at a time when more people are already experimenting with designing fashion using generative art tools. “Maybe it will simplify the process, or maybe it will be too complex? But that is for us to understand.”