Fashion Has Entered the Uncanny Valley

MIRROR IMAGE
To create this portrait, photographer Yuki James used Midjourney, an artificial intelligence tool known for its unsettling realism.
Artwork by Yuki James, Vogue, September 2023.

Not long after my last breakup, at lunch with friends, talk turned to my love life. One pal suggested I try “dating an American.” (My ex is English.) Another retorted, “or maybe a European.” We laughed. Conversation shifted. Later, waiting for the subway, an ad for a European dating site appeared in my Instagram feed. I had, I think, the reaction typical of these digital times: a frisson of disquiet—they’re listening—followed by a shrug of bemused resignation. What can you do? My train arrived; I got on with my day.

The ad began to follow me around. The more it appeared, the more it struck me as fishy—too soft-focus; too sincere. What was the scam? One night, curiosity got the better of me: I entered some basic information—email address, city of residence—and found myself roaming a landscape of implausibly handsome men, all looking for “the one.” The next morning, I awoke to 200 emails, with subject headings like Are you my queen? Do you believe in true love? Bots, I assumed. Bots composing lengthy missives making the case that a wise universe had brought us—him, the software; me, the user with no photo or bio—together at last.

Reading these strange emails became a kind of hobby. Each had a slightly different angle of attack. I felt like I was peering into the mind of a computer as it iterated ideas of human romance. Then a message brought me up short: One “man,” supplying background, mentioned he had a daughter with Down syndrome, to whom he was devoted—and who was part of the love package, as it were. My certainty crumbled. What kind of bot writes something like that?

Welcome to life in the technocene. Saturated in apps, news, takes, memes, sponsored posts, and targeted ads, we’ve grown accustomed to sifting for fakery, ambiently aware that our every click is being tracked. Yet we surf on, more or less unbothered, until some moment when we discover that the line separating real and fake has dissolved. “I’m Sydney, and I’m in love with you,” Bing’s AI chatbot tells the New York Times reporter writing about it. Pope Francis goes hypebeast in a Balenciaga puffer—except the image, and the coat, are AI counterfeits.


BIG PAPA
In March, an AI-generated image of Pope Francis wearing a puffer coat went viral.


Photo: Pablo Xavier. Created using Midjourney.

“Sure, it’s funny—but what I saw right away was…well, that’s a very average jacket,” says Balenciaga creative director Demna. “I’d never have designed that—there’s no fashion in it.” How could there be? A computer doesn’t understand what fashion is—or the allure of the cool, or the gut-punch of devastating beauty. A computer doesn’t feel anything—despite what “Sydney” might say.

The texture of our semi-virtual lives was already uncanny, but it’s getting weirder by the day: Looking at photos of celebrities’ newly lithe physiques, for example, I now find myself wondering: Photoshop, or Ozempic? Either way, it’s a high-tech reconfiguration of the body, an ever-more persuasive deepfake. But what constitutes “realness,” anyway? As we watch the AI tidal wave rear up way offshore and then come speeding toward us faster than we’d imagined possible, a range of anxious questions about our relationship to technology is forced to the fore.

And onto the runways: Tech’s power to shape our perceptions, and our lives, was a recurring theme at the fall 2023 shows. At Loewe, Jonathan Anderson conjured digital blur, and the experience of squinting at an image to determine its authenticity. Christopher Kane showed hallucinogenic, AI-generated prints. Models ambled down the Courrèges catwalk staring at their phones; at Coperni, in a coup de théâtre calculated to go viral, robot dogs joined the défilé.


WELCOME TO THE TECHNOCENE
Tech-inflected looks from Christopher Kane, Loewe, and Coperni, seen at the fall 2023 shows.


Photos from left: Filippo Fior/GoRunway; Acielle/StyleDuMonde; Getty 

But you could also detect tech by its conspicuous absence, as other designers turned their attention to craft. Dries Van Noten’s collection paid homage to fine, handworked detail; at Chloé, showing her penultimate collection before stepping down this month to return to her own label full-time, Gabriela Hearst put handwork front and center with her crochet, embroidery, and patchwork shearlings, aiming “to make clothes for women to wear, not to post,” she says. This ethos was shared by the typically technophilic, spectacle-friendly Balenciaga, where Demna offered a back-to-basics, construction-focused wardrobe. “I’ve realized that the job I do is a very human job,” he says. “You can lose focus running after tech innovations—and for me, the way to resist that was to go back to my sewing machine; I spent months ripping up old clothes and remaking them. And I was so happy.”

As I was speaking to Demna, and to Hearst, the impression was not that they’re retreating from technology, but that they’ve ascertained that this moment—staring down the AI tidal wave—calls for both a reflective pause and a reaffirmation of the human: human experience, human creativity, humanity’s right to define the terms of “progress” rather than outsourcing it to an algorithm. “We have a saying on the ranch where I grew up: Go slowly over the stones,” says Hearst. “Don’t gallop on rocks, because if you fall, you break your head. That’s how I feel now: like we’re entering rocky territory.”

What territory are we in, exactly? Hard to say. Depending on who you talk to, AI is either the printing press—a transformational advance—or a horseman of the apocalypse. Or maybe it’s crypto and NFTs all over again and we’re at the peak of the hype cycle. Or it’s a cool new toy, and developers are still working out some kinks—but, yes, eventually it will take your job.

I figured I’d go to the source for an answer. What are you? I asked OpenAI’s ChatGPT. “A large language model,” it replied.

Do you have an imagination?

“I don’t possess personal experiences or consciousness, so I don’t have an imagination in the way humans do. While I can provide imaginative or creative outputs based on statistical patterns and knowledge learned from my training data, it’s important to remember that these responses are generated algorithmically and may not have a genuine subjective experience of imagination.”

What do you mean by “may not”?

“By saying ‘may not’ I’m acknowledging that any semblance of imagination exhibited by an AI language model like myself is a result of pattern recognition.”

So, in fact, you do not have any subjectivity; is that correct?

“That’s correct. I don’t have beliefs, opinions, emotions, personal desires, or motivations. My purpose is to provide information.”

Are you trying to destroy the world?

“No.”


DOUBLE VISION
Midjourney portraits created by photographer Yuki James.


“Anyone who says they understand this technology is lying,” says Vivek Wadhwa, a Silicon Valley–based professor and entrepreneur and coauthor of Your Happiness Was Hacked: Why Tech Is Winning the Battle to Control Your Brain—and How to Fight Back. “The pace of the breakthroughs in AI has caught everyone off guard—even the engineers. That’s why they’re freaking out—it’s learning too fast.” Learning from us: We’re the vast dataset the AI language model trains on—the words we’ve written, the art we’ve made, our online purchases and cookie-logged descents down internet rabbit holes.

“What scares me isn’t the machines becoming too intelligent,” says Wadhwa. “It’s that they become powerfully stupid, doing things any sane person would recognize as illogical.” And/or immoral: The classic example of out-of-control AI is a thought experiment in which an AI, tasked with producing as many paper clips as possible, as efficiently as possible, goes on a destructive rampage, diverting all the Earth’s resources toward the production of paper clips and eliminating or immobilizing anyone who tries to get in its way.

This scenario, though unlikely, is worrying enough that top AI developers are getting into a regular habit of issuing open letters warning that the technology might, just maybe, extinguish all human life—if not via paper clips then…well, some other way, TBD. Really, though, all you need do is replace “paper clips” with “profit” to arrive at a highly probable outcome: a turbocharged capitalism in which a host of human values (community, justice, art for art’s sake, etc.) are sacrificed to an algorithm capable of bending us to its will. And if that, too, sounds hyperbolic, consider the ways we’ve already adapted ourselves when technology offers quick fixes: fast food; fast fashion; scrolling TikTok rather than reading a book or seeing friends. If there’s one thing the social media era has demonstrated, it’s our own plasticity. The Twitter algorithm boosts polarizing, “high engagement” tweets, so we get angrier. Selfies distort our faces, so we get filler and contouring sticks to optimize for the distortion. We’ve lived with social media long enough now to measure its costs—to our attention spans, notably, but also (and this is harder to pinpoint) to our sense of self-determination. This, it strikes me, is the despair animating much of the anxiety about AI: That instead of serving as a tool for the advancement of our good aims—curing cancer, solving the climate crisis—AI will simply be the next technology turning us into guinea pigs in a global, unstoppable science experiment.

“I go back and forth between This is useful, this is fun to play with, and thinking, Oh, God—it’s the end of the world,” says Christopher Kane, who decided that the best way to reckon with AI was by engaging with it, using the image generators DALL-E and Midjourney to codesign his fall collection’s prints of chicks, pigs, and rats. The co in codesign is key: As Kane points out, it took a fair amount of trial and error to get the AIs to achieve an effect he liked—and besides, those prints were his idea in the first place. “The more instruction you give, the more it gets quite savvy—and quite scary,” Kane says. “Sometimes what it offered was truly surprising—but not surprising like it came from inspiration. It’s trying to give you what you want based on all this data that it’s working off, and sometimes it makes an interesting mistake.”

Like Demna, Kane discerns in AI-generated looks a lack of a sense of “fashion.” “But it’s near enough to human creativity,” he says, “it makes you wonder—wonder if the technology will improve so much that there’ll be no difference—or wonder if there’s something in us that technology can never have. I guess,” he adds with a groan, “I’m talking about a soul.”

Here’s a thought that occurred to me, apropos the question of whether my love-mail came from men or bots: Does it matter? Countless users of dating apps engage in lengthy chat flirtations with people they never end up meeting; the vast majority of social media relationships are exclusively virtual. At a certain point, one must admit that the actual bond is between you and the interface.

For the record, I think it does matter—maybe not when you’re tapping through customer service prompts, but yes, absolutely, in any circumstance staked on human connection: dating, therapy, taking in art that’s meant to tell you something about what it’s like to be a real person with memories, dreams, ethics, a body—with subjectivity, in a word. Could an algorithm develop some form of that? We’re getting deep into existential questions here—what is consciousness, anyway?—but as University of Paris professor of history and philosophy of science Justin Smith-Ruiu assures me, those issues can to some extent be set aside.

“I don’t think we’re going to experience the dawning of qualia in the machines,” says Smith-Ruiu, who dug into the history of outsourcing thinking to machines in his book The Internet Is Not What You Think It Is. “We’re different systems: Humans evolved to look under logs for food, to sniff out decay and other kinds of potential danger. A machine that skips straight to higher-order thinking isn’t going to reason the way we do.”

For Smith-Ruiu, the key matter is how much authority we allow the AI. Do we continue down the path of letting algorithms determine everything from which songs we hear on Spotify to who gets a job and who goes to jail? One straightforward example of algorithmic influence: TikTok’s AI-supported “beauty scanner” filters, which rank users on their attractiveness. Because these algorithms rely, like all AI, on prior data, they wind up perpetuating bias toward certain types of beauty: fair-skinned, slender, young. Ugh, but at least this code is comprehensible to its designers—who could, hypothetically, adapt or dispense with it. But what happens when a self-teaching algorithm starts making decisions no one—not even its original engineers—understands?

“We’re moving into a phase of technology where the machines engage in independent learning and self-replication,” Smith-Ruiu explains. “But if we can’t follow the reasoning in the machine, how do we know when it’s gone wrong—or when to turn it off?”

Midway through researching this story, I realized I was out of my depth. Even people who engineer AI are flummoxed by it; I, meanwhile, am intimidated by automatic software updates. So I decided that the best way to trace the contours of the debate around AI was via analogy to that other hot-topic technological breakthrough: Ozempic. Demna got it immediately: “Oh, yeah: Some big new thing comes along that’s going to change our lives—do we go with it? And if everyone does—then what?” Bingo.

“Ozempic” here is shorthand for a class of injectable medications, initially developed to regulate blood sugar in patients with type 2 diabetes and now—another tidal wave breaking onto shore—in skyrocketing use as an aid to weight loss. If you’ve noticed people in the public eye getting suddenly, sometimes shockingly, thinner—no, your eyes are not deceiving you. It could be the Ozempic effect.

Actors and influencers aren’t the only ones rumored to be partaking: Analysts at JPMorgan Chase reported that in the last week of January 2023 alone, US doctors wrote over 313,000 Ozempic prescriptions—a 78 percent increase from the previous year, and that’s not counting other brand-name injectables, like Mounjaro, or generics retailed by compounding pharmacies.

The trend has induced a good deal of panic. We’d only just opened the door to conversations around fatphobia, body-shaming, and size inclusivity, telling a generation of young women that they can love how they look no matter their shape or size, when the gods of Big Pharma threw down a lightning bolt—a purported silver-bullet diet drug—that threatens to cast us back to the sorry era circa Y2K when size 0 was de rigueur on the red carpet and the runways. “I would do anything to lose weight,” recalls model and body positivity advocate Charli Howard, who struggled with eating disorders in the early years of her career. “I took pills to suppress my appetite; I’d spend hours in the gym. I was killing myself.” A friend of mine told me recently that a woman in the chair beside her at a hair salon fainted—she was on Ozempic and hadn’t eaten in three days.

Technology offers us a certain kind of world. Do we want it—and if not, do we have the power to say no? I find it useful to come at these questions via Ozempic because, like AI, it’s a reality-distortion device—but, unlike with AI, the facts and the pros and cons are more comprehensible. Fact 1: Ozempic is not that new. Phase II studies of semaglutide began in 2008; in 2012, researchers at Novo Nordisk developed semaglutide as a diabetes therapy, and the drug, given the brand name Ozempic, was approved by the FDA five years later to treat type 2 diabetes. I pause here to note one thing differentiating semaglutide from AI, though: Its original development was subject to regulatory scrutiny—and, by extension, a form of public consent—whereas the tech industry in general is a regulatory free-for-all. On the other hand, the commercial imperatives and disruptive effects are roughly similar.

“It’s a game changer,” asserts Rocio Salas-Whalen, MD, a New York City–based doctor board-certified in internal medicine, endocrinology, and obesity medicine who began prescribing semaglutide over a decade ago. “The drug improves metabolic health—the weight loss is a side effect, but for people who have struggled to lose weight, it can feel miraculous.” As Salas-Whalen—who, it should be noted, has been paid by Novo Nordisk—goes on to point out, the game-changing could reach far beyond individual patients dropping stubborn, unwanted pounds. The weight-loss industry, the practice of medicine—and, yes, our relationships to our own bodies—are all likely to be upended.

Amid all the welcome talk about body positivity and size inclusivity these past few years, I’ve been served more ads for fasting aids and fat-zapping services than before Paloma Elsesser and Precious Lee were crowned supermodels. Has anything really changed? Speaking for myself: Yes. I’m not super-skinny, but I’m also not fat; having come of age in the waif-worshipping 1990s, it took me a long time to grasp that, and longer to comprehend that it doesn’t matter anyway—that no body is shameful. That is the hard-fought territory we have won. It’s not much. My open-mindedness is more aspirational than actual, insofar as I’m constantly falling back on my programming to diet, diet, diet, engaging in tortured deliberation of calories churned and consumed. What terrifies me about Ozempic is how liberating it seems, because one of its primary effects is to alter perception. Like a filter, but inside your head.

“I used to be really compulsive, thinking about food all the time—and now it’s like this mental real estate has opened up,” one friend taking Mounjaro tells me. “It’s not that I don’t take pleasure in eating, but I feel…freer. Better about myself. Better in terms of my wellness. And more creatively productive.”

Is this a pro, or a con? Ozempic can seem like a switch you can flip or a tool to deploy, like AI, that will magically transform you into a person as carefree, self-confident, and, yes, creatively productive as the digital you online. It achieves the body-positivity movement’s goal of unshackling women (and men, and nonbinary people) from the little calorie-counting voice in their heads—not by challenging social norms, but by making it easier to realize them (give or take a side effect or two, which could range from nausea and diarrhea to thyroid cancer and kidney failure). If you think of society as a big algorithm, Ozempic gives us what we want based on the dataset it’s been trained on. That’s pattern recognition, not progress; it’s up to us to imagine an alternative future.

I want to be clear: If someone wants to lose weight for aesthetics’ sake—if they’re brain-poisoned, like me—I see no moral disparity between taking Ozempic and dieting au naturel. What I’m earnestly trying to figure out is if the likelihood of noticing a lot more skinny bodies around will reinstate a norm we were trying to abolish—and if there’s any avoiding that outcome.

The algorithm, of course, favors our passivity, our willingness to be placated and to drift with the tide. That’s how we wind up in strange places where we never expected or wanted to be—so they’re listening; what can you do? “We need to get off our phones and connect with who we are, as individuals,” says Demna. “That’s the hard path, but creativity is hard—and why should we want it to be easy?

“I’m not anti-technology, as long as it’s not a replacement for thinking,” he continues. “AI platforms don’t invent anything—they can’t. Sometimes they open up new options, which is good—but you have to ask for that. If I based my whole collection on AI, it would just iterate off of what I’ve done before. For many brands, that’s fine; they don’t need the new to satisfy their customers. Maybe,” Demna adds, “that’s what we should focus on: Why, as a society, are we not demanding the new and unpredictable? Why are we asking for what’s average?”

We all desire progress. For me, that’s a society that’s healthier and happier, where people have the most possible freedom to flourish; it doesn’t mean advancement for advancement’s sake—or, worse, advancement for profit’s sake. That said, I accept that society sometimes needs an earthquake-scale disruption to jolt it out of its ruts. That’s the potential of AI—and of Ozempic, too. Both innovations force us to assess where we are, and where we want to go. The danger lies in expecting either technology to provide us an answer.

“I love AI,” asserts Gabriela Hearst, to my surprise. “I love what it can do—what it’s already doing—for our work on climate.” Hearst, long an industry leader on matters of sustainability, is passionate on this topic and dives straight into the deep end, detailing cutting-edge research. “Experiments with fusion are the most complex and most expensive,” she explains. “AI can run all the scenarios around how plasma reacts—it’s facilitating the engineers tremendously; we may have a breakthrough soon.”

I ask her how her love of AI squares with her work in fashion, much of which is centered around craft. She replies by holding up an extraordinary hand-embroidered tassel.

“AI can’t do this,” she says. “This is what humans do. We create with our hands, with our hearts, and our heads. So let’s not get confused: The threat of tech is dehumanization—when we take away human consent, human decision-making, or take human beings out of the equation entirely: A fake world, but we’re still real, and we have to live in it. I don’t want that,” Hearst adds. “Do you?”