‘I discovered DALL-E and was blown away’: How artists are using AI to create very offline art

Artists and creatives are experimenting with AI at the same time many of them are worried about its impact on their livelihoods.
Karol Serewis/SOPA Images/LightRocket via Getty Images

Hello and welcome to Eye on AI. 

In recent issues, I’ve covered why artists and creatives are pushing back on generative AI. In short, they worry about their work being used to train these models without consent, their work being devalued, and fewer opportunities for human creatives in a world where AI can be faster and cheaper. But even the more than 200 prominent musical artists who last week signed an open letter laying out such concerns emphasized they believe that “when used responsibly, AI has enormous potential to enhance creativity.” 

We’re still very much in the early days of generative AI, and there are no easy answers yet for how to balance these concerns. But as it all unfolds, some artists are experimenting with how they can use generative AI as a tool to enhance, not hinder, their creative work.

One such artist is Sydney Swisher, an Illinois-based painter and photographer. I’ve been following Swisher for a while. Her distinctive works involve painting on fabric, embedding a scene—a quaint windowsill, a dresser adorned with jewelry and photos, a dreamy bedroom—in the cloth’s (usually floral) pattern so the two merge as one. The result is always beautiful, textured, and dimensional, and the whole thing feels incredibly offline. So I was shocked to learn that in between thrifting old fabrics and brushing on paint, Swisher uses generative AI to help bring her visions to life.

After choosing a fabric to serve as the base for a painting, Swisher selects photos she’s taken as reference images, brings them into Midjourney, and adds prompts describing a memory the fabric seems to evoke. She then tweaks the prompts, adds more reference images, and plays with how the images and fabric interact. After selecting one of the images Midjourney produces, she edits it further in Photoshop to bring it closer to the specific memory she’s channeling, arriving at the final reference image she’ll paint.

“I discovered DALL-E and was blown away by its ability to craft a scene I had in my mind. I found Midjourney around April of last year and was amazed at the quality of images and ability to edit specific details,” she told me, adding that the quality of Midjourney images won her over and that she also loves being able to feed it her own reference images, including old film photos from her childhood.

A large part of her art, Swisher said in a TikTok video describing her process, is based around machine intelligence visually interpreting descriptions of memories.

“I want my work to be a reflection on memory. I have very vague visual references in my mind of places that evoke strong nostalgia and emotion,” she said. “I’m interested in the exploration of trying to fully bring that time back.”

She continued: “When you focus on trying to pinpoint every detail of a memory, pieces will always be blurry and you begin to question yourself. What parts are accurate? Did I dream that, or did it really happen? The first time I saw a generated image from DALL-E, I felt like it perfectly captured this feeling. To me, it truly looked like a still from a memory or a dream.”

In another example of creatives trying to play nice with generative AI, filmmaker Paul Trillo spoke on the Hard Fork podcast about his experience previewing Sora, OpenAI’s new text-to-video model that recently sent both Hollywood executives and much of the general public into a panic. Trillo’s account is one of the first hands-on reviews of Sora; he describes pushing to see what the model could do and finding that it has its own sense of editing and pacing.

“I was shocked. I was floored. I was confused. I was a little bit unsettled because I was [like], damn, this is doing things that I didn’t know it was capable of,” he said when asked about his initial emotional response the first time he typed in a prompt and got a video.

Trillo said the video he created in days with Sora would’ve taken months with traditional tools—and that the model “can give us some experimental, wild, bold, weird things that may be difficult to achieve with other tools.” Still, he thinks the model is only a supplement to human video production. Like Swisher, he views it as a new part of the process he can use to achieve his vision.

“I’m only going to focus on this tool to get everything out of my head,” Trillo said. 

And with that, here’s more AI news.

Sage Lazzaro
sage.lazzaro@consultant.fortune.com
sagelazzaro.com

Today’s edition of Eye on AI was curated by Sharon Goldman.

AI IN THE NEWS

Amazon CEO Andy Jassy tells shareholders he’s ‘optimistic’ about generative AI. Amazon may have a reputation for falling behind in the race to lead generative AI, but in an annual shareholder letter published on Thursday, CEO Andy Jassy said he is “optimistic that much of this world-changing AI will be built on AWS.” CNBC pointed out that under Jassy, “Amazon has morphed into a leaner version of itself, as slowing sales and a challenging economy pushed the company to eschew the relentless growth of the Bezos years.” Still, Jassy said he believes generative AI will become Amazon’s next “pillar,” after its retail, Prime subscription, and cloud computing units.

Intel unveils details of new AI chip in battle with Nvidia. Nvidia may hold more than an 80% share of the market for AI chips, but that isn’t stopping Intel from moving to challenge its dominance. On Tuesday, at its Vision event, Intel shared details of its latest Gaudi 3 chip, which Reuters reported is “capable of training specific large language models 50% more quickly than Nvidia’s prior generation H100 processor.” Intel also said the chip can compute generative AI outputs, a task known as “inference,” more quickly than the H100. Of course, with Nvidia CEO Jensen Huang having recently announced a new “big, big GPU” called Blackwell at the company’s GTC conference, the race is most certainly still on.

Adobe is reportedly buying videos for $3 per minute to build a new AI model. OpenAI’s text-to-video model Sora, currently available only as a demo, has made headlines over the past two months, and other tech companies have acknowledged that catching up may be difficult. Even Google DeepMind CEO Demis Hassabis reportedly told a colleague it could be tough to match Sora. But according to Bloomberg, Adobe is paying its network of photographers and artists “to submit videos of people engaged in everyday actions such as walking or expressing emotions including joy and anger,” with the goal of sourcing assets for training its own AI models. The report said pay comes to about $2.62 per minute of submitted video, though it could be as much as about $7.25 per minute.

Arm CEO says AI’s ‘insatiable’ energy needs are not sustainable. According to the Wall Street Journal, Rene Haas, CEO of the U.K.-based smartphone chip-design company Arm, says AI models like OpenAI’s ChatGPT “are just insatiable in terms of their thirst” for electricity. The comments came ahead of a Tuesday announcement that the U.S. and Japan are funding a $110 million program for joint AI research. “The more information they gather, the smarter they are, but the more information they gather to get smarter, the more power it takes,” he said. Without intervention to make the technology more efficient, Haas warned, AI data centers could account for 20% to 25% of all U.S. power demand by the end of the decade. “Today that’s probably 4% or less,” he said.

FORTUNE ON AI

London AI firm V7 expands from image data labeling into workplace automation —by Jeremy Kahn

AlphaSense, a Goldman Sachs–backed AI research startup valued at $2.5B, gears up for IPO as it crosses $200M in annual recurring revenue —by Jessica Mathews

Vinod Khosla is betting on a former Tesla autopilot engineer who quit to build small AI models that can reason —by Sharon Goldman

The AI-driven rally ‘is not a one-way street,’ but tech earnings should jump 18% this year as demand booms, UBS Global Wealth Management says —by Will Daniel

AI CALENDAR

April 15-16: Fortune Brainstorm AI London (Register here)

May 7-11: International Conference on Learning Representations (ICLR) in Vienna

May 21-23: Microsoft Build in Seattle

June 5: FedScoop’s FedTalks 2024 in Washington, D.C.

June 25-27: 2024 IEEE Conference on Artificial Intelligence in Singapore

July 15-17: Fortune Brainstorm Tech in Park City, Utah (Register here)

Aug. 12-14: Ai4 2024 in Las Vegas

EYE ON AI NUMBERS

39%

That’s the percentage of CEOs who are scaling their generative AI efforts, moving from pilots to industrialization across multiple functions or business units, according to a new CEO survey by consulting firm KPMG. Another 29% remain focused on specific use cases in certain functions or teams, while 15% said they are broadening adoption of generative AI tools across their workforce.

The survey of 100 CEOs at organizations with revenue over $500 million found that CEOs “see generative AI as central to gaining a competitive advantage and are working to rapidly advance deployment of the technology across their enterprises in a responsible way.” However, while 41% of CEOs plan to increase their investment in generative AI, 56% anticipate that their investment will stay flat.

This is the online version of Eye on AI, Fortune's weekly newsletter on how AI is shaping the future of business. Sign up for free.