(Spoiler: Not quite yet!)
You’ve probably played around with, or at least seen, generative artificial intelligence software like DALL-E, Midjourney, or Stable Diffusion. The technology takes human prompts and, drawing on patterns learned from billions of images during training, rapidly generates complex imagery to match. These tools are a blast to use, cranking out visuals for any random input and often surprising us with their specificity, artistry and beauty. It can feel like magic.
For artists and creators, these new tools create possibilities for artistic expressions that previously would’ve taken teams of people, many days and a lot of money to produce. Writer and filmmaker Travis Carlson has been utilizing Midjourney to bring his fan fiction epic to life. Released weekly, Buffalo Bill’s Mafia: An Episodic Mythical Adventure incorporates storylines from the most recent NFL game, which requires quick turnarounds. “I tried to hire an artist to produce even one image a week,” says Carlson. “With A.I. I’m producing more images in a single night than a master romanticism artist could paint in their lifetime.” What does take time is the learning curve required to effectively describe the results you are imagining in the language that the software understands. “It feels like search engine poetry,” quips Carlson.
Photographer and Art Director James Bogue echoes this poetic sentiment. “It takes an astute observer of the world to be able to articulate cultural history and the zeitgeist of our collective present that makes for potentially powerful images.” Bogue works with Midjourney to create fantastical portrait, product and fashion distortions that at first glance may appear to be photographs — yet he still draws a line between the mediums. “I see the results as comps, placeholders of ideas. For me, the final execution is still the personal process of photography or 3D rendering.”
Bogue recognizes this may be a philosophical hang-up, as generative AI forces us to widen and redefine what digital art means. “AI is now a force of nature, human nature. And it is in our nature to ask questions, create new tools, and hold that mirror to the world to ask more questions.”
On the “sunnier” side is FIFTEEN senior art director Tim Martin’s alter ego on Instagram, @damned_devito — a feed of strictly DALL-E generated images of Danny DeVito. While the feed nails the art of comedy, Martin does “not consider these to be art in the traditional sense.”
Martin is also not ready to crown DALL-E an artist. “It doesn't feel right to me to grant a piece of software the status of artist. So by default, if what is being made is art, then I would consider myself the artist.”
Carlson somewhat agrees. “Does a microwave require a human? Yes. Can I take credit for heated up entrees? I guess.”
All three creators liken generative AI to an evolution of the tools in a designer's tool belt. “AI-generated art is a conversation between human and machine,” says Bogue. “It still takes the mind of a skilled user to bend the technology to their will,” adds Martin. “It’s not a replacement for humans or artists,” muses Carlson. “But an impossibly efficient enhancement to the workflow.”
What remains to be seen is how this new tool will affect the world of advertising. First of all, can AI-generated images even be sold, or used in ads? At the moment, the answer is yes. Per their terms, DALL-E users do have the right to use images for commercial purposes. Even though the software is trained on copyrighted images, the output has so far been presumed to fall under the “fair use” doctrine — though that presumption has yet to be tested in court.
Changes may be coming, however. Artists seeing elements of their original work show up in AI art are crying foul and pushing for new legal protections. Numerous stock image hosting sites, including Getty Images (whose watermark often shows up on AI creations), have proactively banned the upload and sale of AI-generated imagery on their platforms over future copyright concerns. Shutterstock, meanwhile, is creating its own AI generator that will pay the artists whose images are used.
Legalities aside, does AI art pass the eye test? You be the judge. We fed DALL-E a prompt for a classic advertising execution: “Magazine advertisement featuring a photo of an attractive shirtless man drinking a bottle of coca-cola on the beach.” Here are the results:
Each has… issues that surely wouldn’t get past a Creative Director, much less a client. Not least of which is the lack of the actual logo, which none of the generators are allowed to use. “Details matter to a brand,” says FIFTEEN Creative Director Rachel Spence. “Sure, you could take these as a starting point, bring them into Photoshop and correct all the errors. But with the time that would take, you’re better off starting with an image you have complete control over.”
So we have not quite arrived at the “make a sneaker commercial” button. Still, there are ways ad agencies can work alongside the tools. “The prospect of using it in storyboarding, concepting or to plan a photoshoot is very exciting to me,” says Martin. Spence agrees. “Less asking the client to use their imagination. We can show them where we’re trying to go visually without having to spend the hours to blow it out ourselves.”
Investment in AI technology is among the highest in Silicon Valley, and upgrades come quickly. In the future, brands may use the underlying technology to build their own brand-trained AI, or even generate video. For now, however, successful advertising still requires a team of talented (human) artists and strategists utilizing the latest technology to combine inputs from many sources to generate results.
Same as it ever was.