There’s no question that Artificial Intelligence is all around us. So how can you keep up as a creative? For us here at NoGood, AI is changing the way we think, collaborate, and design. Let’s take a look at the tools, methods, and tips leading the industry today, how we’ve used them, and how you can use them to your advantage.
Our Most Commonly Used AI Design Tools
Nowadays, you can find any type of AI tool to help you with just about anything. Here are some tools that our design team has found most useful when it comes to making our creative ads:
Adobe Photoshop’s Generative Fill
This is arguably the AI tool we use most often, since our performance ads need to be made in different sizes for different channels. If an image needs more space for copy, a quick solution is to generate more of the background. What might’ve taken ages to Photoshop, or meant sourcing another image entirely, can be done in a matter of minutes.
Here we start with the original vertical image of a woman sitting on the grass. Since we also needed a 1×1 size, instead of scaling the image to fit, we just generated more of the background. That way there’s much more space for our text and CTA.
Many times you don’t even need to type in a prompt to create the rest of the background. Just select the area, click “Generate,” and Photoshop does a solid job of matching the environment.
Here’s another example of using Generative Fill to expand the background for a connecting carousel post. Type a prompt describing what you want, and if you’re struggling, start by generating smaller parts. Once you get the right look, select and generate the rest. If you’re after detail, it sometimes helps to generate section by section rather than one giant area at once.
Midjourney
Our team typically uses this tool for concepting ideas and forming backgrounds or objects whenever we lack resources or photography. It takes a bit of trial, error, and even luck to get what you’re looking for. Of course, it won’t be able to generate the client’s actual product in the image, so the final ad will still need some editing.
For our SteelSeries Easter Sale graphic, the goal was simply to have the SteelSeries headphone hatching out of an egg. Here we tried getting specific with the prompt, typing in “smaller black headphone facing the side, inside of an orange easter egg that is hatching with pieces of egg shells on the floor.”
As you can see, the prompt was misinterpreted a bit: it looks like the egg is wearing the headphones instead of the headphones hatching out of it.
The issue we usually come across with Midjourney is inconsistency. Changing just a few words in this prompt, from “facing the side” to “facing the front,” produced noticeably different results from before. After multiple attempts we settled on this variation.
Since this generated headphone was just a placeholder, the final step was to Photoshop SteelSeries’ headphone into its place at the same angle. We always want to use the actual product since that is what we’re advertising. After some retouching and color correcting, here’s the final result:
Adobe Firefly
This is great if you’re trying to be even more accurate and specific about the shape and material you want to generate. For this example, we used Firefly to help create dynamic 3D graphics for this landing page.
You can combine two references for Firefly’s image generation: composition and style. For our composition we made 3D icons in Adobe Illustrator. We discovered that Firefly gave better results when the composition had bright or contrasting colors.
Next was creating our style. We prompted Firefly to create a “3d inflated clear glass cube, made of matte transparent shiny material, isolated on a black background.”
Now that we had our composition and style, we imported both images into the image generator. You can play around with the toggles and effect options to get the results you desire.
This tool has been extremely helpful in creating specific, consistent assets, something we struggled with in programs like Midjourney. By switching the composition image while keeping the same style reference and prompt, we were able to make multiple 3D icons with the same look and feel.
A Deeper Dive Into Our Method
Having AI at our disposal allows us to think of bigger, more out-of-the-box ideas for our clients. Here we’ll go step by step through how we leveraged AI to create this organic holiday IG story for Oura.
1. Ideation
To prepare for the holidays, we wanted to tie Oura to the seasonal spirit in a creative way. The client wanted bigger ideas, so this was a good opportunity to try something new. The goal was to create something tongue-in-cheek that would be different from the typical gift-giving ad. The Oura Ring is a smart ring that helps track heart rate, stress, sleep, and more. Who better to benefit from this product than the Grinch himself?
2. Prompt Engineering
Due to budget constraints, we weren’t able to hire the Grinch as our model. However, thanks to AI image generators, we were still able to carry out our concept. We knew we wanted the Grinch’s hand laid out in a clear way and had to describe our vision in specific detail. Using Midjourney, we typed up multiple descriptions to feed into the prompt.
Prompt: The Grinch’s green fuzzy long fingers laid out on a red blanket
Results: Strange options, including the Grinch’s face, which was not what we wanted.
Prompt: The Grinch’s green fuzzy hand with sharp pointy fingers laid out on a red blanket
Results: The fingers were almost too “human,” or the hands were misshapen.
This process took much longer than anticipated. What helped, however, was being able to choose which variation (V1–V4) we liked the most. Once clicked, Midjourney creates four more options similar to that variation.
After countless variations and lots of back and forth, we stumbled upon a good enough view and angle of the hand to move on to the next step. Each prompt wasn’t too far off from the last, so we just had to keep generating until we were satisfied. Click the “U” button with the corresponding number (1 through 4, from top left to bottom right) to select your image. You can also choose to vary your image, zoom out (which typically expands the background), or upscale it.
Prompt: The Grinch’s green fuzzy hand with pointy fingers laid out on a red blanket
Dissecting this final prompt, details like “green fuzzy” and “pointy fingers” give Midjourney the texture and detail it needs to do its job. It’s also important to include what you want for the background, in this case the “red blanket.” You can get even more specific by describing the image style, whether you want it photorealistic, 2D, etc.
3. Photoshopping
The generated image still wasn’t quite there. The Grinch’s fingers are known for being long and sharp, but the ones we kept getting from Midjourney felt too stubby or not pointy enough, no matter the prompt.
We also needed the actual Oura Ring on the hand, definitely not an image-generated ring. So the hand and fingers were altered in Photoshop, and the Oura Ring from the client’s photography was composited in. This was where our design skills and oversight played the biggest role.
The composition we chose was clean enough that we could easily add the ring in a similar position. It was also important to us that, even though this was a silly concept, the execution still looked professional and the layout stayed simple, without distracting from the product itself.
AI & Client Workflow
With AI pushing the boundaries of visual capabilities, it’s important to understand how the data used to generate images was collected, to be mindful of your project’s needs and standards, and to build time for trial and error into your timelines when experimenting with AI.
Maintaining transparency with your clients and audience, and giving credit to the source of your creation, is always a good rule of thumb when incorporating AI into your design. When briefing the client, it’s helpful to present different solutions or tools for executing complex ideas so everyone is aware of the scope of the project.
For our Oura ad, we first pitched the Grinch concept to see if the client was open to it. Then we presented a round of executions, informing them of the AI tools we used, to see if they still liked the direction. We emphasized that although the ad was made heavily with AI, the concept was what made it special, and the ring was still spotlighted. Initially the idea was meant for a paid ad, but given the circumstances, the client decided to share it as an organic post instead.
AI programs such as Midjourney still require prompts written by humans in order to be used to their fullest potential. Our team is always adapting and looking for the latest problem-solving programs to enhance our creative ads and processes. We leverage AI through prompt engineering, image generation, and more, while making sure that our ideas and creativity still stem from our own brains.