The Power of AI-generated Art

It is no secret that blog posts with images get more engagement. So just find some relevant ones and use them, you say. But what image would you use, if you hail from a data software firm that makes a query engine for streaming tables?

Or more precisely, in the words of Don McKenzie, head of design at Deephaven: “how the heck are you supposed to pick images for technical topics” such as viewing pandas DataFrames, using Kafka with Parquet, and Redpanda for streaming analytics, just to highlight a few?

AI to the rescue

The truth is, there are only so many artistic, futuristic photos to be found on stock photo sites before everything gets stale. I know, because I face this problem to a certain extent with data science and AI stories. But hey, I’m a writer; I can always weave in an analogy to fit whatever image I do find.

But back to the story: What is a designer at a data-centric technology organization to do? Why, turn to AI, of course.

“I spent the weekend and US$45 in OpenAI credits generating new thumbnails that better represent the content of all 100+ posts from our blog. For attribution, I’ve included the prompt used to create the image as the alt text on all our new thumbnails,” wrote McKenzie.

In a nutshell, he turned to DALL-E 2, an image creation tool by OpenAI that takes a text description and generates a highly realistic, original image. As I wrote in April, think in terms of combining concepts and attributes, such as “an astronaut riding a horse in a photorealistic style” or “a corgi on a beach”.
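Under the hood, the workflow is a simple API call: you send a prompt and a few parameters, and get image URLs back. Here is a minimal sketch of how such a request is shaped, using only the standard library. The endpoint and field names follow OpenAI’s Images API; the prompt is just an example, and no request is actually sent.

```python
import json
import os

# OpenAI's image-generation endpoint (DALL-E). Illustrative sketch only;
# nothing is sent over the network here.
API_URL = "https://api.openai.com/v1/images/generations"

def build_image_request(prompt, n=1, size="1024x1024"):
    """Return the JSON body and headers for a DALL-E generation call."""
    body = {
        "prompt": prompt,  # the text description of the image you want
        "n": n,            # how many variations to generate
        "size": size,      # output resolution, e.g. "256x256" to "1024x1024"
    }
    headers = {
        "Content-Type": "application/json",
        # The key is read from the environment; each generation consumes credits.
        "Authorization": f"Bearer {os.environ.get('OPENAI_API_KEY', '')}",
    }
    return json.dumps(body), headers

body, headers = build_image_request(
    "an astronaut riding a horse in a photorealistic style"
)
print(json.loads(body)["size"])  # → 1024x1024
```

In practice you would POST this body to the endpoint and download the returned image URL; the point is that the entire creative input is that one `prompt` string.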

And yes, realistic edits can be made to an existing image using natural language too, and you can add or remove elements, as well as add shadows without needing to fire up Photoshop. So how did McKenzie fare? Pretty well, it appeared.

Image credit: Deephaven blog

Lessons learned

According to him, creativity is required to figure out the right description from which to generate images. He quickly re-read blog posts for inspiration, then researched possible objects or concepts. And being specific helps, notes McKenzie, “to the point of being redundant”.

Writing the right prompt takes practice too, which means buying more credits (an account only offers 50 credits a day). You get better with practice, though McKenzie points to stylistic modifiers as key to getting interesting images. And yes, browsing the r/dalle2 subreddit is recommended for ideas on writing a good prompt.
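The advice above — be specific, pile on stylistic modifiers — amounts to string construction. A tiny illustrative helper makes the pattern concrete; the subject and modifier strings here are my own examples, not McKenzie’s actual prompts.

```python
def build_prompt(subject, modifiers=()):
    """Join a base description with comma-separated stylistic modifiers."""
    return ", ".join([subject, *modifiers])

# Example: a data-themed subject dressed up with style modifiers.
prompt = build_prompt(
    "a stream of glowing data flowing into a glass table",
    ["digital art", "highly detailed", "vibrant colors"],
)
print(prompt)
# → a stream of glowing data flowing into a glass table, digital art, highly detailed, vibrant colors
```

Swapping the modifier list (“oil painting”, “isometric 3D render”, and so on) is a cheap way to explore styles before spending credits on variations of a single phrasing.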

Finally, be prepared to edit out gibberish text in Photoshop, or do some compositing to get multiple elements into the same image. “Having an AI image generator doesn’t instantly make you a better artist, just like having a Canon 6D Mark II doesn’t make you a better photographer. Curation and judging what looks good is still important,” McKenzie noted.

So, the role of humans won’t be going away soon. But for what it’s worth, I do think this is an excellent idea for generating blog images, though there is a waitlist to access DALL-E 2.

In the meantime, you can find another example of AI-generated art used for a data center magazine here.

Paul Mah is the editor of DSAITrends. A former system administrator, programmer, and IT lecturer, he enjoys writing both code and prose. You can reach him at [email protected].

Image credit: iStockphoto/Artystarty