Visual Experimentation in Dialogue With Generative AI

What impact do intelligent systems have on the design process in communication design? How are the professional image of designers, their tasks, their priorities and their methods changing? And what kind of relationship can they develop with non-human computational intelligence?

Designers are – and always have been – dependent on their tools. Tools not only help them plan and realize a design for its intended purpose; they are also a means of designing faster, more effectively, and more precisely.

With the advance of artificial intelligence (AI), these technologies are permeating all areas of life and work. New design tools emerge every day, challenging designers to explore unfamiliar creative ways of thinking and working. Text and image generators, for example, are currently almost impossible to ignore.

Having new tools available also means having the opportunity to develop innovative creative techniques and methods. These inevitably lead to new aesthetics and ways of seeing, and subsequently shape the style of their era.

This project pursues collaborative experimentation in which AI tools are partners. The goal is to find out how AI can be integrated into the design process and enrich it with new perspectives, insights and alternative directions without interrupting the creative workflow.

The following is the documentation of a dialogue between designer and AI.

TRAIN

I started at a very early stage of the creative process, where there was no defined plan yet, only the intention to research and find inspiration in a process-oriented way. The initial framework was to find new combinations of the shapes and colors of already existing analog collages.

To that end, I prepared a dataset of about 240 photos and scans of my analog collages, as well as several paper snippets. With this, I trained a custom AI model using RunwayML. During training, the GAN (generative adversarial network) found patterns in my dataset that it later recreated when generating new images. That is how GANs work in a nutshell: they generate new images based on the dataset they were previously trained on.
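
RunwayML hides the training loop behind its interface; purely as an illustration of the underlying principle, here is a minimal adversarial training sketch in PyTorch. The tiny network sizes, the 64 × 64 resolution, and all names are illustrative assumptions, not RunwayML's actual implementation.

# Minimal GAN training sketch (PyTorch) – illustrative only, not RunwayML's
# actual implementation. The generator learns to turn random noise into
# images the discriminator can no longer tell apart from the dataset.
import torch
import torch.nn as nn

latent_dim = 100

# Toy networks for 64x64 RGB images; real GANs use convolutional stacks.
G = nn.Sequential(
    nn.Linear(latent_dim, 256), nn.ReLU(),
    nn.Linear(256, 64 * 64 * 3), nn.Tanh(),
)
D = nn.Sequential(
    nn.Linear(64 * 64 * 3, 256), nn.LeakyReLU(0.2),
    nn.Linear(256, 1), nn.Sigmoid(),
)

opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)
bce = nn.BCELoss()

def train_step(real_batch):
    """One adversarial step over a batch of collage scans."""
    b = real_batch.size(0)
    real = real_batch.view(b, -1)
    ones, zeros = torch.ones(b, 1), torch.zeros(b, 1)

    # Discriminator: label dataset images as real, generated ones as fake.
    fake = G(torch.randn(b, latent_dim))
    opt_d.zero_grad()
    d_loss = bce(D(real), ones) + bce(D(fake.detach()), zeros)
    d_loss.backward()
    opt_d.step()

    # Generator: adjust weights so the discriminator calls its output real.
    opt_g.zero_grad()
    g_loss = bce(D(fake), ones)
    g_loss.backward()
    opt_g.step()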

Tools used

RunwayML
Part of the image dataset of about 240 photos and scans for RunwayML

GENERATE

With my Runway model, I then generated dozens of collages. The results were very abstract images whose look had little to do with the prompts I entered; color specifications had the strongest effect on the output.
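
Once such a model is trained, generating dozens of images amounts to sampling random latent vectors and decoding them. A sketch continuing the toy example above; file name and grid size are arbitrary:

# Sampling from the trained toy generator above: each random latent vector
# decodes to one new image in the learned collage aesthetic.
import torchvision.utils as vutils

with torch.no_grad():
    z = torch.randn(48, latent_dim)       # 48 random starting points
    images = G(z).view(-1, 3, 64, 64)     # decode to 64x64 RGB images
vutils.save_image(images, "generated_collages.png", nrow=8, normalize=True)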

At this point, the model is a tool for exploring a particular style or aesthetic quickly and in quantity.

Tools used

RunwayML
Excerpt of dozens of generated collages

CURATE

Now it was time to decide which images to select and include in the further work process.

Image selection process

EXTEND

In order to merge a selection of the best, formally most interesting images into a large collage, they were first distributed on a DALL-E canvas and connected visually, piece by piece, with the prompt »extension of the background«. Each time a »generation frame« was placed and the prompt entered, the program generated four image options. If one was compelling, it could be selected and confirmed.
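
The same outpainting step can also be reproduced programmatically. A sketch using OpenAI's image-edit endpoint for DALL-E 2; the file names are placeholders: canvas.png would hold the arranged fragments, mask.png would mark the transparent gap to fill.

# Rough programmatic equivalent of the DALL-E canvas workflow, using the
# OpenAI image-edit endpoint (DALL-E 2). File names are placeholders.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

result = client.images.edit(
    model="dall-e-2",
    image=open("canvas.png", "rb"),  # square RGBA PNG with the fragments
    mask=open("mask.png", "rb"),     # transparent area = »generation frame«
    prompt="extension of the background",
    n=4,                             # four options, as in the canvas UI
    size="1024x1024",
)
for i, option in enumerate(result.data):
    print(f"option {i}: {option.url}")  # review and pick the compelling one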

Next, I applied the same co-designing process to typographic elements. They were loosely placed on the DALL-E canvas and also supplemented with generated shapes using the »extension of the background« prompt.

Tools used

RunwayML
DALL-E
Combining several small images into one large one with DALL-E
Generating connecting elements with DALL-E

SELECT

The resulting image was exported, cut up, and the fragments were placed back on the canvas and merged with the help of the DALL-E generator. Finally, elements were extracted from it and combined into a kind of new sign system.

Tools used

Photoshop
DALL-E
Iteration process with selected image fragments and elements

REARRANGE

The existing square collage images were first upscaled from 1080 × 1080 to 2160 × 2160 pixels using Img.Upscaler to ensure good print quality. After printing, they were cut up, arranged, and reassembled. Analog compositing took much longer than digital generation, of course, but it was far more controllable.
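
Img.Upscaler is a web service; as a simple programmatic stand-in, plain Lanczos resampling with Pillow doubles the resolution, though a learned upscaler will usually recover finer detail than pure interpolation. File names are placeholders:

# Simple 2x upscale for print with Pillow – a stand-in for Img.Upscaler.
# Pure Lanczos interpolation; learned upscalers usually recover more detail.
from PIL import Image

img = Image.open("collage_1080.png")           # 1080 x 1080 source
big = img.resize((2160, 2160), Image.LANCZOS)  # 2160 x 2160 for print
big.save("collage_2160.png")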

Creation of an analog collage from generated images

IDENTIFY

In order to identify some of the abstract image content that had emerged during the experimentation process, two models from the Hugging Face platform were used. An image-to-text demo tool described each uploaded image with a short sentence, for example »A painting of a snake on a yellow background«, »A white sculpture in the water with a blue sky«, »a woman in a swimsuit is floating in the water«, or simply »a large colorful piece of art«.
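
The text does not name the specific demo model, but the same captioning step can be sketched with the Hugging Face transformers pipeline and BLIP, one common image-captioning model. File names are placeholders:

# Image captioning with the Hugging Face `transformers` pipeline. BLIP is
# one common captioning model; the demo tool's exact model is not named.
from transformers import pipeline

captioner = pipeline("image-to-text",
                     model="Salesforce/blip-image-captioning-base")
for path in ["fragment_1.png", "fragment_2.png"]:  # placeholder file names
    caption = captioner(path)[0]["generated_text"]
    print(f"{path}: {caption}")  # e.g. "a painting of a snake ..."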

The second model, an image-classifier demo tool, assigned a class to each of the same images, or reported the probability of a given keyword. The top results were »rock python«, »shower curtain«, and »tank suit«.
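
The keyword variant corresponds to zero-shot classification; a sketch with CLIP, using the classes reported above as candidate labels (again an assumption about the demo's inner workings):

# Zero-shot classification with CLIP: score an image against free-form
# keywords. The candidate labels below are the classes reported above.
from transformers import pipeline

classifier = pipeline("zero-shot-image-classification",
                      model="openai/clip-vit-base-patch32")
scores = classifier("fragment_1.png",
                    candidate_labels=["rock python", "shower curtain", "tank suit"])
print(scores[0])  # highest-probability label first, with its score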

Tools used

Hugging Face
Image-to-text description of picture content
Image class assignment for picture content

COMBINE

The keywords (classes) identified in the previous step were now used to merge two images into a third. For this purpose, the abstract source images were crossed with new, more photorealistic images generated by Midjourney.

Tools used

Midjourney
Merging selected image sections and their previously classified visual contents with Midjourney

INTERPRET

Some of the previously identified image-to-text descriptions were now used as prompts to create videos. Kaiber lets users define how long the video should be, how much the target frame should differ from the input image, and what style it should have. In the examples, it was noticeable that the associative frames in the middle of the videos were more interesting than the somewhat artificial and overdrawn final results.

Tools used

Kaiber
From abstract to photorealistic images with Kaiber

The principle of dialogue – one collaborator says something, the other listens, responds, perhaps asks a question, before the first speaks again – proved to be an enjoyable way of working and a reliable route to unexpected results.

Of course, the process could be extended forever. For this project, only a small selection of AI tools was used; one could dive much deeper into each step, develop ideas for further processes, then put the good results to use in projects and discard the bad ones.

But for now, I say thank you for collaborating!