Co-Creating with AI

Exploring and Integrating Stable Diffusion Models in the Creative Process of Poster Design.

Practical Exploration

01    Ideation


01 .01    Exploring

There are many AI tools available for generating graphic images. To understand the visual potential of these tools, I started by exploring the most well-known ones, such as DALL·E, Midjourney, and Stable Diffusion, and comparing their visual differences.
Midjourney Explore Archive


Using basic text-to-image prompts, and without a specific topic in mind, I asked each AI tool to generate a graphic poster.
DALL·E, txt2img


Stable Diffusion v1.5 was released in autumn 2022 by Runway ML, a partner of Stability AI, and builds on the v1.2 model.
Stable Diffusion, v1.5, txt2img


Stable Diffusion’s xl1.0 model (SDXL) can generate “legible text” and diverse art styles. It is considered an improvement over v1.5: its larger model and training dataset allow for superior compositions.
Stable Diffusion, xl1.0, txt2img


Graphic-art is a model made by the user Pmejna on CivitAI, fine-tuned from Stable Diffusion v1.5 and focused mainly on graphic design. It is a good example of how a model can be trained or fine-tuned specifically for creating graphic images.
Stable Diffusion, graphic-art, txt2img

01 .02   Ideas

This phase was crucial for translating abstract mental ideas into visual representations, laying a foundation for subsequent experimentation. It involved idea generation through mind mapping and sketching to develop initial concepts.
Figural reasoning

01 .03    Concept
To design the posters, I needed a topic to explore and experiment with. Inspired by the thesis topic, I came up with the title “Design Between Human and Machine”. Designing a poster alone was an introspective process: I was both the client and the designer.

Planning was also important for trying out different creative approaches. Visualising the creative design process in a diagram, inspired by Goldschmidt’s theory, guided me. Combined with a general, systematic, step-by-step method, this helped me stay focused and avoid getting lost in various directions.
Conceptual reasoning
↑ Mind map around topic
↑ Personal visualisation of Goldschmidt’s theory
↑ Step-by-step method

01 .04    Prompt Level

For text-to-image generation, I tried textual descriptions of varying complexity for the prompts: simple, general, and complex ones.

Simple Prompts are basic and straightforward, providing minimal detail and giving the AI more freedom to interpret the text. Example: “Create a blue abstract pattern.”

Stable Diffusion text prompts.
Stable Diffusion, v1.5, txt2img, Simple
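The difference between these prompt levels can be sketched as layered text composition. The helper below is a hypothetical illustration, not part of the project: it stacks optional style and detail fragments onto a base subject, the way a complex prompt extends a simple one.

```python
def build_prompt(subject, style=None, details=None):
    """Compose a text-to-image prompt from optional layers of detail.

    A simple prompt is just the subject; a complex prompt adds a style
    and a list of detail fragments, joined in the comma-separated form
    commonly used with Stable Diffusion.
    """
    parts = [subject]
    if style:
        parts.append(style)
    if details:
        parts.extend(details)
    return ", ".join(parts)

# Simple prompt: minimal detail, maximal interpretive freedom for the model.
simple = build_prompt("a blue abstract pattern")

# Complex prompt: the same subject constrained by style and detail layers.
complex_prompt = build_prompt(
    "a blue abstract pattern",
    style="graphic design poster",
    details=["bold geometric shapes", "high contrast", "grid layout"],
)

print(simple)
print(complex_prompt)
```

The same layering idea carries over to any txt2img model: the more fragments appended, the less freedom the model has to interpret the text.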

© 2024 BASEL