6 April 2025
In this tutorial we're taking a first look at how to integrate the image-generation AI tool Stable Diffusion into TouchDesigner. We're creating an audio-reactive texture in an independent component so we can do frame-by-frame animation without worrying about dropped frames. TD is running in real-time, but the generation process itself is not (yet) real-time, since a single frame takes 5-10 seconds to create.
Please keep in mind that this is very experimental and just one possible way of doing it (my current approach). I am very open to hearing about ways of improving it or ideas on how to expand it. See it more as a starting point and inspiration than a perfectly refined technique.
Make sure to download the automatic1111 WebUI (linked below) and some models to work with. The models should be placed inside "stable-diffusion-webui\models\Stable-diffusion". This has only been tested on Windows, running on an NVIDIA RTX 3070 (notebook version).
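A quick way to check that the WebUI and your models are set up correctly is to query its HTTP API from any Python environment. Note that automatic1111 only exposes this API when it is launched with the --api flag (for example by adding it to COMMANDLINE_ARGS in webui-user.bat); the address and port used below are the defaults and are an assumption about your local setup.

import requests

# Default local address of the automatic1111 WebUI (assumption: adjust to your setup)
URL = "http://127.0.0.1:7860"

# Lists the checkpoints the WebUI found in stable-diffusion-webui\models\Stable-diffusion
models = requests.get(f"{URL}/sdapi/v1/sd-models", timeout=30).json()
for m in models:
    print(m["title"])

If this prints the model files you placed in the folder, the API side is ready for TouchDesigner to talk to.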
dotsimulate's SD_API: https://www.patreon.com/posts/sd-api-1-22-85238082
automatic1111: https://github.com/AUTOMATIC1111/stable-diffusion-webui
Models: https://civitai.com/
Parameters explained: https://blog.openart.ai/2023/02/13/the-most-complete-guide-to-stable-diffusion-parameters/
Create your own API: https://www.youtube.com/watch?v=4khcLvGjoX8
IG Post: https://www.instagram.com/reel/Ctue_S_o9eF/?igshid=NTc4MTIwNjQ2YQ==
The prompt used for this example: ultrarealistic surreal flowers, ultra detailed, texture, generative art, focus, wes anderson, kodak, light and shadow
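Inside TouchDesigner, dotsimulate's SD_API component handles the communication with the WebUI, but as an illustration, here is a minimal standalone sketch of the underlying txt2img call that generates one frame with this prompt. The parameter values, image size, and output filename are assumptions for the example, not the exact settings used in the tutorial; the WebUI returns the generated image as a base64-encoded string.

import base64
import requests

URL = "http://127.0.0.1:7860"  # default automatic1111 address (assumption: adjust to your setup)

payload = {
    "prompt": ("ultrarealistic surreal flowers, ultra detailed, texture, "
               "generative art, focus, wes anderson, kodak, light and shadow"),
    "steps": 20,       # example values; see the parameters guide linked above
    "width": 512,
    "height": 512,
    "seed": 42,        # optional; fixing the seed can help keep consecutive frames consistent
}

# One request per frame; at 5-10 seconds each this is not real-time,
# which is why the frames are rendered in an independent component rather than on the main timeline.
response = requests.post(f"{URL}/sdapi/v1/txt2img", json=payload, timeout=300)
response.raise_for_status()

# The response contains the generated image(s) as base64-encoded PNG strings.
image_b64 = response.json()["images"][0]
with open("frame_0001.png", "wb") as f:
    f.write(base64.b64decode(image_b64))

Running a call like this once per frame (and writing out numbered files) is essentially what the frame-by-frame setup does, just driven from within TD.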
This is an example of the outcome: