Behind the creation of the artwork
When I first started working on the album cover for Chrysalis, I began tinkering with one of the album's major underlying themes: the life of the butterfly. Although this theme is really an opportunity to tell a personal, introspective story, it still seemed interesting to use elements that graphically resemble butterflies to give a first visual taste of the work. From the beginning I considered using only generative-AI methods for the realization of the artwork, so I drew on my experience with GANs to create new visual material starting from butterfly photographs.
For this first step I prepared a dataset of Creative Commons macro photographs from Flickr and trained a modified version of StyleGAN2. The images were scraped and pre-processed so that the model could be fine-tuned from the pretrained stylegan2-ffhq weights. Here's a small excerpt of the outcome.
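StyleGAN2 expects square images at a fixed resolution, so scraped photographs of arbitrary sizes have to be normalized before training. The sketch below shows the kind of center-crop-and-resize step involved; the function name and nearest-neighbour resampling are my own illustrative choices, not the actual pipeline I used.

```python
import numpy as np

def center_crop_resize(img: np.ndarray, size: int = 1024) -> np.ndarray:
    """Center-crop an image to a square, then resize it to size x size
    with nearest-neighbour sampling. Accepts HxW or HxWxC arrays."""
    h, w = img.shape[:2]
    s = min(h, w)
    top, left = (h - s) // 2, (w - s) // 2
    square = img[top:top + s, left:left + s]
    # nearest-neighbour index map from the target grid back to the square crop
    idx = (np.arange(size) * s // size).astype(np.intp)
    return square[idx][:, idx]
```

In a real pipeline you would use an antialiased resampler (e.g. Pillow's Lanczos filter) rather than nearest-neighbour, since resampling artifacts otherwise leak into the training set.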
At this point I realized that my main interest lay in the textures of butterfly wings, and that I needed a different dataset to explore those patterns further. I turned to the images available on the Natural History Museum's data portal and used some techniques proposed by marian42 to scrape and process them. In particular, the available images had to be cropped, aligned, and the specimen separated from the background with a mask. Once these operations were done, I repeated the training process.
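Museum specimen scans typically place the butterfly against a near-uniform light background, so a simple brightness threshold already gives a workable mask. This is a minimal sketch of that idea, assuming RGB arrays and a near-white background; the actual processing followed marian42's techniques and is more robust than this.

```python
import numpy as np

def specimen_mask(img: np.ndarray, bg_threshold: float = 230.0) -> np.ndarray:
    """Boolean mask: True where a pixel is darker than the near-white
    scan background (mean over the channel axis)."""
    return img.mean(axis=-1) < bg_threshold

def cut_out(img: np.ndarray, mask: np.ndarray, fill: int = 255) -> np.ndarray:
    """Keep the masked (specimen) pixels and paint everything else with a
    flat background value, so training sees only the animal."""
    out = img.copy()
    out[~mask] = fill
    return out
```

A morphological open/close pass on the mask (e.g. with scipy.ndimage) would usually follow, to remove speckle from dust and labels in the scan.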
Here too, however, the main subject remained the butterfly in its entirety, and every transformation of the pigments brought with it morphological variations that were superfluous for my purposes. I therefore re-edited and pre-processed the images with smaller crops, focusing exclusively on the inner parts of the wings.
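Turning one wing image into several texture-only training samples comes down to sampling small patches from it. A minimal sketch of that patch extraction, with illustrative names and parameters (in practice the sampling window would be restricted to the wing regions rather than the whole frame):

```python
import numpy as np

def wing_crops(img: np.ndarray, crop: int = 256, n: int = 8, seed: int = 0):
    """Sample n random crop x crop patches from an image.
    Reproducible via the seed; assumes the image is at least crop x crop."""
    rng = np.random.default_rng(seed)
    h, w = img.shape[:2]
    patches = []
    for _ in range(n):
        top = int(rng.integers(0, h - crop + 1))
        left = int(rng.integers(0, w - crop + 1))
        patches.append(img[top:top + crop, left:left + crop])
    return patches
```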
The result of the process allowed me to explore this generative content freely, embracing some of the artifacts that emerged during the network's training. To generate formats beyond the one the model was trained on, I used some of the techniques in eps696's implementations, which allow images to be generated with arbitrary dimensions and aspect ratios.
I then generated a few thousand images with variable parameters in order to evaluate different directions and possibilities.
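Sweeping a trained StyleGAN2 model usually means varying two things: the random seed that produces the latent vector, and the truncation psi that trades diversity for fidelity. The sketch below shows that sweep logic in isolation, with a stand-in for the generator call; the helper names are hypothetical, though the truncation formula itself is the standard StyleGAN trick.

```python
import numpy as np

LATENT_DIM = 512  # z dimensionality used by StyleGAN2

def latent_from_seed(seed: int) -> np.ndarray:
    """Reproducible z vector for a given seed, as in the StyleGAN sample scripts."""
    return np.random.default_rng(seed).standard_normal(LATENT_DIM)

def truncate(w: np.ndarray, w_avg: np.ndarray, psi: float) -> np.ndarray:
    """Truncation trick: pull a latent toward the average latent.
    psi = 1 leaves it unchanged; psi = 0 collapses to the average."""
    return w_avg + psi * (w - w_avg)

# Enumerate (seed, psi) pairs; each pair would be fed to the trained generator.
sweep = [(s, psi) for s in range(100) for psi in (0.5, 0.7, 1.0)]
```

Rendering each pair and filing the outputs by seed and psi makes it easy to compare directions side by side afterwards.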
After a few sessions with Riccardo Bazzoni, the album's author, we agreed on the final direction for the artwork and selected some of the samples for the final post-processing phase. Very slight modifications to texture and composition were then applied, and here you can see the final result.