Random experiments with generative machine learning models.

LOOKING GLASS V1.1

First up, we have Looking Glass. Notebook link:

https://colab.research.google.com/drive/11vdS9dpcZz2Q2efkOjcwyax4oob6N40G

This uses both an input image and an input text prompt to generate multiple output images. There weren’t many parameters to play with, but the input image and text made major differences in the output images. You can adjust the number of training epochs to control how closely the outputs resemble the input image.
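To keep the trials below straight, here is the set of knobs I varied in each run, written out as a small Python record. The field names are my own shorthand for what the notebook exposes, not its actual variable names.

    # Illustrative only: field names are my shorthand, not the notebook's variables.
    from dataclasses import dataclass

    @dataclass
    class LookingGlassTrial:
        flavor_text: str       # text prompt that steers the generation
        image_name: str        # input/seed image the model is tuned on
        epochs: int            # more epochs -> outputs stay closer to the seed image
        images_generated: int  # number of samples drawn after tuning

    # Trial #1 below, expressed as a record.
    trial_1 = LookingGlassTrial("Blood flows from ancient text", "01608", 50, 9)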

Trial #1

Flavor Text: Blood flows from ancient text
Image Name: 01608
Epochs: 50
Images Generated: 9
Input Image

Output Images

Trial #2

Flavor Text: The void stares back at you
Image Name: 01608
Epochs: 50
Images Generated: 9
Input Image

Output Images

Trial #3

Flavor Text: The void stares back at you
Image Name: 01625
Epochs: 25
Images Generated: 25
Input Image

Output Images

Trial #4

Flavor Text: The void stares back at you.
Image Name: 07114
Epochs: 25
Images Generated: 25
Input Image

Output Images

Trial #5

Flavor Text: Enemy of mine the omega.
Image Name: Day7of50
Epochs: 25
Images Generated: 25
Input Image

Output Images

Trial #6

Flavor Text: Collapsed across time and space.
Image Name: Day7of50
Epochs: 25
Images Generated: 9
I used the blend feature for this set; 50 additional images were used to create more variation across the output images.

Input Image

Output Images

FLESH DIGRESSIONS

For the next set of experiments I used the “Flesh Digressions” Colab notebook. Notebook link:

https://colab.research.google.com/github/dvschultz/ml-art-colabs/blob/master/flesh_digressions.ipynb

I produced three different video outputs using three different .pkl files; they can be found at the link below.

https://drive.google.com/drive/folders/1TtgUHAHH6mdXHeO12wQWWp7_Ev4VLxLn?usp=sharing

This video uses birdtogod.pkl.
This video uses landscapetogod.pkl.
This video uses visionaryart.pkl by Jeramy Torman.
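
The notebook animates a StyleGAN checkpoint (the .pkl file) by walking its latent and noise inputs around closed loops, which is why the videos loop seamlessly. Below is a minimal NumPy sketch of just the latent-loop idea; the generator call itself is omitted, and the frame count, latent size, and radius are arbitrary choices of mine, not the notebook’s defaults.

    # Sketch of a closed circular walk in a 512-D latent space (assumed size).
    import numpy as np

    def circular_latents(n_frames=240, dim=512, radius=1.5, seed=0):
        """Return n_frames latent vectors tracing a closed circle in latent space."""
        rng = np.random.default_rng(seed)
        center = rng.standard_normal(dim)      # anchor point of the loop
        a, b = rng.standard_normal((2, dim))   # two random directions spanning a plane
        a /= np.linalg.norm(a)
        b /= np.linalg.norm(b)
        t = np.linspace(0.0, 2.0 * np.pi, n_frames, endpoint=False)
        # The first and last frames wrap around, so the rendered video loops.
        return center + radius * (np.outer(np.cos(t), a) + np.outer(np.sin(t), b))

    frames = circular_latents()
    print(frames.shape)  # (240, 512) -- one latent vector per video frame

Each row would then be fed to the generator loaded from the .pkl to render one frame of the video.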

NEURAL CELLULAR AUTOMATA

Next up is the Neural Cellular Automata Colab notebook. Notebook link:

https://colab.research.google.com/github/google-research/self-organising-systems/blob/master/notebooks/texture_nca_tf2.ipynb
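
The notebook trains a tiny convolutional update rule that is applied to a grid of cells over and over until the target texture emerges. The sketch below shows one such update step in PyTorch; the notebook itself is written in TF2, and the channel count, hidden width, perception filters, and fire rate here are my assumptions rather than its exact values.

    # One stochastic, residual neural-CA update step (sketch, not the notebook's code).
    import torch
    import torch.nn.functional as F

    CHN = 12  # number of cell-state channels (assumed)

    # Fixed perception filters: identity, Sobel-x, Sobel-y, Laplacian.
    ident = torch.tensor([[0., 0., 0.], [0., 1., 0.], [0., 0., 0.]])
    sobel_x = torch.tensor([[-1., 0., 1.], [-2., 0., 2.], [-1., 0., 1.]]) / 8.0
    lap = torch.tensor([[1., 2., 1.], [2., -12., 2.], [1., 2., 1.]]) / 16.0

    def perceive(x):
        """Apply the four fixed filters to every channel of the cell grid."""
        filters = torch.stack([ident, sobel_x, sobel_x.T, lap])  # (4, 3, 3)
        kernel = filters.repeat(x.shape[1], 1, 1)[:, None]       # (4*CHN, 1, 3, 3)
        return F.conv2d(x, kernel, padding=1, groups=x.shape[1])

    class CARule(torch.nn.Module):
        def __init__(self, chn=CHN, hidden=96):
            super().__init__()
            self.w1 = torch.nn.Conv2d(chn * 4, hidden, 1)
            self.w2 = torch.nn.Conv2d(hidden, chn, 1)
            torch.nn.init.zeros_(self.w2.weight)  # start as a "do nothing" rule
            torch.nn.init.zeros_(self.w2.bias)

        def forward(self, x, fire_rate=0.5):
            dx = self.w2(torch.relu(self.w1(perceive(x))))
            # Each cell updates independently with probability fire_rate.
            mask = (torch.rand(x.shape[0], 1, *x.shape[2:]) < fire_rate).float()
            return x + dx * mask

    grid = torch.zeros(1, CHN, 64, 64)  # blank cell grid
    grid = CARule()(grid)               # one asynchronous update step

Training backpropagates a texture loss through many of these steps so the rule learns to grow the texture of the input image.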

I used the following image as an input

Output Video

DISCO DIFFUSION

I also tried a purely text-to-image model called “Disco Diffusion”.
My input was the following prompts (a sketch of how they are fed to the notebook follows the list):

“I’ve been crawling on my belly” Tool Lyric
“I think drugs have done some good things for us” Bill Hicks Quote
“prayed like a martyr dusk till dawn” Tool Lyric
“somniferous almond eyes” Tool Lyric
“blue color scheme” Descriptor
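
In recent versions of the notebook the prompts are passed in as a dict mapping a frame number to a list of prompt strings, each of which can carry an optional trailing “:<weight>”. The snippet below is how I would expect that cell to look; the variable name and the lack of per-prompt weights are assumptions on my part.

    # Assumed shape of the Disco Diffusion prompt cell (no per-prompt weights used).
    text_prompts = {
        0: [
            "I've been crawling on my belly",
            "I think drugs have done some good things for us",
            "prayed like a martyr dusk till dawn",
            "somniferous almond eyes",
            "blue color scheme",
        ],
    }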

Output Image

I ran into errors with a few of the notebooks:

Deep Dream

Lucid Style Transfer

IllusTrip

I didn’t spend too much time trying to figure out the issues since there are so many notebooks to try. There are a few that I really want to get going, like IllusTrip and Lucid Style Transfer.

For a lot of the following work I used seed images generated with a notebook created by Jeramy Torman. Notebook link:

https://colab.research.google.com/drive/1Zny3nZwzkGqzVd-PBQaEK48w9HC2SaL_?usp=sharing