The Computer’s Hallucination

What do computers see

while we leave them be

and on the couch we watch TV?

Resulting video from the audio mix and a model weight from StyleGAN, my dataset, and birds.

Resulting video from the audio mix and a model weight from StyleGAN, my dataset, and landscapes.

I used a base Google Colab notebook, VisionaryArtGenerator, altered it, and then generated 7,000 images from the input. Using those images as a dataset, I trained a StyleGAN to create a .pkl file to feed into another Google Colab notebook called Lucid Sonic Dreams.
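If you want to try the training step yourself, it looks roughly like this. This is a minimal sketch assuming NVIDIA's stylegan2-ada-pytorch repo is cloned with its dependencies installed; the folder names and options are illustrative placeholders, not my exact settings.

```python
# Minimal sketch of the StyleGAN training step (stylegan2-ada-pytorch assumed).
# Paths and options are placeholders, not the actual project files.
import subprocess

# Pack the ~7,000 generated images into the dataset format StyleGAN expects.
subprocess.run([
    "python", "dataset_tool.py",
    "--source=generated_images/",       # folder of images from VisionaryArtGenerator
    "--dest=datasets/mydataset.zip",
], check=True)

# Fine-tune from an existing .pkl (transfer learning) instead of training from scratch.
subprocess.run([
    "python", "train.py",
    "--outdir=training-runs",
    "--data=datasets/mydataset.zip",
    "--gpus=1",
    "--resume=starting_model.pkl",      # the original .pkl linked below
], check=True)
```

Resuming from an existing .pkl is far faster than training from scratch, which is why the process starts from the original .pkl linked below.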

VisionaryArtGenerator-https://colab.research.google.com/drive/1Zny3nZwzkGqzVd-PBQaEK48w9HC2SaL_?usp=sharing

Original .pkl to start the process-https://drive.google.com/file/d/1Vsx7oGYhGchjHyOs2UOzofaNNULsc3yJ/view?usp=sharing

Lucid Sonic Dreams-https://colab.research.google.com/drive/1T6D9Dkjak19tiynIEBIIk4ozZbJd1yFN?usp=sharing
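The Lucid Sonic Dreams step itself boils down to a few lines of Python. Here is a minimal sketch assuming the lucidsonicdreams package; the file names are placeholders for the audio mix and the trained .pkl, not my actual files.

```python
# Minimal sketch of the Lucid Sonic Dreams step (lucidsonicdreams package assumed).
from lucidsonicdreams import LucidSonicDream

dream = LucidSonicDream(
    song="audio_mix.mp3",        # the audio mix driving the visuals
    style="trained_model.pkl",   # the .pkl produced by StyleGAN training
)

# Render the latent-space walk: motion and pulse react to the audio.
dream.hallucinate(
    file_name="output.mp4",
    resolution=512,              # output resolution (illustrative)
    fps=24,                      # frames per second (illustrative)
)
```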

Sample of the dataset.
Audio source files.
I trained two different models on the dataset; this is the start of the landscape model.
This is the output of the model trained with the dataset.
This time I trained the model on birds.
Output from the birds model.
LandscapeToGod process video.
LandscapeToGod latent space walk.
BirdToGod process video.
BirdToGod latent space walk.

BONUS ROUND

I used the same process as above with the audio I used for Not A Dream, and this is what came out…