|2:17 PM|
I have two more ideas regarding the use of GANs.
First is to utilise this Google Images scraper script in a rather more complex way: we can first build a smaller dataset, perhaps the frames of a single video like we tried – then run a CNN on it to detect what is inside these frames and save the labels to a .json file, then scrape more matching images from Google and add them to the dataset, then return to the first step, again and again.. this would be a bit more than plain object detection, more like a CNN feeding an RNN for captioning, so we get a brief description like ‘a city – raining’
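The loop could be sketched roughly like this – note that the scraper and the CNN here are stand-in stubs invented for illustration, not real implementations:

```python
import json

# Hypothetical stand-ins: in the real pipeline these would wrap
# a Google Images scraper and a pretrained CNN classifier.
def classify_frame(frame):
    """Pretend CNN: returns a label for a frame (stub for illustration)."""
    return "city" if "city" in frame else "unknown"

def scrape_images(query, n=3):
    """Pretend scraper: returns n image identifiers for a query (stub)."""
    return [f"{query}_{i}.jpg" for i in range(n)]

def expand_dataset(seed_frames, rounds=2):
    """Bootstrapping loop: label the frames, scrape more of the same,
    add them to the dataset, then go back to the first step."""
    dataset = list(seed_frames)
    labels_json = "[]"
    for _ in range(rounds):
        labels = sorted({classify_frame(f) for f in dataset})
        # in the real pipeline this string would be saved to a .json file
        labels_json = json.dumps(labels)
        for label in labels:
            dataset.extend(scrape_images(label))  # grow the dataset
    return dataset, labels_json
```

Each round the dataset grows from whatever the CNN thinks it saw, so the collection slowly drifts toward its own description of itself.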
Second is within the context of the game. You know our moodsetter-like video of a woman designing a mask with this imaginary tool-software on Instagram – actually the people will choose a few things before the VR experience begins. For example: do you like starry nights? If yes, a GAN trained on starry-sky images will be activated, and so on.. Perhaps sometime in the future these things will be faster, and without the process of training the VR software will self-create the world for us; then our signature will leave a mark on the construct, and it will rebuild on itself. A true HYPER_HOLOGRAM.
wednesday 16:43 /// 23.01.2019
What you see here is the individual co-creating with the machine, and the end product is her face/interface, driven by emotions, which becomes a tangible reality. The dialogue of ‘choosing’ and the machine ‘returning’ something back to you, changing the level of reality, is fascinating. We have discussed the use of GANs in this context; a real-world application could be as follows: the user fills in a virtual form before the experience begins, as in the video below, so that in this HYPER_HOLOGRAPHIC world where this AIFX track is playing, certain elements could be algorithmically designed. For example, if one states that he/she is rather into sunny weather, the software could load a GAN model trained on sunny-sky images to create the skybox for the virtual environment [this also would be a nice homage to Vanilla Sky!]
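A minimal sketch of that form-to-GAN mapping – the checkpoint file names, the answer keys, and the mapping itself are all made up here for illustration, not a real API:

```python
# Hypothetical mapping from questionnaire answers to pretrained GAN
# checkpoints; one model per stated preference.
SKYBOX_MODELS = {
    "sunny": "gan_sunny_sky.ckpt",
    "starry": "gan_starry_sky.ckpt",
    "rainy": "gan_rainy_sky.ckpt",
}

def pick_skybox_model(answers):
    """Return the GAN checkpoint matching the user's weather preference
    from the pre-experience form; fall back to sunny if nothing matches."""
    preference = answers.get("weather", "sunny")
    return SKYBOX_MODELS.get(preference, SKYBOX_MODELS["sunny"])
```

The VR software would then load the selected checkpoint and sample it to generate the skybox texture before the experience begins.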
I mean, however truly I realise and marvel at GANs – and feel like I could watch those interpolation videos all day – I also feel the ‘we trained a GAN on n number of x images and here are the outputs y with some effects on them’ approach is rather bland and unimaginative. I sadly do not have a better idea, but it feels like seeing that first-ever camera footage of a train approaching and saying ‘you know what we should do.. shoot a movie of a bus!’ We can at least try to create a more autonomous, self-creating system to turn that almost fetishistic approach upside down.
notes from a chat with a friend:
Actually the initialisation of this project was nothing more than trying a Python script that slows down an audio clip to extremes – it is called paulstretch. Usually the outcome turns out very shoegaze-y when applied to pop tracks, so we tried it on Realiti. To explain why we chose Realiti, the word we like to use is arbitrary: based on random choice or personal whim, rather than any reason or system. It was somewhat random with a bit of personal whim included; first of all it is a fantastic pop song with a really cool video – also the title of the track was suitable… We like Grimes but nothing more than that, really; we think she is a cool producer and artist.
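The core trick behind paulstretch – step very slowly through the input, take the spectrum of each windowed chunk, throw away the phases, and overlap-add the result – can be sketched in a few lines of NumPy. This is a simplified sketch of the idea, not the actual paulstretch script, which adds onset handling, stereo support, and streaming:

```python
import numpy as np

def paulstretch(samples, stretch=8.0, window_size=4096):
    """Minimal sketch of extreme time-stretching, paulstretch-style.

    The output position advances a half window per frame while the input
    position advances `stretch` times slower; randomising the phases of
    each chunk smears the sound into the characteristic ambient wash.
    """
    window = np.hanning(window_size)
    hop_out = window_size // 2          # output advances half a window
    hop_in = hop_out / stretch          # input advances much more slowly
    n_frames = int((len(samples) - window_size) / hop_in)
    out = np.zeros(hop_out * n_frames + window_size)
    pos = 0.0
    for i in range(n_frames):
        chunk = samples[int(pos):int(pos) + window_size] * window
        spectrum = np.fft.rfft(chunk)
        # keep the magnitudes, replace the phases with random ones
        phases = np.exp(2j * np.pi * np.random.rand(len(spectrum)))
        smeared = np.fft.irfft(np.abs(spectrum) * phases) * window
        out[i * hop_out:i * hop_out + window_size] += smeared
        pos += hop_in
    return out
```

With `stretch=8.0` the output is roughly eight times longer than the input, which is how a three-minute pop track becomes a half-hour drone.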
Watching the interpolation videos I somewhat felt like I was remembering something from my childhood – especially the scenes where it creates these over-the-mountain views of stars. There was just something about that shade of blue..