A machine-learning video-prediction system learns and then re-creates a string of nature scenes and images of human life. Woven together at a rhythmic, hypnotic pace, visual snippets and motion patterns appear, are reborn and fall apart. The video segments begin "normally" and are then continued by the computational process into a "new normal". The result is surprising, eerie and poetic, raising both concern and wonder about human-machine relationships and their possible future effects.
The images unveil the ‘heart’ and ‘soul’ of the machine, and the way it perceives nature and humans. The fragmented, hallucinatory imagery, coming from a machinic vision, looks real, alien and abstract all at the same time, evoking an ‘uncanny valley’ feeling.
Somewhere We Live in Little Loops utilises the technology of video generation by next-frame prediction, shedding light on our own sensory and cognitive mechanisms. The digital neural ‘brain’, processing and creating what it ‘sees’, is analogous to the human brain’s visual processing.
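The generation principle the work relies on can be sketched in a few lines. This is a hypothetical toy illustration, not the artwork's actual model: a trained network is replaced here by simple per-pixel linear extrapolation, and each predicted frame is fed back in as input, so the clip drifts from its "normal" seed into a "new normal".

```python
# Toy sketch of autoregressive next-frame prediction (assumption: the real
# work uses a trained neural network; here a linear-extrapolation stand-in).

def predict_next(prev, curr):
    """Stand-in 'model': extrapolate each pixel value linearly, clamped to 0-255."""
    return [min(255, max(0, 2 * c - p)) for p, c in zip(prev, curr)]

def rollout(seed_frames, n_steps):
    """Continue a clip autoregressively: each prediction becomes the next input."""
    frames = list(seed_frames)
    for _ in range(n_steps):
        frames.append(predict_next(frames[-2], frames[-1]))
    return frames

# Two tiny 4-pixel "frames" brightening by 10 per step.
seed = [[100, 100, 100, 100], [110, 110, 110, 110]]
clip = rollout(seed, 3)  # two seed frames plus three machine-continued frames
```

Because every prediction is built only on earlier predictions, small deviations compound over the rollout, which is the mechanism behind the hallucinatory drift the text describes.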
The work highlights the intervention of machine learning in human life. It is a projection into the future of human-machine interaction, with regard to Artificial Intelligence's predictive capabilities and their possible consequences, exploring new moving-image forms enabled by this technology.