Video: Deep learning reconstructs Blade Runner, pointing toward new video compression techniques — #ai #machinelearning

Terence Broad of Goldsmiths, University of London has trained a convolutional autoencoder neural network on frames from the films Blade Runner and A Scanner Darkly, then used it to reconstruct each frame from a downsized, further-compressed version of the video.
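To make the autoencoding idea concrete, here is a minimal sketch of an autoencoder trained to compress and reconstruct toy "frames". This is only an illustration of the general technique: Broad's actual model is a much larger convolutional autoencoder trained with a learned similarity metric, not this tiny dense network trained with plain mean-squared error on random data.

```python
import numpy as np

rng = np.random.default_rng(0)
n_pixels, n_latent = 64, 16           # toy 8x8 frames compressed to 16 numbers

frames = rng.random((200, n_pixels))  # stand-in for real video frames
W_enc = rng.normal(0, 0.1, (n_pixels, n_latent))
W_dec = rng.normal(0, 0.1, (n_latent, n_pixels))

def forward(x):
    z = x @ W_enc                     # encode: compress each frame to a latent code
    return z, z @ W_dec               # decode: reconstruct the frame from the code

lr = 0.01
losses = []
for _ in range(300):
    z, recon = forward(frames)
    err = recon - frames              # reconstruction error
    losses.append(float(np.mean(err ** 2)))
    # gradient descent on the mean-squared reconstruction error
    W_dec -= lr * (z.T @ err) / len(frames)
    W_enc -= lr * (frames.T @ (err @ W_dec.T)) / len(frames)

print(losses[0], losses[-1])          # reconstruction loss drops as training proceeds
```

Because the latent code is much smaller than the frame, the network is forced to learn a compressed representation — the same pressure that, at scale, produces the dreamlike reconstructions in the videos below.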

Blade Runner:

A side-by-side comparison is shown below, with the original on the left and the autoencoded version on the right.

A Scanner Darkly:

Technical details on this project are in his Medium blog post, which also links to his GitHub source.

Research abstract — see his research paper here.

This report details the implementation of an autoencoder trained with a learned similarity metric – one that is capable of modeling a complex distribution of natural images – training it on frames from selected films, and using it to reconstruct video sequences by passing each frame through the autoencoder and re-sequencing the output frames in order. This is primarily an artistic exploration of the representational capacity of the current state of the art in generative models and is a novel application of autoencoders. This model is trained on, and used to reconstruct, the films Blade Runner and A Scanner Darkly, producing new artworks in their own right. Experiments passing other videos through these models are carried out, demonstrating the potential of this method to become a new technique in the production of experimental image and video.

Terence Broad also has a website showcasing projects he is currently working on, including VR, image recognition, and other visual experiments.
