Super resolution of images and video is a complex task that draws on an array of perceptual abilities, from object recognition to motion-flow estimation. SinGAN's architecture showed that state-of-the-art (SOTA) super resolution from a single training image, without priors, is possible. TSR is an architecture that performs temporal super resolution on videos and has shown SOTA performance when trained on a single video. In this project we modified SinGAN's architecture and explored its ability to generalize its super resolution capabilities to 3D video data; the main difference from TSR's architecture is our use of GANs and an adversarial training scheme. To achieve this, we extended SinGAN's architecture to support temporal-spatial patches and tuned the architecture and hyperparameters. The results are compared to the SOTA solution (TSR) using several metrics.
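The move from 2D to temporal-spatial patches can be illustrated with a minimal sketch. This is not the project's actual implementation; the patch sizes and the use of NumPy's `sliding_window_view` are illustrative assumptions, meant only to show how a patch that SinGAN would take from a single image gains a time dimension when the input is a video volume.

```python
import numpy as np
from numpy.lib.stride_tricks import sliding_window_view

# Illustrative video volume: 16 frames of 64x64 grayscale, (frames, H, W).
video = np.random.rand(16, 64, 64)

# 2D spatial patches from a single frame, as in image-based SinGAN.
patches_2d = sliding_window_view(video[0], (7, 7))

# 3D temporal-spatial patches spanning 3 consecutive frames; patch size
# (3, 7, 7) = (frames, height, width) is an arbitrary illustrative choice.
patches_3d = sliding_window_view(video, (3, 7, 7))

print(patches_2d.shape)  # (58, 58, 7, 7): one patch per spatial position
print(patches_3d.shape)  # (14, 58, 58, 3, 7, 7): also slides over time
```

The only conceptual change is the extra leading window axis: each patch now carries short-range motion information in addition to spatial texture, which is what the adversarial patch discriminator must learn to judge.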