A method for enhancing the spatial and/or temporal resolution (if applicable) of an input signal, such as an image or a video.
Many imaging devices produce signals of unsatisfactory resolution (e.g., a photo from a cell-phone camera may have low spatial resolution, while a video from a web camera may have both low spatial and low temporal resolution). This method applies digital processing to reconstruct a more satisfactory high-resolution signal.
Previous methods for Super-Resolution (SR) require either multiple images of the same scene or an external database of examples. This method performs SR from a single image (or a single visual source). The algorithm exploits the inherent local data redundancy within visual signals (redundancy both within the same scale and across different scales).
Examples of the method's capabilities can be found here: http://www.wisdom.weizmann.ac.il/~vision/SingleImageSR.html
- Enhancing the spatial resolution of images
- Enhancing the spatial and/or temporal resolution of video sequences
- Enhancing the spatial and/or temporal resolution (if applicable) of other signals (e.g., MRI, fMRI, ultrasound, possibly also audio, etc.)
- No need for multiple low-resolution sources or an external database of examples.
- Superior results are produced by exploiting information inherent in the source signal.
The framework combines the power of classical multi-image super-resolution and example-based super-resolution. This combined framework can be applied to obtain super-resolution from as little as a single low-resolution signal, without any additional external information. The approach is based on the observation that patches in a single natural signal tend to redundantly recur many times inside the signal, both within the same scale and across different scales.
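The cross-scale recurrence observation can be illustrated empirically. The sketch below (a simplified illustration, not the method itself) extracts small patches from a toy signal and from a coarser version of it, then measures how often a coarse-scale patch has a close match at the fine scale; the patch size, the 2x2 block-averaging used as the coarsening operator, and the distance threshold are all illustrative choices, not parameters taken from the method.

```python
import numpy as np

def extract_patches(img, size=5):
    """Collect all size x size patches of a 2-D array as flat vectors."""
    h, w = img.shape
    return np.array([img[i:i + size, j:j + size].ravel()
                     for i in range(h - size + 1)
                     for j in range(w - size + 1)])

def downscale2(img):
    """Coarser scale via 2x2 block averaging (a stand-in for proper blur + decimation)."""
    h, w = img.shape
    return img[:h // 2 * 2, :w // 2 * 2].reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))

# Toy "natural" signal: a smooth ramp plus mild noise.
rng = np.random.default_rng(0)
img = np.tile(np.linspace(0, 1, 32), (32, 1)) + 0.05 * rng.standard_normal((32, 32))

fine = extract_patches(img)                 # patches at the original scale
coarse = extract_patches(downscale2(img))   # patches at a coarser scale

# For each coarse-scale patch, distance to its nearest fine-scale patch.
d = np.sqrt(((coarse[:, None, :] - fine[None, :, :]) ** 2).sum(-1)).min(axis=1)
print(f"{(d < 0.5).mean():.0%} of coarse patches recur (approximately) at the fine scale")
```

In natural images this fraction tends to be high, which is exactly the redundancy the framework relies on.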
Recurrence of patches within the same scale (at sub-pixel misalignments) forms the basis for applying the 'classical super-resolution' constraints to information from a single signal. Recurrence of patches across different (coarser) scales implicitly provides examples of low-resolution / high-resolution pairs of patches, thus giving rise to 'example-based super-resolution' from a single signal (but without any external database or any prior examples).
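The cross-scale, example-based side of this idea can be sketched in a few lines. The toy code below (a minimal nearest-neighbor illustration, not the published algorithm) builds a low-resolution/high-resolution patch database entirely from the input image and its coarser version, then upscales the input by pasting, for each low-resolution patch, the high-resolution "parent" of its most similar coarse-scale example; patch sizes, the block-averaging downscale, and the simple overlap averaging are all assumed simplifications.

```python
import numpy as np

def downscale2(img):
    """Coarser scale via 2x2 block averaging (a stand-in for proper blur + decimation)."""
    h, w = img.shape
    return img[:h // 2 * 2, :w // 2 * 2].reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))

def sr2x(img, lr_size=3):
    """2x upscaling via cross-scale patch examples (nearest-neighbor lookup)."""
    hr_size = 2 * lr_size
    coarse = downscale2(img)
    ch, cw = coarse.shape
    # Database: each LR patch in the coarse image is paired with the HR
    # patch at the corresponding location in the input image itself.
    keys, values = [], []
    for i in range(ch - lr_size + 1):
        for j in range(cw - lr_size + 1):
            keys.append(coarse[i:i + lr_size, j:j + lr_size].ravel())
            values.append(img[2 * i:2 * i + hr_size, 2 * j:2 * j + hr_size])
    keys = np.array(keys)

    h, w = img.shape
    out = np.zeros((2 * h, 2 * w))
    weight = np.zeros((2 * h, 2 * w))
    for i in range(h - lr_size + 1):
        for j in range(w - lr_size + 1):
            q = img[i:i + lr_size, j:j + lr_size].ravel()
            k = np.argmin(((keys - q) ** 2).sum(axis=1))  # nearest coarse example
            out[2 * i:2 * i + hr_size, 2 * j:2 * j + hr_size] += values[k]
            weight[2 * i:2 * i + hr_size, 2 * j:2 * j + hr_size] += 1
    return out / np.maximum(weight, 1)  # average overlapping HR patches

rng = np.random.default_rng(1)
small = np.kron(rng.random((8, 8)), np.ones((4, 4)))  # blocky 32x32 test image
big = sr2x(small)
print(big.shape)  # twice the input resolution in each dimension
```

Note that no external examples are used anywhere: the LR/HR pairs are harvested from the input signal alone, which is the essence of single-image example-based SR.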