Coupe: Deep Computational Photography Library
Project ‘COUPE’ aims to develop software that evaluates and improves the quality of images and videos based on big visual data. To achieve this goal, we extract sharpness, color, and composition features from images, and develop technologies for restoration and enhancement using these features.
** Each demo may take several seconds due to network latency between Germany and South Korea. **
Photo Horizon Correction
By training a deep convolutional neural network with self-supervised data augmentation on 360° spherical panorama images, we can precisely estimate and correct the horizontal and vertical alignment of slanted photographs, yielding a visually pleasing composition.
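The self-supervision idea can be illustrated without the network itself: starting from an upright source image, apply a known random roll and use that angle as the training label. A minimal numpy sketch, where `rotate_nn`, the nearest-neighbor resampling, and the ±30° angle range are illustrative assumptions (the actual system works with spherical panorama projections):

```python
import numpy as np

def rotate_nn(img, angle_deg):
    """Rotate an image about its center by inverse mapping with
    nearest-neighbor sampling (a stand-in for proper resampling)."""
    h, w = img.shape[:2]
    t = np.deg2rad(angle_deg)
    c, s = np.cos(t), np.sin(t)
    cy, cx = (h - 1) / 2.0, (w - 1) / 2.0
    ys, xs = np.mgrid[0:h, 0:w]
    # For each output pixel, find the source coordinate under the inverse rotation.
    xr = c * (xs - cx) + s * (ys - cy) + cx
    yr = -s * (xs - cx) + c * (ys - cy) + cy
    xi = np.clip(np.round(xr).astype(int), 0, w - 1)
    yi = np.clip(np.round(yr).astype(int), 0, h - 1)
    return img[yi, xi]

def make_training_pair(img, rng):
    """Self-supervised pair: a slanted image and its ground-truth roll angle.
    The network would be trained to regress the angle from the image alone."""
    angle = rng.uniform(-30.0, 30.0)  # assumed slant range
    return rotate_nn(img, angle), angle
```

At inference time, the predicted angle is simply negated and applied back to the photo to level the horizon.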
Image Super Resolution
We propose a novel GAN-based SISR method that produces more realistic results by attaching an additional discriminator operating in the feature domain. This discriminator encourages the generator to produce structural high-frequency details rather than noisy artifacts.
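The shape of the generator objective can be sketched in a few lines. This is only illustrative: the gradient-based `high_freq_features` stands in for the method's learned feature extractor, and the `1e-3` adversarial weight is an assumed hyperparameter, not the paper's:

```python
import numpy as np

def high_freq_features(img):
    """Stand-in feature extractor (image gradients). The actual method
    feeds learned feature maps to the extra discriminator."""
    gy, gx = np.gradient(img.astype(np.float64))
    return np.stack([gx, gy])

def generator_loss(sr, hr, d_feat_score):
    """Generator objective: pixel fidelity plus a feature-domain
    adversarial term. d_feat_score is the feature discriminator's
    probability that the super-resolved image's features look real."""
    pixel = np.mean((sr - hr) ** 2)
    adv = -np.log(d_feat_score + 1e-12)  # generator tries to fool the discriminator
    return pixel + 1e-3 * adv            # assumed weighting
```

Because the adversarial signal is computed on features rather than raw pixels, the generator is rewarded for reproducing structure (edges, textures) instead of arbitrary high-frequency noise.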
Defocus Map Estimation
Shallow focus can emphasize one part of the image over another. Our trained convolutional neural network estimates a dense defocus map that represents the size of the circle of confusion (CoC). By observing small and large contexts simultaneously, the network rarely misses smooth but in-focus areas.
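For reference, the quantity the network regresses per pixel follows from the thin-lens model: the CoC diameter grows as an object moves away from the focus plane. A small sketch of that geometry (the parameter names here are generic, not the method's API):

```python
def coc_diameter(d_obj, d_focus, f, N):
    """Circle-of-confusion diameter under the thin-lens model for an
    object at distance d_obj, with the lens (focal length f, f-number N)
    focused at d_focus. All distances in the same unit, e.g. mm."""
    A = f / N  # aperture diameter
    return A * f * abs(d_obj - d_focus) / (d_obj * (d_focus - f))
```

An object exactly on the focus plane has zero CoC, and the diameter increases monotonically with defocus, which is what the estimated defocus map encodes.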
Photo Color Enhancement
The quality of a photograph depends on numerous attributes, such as sharpness, colorfulness, and composition. Colorfulness is a highly subjective attribute that cannot be described in a few words. Our framework automatically learns the characteristics of high-quality photographs preferred by the public, and enhances a given photograph to exhibit those learned characteristics.
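One very reduced way to picture "enhance toward learned characteristics" is to shift an image's per-channel color statistics toward target statistics. This is a toy stand-in, not the framework's actual learned transform; `target_mean` and `target_std` play the role of statistics the model would learn from high-quality photographs:

```python
import numpy as np

def match_color_stats(img, target_mean, target_std):
    """Standardize each channel and rescale it to target statistics.
    img: float array in [0, 1] with shape (H, W, C)."""
    out = np.empty_like(img, dtype=np.float64)
    for c in range(img.shape[2]):
        ch = img[..., c].astype(np.float64)
        # Remove the channel's own stats, then impose the target stats.
        out[..., c] = (ch - ch.mean()) / (ch.std() + 1e-8) * target_std[c] + target_mean[c]
    return np.clip(out, 0.0, 1.0)
```

A learned enhancer generalizes this idea: instead of two scalars per channel, it predicts a full, content-aware color transform.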
Depth Map Enhancement
Raw depth images from consumer depth cameras suffer from noise and missing values. A reconstruction-based dataset of paired clean and raw depth maps enables the network to learn end-to-end depth map refinement. Our multi-scale Laplacian pyramid network progressively reduces noise and holes from coarse to fine scales.
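The Laplacian pyramid decomposition behind the coarse-to-fine design can be sketched with plain numpy. This shows only the decomposition and exact reconstruction, with average pooling and nearest-neighbor upsampling as assumed stand-ins for the network's learned up/down operators; the refinement itself (applied per pyramid level) is omitted:

```python
import numpy as np

def down(x):
    """Downsample by 2 with average pooling (assumes even dimensions)."""
    h, w = x.shape
    return x[:h // 2 * 2, :w // 2 * 2].reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))

def up(x):
    """Upsample by 2 with nearest-neighbor replication."""
    return np.repeat(np.repeat(x, 2, axis=0), 2, axis=1)

def laplacian_pyramid(depth, levels):
    """Split a depth map into band-pass levels plus a coarse residual.
    A refinement network would denoise each level, coarse to fine."""
    pyr, cur = [], depth
    for _ in range(levels):
        small = down(cur)
        pyr.append(cur - up(small))  # detail lost by downsampling
        cur = small
    pyr.append(cur)                  # coarsest level
    return pyr

def reconstruct(pyr):
    """Coarse-to-fine reconstruction: upsample and add back each detail level."""
    cur = pyr[-1]
    for lap in reversed(pyr[:-1]):
        cur = up(cur) + lap
    return cur
```

With dimensions divisible by 2^levels the reconstruction is exact, so any noise removed at a coarse level stays removed after the fine levels are added back.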