Publications

As the only research organization of its kind in the financial sector, we raise Hana Financial Group's profile across a range of emerging technology fields and achieve external recognition at world-renowned academic conferences.

Papers

Edge and Identity Preserving Network for Face Super-Resolution

Publication Date
2021.03.31
Venue
Neurocomputing (2021)
Authors
Jonghyun Kim, Gen Li, Inyong Yun, Cheolkon Jung, Joongkyu Kim
Link
https://arxiv.org/abs/2008.11977

Abstract

Face super-resolution (SR) has become an indispensable function in security solutions such as video surveillance and identification systems, but distortion in facial components remains a great challenge. Most state-of-the-art methods utilize facial priors with deep neural networks; these methods require extra labels, longer training time, and larger computation memory. In this paper, we propose a novel Edge and Identity Preserving Network for Face SR, named EIPNet, to minimize distortion by utilizing a lightweight edge block and identity information. We present an edge block to extract perceptual edge information and concatenate it to the original feature maps at multiple scales. This structure progressively provides edge information during reconstruction to aggregate local and global structural information. Moreover, we define an identity loss function to preserve the identities of SR images. The identity loss function compares feature distributions between SR images and their ground truth to recover identities in SR images. In addition, we provide a luminance-chrominance error (LCE) to separately infer brightness and color information in SR images. The LCE method not only reduces the dependency on color information by dividing brightness and color components but also enables our network to reflect differences between SR images and their ground truth in two color spaces, RGB and YUV. The proposed method facilitates the SR network to elaborately restore facial components and generate high-quality 8× scaled SR images with a lightweight network structure. Furthermore, our network is able to reconstruct a 128 × 128 SR image at 215 fps on a GTX 1080Ti GPU. Extensive experiments demonstrate that our network qualitatively and quantitatively outperforms state-of-the-art methods on two challenging datasets: CelebA and VGGFace2.
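The luminance-chrominance error described in the abstract can be illustrated with a minimal sketch: measure the reconstruction error in both RGB and YUV space, so that brightness (Y) and color (U, V) contribute separately. The BT.601 conversion weights and the L1 distance below are assumptions for illustration, not the paper's exact formulation.

```python
# Hypothetical sketch of the LCE idea: sum the per-pixel L1 errors
# computed in RGB space and in YUV space. Images are lists of
# (r, g, b) tuples with values in [0, 1].

def rgb_to_yuv(pixel):
    """Convert one (r, g, b) pixel to (y, u, v) using BT.601 weights (assumed)."""
    r, g, b = pixel
    y = 0.299 * r + 0.587 * g + 0.114 * b   # luminance
    u = 0.492 * (b - y)                     # blue-difference chrominance
    v = 0.877 * (r - y)                     # red-difference chrominance
    return (y, u, v)

def l1(img_a, img_b):
    """Mean absolute error over two equal-length pixel lists."""
    total = sum(abs(ca - cb)
                for pa, pb in zip(img_a, img_b)
                for ca, cb in zip(pa, pb))
    return total / (3 * len(img_a))

def lce_loss(sr, gt):
    """Combine the RGB-space and YUV-space reconstruction errors."""
    rgb_err = l1(sr, gt)
    yuv_err = l1([rgb_to_yuv(p) for p in sr],
                 [rgb_to_yuv(p) for p in gt])
    return rgb_err + yuv_err
```

Because the YUV term isolates Y from U and V, a brightness mismatch and a color mismatch are penalized through different channels, which is the separation of brightness and color information the abstract refers to.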

