Please use this identifier to cite or link to this item:
https://idr.l1.nitk.ac.in/jspui/handle/123456789/14867
Full metadata record
DC Field | Value | Language |
---|---|---|
dc.contributor.author | Sankar R. | |
dc.contributor.author | Nair A. | |
dc.contributor.author | Abhinav P. | |
dc.contributor.author | Mothukuri S.K.P. | |
dc.contributor.author | Koolagudi S.G. | |
dc.date.accessioned | 2021-05-05T10:15:54Z | - |
dc.date.available | 2021-05-05T10:15:54Z | - |
dc.date.issued | 2020 | |
dc.identifier.citation | 2020 International Conference on Artificial Intelligence and Signal Processing, AISP 2020 | en_US |
dc.identifier.uri | https://doi.org/10.1109/AISP48273.2020.9073284 | |
dc.identifier.uri | http://idr.nitk.ac.in/jspui/handle/123456789/14867 | - |
dc.description.abstract | Image colorization is useful in several applications, such as restoring old photographs and storing images in grayscale, which requires less space, for later colorization. The problem is hard, however, since many plausible color combinations exist for any given grayscale image. Recent approaches address the problem with deep learning but, to achieve good performance, require heavily processed inputs along with additional elements such as semantic maps. In this paper, an attempt is made to generalize the colorization procedure using a conditional Deep Convolutional Generative Adversarial Network (DCGAN) augmented with a "Perceptual Loss". The network is trained on the CIFAR-100 dataset. The results of the proposed generative model with perceptual loss are compared with existing systems: a standard GAN model and a U-Net convolutional model. © 2020 IEEE. | en_US |
dc.title | Image Colorization Using GANs and Perceptual Loss | en_US |
dc.type | Conference Paper | en_US |
Appears in Collections: | 2. Conference Papers |
Files in This Item:
There are no files associated with this item.
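The abstract describes training a conditional DCGAN whose generator objective adds a perceptual loss to the usual adversarial term. The record attaches no code, so the following is only a minimal sketch of that idea, assuming PyTorch, a pretrained torchvision VGG16 as the perceptual feature extractor, the relu3_3 layer cut-off, and an illustrative weight `lambda_perc`; none of these specifics come from the paper.

```python
# Hypothetical sketch (not the authors' code): adversarial + perceptual generator loss.
import torch
import torch.nn as nn
from torchvision.models import vgg16


class PerceptualLoss(nn.Module):
    """MSE between VGG16 feature maps of the generated and target color images."""

    def __init__(self, layer_index: int = 16):
        super().__init__()
        # Frozen VGG16 truncated at an early layer (index 16 ~ relu3_3); the layer
        # choice is an assumption, not taken from the paper.
        self.features = vgg16(pretrained=True).features[:layer_index].eval()
        for p in self.features.parameters():
            p.requires_grad = False
        self.criterion = nn.MSELoss()

    def forward(self, generated: torch.Tensor, target: torch.Tensor) -> torch.Tensor:
        # Both inputs are 3-channel color images, e.g. 32x32 CIFAR-100 samples.
        return self.criterion(self.features(generated), self.features(target))


def generator_loss(disc_fake_logits: torch.Tensor,
                   generated: torch.Tensor,
                   target: torch.Tensor,
                   perceptual: PerceptualLoss,
                   lambda_perc: float = 10.0) -> torch.Tensor:
    """Conditional-GAN generator objective: adversarial term plus weighted perceptual term.

    lambda_perc is an illustrative weight, not a value reported in the paper.
    """
    adversarial = nn.functional.binary_cross_entropy_with_logits(
        disc_fake_logits, torch.ones_like(disc_fake_logits))
    return adversarial + lambda_perc * perceptual(generated, target)
```

In such a setup the discriminator would be conditioned on the grayscale input (concatenated with either the real or the generated color image), while the perceptual term pushes the generator toward outputs whose high-level features match the ground-truth colors rather than matching them only pixel by pixel.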