Abstract and keywords
Abstract (English):
The article discusses one of the most recent approaches to colorizing a black-and-white image with deep learning methods. Colorization is performed by a deep convolutional neural network whose architecture is built around a ResNet model pre-trained on the ImageNet dataset. The network receives a black-and-white image and returns a colorized (color) image. Because ResNet requires input dimensions that are a multiple of 255, a routine was written that enlarges the image to the required size by adding a border frame. The network operates in the CIE Lab color model, which separates the black-and-white (luminance) component of the image from the color components. The network was trained on the Places365 dataset, which contains 365 different classes such as animals, landscape elements, and people; training was carried out on an Nvidia GTX 1080 graphics card. The result is a trained neural network capable of colorizing images of any size and format; for example, a 256 by 256 pixel image is colorized in 0.08 seconds. Because of the composition of the training dataset, the resulting model is oriented toward natural landscapes and urban scenes.
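The pipeline described above can be summarised with a minimal sketch. The code below is illustrative only and is not taken from the article: it assumes PyTorch and scikit-image are available, that colorizer is a ResNet-backed network that maps the L (lightness) channel of a CIE Lab image to the two chrominance channels a and b, and that the network returns a and b at the same spatial resolution as its input. The padding multiple is a parameter; the article states that the input size must be a multiple of 255.

import numpy as np
import torch
from skimage import color

def pad_to_multiple(img, multiple):
    # Pad an H x W x 3 array with a black frame so that both sides become
    # a multiple of `multiple` (the article's framing/enlargement step).
    h, w = img.shape[:2]
    new_h = -(-h // multiple) * multiple   # ceiling division
    new_w = -(-w // multiple) * multiple
    padded = np.zeros((new_h, new_w, 3), dtype=img.dtype)
    padded[:h, :w] = img
    return padded, (h, w)

def colorize(gray_rgb, colorizer, multiple=255):
    # gray_rgb: H x W x 3 uint8 array holding the black-and-white input,
    # replicated across the three channels.
    padded, (h, w) = pad_to_multiple(gray_rgb, multiple)
    lab = color.rgb2lab(padded / 255.0)          # CIE Lab separates L from a, b
    L = lab[:, :, :1]                            # the network only sees lightness
    L_tensor = torch.from_numpy(L).permute(2, 0, 1).unsqueeze(0).float()
    with torch.no_grad():
        ab = colorizer(L_tensor)                 # predicted a and b channels
    ab = ab.squeeze(0).permute(1, 2, 0).cpu().numpy()
    lab_out = np.concatenate([L, ab], axis=2)    # reattach the original L
    rgb = color.lab2rgb(lab_out)                 # back to RGB
    return (rgb[:h, :w] * 255).astype(np.uint8)  # crop the frame back off

Working in CIE Lab is convenient here because the L channel is essentially the grayscale input itself, so the network only has to predict the two remaining chrominance channels.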

Keywords:
ResNet, convolutional neural network, CIE Lab, Places365, image colorization
