Please use this identifier to cite or link to this item:
https://idr.l1.nitk.ac.in/jspui/handle/123456789/8085
Full metadata record
DC Field | Value | Language |
---|---|---|
dc.contributor.author | Attokaren, D.J. | |
dc.contributor.author | Fernandes, I.G. | |
dc.contributor.author | Sriram, A. | |
dc.contributor.author | Murthy, Y.V.S. | |
dc.contributor.author | Koolagudi, S.G. | |
dc.date.accessioned | 2020-03-30T10:18:04Z | - |
dc.date.available | 2020-03-30T10:18:04Z | - |
dc.date.issued | 2017 | |
dc.identifier.citation | IEEE Region 10 Annual International Conference, Proceedings/TENCON, 2017, Vol. 2017-December, pp. 2801-2806 | en_US |
dc.identifier.uri | http://idr.nitk.ac.in/jspui/handle/123456789/8085 | - |
dc.description.abstract | The process of identifying food items from an image is an interesting field with various applications. Since food monitoring plays a leading role in health-related problems, it is becoming more essential in our day-to-day lives. In this paper, an approach is presented to classify images of food using convolutional neural networks. Unlike traditional artificial neural networks, convolutional neural networks can estimate the score function directly from image pixels. A 2D convolution layer is utilised, which creates a convolution kernel that is convolved with the layer input to produce a tensor of outputs. There are multiple such layers, and their outputs are concatenated in parts to form the final output tensor. The Max-Pooling function is also applied to the data, and the features extracted from it are used to train the network. An accuracy of 86.97% is achieved on the classes of the FOOD-101 dataset using the proposed implementation. © 2017 IEEE. | en_US |
dc.title | Food classification from images using convolutional neural networks | en_US |
dc.type | Book chapter | en_US |
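The abstract above describes a stack of 2D convolution layers followed by max-pooling, trained end-to-end on the 101 classes of the FOOD-101 dataset. The following is a minimal, illustrative sketch of such a pipeline using tf.keras; it is not the authors' published architecture, and the layer counts, filter sizes, and the 224x224 input size are assumptions made for demonstration only.

```python
# Minimal sketch of a Conv2D + MaxPooling image classifier for the 101
# Food-101 categories. All layer sizes and the input resolution are
# illustrative assumptions, not the configuration reported in the paper.
import tensorflow as tf
from tensorflow.keras import layers, models


def build_food_classifier(input_shape=(224, 224, 3), num_classes=101):
    model = models.Sequential([
        layers.Input(shape=input_shape),
        # Each Conv2D creates a convolution kernel that is convolved with
        # the layer input to produce a tensor of outputs (as in the abstract).
        layers.Conv2D(32, (3, 3), activation="relu", padding="same"),
        layers.MaxPooling2D((2, 2)),
        layers.Conv2D(64, (3, 3), activation="relu", padding="same"),
        layers.MaxPooling2D((2, 2)),
        layers.Conv2D(128, (3, 3), activation="relu", padding="same"),
        layers.MaxPooling2D((2, 2)),
        # Flatten the pooled feature maps and classify over 101 classes.
        layers.Flatten(),
        layers.Dense(256, activation="relu"),
        layers.Dropout(0.5),
        layers.Dense(num_classes, activation="softmax"),
    ])
    model.compile(optimizer="adam",
                  loss="categorical_crossentropy",
                  metrics=["accuracy"])
    return model


if __name__ == "__main__":
    model = build_food_classifier()
    model.summary()
```

Training such a network on Food-101 would additionally require an image input pipeline (e.g. resizing, normalisation, and one-hot labels), which is omitted here for brevity.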
Appears in Collections: | 2. Conference Papers |
Files in This Item:
There are no files associated with this item.