Please use this identifier to cite or link to this item:
http://dspace2020.uniten.edu.my:8080/handle/123456789/20967
Title: | Small-Scale Deep Network for DCT-Based Images Classification |
Authors: | Borhanuddin B.; Jamil N.; Chen S.D.; Baharuddin M.Z.; Tan K.S.Z.; Ooi T.W.M. |
Issue Date: | 2019 |
Abstract: | The need to acquire high-performance deep neural network models has been a research trend in recent years. Many examples show that achieving high validation accuracy usually requires a very large number of parameters, so the space needed to store these models becomes very large. This can be a disadvantage for edge devices with small storage and low-performance CPUs that embed neural networks for object recognition tasks. In this paper, we investigate the effect of input images that are partially compressed with the Discrete Cosine Transform (DCT) on the performance of two Convolutional Neural Networks (CNNs): CNN-C (a large model) and CNN-RC3 (a small model). DCT is used to reduce data redundancy, but it also risks discarding features the network needs in order to learn efficiently. The results show that both CNN architectures perform as well with DCT features as with raw image data, suggesting that a properly designed CNN model can still achieve high performance on further-compressed images despite the reduction in information. © 2019 IEEE. |
URI: | http://dspace2020.uniten.edu.my:8080/handle/123456789/20967 |
Appears in Collections: | UNITEN Ebook and Article |
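The abstract above describes feeding partially DCT-compressed images to a large and a small CNN. As a rough illustration of that kind of preprocessing (not the authors' published code), the sketch below applies a blockwise 2-D DCT to a grayscale image and keeps only a low-frequency corner of each block; the 8x8 block size, the retained 4x4 corner, and all function names here are illustrative assumptions.

```python
# Minimal sketch of JPEG-style partial compression with the 2-D DCT.
# Assumptions (not from the paper): 8x8 blocks, keep the 4x4 low-frequency corner.
import numpy as np
from scipy.fftpack import dct

BLOCK = 8   # assumed block size, as in JPEG
KEEP = 4    # assumed number of low-frequency rows/columns retained per block

def dct2(block):
    """2-D type-II DCT with orthonormal scaling, applied row- and column-wise."""
    return dct(dct(block, axis=0, norm="ortho"), axis=1, norm="ortho")

def compress_blocks(img):
    """Split a grayscale image into BLOCK x BLOCK tiles, transform each tile
    with the 2-D DCT, and zero out all but the KEEP x KEEP low-frequency
    coefficients. The resulting coefficient planes stand in for the 'DCT
    features' a small CNN would receive instead of raw pixels."""
    h, w = img.shape
    out = np.zeros((h, w), dtype=np.float32)
    mask = np.zeros((BLOCK, BLOCK), dtype=np.float32)
    mask[:KEEP, :KEEP] = 1.0                 # discard high-frequency detail
    for y in range(0, h - h % BLOCK, BLOCK):
        for x in range(0, w - w % BLOCK, BLOCK):
            coeffs = dct2(img[y:y + BLOCK, x:x + BLOCK].astype(np.float32))
            out[y:y + BLOCK, x:x + BLOCK] = coeffs * mask
    return out

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    fake_image = rng.integers(0, 256, size=(32, 32)).astype(np.float32)
    dct_features = compress_blocks(fake_image)
    print(dct_features.shape)  # (32, 32): same spatial size, fewer non-zero values
```

Zeroing the high-frequency coefficients reduces the amount of information the network has to process, which is exactly the trade-off the abstract examines: whether a compact CNN can still learn well from such reduced inputs.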
Files in This Item:
File | Description | Size | Format
---|---|---|---
This document is not yet available.pdf (Restricted Access) | | 396.12 kB | Adobe PDF
Items in DSpace are protected by copyright, with all rights reserved, unless otherwise indicated.