Please use this identifier to cite or link to this item: http://dspace2020.uniten.edu.my:8080/handle/123456789/21133

Full metadata record
| DC Field | Value | Language |
|---|---|---|
| dc.contributor.author | Janahiraman T.V. | en_US |
| dc.contributor.author | Subuhan M.S.M. | en_US |
| dc.date.accessioned | 2021-09-02T07:20:09Z | - |
| dc.date.available | 2021-09-02T07:20:09Z | - |
| dc.date.issued | 2019 | - |
| dc.identifier.uri | http://dspace2020.uniten.edu.my:8080/handle/123456789/21133 | - |
| dc.description.abstract | Traditional machine learning methods for traffic light detection and classification are being replaced by recent deep learning object detection methods built on convolutional neural networks (CNNs). This paper presents a deep learning approach for robust traffic light detection by comparing two object detection models, Single Shot Multibox Detector (SSD) MobileNet V2 and Faster R-CNN, and by evaluating the flexibility of the TensorFlow Object Detection Framework for solving real-time problems. Our experimental study shows that Faster R-CNN delivers 97.015%, outperforming SSD by 38.806%, for a model trained on 441 images. © 2019 IEEE. | en_US |
| dc.language.iso | en | en_US |
| dc.title | Traffic light detection using tensorflow object detection framework | en_US |
| dc.type | conference paper | en_US |
| item.openairecristype | http://purl.org/coar/resource_type/c_5794 | - |
| item.grantfulltext | reserved | - |
| item.cerifentitytype | Publications | - |
| item.openairetype | conference paper | - |
| item.fulltext | With Fulltext | - |
| item.languageiso639-1 | en | - |
Appears in Collections: UNITEN Ebook and Article
Files in This Item:
| File | Description | Size | Format |
|---|---|---|---|
| This document is not yet available.pdf (Restricted Access) | | 396.12 kB | Adobe PDF |
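The abstract above describes detecting traffic lights with models trained through the TensorFlow Object Detection Framework (SSD MobileNet V2 and Faster R-CNN). Below is a minimal sketch of how an exported detector from that framework could be loaded and run on a single image. The model directory, image path, and score threshold are assumptions for illustration only; the paper's trained checkpoints and evaluation settings are not published in this record.

```python
import tensorflow as tf

# Hypothetical paths: not taken from the paper.
MODEL_DIR = "exported_model/saved_model"   # e.g. an exported SSD MobileNet V2 or Faster R-CNN
IMAGE_PATH = "traffic_scene.jpg"

# Load the exported detection model (TensorFlow SavedModel format).
detect_fn = tf.saved_model.load(MODEL_DIR)

# Read and decode the test image, then add a batch dimension.
image = tf.io.decode_jpeg(tf.io.read_file(IMAGE_PATH), channels=3)
input_tensor = tf.expand_dims(image, axis=0)  # shape [1, H, W, 3], dtype uint8

# Run inference; models exported by the TF Object Detection API return a
# dictionary containing boxes, class indices, and confidence scores.
detections = detect_fn(input_tensor)

boxes = detections["detection_boxes"][0].numpy()    # normalized [ymin, xmin, ymax, xmax]
scores = detections["detection_scores"][0].numpy()
classes = detections["detection_classes"][0].numpy().astype(int)

# Keep only confident detections; the 0.5 threshold is an arbitrary choice here.
for box, score, cls in zip(boxes, scores, classes):
    if score >= 0.5:
        print(f"class={cls} score={score:.3f} box={box}")
```

In this setup, swapping between the two detectors compared in the paper would only require pointing MODEL_DIR at a different exported SavedModel, which is the kind of flexibility of the TensorFlow Object Detection Framework that the abstract highlights.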