Efficient Pre-Processor for CNN (Open Access)

Convolutional Neural Networks (CNNs) are rapidly being deployed in ADAS and autonomous driving for object detection, recognition, and semantic segmentation. Prior art for supporting CNNs (HW IP or multi-core SW) does not address efficient implementation of the first layer, the YUV color space, or output stride support. This paper proposes a new pre-processing technique to enhance CNN-based HW IP or multi-core SW solutions. The pre-processor enables three new features: (1) higher parallelism for the first layer via first-layer boosting, (2) efficient YUV color space support, and (3) efficient output stride support. The pre-processor uses a novel phase-split method to enable these features: the input is split into multiple phases based on spatial location, e.g., 2 phases for the YUV 4:2:0 format and 4 phases for an output stride of 2. The proposed unified solution achieves utilization above 90% for the first layer and a 2-4x bandwidth reduction for an output stride of 2. For the YUV color space, it reduces computation by a factor of 2 along with a saving of ∼0.1 mm² of silicon area, with negligible loss in accuracy.
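
The abstract does not detail the phase-split operation itself; the sketch below is only an illustration (not the authors' implementation) of how a stride-2 input could be split into four spatial phases, assuming a NumPy array in H x W x C layout. The same idea extends to the YUV 4:2:0 case, where the subsampled chroma planes naturally form separate phases.

```python
import numpy as np

def phase_split_stride2(x):
    """Illustrative phase split for an output stride of 2 (assumed layout: H x W x C).

    Each phase collects the pixels at one of the four (row, col) parities,
    so a stride-2 first-layer convolution can be computed as a stride-1
    convolution over H/2 x W/2 phase planes with 4x the channel depth.
    """
    h, w, c = x.shape
    assert h % 2 == 0 and w % 2 == 0, "expects even spatial dimensions"
    # Four spatial phases: (even,even), (even,odd), (odd,even), (odd,odd).
    phases = [x[dy::2, dx::2, :] for dy in (0, 1) for dx in (0, 1)]
    # Concatenate the phases along the channel axis: (H/2, W/2, 4*C).
    return np.concatenate(phases, axis=-1)

# Example: a 224x224 RGB frame becomes a 112x112x12 tensor.
frame = np.random.rand(224, 224, 3).astype(np.float32)
print(phase_split_stride2(frame).shape)  # (112, 112, 12)
```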

Keywords: CNN (CONVOLUTIONAL NEURAL NETWORK); COLOR SPACES; DEEP LEARNING; OUTPUT STRIDE; PRE-PROCESSING; SKIP; UTILIZATION

Document Type: Research Article

Publication date: January 29, 2017
