Compression of Convolutional Neural Network for Natural Language Processing

Authors

DOI:

https://doi.org/10.7494/csci.2020.21.1.3375

Keywords:

natural language processing, convolutional neural networks, FPGA, compression

Abstract

Convolutional Neural Networks (CNNs) were created for image classification tasks. They were quickly applied to other domains, including Natural Language Processing (NLP). Nowadays, solutions based on artificial intelligence appear on mobile devices and in embedded systems, which places constraints on, among others, memory and power consumption. Due to CNNs' memory and computing requirements, they need to be compressed before they can be mapped to hardware.

This paper presents the results of compressing efficient CNNs for sentiment analysis. The main steps involve pruning and quantization. The process of mapping the compressed network to an FPGA and the results of this implementation are described. The conducted simulations showed that a 5-bit width is sufficient to ensure no drop in accuracy compared to the floating-point version of the network. Additionally, the memory footprint was significantly reduced (by between 85% and 93% compared to the original model).
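The two compression steps named above, pruning followed by low-bit quantization, can be sketched in a few lines. The abstract does not specify the paper's exact pruning criterion or quantization scheme, so the magnitude threshold and the uniform symmetric quantizer below are illustrative assumptions, not the authors' method; only the 5-bit width comes from the text.

```python
import numpy as np

def prune_weights(weights, threshold=0.05):
    """Magnitude pruning (assumed criterion): zero out small weights."""
    mask = np.abs(weights) >= threshold
    return weights * mask

def quantize_weights(weights, n_bits=5):
    """Uniform symmetric quantization to n_bits (5 bits, as in the paper)."""
    levels = 2 ** (n_bits - 1) - 1               # 15 levels per sign for 5 bits
    scale = np.max(np.abs(weights)) / levels     # one scale for the whole tensor
    q = np.round(weights / scale).astype(np.int8)  # integers in [-15, 15]
    return q, scale

# Example: compress a small random weight matrix
rng = np.random.default_rng(0)
w = rng.standard_normal((4, 4)).astype(np.float32)
pruned = prune_weights(w)
q, scale = quantize_weights(pruned)
restored = q.astype(np.float32) * scale          # dequantized approximation
```

Storing the 5-bit integers plus a single per-tensor scale instead of 32-bit floats is what yields the large memory-footprint reduction reported above.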

Published

2020-01-27

How to Cite

Wróbel, K., Karwatowski, M., Wielgosz, M., Pietroń, M., & Wiatr, K. (2020). Compression of Convolutional Neural Network for Natural Language Processing. Computer Science, 21(1). https://doi.org/10.7494/csci.2020.21.1.3375
