Compression of Convolutional Neural Network for Natural Language Processing

Krzysztof Wróbel, Michał Karwatowski, Maciej Wielgosz, Marcin Pietroń, Kazimierz Wiatr

Abstract


Convolutional Neural Networks (CNNs) were created for image classification tasks, but they were quickly applied to other domains, including Natural Language Processing (NLP). Nowadays, solutions based on artificial intelligence appear on mobile devices and in embedded systems, which places constraints on, among others, memory and power consumption. Because of their memory and computing requirements, CNNs need to be compressed before they can be mapped to hardware.

This paper presents the results of compressing efficient CNNs for sentiment analysis. The main steps involve pruning and quantization. The process of mapping the compressed network to an FPGA and the results of this implementation are described. The conducted simulations showed that a 5-bit width is sufficient to ensure no drop in accuracy compared to the floating-point version of the network. Additionally, the memory footprint was significantly reduced (by 85% to 93% compared to the original model).
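The two compression steps named in the abstract, magnitude pruning and low-bit quantization, can be sketched as follows. This is a minimal illustrative sketch, not the paper's exact scheme: the pruning criterion (global magnitude quantile) and the symmetric uniform quantizer are assumptions, and the function names are hypothetical.

```python
import numpy as np

def prune_by_magnitude(weights, sparsity=0.5):
    # Zero out the given fraction of smallest-magnitude weights
    # (a common pruning heuristic; the paper's criterion may differ).
    threshold = np.quantile(np.abs(weights), sparsity)
    return np.where(np.abs(weights) < threshold, 0.0, weights)

def quantize_uniform(weights, bits=5):
    # Symmetric uniform quantization: map floats to integer codes
    # in [-(2^(bits-1)-1), 2^(bits-1)-1], e.g. [-15, 15] for 5 bits.
    levels = 2 ** (bits - 1) - 1
    scale = np.max(np.abs(weights)) / levels
    codes = np.round(weights / scale).astype(np.int8)
    return codes, scale

def dequantize(codes, scale):
    # Reconstruct approximate float weights from integer codes.
    return codes.astype(np.float32) * scale

# Example: prune half the weights, then quantize the rest to 5 bits.
w = np.random.randn(8, 8).astype(np.float32)
w_pruned = prune_by_magnitude(w, sparsity=0.5)
codes, scale = quantize_uniform(w_pruned, bits=5)
w_hat = dequantize(codes, scale)
```

Storing 5-bit codes instead of 32-bit floats alone cuts weight memory by roughly 84%, which is consistent with the 85-93% reduction reported when pruning is applied as well.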

Keywords


natural language processing; convolutional neural networks; FPGA; compression



DOI: https://doi.org/10.7494/csci.2020.21.1.3375
