Literature Review of Deep Network Compression

Ali Alqahtani, Xianghua Xie and Mark W Jones

Abstract

Deep networks often possess a vast number of parameters, and their significant redundancy in parameterization has become a widely recognized property. This redundancy imposes substantial computational and memory costs that restrict many deep learning applications, motivating a focus on reducing the complexity of models while maintaining their powerful performance. In this paper, we present an overview of popular methods and review recent work on compressing and accelerating deep neural networks. We consider not only pruning methods but also quantization and low-rank factorization methods. This review also aims to clarify these major concepts and to highlight their characteristics, advantages, and shortcomings.
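As a rough illustration of the three method families the abstract names, the following is a minimal NumPy sketch applying magnitude pruning, uniform 8-bit quantization, and low-rank factorization to a single weight matrix. The random matrix, the 90% sparsity level, the symmetric int8 scale, and the rank r = 16 are illustrative assumptions for this sketch, not settings taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
W = rng.normal(size=(64, 128))  # hypothetical dense weight matrix of one layer

# --- Pruning: zero out low-magnitude weights (keep the top 10% by |w|) ---
threshold = np.quantile(np.abs(W), 0.9)
W_pruned = np.where(np.abs(W) >= threshold, W, 0.0)

# --- Quantization: map weights to symmetric int8 codes and dequantize ---
scale = np.abs(W).max() / 127.0
W_q = np.clip(np.round(W / scale), -127, 127).astype(np.int8)
W_dequant = W_q.astype(np.float32) * scale

# --- Low-rank factorization: rank-r SVD approximation, stored as two
# --- thin factors instead of the full matrix ---
r = 16
U, S, Vt = np.linalg.svd(W, full_matrices=False)
W_lowrank = (U[:, :r] * S[:r]) @ Vt[:r, :]

print("pruned nonzeros:", np.count_nonzero(W_pruned))
print("max quantization error:", np.abs(W - W_dequant).max())
print("relative rank-%d error:" % r,
      np.linalg.norm(W - W_lowrank) / np.linalg.norm(W))
```

Each technique trades a controlled amount of approximation error for a smaller or faster model: sparsity from pruning, fewer bits per weight from quantization, and fewer stored parameters from the two thin SVD factors.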

DOI

10.3390/informatics8040077
https://dx.doi.org/10.3390/informatics8040077

Citation

Ali Alqahtani, Xianghua Xie and Mark W Jones, Literature Review of Deep Network Compression, Informatics 8(4):77 (2021). https://dx.doi.org/10.3390/informatics8040077

BibTeX

@article{NetworkCompressionReview,
  title   = {Literature Review of Deep Network Compression},
  author  = {Ali Alqahtani and Xianghua Xie and Mark W Jones},
  journal = {Informatics},
  volume  = {8},
  number  = {4},
  pages   = {77},
  date    = {2021-11-17},
  year    = {2021},
  month   = {11},
  day     = {17},
  issn    = {2227-9709},
  doi     = {10.3390/informatics8040077},
}