Neural Network Compression and Storage Using Linear Feedback Shift Registers (LFSRs)

Session Number

Project ID: CMPS 33

Advisor(s)

Prof. Yanjing Li, University of Chicago

Discipline

Computer Science

Start Date

17-4-2024 9:40 AM

End Date

17-4-2024 9:55 AM

Abstract

This research explores the application of Linear Feedback Shift Registers (LFSRs) to enhance the compression of neural networks. An LFSR uses a linear function of its previous state to determine its next input bit and is commonly used to generate bit sequences and pseudo-random numbers; such pseudo-random sequences can serve as approximations of network weights. Compressed neural networks offer a transformative solution by significantly reducing memory demands. The study examines weight visualizations of neural networks and explores how well LFSR-generated sequences can approximate those weights, which could improve storage efficiency without compromising model performance. We hope to identify patterns that can be used to optimize the compression process, leading to more efficient neural network implementations. By leveraging the properties of LFSRs in conjunction with neural network weight visualization techniques, we aim to uncover novel strategies for enhancing neural network compression while maintaining or even improving model accuracy. This approach has the potential to contribute to the field of neural network compression and pave the way for more streamlined and resource-efficient deep learning applications.
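To illustrate the idea behind the abstract, the following is a minimal sketch of a Fibonacci LFSR that emits a pseudo-random bit sequence from a small stored state (a seed and a set of tap positions). The tap positions and the bit-to-weight mapping shown here are illustrative assumptions for exposition only, not the method used in the study.

```python
def lfsr_bits(seed, taps=(16, 14, 13, 11), nbits=16):
    """Yield pseudo-random bits from a Fibonacci LFSR (illustrative taps)."""
    state = seed & ((1 << nbits) - 1)
    while True:
        # XOR the tap positions (1-indexed from the least significant bit)
        # to form the feedback bit -- the "linear function" of the state.
        feedback = 0
        for t in taps:
            feedback ^= (state >> (t - 1)) & 1
        out = state & 1
        # Shift the register and insert the feedback bit at the top.
        state = (state >> 1) | (feedback << (nbits - 1))
        yield out


def lfsr_weights(seed, n, nbits=16):
    """Pack LFSR bits into n fixed-point values in [-1, 1) as a hypothetical
    stand-in for pseudo-random weight approximations."""
    gen = lfsr_bits(seed, nbits=nbits)
    weights = []
    for _ in range(n):
        word = 0
        for _ in range(nbits):
            word = (word << 1) | next(gen)
        # Interpret the word as signed fixed-point in [-1, 1).
        signed = word - (1 << nbits) if word >= (1 << (nbits - 1)) else word
        weights.append(signed / float(1 << (nbits - 1)))
    return weights


if __name__ == "__main__":
    # Only the seed and tap positions need to be stored to regenerate the
    # full sequence, which is the storage advantage being explored.
    print(lfsr_weights(seed=0xACE1, n=8))
```

The storage benefit sketched here is that a long, reproducible sequence of approximate weight values can be regenerated on demand from a few bytes of LFSR configuration rather than storing every weight explicitly.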
