Channel Coding
Low-Density Parity-Check Codes
Forward Error Correction (FEC) is a key component of virtually every modern digital communication system. It enables the reliable transmission of any kind of digital content by correcting the errors caused by a noisy transmission channel.
Shortly after the breakthrough of the iterative decoding principle in Turbo codes, Low-Density Parity-Check (LDPC) codes, originally presented by Gallager in 1963, were rediscovered by MacKay in 1995:
"The practical performance of Gallager’s 1962 codes, using Gallager’s 1962 decoding algorithm, would have broken practical coding records up until 1993."
Today, LDPC codes are among the most powerful state-of-the-art channel codes and are included in many communication systems. Examples include IEEE 802.16e (WiMAX), IEEE 802.11n (WLAN), DVB-T2, DVB-S2, ITU-T G.hn (powerline communication), IEEE 802.3 10 Gbit/s Ethernet, and many more.
Flexible LDPC Codes
With a conventional design process, these codes have to be tailored specifically to the considered transmission setup if they are to perform close to the theoretical limits. The varying requirements of modern mobile communication systems (e.g., 5G) and their growing heterogeneity cannot be accommodated by such conventional designs.
An alternative design approach that allows for highly flexible code rates and block sizes is based on a so-called mother code matrix.
The flexible code rate is achieved by information shortening (adding known dummy bits that are not transmitted) and parity puncturing. Both techniques are already considered during the design of the mother code matrix: the theoretical performance limits are taken into account for each of the desired code rates. Finally, the matrix is structured in a way that facilitates information shortening and parity puncturing by moving the columns to be shortened and the columns to be punctured to the “outside” of the matrix, i.e., to its leftmost and rightmost columns.
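As a toy illustration of this rate flexibility, the following Python sketch computes the code rate that results from shortening s information bits and puncturing p parity bits of a mother code; the block and information lengths below are arbitrary example values, not parameters of any particular published design.

```python
def effective_rate(n_mother: int, k_mother: int, s: int, p: int) -> float:
    """Code rate after shortening s information bits and puncturing p parity bits.

    Shortened bits are filled with known dummy values and not transmitted,
    so only k_mother - s information bits carry payload and only
    n_mother - s - p bits are actually sent over the channel.
    """
    return (k_mother - s) / (n_mother - s - p)

# Hypothetical rate-1/2 mother code:
print(effective_rate(n_mother=2048, k_mother=1024, s=0, p=0))    # 0.5
# Shortening lowers the effective rate, puncturing raises it:
print(effective_rate(n_mother=2048, k_mother=1024, s=512, p=0))  # 1/3
print(effective_rate(n_mother=2048, k_mother=1024, s=0, p=512))  # 2/3
```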
If shortening and puncturing are applied jointly, they can also be used to reduce the block length of the mother code. However, to achieve a highly flexible block length, the matrix is designed as a template for quasi-cyclic (QC) LDPC codes.
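The QC principle can be illustrated with a short sketch: a small base matrix of cyclic shift values is expanded into a full binary parity-check matrix, and the lifting factor Z directly scales the block length. The base matrix and shift values below are arbitrary toy choices, not a matrix from the cited designs.

```python
import numpy as np

def expand_qc(base: list[list[int]], Z: int) -> np.ndarray:
    """Expand a QC-LDPC base matrix into a binary parity-check matrix.

    Each entry is either -1 (all-zero Z x Z block) or a shift value s, which is
    replaced by the Z x Z identity matrix cyclically shifted by s columns.
    The resulting code has block length Z * len(base[0]).
    """
    rows, cols = len(base), len(base[0])
    H = np.zeros((rows * Z, cols * Z), dtype=np.uint8)
    for r in range(rows):
        for c in range(cols):
            s = base[r][c]
            if s >= 0:
                H[r*Z:(r+1)*Z, c*Z:(c+1)*Z] = np.roll(np.eye(Z, dtype=np.uint8), s, axis=1)
    return H

# Toy base matrix (shift values chosen arbitrarily for illustration):
base = [[0, 1, -1, 2],
        [2, -1, 0, 1]]
# The same template yields different block lengths via the lifting factor Z:
for Z in (4, 8):
    print(Z, expand_qc(base, Z).shape)   # (8, 16) for Z=4, (16, 32) for Z=8
```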
GPU Decoding of LDPC Codes with Higher-Order Galois Fields
The error correction capabilities of LDPC codes can be improved by using higher-order Galois fields. In order to efficiently decode these non-binary codes, a decoder was implemented with Nvidia's Compute Unified Device Architecture (CUDA) that supports irregular codes with arbitrary Galois field orders and does not require a special code structure.
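As a small illustration of what a parity check over a higher-order Galois field looks like, the sketch below evaluates a single check equation over GF(4) using a precomputed multiplication table; the check coefficients and symbol values are toy data and do not correspond to any particular code.

```python
import numpy as np

# GF(4) = {0, 1, 2, 3}: addition is bitwise XOR, multiplication via lookup table.
GF4_MUL = np.array([[0, 0, 0, 0],
                    [0, 1, 2, 3],
                    [0, 2, 3, 1],
                    [0, 3, 1, 2]], dtype=np.uint8)

def gf4_check(h_row: np.ndarray, c: np.ndarray) -> int:
    """Evaluate one non-binary parity check: sum_i h_i * c_i over GF(4)."""
    acc = 0
    for h, x in zip(h_row, c):
        acc ^= GF4_MUL[h, x]          # GF(4) addition is XOR
    return acc

h_row = np.array([1, 2, 3], dtype=np.uint8)   # non-binary check coefficients
c     = np.array([1, 2, 3], dtype=np.uint8)   # symbol vector satisfying the check
print(gf4_check(h_row, c))                    # 0 -> parity check fulfilled
```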
A high speedup can be achieved by employing quasi-cyclic LDPC codes, which allow updating multiple neighboring check nodes in parallel without performance loss.
Using the parallel processing capabilities of a GPU efficiently requires ordering the data in a well-considered way, because the (up to) three-dimensional data is stored in a linear (one-dimensional) memory.
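As a rough illustration of why this ordering matters (using numpy rather than CUDA, with made-up dimensions), the sketch below compares two linear layouts of the same message data: if the index of the Z check nodes that are processed in parallel varies fastest, the values needed in one parallel update step lie next to each other in memory.

```python
import numpy as np

# Hypothetical dimensions of the decoder's message data: n_br circulant block
# rows, d_c edges per check node, q values (one per Galois field element) per
# message, and Z check nodes of one circulant block updated in parallel.
n_br, d_c, q, Z = 6, 4, 16, 128

# Layout A: the parallel index z varies slowest.
layout_a = np.zeros((Z, n_br, d_c, q), dtype=np.float32)
# Layout B: the parallel index z varies fastest, so the Z values that are read
# in the same parallel update step lie back to back in linear memory
# (on a GPU this allows coalesced memory accesses by the threads of a warp).
layout_b = np.zeros((n_br, d_c, q, Z), dtype=np.float32)

# Byte distance between the values accessed by two adjacent parallel workers:
print(layout_a.strides[0])    # 1536 bytes -> accesses are scattered
print(layout_b.strides[-1])   # 4 bytes    -> accesses are contiguous
```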
The implementation proposed in [beermann14c] provides increasing efficiency for increasing Galois field dimensions and can be applied to arbitrary non-binary LDPC codes.
References
[beermann16]
Moritz Beermann
Flexible Low-Density Parity-Check Codes: Rate, Length, and Complexity
Dissertation, 2016
[beermann14b]
Moritz Beermann, Florian Wickert, and Peter Vary
Highly Flexible Design of Multi-Rate Multi-Length Quasi-Cyclic LDPC Codes
8th International Symposium on Turbo Codes and Iterative Information Processing, August 2014
[beermann14c]
Moritz Beermann, Enrique Monzó, Laurent Schmalen, and Peter Vary
GPU Accelerated Belief Propagation Decoding of Non-Binary LDPC Codes with Parallel and Sequential Scheduling
Springer Journal of Signal Processing Systems, January 2015
Rateless Coding
Fountain Codes
The Digital Fountain protocol allows a potentially limitless number n of encoded packets (encoded symbols) to be produced on the fly from the k original packets (input symbols). Furthermore, it enables the recovery of a message that consists of k equal-length packets upon the reception of any k(1 + ε) encoded packets, where ε ≥ 0 is a small overhead. In contrast to other incremental redundancy schemes, no channel state information or feedback message is required.
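The encoding side can be sketched in a few lines of Python: each encoded packet is the XOR of a random subset of the k input packets, i.e., a dense random linear fountain over GF(2). Practical LT or Raptor codes instead draw the number of combined packets from a carefully designed degree distribution; the function and parameter names below are made up for illustration.

```python
import numpy as np

def fountain_encoder(packets: np.ndarray, seed: int = 0):
    """Yield a potentially limitless stream of encoded packets.

    Each encoded packet is the XOR of a uniformly random subset of the k
    input packets (a dense random linear fountain over GF(2)); the chosen
    subset (mask) is yielded as well, since the decoder needs to know
    which input packets were combined.
    """
    rng = np.random.default_rng(seed)
    k = packets.shape[0]
    while True:
        mask = rng.integers(0, 2, size=k, dtype=np.uint8)
        if not mask.any():                      # skip the useless all-zero combination
            continue
        yield mask, np.bitwise_xor.reduce(packets[mask == 1], axis=0)

# k = 4 input packets of 8 bytes each (toy data):
packets = np.arange(32, dtype=np.uint8).reshape(4, 8)
stream = fountain_encoder(packets)
for _ in range(3):                              # draw a few of the unlimited packets
    mask, enc = next(stream)
    print(mask, enc)
```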
For instance, in a multicast scenario with a large number of subscribers, each user listens to the encoded data stream only until having collected sufficiently many encoded packets to be able to decode the original message. This protocol is called a digital fountain due to the analogy to a water fountain, which is seen as an unlimited source of water from which many people can fill their cups at the same time. Just as one's cup is filled by collecting a sufficient number of nameless water drops, the original message should be decodable after collecting sufficiently many encoded packets, i.e., each encoded packet should contribute equally and optimally to the decodability. The corresponding erasure-resilient codes are called (digital) fountain codes or rateless codes. The latter name results from the property that their code rate r = k/n is not determined a priori. If the server transmits long enough, users are not even required to tune in at the same time, but can start their sessions independently of each other. In a traditional setup, by contrast, each user would notify the server of lost packets and request retransmissions; if many users were involved, the server as well as the network would easily become overwhelmed, leading to even more packet losses.
Originally, fountain codes were designed under the asymptotic assumption k → ∞ for the greedy belief propagation (BP) decoding algorithm. However, these codes suffer from poor finite-length performance, especially for small to medium input sizes. Therefore, a different code design is needed for these practical lengths, enabling low-delay applications and facilitating the use of the optimal maximum likelihood (ML) decoder. Considering that the required computational complexity becomes affordable in this case, the ML decoder with its outstanding erasure correction capabilities becomes almost imperative. ML decoding on erasure channels is equivalent to solving consistent systems of linear equations over finite fields. Due to the randomness induced by the erasure channel, the decoder has to deal with random systems of linear equations that are subject to only one design constraint, namely a distribution on the number of non-zero coefficients per equation. This distribution is better known as the check node degree distribution. Determining the expected erasure correction performance of rateless codes under ML decoding is therefore equivalent to the fundamental mathematical question of whether a system of designed random linear equations over a finite field can be partially or completely solved.
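Since ML decoding on the erasure channel amounts to solving the received system of linear equations, it can be sketched as Gaussian elimination over GF(2). This is a binary toy example of the principle only; the cited work also treats higher-order Galois fields, and the data below is made up for illustration.

```python
import numpy as np

def ml_decode_gf2(masks: np.ndarray, encoded: np.ndarray):
    """Recover the k input packets from received (mask, payload) pairs by
    Gaussian elimination over GF(2), i.e., ML decoding on the erasure channel.

    masks:   m x k binary matrix; row i lists which input packets were XORed.
    encoded: m x L payloads of the received encoded packets.
    Returns the k decoded packets, or None if the received equations do not
    (yet) have full rank, i.e., more encoded packets are needed.
    """
    A = masks.astype(np.uint8)
    B = encoded.astype(np.uint8)
    m, k = A.shape
    row = 0
    for col in range(k):
        pivot = next((r for r in range(row, m) if A[r, col]), None)
        if pivot is None:
            return None                    # rank deficient: not yet decodable
        if pivot != row:
            A[[row, pivot]] = A[[pivot, row]]
            B[[row, pivot]] = B[[pivot, row]]
        for r in range(m):                 # eliminate this column in all other rows
            if r != row and A[r, col]:
                A[r] ^= A[row]
                B[r] ^= B[row]
        row += 1
    return B[:k]                           # rows 0..k-1 now hold the input packets

# Toy example (hypothetical data): k = 3 input packets, 4 received combinations.
p = np.array([[1, 2], [3, 4], [5, 6]], dtype=np.uint8)
masks = np.array([[1, 1, 0],               # p0 ^ p1
                  [0, 1, 1],               # p1 ^ p2
                  [1, 1, 1],               # p0 ^ p1 ^ p2
                  [1, 0, 1]], dtype=np.uint8)
encoded = np.array([p[0] ^ p[1], p[1] ^ p[2], p[0] ^ p[1] ^ p[2], p[0] ^ p[2]])
print(np.array_equal(ml_decode_gf2(masks, encoded), p))   # True
```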
References
[schotsch11]
Birgit Schotsch, Radu Lupoaie, and Peter Vary
The Performance of Low-Density Random Linear Fountain Codes over Higher Order Galois Fields under Maximum Likelihood Decoding
Annual Allerton Conference on Communication, Control, and Computing, September 2011
[schotsch12]
Birgit Schotsch and Radu Lupoaie
Finite Length LT Codes over Fq for Unequal Error Protection with Biased Sampling of Input Nodes
Proceedings of IEEE International Symposium on Information Theory (ISIT), July 2012