Visual Communication - Information Rate vs Data Rate
## Information Rate vs Data Rate

The above figure presents the information-entropy ℋ_e(ℰ_e) plot, which characterizes the information rate ℋ_e of the encoded signal versus the associated theoretical minimum data rate, or entropy, ℰ_e, as a function of the number of quantization bits. Results are constrained to 5 bits because the perturbation due to coarse quantization is treated as an independent random noise. Within this constraint, the ℋ_e(ℰ_e) plot illustrates the trade-off between ℋ_e and ℰ_e in selecting the number of quantization levels for encoding the acquired signal. It shows, in particular, that:

This result is intuitively appealing: the perturbations due to aliasing and photodetector noise can be expected to interfere with signal decorrelation as well as with image restoration.
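The entropy measurements underlying such a plot can be estimated numerically. The sketch below is our own minimal illustration, not the method used here: it uniformly quantizes a synthetic Gaussian stand-in for the acquired signal and shows how the first-order entropy (bits/sample) grows with the number of quantization bits.

```python
import numpy as np

def quantize(signal, bits):
    """Uniformly quantize a signal into 2**bits levels over its range."""
    levels = 2 ** bits
    lo, hi = signal.min(), signal.max()
    idx = ((signal - lo) / (hi - lo) * levels).astype(int)
    return np.clip(idx, 0, levels - 1)

def entropy_bits(idx):
    """First-order entropy, in bits per sample, of the level indices."""
    _, counts = np.unique(idx, return_counts=True)
    p = counts / counts.sum()
    return -(p * np.log2(p)).sum()

rng = np.random.default_rng(0)
signal = rng.normal(size=100_000)  # stand-in for the acquired signal

for bits in range(1, 6):  # up to the 5-bit constraint noted above
    print(bits, round(entropy_bits(quantize(signal, bits)), 2))
```

The entropy stays below the nominal bit depth because the quantization levels are not equally probable for a Gaussian signal, which is precisely the redundancy that encoding can remove.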
The figure below compares the theoretical minimum data rate ℰ_e with the measured data rate ℰ_3 for a common redundancy-reduction scheme consisting of differential pulse-code modulation (DPCM) followed by entropy coding. The DPCM encoder first predicts the acquired sample *s*_{0} from some previous samples *s*_{1}, *s*_{2}, ..., *s*_{n}, and then subtracts this prediction from the actual value. The decoder reverses this process by adding the prediction back to the received signal. The entropy (Huffman) coding that follows the DPCM deals with the efficient assignment of binary code words: efficiency is gained by letting the length of the code word for a quantization level be inversely related to its frequency of occurrence. The measurements ℰ_3 are the average of 30 trials with different realizations of the random-polygon scene. The actual data rate ℰ_3 for non-Gaussian signals can be lower than ℰ_e, which is defined only for a Gaussian signal.
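The DPCM step can be sketched as follows. This is a minimal illustration with a one-previous-sample predictor (the scheme discussed above may predict from several previous samples); the function names are ours.

```python
import numpy as np

def dpcm_encode(samples):
    """Predict each sample from its predecessor; transmit the residuals."""
    residuals = np.empty_like(samples)
    residuals[0] = samples[0]                    # first sample sent as-is
    residuals[1:] = samples[1:] - samples[:-1]   # prediction errors
    return residuals

def dpcm_decode(residuals):
    """Reverse the process: add each prediction back (a cumulative sum)."""
    return np.cumsum(residuals)

# A slowly varying (correlated) signal yields small, tightly clustered
# residuals, which an entropy coder can represent with short code words.
rng = np.random.default_rng(1)
s = np.cumsum(rng.integers(-2, 3, size=1000))   # correlated test signal
r = dpcm_encode(s)
assert np.array_equal(dpcm_decode(r), s)        # lossless round trip
```

Because the residuals concentrate on a few small values while the raw samples range widely, the Huffman stage that follows can assign short code words to the frequent residuals, which is the source of the measured rate reduction.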

*Web site curator: Glenn Woodell*