Visual Communication - Information Rate vs Data Rate

[Figure: information-entropy plot of the information rate H versus the entropy E]
The above figure presents the information-entropy H(E) plot that characterizes the information rate H of the encoded signal versus the associated theoretical minimum data rate, or entropy, E, for q-bit quantization. Results are constrained to 5 bits because the perturbation due to coarse quantization is treated as an independent random noise. Within this constraint, the H(E) plot illustrates the trade-off between H and E in the selection of the number of quantization levels for encoding the acquired signal. It shows, in particular, that:

 
This result is intuitively appealing, because the perturbations due to aliasing and photodetector noise can be expected to interfere with signal decorrelation as well as with image restoration.
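The relationship between the number of quantization bits and the entropy of the encoded signal can be illustrated with a small sketch. This is a hypothetical illustration, not the authors' simulation: a synthetic Gaussian signal stands in for the acquired signal, and the entropy is estimated empirically from the histogram of the quantized samples.

```python
import math
import random

def empirical_entropy_bits(samples, q_bits):
    """Quantize samples to 2**q_bits uniform levels over their range and
    estimate the first-order entropy (bits/sample) of the result."""
    levels = 2 ** q_bits
    lo, hi = min(samples), max(samples)
    width = (hi - lo) / levels or 1.0
    counts = {}
    for s in samples:
        k = min(int((s - lo) / width), levels - 1)  # clamp top edge
        counts[k] = counts.get(k, 0) + 1
    n = len(samples)
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

random.seed(0)
# Stand-in for the acquired signal: zero-mean unit-variance Gaussian noise.
signal = [random.gauss(0.0, 1.0) for _ in range(20000)]
for q in (2, 3, 4, 5):
    h = empirical_entropy_bits(signal, q)
    print(f"{q}-bit quantization: entropy ~ {h:.2f} bits/sample")
```

The estimated entropy grows as the number of quantization bits increases, but remains below the fixed word length q, which is the gap that redundancy-reduction coding exploits.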

The figure below compares the theoretical minimum data rate E with the measured data rate Ē for a common redundancy-reduction scheme that combines differential pulse code modulation (DPCM) with entropy coding. The DPCM encoder first predicts the acquired sample s0 from some previous samples s1, s2, ..., sn, and then subtracts this prediction from the actual value. The decoder reverses this process by adding the prediction to the received signal. The entropy (Huffman) coding that follows the DPCM deals with the efficient assignment of binary code words: efficiency is gained by letting the length of the binary code word for a quantization level be inversely related to its frequency of occurrence. The measurements Ē are the average of 30 trials with different realizations of the random-polygon scene. The actual data rate Ē for non-Gaussian signals can be lower than E, which is defined only for a Gaussian signal.

[Figure: DPCM encoder/decoder with entropy coding; measured versus theoretical minimum data rate]
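The DPCM-plus-Huffman scheme described above can be sketched in a few lines. This is a minimal illustration, not the scheme measured in the figure: the predictor here uses only the single previous sample, and the slowly varying ramp signal is a hypothetical stand-in for the random-polygon scene.

```python
import heapq
from collections import Counter

def dpcm_encode(samples):
    """Previous-sample DPCM: predict each sample by its predecessor
    and transmit the prediction error (first prediction is 0)."""
    prev, errors = 0, []
    for s in samples:
        errors.append(s - prev)
        prev = s
    return errors

def dpcm_decode(errors):
    """Reverse the encoder: add each prediction back to the error."""
    prev, out = 0, []
    for e in errors:
        prev += e
        out.append(prev)
    return out

def huffman_lengths(symbols):
    """Return {symbol: code-word length} for a Huffman code built from
    symbol frequencies; rarer symbols get longer code words."""
    freq = Counter(symbols)
    if len(freq) == 1:
        return {next(iter(freq)): 1}
    heap = [(f, i, [s]) for i, (s, f) in enumerate(freq.items())]
    heapq.heapify(heap)
    lengths, tie = Counter(), len(heap)
    while len(heap) > 1:
        f1, _, g1 = heapq.heappop(heap)
        f2, _, g2 = heapq.heappop(heap)
        for s in g1 + g2:
            lengths[s] += 1            # one level deeper in the code tree
        heapq.heappush(heap, (f1 + f2, tie, g1 + g2))
        tie += 1
    return dict(lengths)

# Hypothetical 5-bit test signal: a slowly varying ramp over 32 levels.
samples = [(i // 7) % 32 for i in range(4000)]
errors = dpcm_encode(samples)
assert dpcm_decode(errors) == samples       # decoder reverses the encoder
L = huffman_lengths(errors)
avg_bits = sum(L[e] for e in errors) / len(errors)
print("fixed-length 5-bit coding:", 5.00, "bits/sample")
print("DPCM + Huffman coding    :", round(avg_bits, 2), "bits/sample")
```

Because the prediction errors of a correlated signal cluster near zero, their Huffman code words are short on average, and the measured rate falls well below the 5 bits/sample of fixed-length coding.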



 
 
Web site curator: Glenn Woodell