This material is drawn from Chapter 5 of the textbook.

Here’s the key document and the images.

Brightness and Contrast (11:32 video)

Contrast stretch (9:02 video)

SNR and Entropy

Read sections 5.1.2 and 5.1.3 from the textbook.

SNR is the ratio of the power in the signal to the power in the noise. It is usually expressed in decibels (dB): SNR_dB = 10 log10(P_signal / P_noise).
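For concreteness, here's a minimal sketch of the dB calculation (the function and variable names are my own, not the textbook's), assuming you have the signal and the noise available as separate NumPy arrays:

```python
import numpy as np

def snr_db(signal, noise):
    """SNR in decibels: 10 * log10(P_signal / P_noise)."""
    p_signal = np.mean(np.asarray(signal, dtype=float) ** 2)  # mean signal power
    p_noise = np.mean(np.asarray(noise, dtype=float) ** 2)    # mean noise power
    return 10.0 * np.log10(p_signal / p_noise)

# Example: a flat gray image corrupted by Gaussian noise
clean = np.full((64, 64), 120.0)
noise = np.random.normal(0.0, 10.0, size=clean.shape)
print(f"SNR = {snr_db(clean, noise):.1f} dB")   # about 21.6 dB for these values
```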

Entropy is a bit difficult to grasp, and you can omit the mathematical treatment in section 5.1.2 if you're uncomfortable with it. Entropy measures the amount of information in an image: H = -Σ p_i log2(p_i), where p_i is the probability of gray level i, read off the (normalized) gray-level histogram. Entropy is greater when more gray levels are occupied, and is greatest for an image with an equalized (flat) histogram.
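As a rough sketch of that formula in code (my own, not the textbook's), entropy can be computed directly from the gray-level histogram of an 8-bit image:

```python
import numpy as np

def image_entropy(img):
    """Entropy in bits/pixel, computed from the gray-level histogram."""
    hist, _ = np.histogram(img, bins=256, range=(0, 256))
    p = hist / hist.sum()      # probability of each gray level
    p = p[p > 0]               # only occupied gray levels contribute
    return -np.sum(p * np.log2(p))
```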

Signal-to-Noise Ratio (6:40 video)

Here’s a zip file with images to use when exploring entropy … more images

Entropy (11:36 video)

Huffman coding: For an 8-bit/pixel image (using fixed-length coding), the maximum possible entropy is 8 bits/pixel. Images with a smaller entropy can be compressed losslessly using Huffman coding.

Huffman coding uses a variable-length coding scheme, with fewer bits used to encode the more frequently occurring characters (or pixels). The message (image) is therefore coded with fewer bits overall, achieving lossless compression.
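As an illustration of variable-length coding (a generic sketch, not the textbook's code), the standard Huffman construction repeatedly merges the two least frequent symbols using a priority queue, so the most frequent gray levels end up with the shortest bitstrings:

```python
import heapq
from collections import Counter

def huffman_codes(symbols):
    """Build a Huffman code table {symbol: bitstring} from a sequence of symbols."""
    freq = Counter(symbols)
    if len(freq) == 1:                       # degenerate case: a single symbol
        return {next(iter(freq)): "0"}
    # Heap entries: (frequency, tie-breaker, [(symbol, code-so-far), ...])
    heap = [(f, i, [(s, "")]) for i, (s, f) in enumerate(freq.items())]
    heapq.heapify(heap)
    tie = len(heap)
    while len(heap) > 1:
        f1, _, left = heapq.heappop(heap)    # the two least frequent subtrees...
        f2, _, right = heapq.heappop(heap)
        merged = [(s, "0" + c) for s, c in left] + [(s, "1" + c) for s, c in right]
        heapq.heappush(heap, (f1 + f2, tie, merged))  # ...merged into one
        tie += 1
    return dict(heap[0][2])

pixels = [0, 0, 0, 0, 0, 0, 128, 128, 255]    # heavily skewed gray-level distribution
codes = huffman_codes(pixels)
coded_bits = sum(len(codes[p]) for p in pixels)
print(codes)                                   # the frequent level 0 gets the shortest code
print(f"{coded_bits} bits vs {8 * len(pixels)} bits at fixed 8 bits/pixel")
```

Notice the connection to entropy: the average code length here (12/9 ≈ 1.33 bits/pixel) sits just above the entropy of that pixel sequence (≈ 1.22 bits/pixel), which is exactly the lossless-compression bound that entropy predicts.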

Which types of images can be compressed significantly (i.e. which have low entropy)? Look at the worked example on p. 133 of the textbook, and read this.
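As a quick illustrative check of the intuition (reusing the image_entropy sketch from above), a flat image has zero entropy and compresses extremely well, while an image that occupies all 256 gray levels with roughly equal probability sits near the 8 bits/pixel maximum and gains almost nothing from lossless coding:

```python
import numpy as np

flat = np.full((100, 100), 50, dtype=np.uint8)                     # one occupied gray level
busy = np.random.randint(0, 256, size=(100, 100), dtype=np.uint8)  # all levels roughly equally likely

print(image_entropy(flat))   # 0.0 bits/pixel -> highly compressible
print(image_entropy(busy))   # close to 8 bits/pixel -> little lossless gain
```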