In the previous lessons, we’ve shown that converting between binary and decimal can be time-consuming, especially when dealing with large numbers. Moreover, binary representation uses about 3.3 bits for every decimal digit, so binary representations are usually too long to be read comfortably.
Hexadecimal and octal encoding solve this problem by providing a more compact representation of a binary number: a set of 2^4 = 16 or 2^3 = 8 symbols replaces each group of four or three bits. In hexadecimal encoding, the symbols are the digits 0 to 9 for the first ten four-bit sequences and the letters A to F for the remaining six combinations.
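The grouping idea above can be sketched in a few lines of Python. This is a minimal illustration, not a standard library routine: the helper name `binary_to_hex` is our own, and it assumes the input is a string of 0s and 1s.

```python
# Hypothetical helper: convert a binary string to hexadecimal by
# grouping bits in fours, using one symbol per four-bit group.
HEX_DIGITS = "0123456789ABCDEF"

def binary_to_hex(bits: str) -> str:
    # Pad on the left so the length is a multiple of four.
    bits = bits.zfill((len(bits) + 3) // 4 * 4)
    # Each four-bit group maps to exactly one hexadecimal symbol.
    return "".join(HEX_DIGITS[int(bits[i:i + 4], 2)]
                   for i in range(0, len(bits), 4))

print(binary_to_hex("101111010100"))  # groups 1011 1101 0100 → BD4
```

Note how the twelve-bit input collapses to just three symbols, which is exactly the compactness the encoding is meant to provide.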
In octal encoding we simply use the digits 0 to 7 in place of sequences of three bits, as shown in the following table.
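The same sketch carries over to octal with three-bit groups instead of four; as before, `binary_to_octal` is an illustrative helper name, not a built-in.

```python
# Hypothetical helper: convert a binary string to octal by
# grouping bits in threes, using one digit per three-bit group.
def binary_to_octal(bits: str) -> str:
    # Pad on the left so the length is a multiple of three.
    bits = bits.zfill((len(bits) + 2) // 3 * 3)
    # Each three-bit group maps to exactly one octal digit (0-7).
    return "".join(str(int(bits[i:i + 3], 2))
                   for i in range(0, len(bits), 3))

print(binary_to_octal("101111010100"))  # groups 101 111 010 100 → 5724
```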
Hexadecimal encoding is used almost every time the content of a binary number needs to be displayed. It is of course very common in Assembly language and system programming, but it is also very useful for representing colors.
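As a quick illustration of the color use case, a 24-bit RGB color is conventionally written as two hexadecimal digits per channel; the channel values below are arbitrary example numbers.

```python
# A 24-bit RGB color: each channel is one byte (0-255),
# so two hex digits per channel describe the whole color.
r, g, b = 255, 128, 0  # example values for an orange tone
print(f"#{r:02X}{g:02X}{b:02X}")  # → #FF8000
```

Reading the six hex digits is far easier than scanning the equivalent 24-bit binary string.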