Hexadecimal encoding

Data representation

In the previous lessons, we have shown that the conversion between binary and decimal can be time-consuming, especially when dealing with large numbers. Moreover, the binary representation uses about 3.3 bits for every decimal digit (since log2(10) ≈ 3.32), which means that binary representations are usually too long to be read comfortably.

Hexadecimal and octal encoding solve this problem by providing a more compact representation of a binary number: each symbol from a set of 2^4=16 (hexadecimal) or 2^3=8 (octal) symbols replaces a sequence of four or three bits. In hexadecimal encoding, the symbols used are the digits from 0 to 9 for the first ten four-bit sequences, and the letters from A to F for the remaining six combinations.

Binary   Decimal   Hexadecimal
0000     0         0
0001     1         1
0010     2         2
0011     3         3
0100     4         4
0101     5         5
0110     6         6
0111     7         7
1000     8         8
1001     9         9
1010     10        A
1011     11        B
1100     12        C
1101     13        D
1110     14        E
1111     15        F
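
As a quick illustration, here is a minimal C sketch (not part of the original lesson; the helper name print_hex_byte is ours) that prints the hexadecimal digits of a byte by extracting its two four-bit groups, exactly as the table above maps each group to one symbol:

```c
#include <stdio.h>

/* Print the hexadecimal digits of a byte by splitting it into
   its two four-bit groups (nibbles). */
static void print_hex_byte(unsigned char b) {
    const char digits[] = "0123456789ABCDEF";
    putchar(digits[(b >> 4) & 0xF]);  /* high nibble: bits 7..4 */
    putchar(digits[b & 0xF]);         /* low nibble:  bits 3..0 */
}

int main(void) {
    print_hex_byte(171);  /* 171 decimal = 1010 1011 binary, prints "AB" */
    putchar('\n');
    return 0;
}
```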

In octal encoding, we simply use the first eight decimal digits, from 0 to 7, each representing a sequence of three bits, as shown in the following table.

Binary   Decimal   Octal
000      0         0
001      1         1
010      2         2
011      3         3
100      4         4
101      5         5
110      6         6
111      7         7
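
The same grouping idea works for octal, three bits at a time. As a small sketch (again, not from the lesson), C's standard printf can display the same value in both bases for comparison:

```c
#include <stdio.h>

int main(void) {
    /* 171 decimal is 10101011 in binary; reading it three bits at a
       time (010 101 011, padded on the left) gives the octal digits
       2, 5 and 3, which is what the %o conversion produces. */
    unsigned int value = 171;
    printf("octal:       %o\n", value);  /* prints 253 */
    printf("hexadecimal: %X\n", value);  /* prints AB, for comparison */
    return 0;
}
```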

Hexadecimal encoding is used almost every time there is a need to display the contents of a binary number. It is of course very common in Assembly language and system programming, but it can also be very useful when representing colors, as sketched below.
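
For example, a 24-bit RGB color is conventionally written as six hexadecimal digits, two per component. The following minimal C sketch (the color value 0xFF8000 is just an illustrative choice, not from the lesson) shows how each byte-sized channel can be read directly from the hexadecimal form:

```c
#include <stdio.h>

int main(void) {
    /* A 24-bit RGB color: each component occupies exactly two
       hexadecimal digits (one byte) of the value 0xFF8000. */
    unsigned int color = 0xFF8000;

    unsigned int red   = (color >> 16) & 0xFF;  /* 0xFF = 255 */
    unsigned int green = (color >> 8)  & 0xFF;  /* 0x80 = 128 */
    unsigned int blue  =  color        & 0xFF;  /* 0x00 =   0 */

    printf("R=%u, G=%u, B=%u\n", red, green, blue);
    return 0;
}
```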