Hexadecimal Numbers – Computer Science



The Hexadecimal Numeral System

Hexadecimal numbers, often shortened to “hex numbers” or “hex”,
are numbers represented in base 16 as opposed to base 10 that we use for everyday arithmetic and counting.

In practical terms, this means that each column of a number written in hexadecimal can represent up to 16 values.

Hexadecimal uses the standard symbols 0, 1, 2, 3, 4, 5, 6, 7, 8, and 9 to represent their usual values,
and the first six letters of the alphabet (A, B, C, D, E, F) to represent the values 10 through 15.

In programming, we prefix hexadecimal constants with 0x, with some exceptions.
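In JavaScript, for instance, a 0x-prefixed literal is just an ordinary number. A minimal illustration (the variable names are only for demonstration):

```javascript
// Hexadecimal literals use the 0x prefix; the resulting value is a normal number.
const mask = 0xFF;   // 255 in decimal
const answer = 0x2A; // 42 in decimal

console.log(mask === 255);  // true
console.log(answer === 42); // true
```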

Examples and explanation

In the standard base 10 system, each column represents increasing powers of 10,
while in base 16 each column represents increasing powers of 16.

Consider the following number in base 10: 1337

1000  100  10  1
   1    3   3  7

It is one thousand three hundred and thirty-seven because 1*1000 + 3*100 + 3*10 + 7*1 = 1337.

Similarly, consider a hex number (base 16): 0xBEEF

16^3    16^2    16^1    16^0
B (11)  E (14)  E (14)  F (15)

Converting to decimal, it would be 11*16^3 + 14*16^2 + 14*16^1 + 15*16^0, which gives 48,879.

Here are some other examples of equivalent hex and decimal values:

0x1    == 1    == 1*16^0
0xF    == 15   == 15*16^0
0xFF   == 255  == 15*16^1 + 15*16^0
0xFFF  == 4095 == 15*16^2 + 15*16^1 + 15*16^0
0x1000 == 4096 == 1*16^3 + 0*16^2 + 0*16^1 + 0*16^0

As seen in the table above, with one hex digit we can represent numbers up to and including 15. Add another digit and we can represent numbers up to 255, another for up to 4,095, and so on.
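The positional conversion described above can be sketched in JavaScript. The function name hexToDecimal is illustrative; parseInt with a radix of 16 is the language's built-in equivalent:

```javascript
// Convert a hex string to decimal by summing successive powers of 16,
// exactly as in the worked 0xBEEF example.
function hexToDecimal(hex) {
  const digits = "0123456789ABCDEF";
  let value = 0;
  for (const ch of hex.toUpperCase()) {
    value = value * 16 + digits.indexOf(ch); // shift left one hex column, add digit
  }
  return value;
}

console.log(hexToDecimal("BEEF")); // 48879
console.log(parseInt("BEEF", 16)); // 48879, using the built-in route
```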

Uses of Hexadecimal in Low Level Programming

Hexadecimal first found its use in Computer Science as a convenience feature.

Data in our computers has a smallest common storage unit, the byte.
Each byte contains 8 bits and can store a number between 0 and 255 inclusive.

Hexadecimal has the advantage of being terse and having well-defined boundaries.

A single byte is always represented by exactly two hexadecimal digits,
from 0x00 to 0xFF, the latter being the largest value a byte can hold: 255.

The terseness and byte-aligned nature of hexadecimal numbers make them a popular choice for software engineers working on low-level code-bases or embedded software.
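As a sketch of why this byte alignment matters: because each byte is exactly two hex digits, masks written in hex make byte boundaries visually obvious. A small JavaScript example (the variable names are illustrative):

```javascript
// Shifts in multiples of 8 and 0xFF masks line up exactly with byte boundaries,
// so each extracted value corresponds to one pair of hex digits.
const value = 0xBEEF12;            // three bytes: BE, EF, 12
const high = (value >> 16) & 0xFF; // 0xBE = 190
const mid  = (value >> 8)  & 0xFF; // 0xEF = 239
const low  = value & 0xFF;         // 0x12 = 18

console.log(high, mid, low); // 190 239 18
```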

Uses of Hexadecimal Numbers in JavaScript

JavaScript supports hexadecimal notation in place of any integer literal, but not for fractional values.

As an example, the number 2514 in hex is 0x9D2, but there is no language-supported way of representing 25.14 as a hex number.

Using hexadecimal in your code is a personal and stylistic choice, and has no effect on the underlying logic your code implements.
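Although hex literals are limited to integers, JavaScript can still convert between the two notations at runtime with the built-in parseInt and Number.prototype.toString:

```javascript
const n = 0x9D2;                  // the hex literal for 2514
console.log(n);                   // 2514
console.log((2514).toString(16)); // "9d2", back to hex as a string
console.log(parseInt("9D2", 16)); // 2514, parsed from a hex string
```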

Uses of Hexadecimal Numbers in CSS

CSS has for a long time used hexadecimal notation to represent color values. Consider the following selector:

.my-container { background-color: #112233; color: #FFFFFF; }

The background-color's value is in fact three hex bytes.

The CSS processor treats these as three individual bytes, representing Red, Green, and Blue.

In our example, 11 corresponds to the Red color component, 22 corresponds to the Green color component, and 33 to the Blue color component.
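That byte-splitting can be sketched in JavaScript; hexColorToRgb is a hypothetical helper written for illustration, not a CSS or standard API:

```javascript
// Hypothetical helper: split a "#RRGGBB" string into its three byte components.
function hexColorToRgb(hex) {
  const value = parseInt(hex.replace("#", ""), 16);
  return {
    r: (value >> 16) & 0xFF, // first hex byte: Red
    g: (value >> 8) & 0xFF,  // second hex byte: Green
    b: value & 0xFF,         // third hex byte: Blue
  };
}

console.log(hexColorToRgb("#112233")); // { r: 17, g: 34, b: 51 }
```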

As of CSS3, there is no way to define a color with an alpha component using hex notation.
The proposed CSS Color Module Level 4 draft allows an extra byte to specify alpha values.

For now, use of the standard rgba() function is the recommended way to add an alpha value to your colors.
