Vocademy
A computer works on a certain number of bits at a time. It has a data bus consisting of a group of parallel wires, each of which handles one bit of data. For example, a particular data bus could take the states of, let's say, 18 vacuum tubes, and transfer them to a bank of 18 relays. Such a computer works on 18 bits at a time. In the early days of computers, regardless of the number of bits a particular computer worked on at a time, that number of bits was called a word. Some computers had 18-bit words; others had 40-bit words, etc.
EDSAC (Electronic Delay Storage Automatic Calculator) used a 17-bit word[1].
EDVAC (Electronic Discrete Variable Automatic Computer) used a 44-bit word[2].
Early computers were designed to do mathematical calculations. However, some heretics found that they were also useful for storing and manipulating alphabetical information, provided you devised a code to store that information as a binary entity[3]. Such a code already existed for teleprinters (e.g., Teletypes): a 5-bit code called Baudot[4].
How do you handle such information? With a 40-bit word, you can handle eight 5-bit characters at a time. However, that requires a subprogram to extract the 5-bit groups from the word. It's faster to use five of the 40 bits and handle one character per word, but that wastes 35 bits for each character. Meanwhile, different computers used different groupings of bits to store alphabetical information, usually five, six, or seven bits. Engineers were already using the term "byte"[5] to refer to these groupings. Engineers at IBM standardized character storage into an eight-bit byte and designed their computers to combine bytes into words that were multiples of eight bits. Today, virtually all computers use eight-bit bytes and words that are multiples of eight bits. When handling character encoding, data is handled a byte at a time. When performing mathematical operations, data is handled one, two, four, or eight bytes (or more) at a time, as appropriate.
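The packing trade-off above can be sketched in Python (a modern illustration, not code for any historical machine): eight 5-bit character codes fit in one 40-bit word, and a shift-and-mask "subprogram" extracts them again.

```python
def pack_5bit(codes):
    """Pack eight 5-bit character codes into one 40-bit word,
    first code in the most significant position."""
    word = 0
    for code in codes:
        word = (word << 5) | (code & 0b11111)
    return word

def unpack_5bit(word40):
    """Extract eight 5-bit codes from a 40-bit word,
    most significant character first."""
    return [(word40 >> (5 * i)) & 0b11111 for i in range(7, -1, -1)]

codes = [1, 2, 3, 4, 5, 6, 7, 8]
word = pack_5bit(codes)        # one 40-bit word holds all eight codes
assert unpack_5bit(word) == codes
```

The alternative the text mentions, one character per 40-bit word, needs no unpacking loop but leaves 35 of the 40 bits unused.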
A byte can have 256 states. This means that a byte can be all zeros (00000000), all ones (11111111), or any combination in between. That's 256 possibilities. For mathematical operations, if a byte is 00000000, that represents the number zero. If a byte is 00000001, that represents the number one, etc.
How about 11111111? Is that 256? No. There are 256 possibilities of ones and zeros, but all zeros (00000000) represents the number zero. This leaves 255 more combinations to handle numbers greater than zero. The highest combination, 11111111, represents the number 255. Therefore, one byte can have any of 256 combinations of ones and zeros, and can handle numbers from 0 to 255.
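The arithmetic above can be checked directly in Python, where binary literals make the all-zeros and all-ones patterns explicit:

```python
# A byte has 2**8 = 256 distinct bit patterns.
assert 2 ** 8 == 256

# All zeros represents the number zero...
assert 0b00000000 == 0

# ...so the highest of the 256 patterns, all ones, represents 255.
assert 0b11111111 == 255
assert (2 ** 8) - 1 == 255
```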
Nybbles. For real? Yes. Nobody knows who coined the word "nybble," but WSU Professor David Benson thinks he may have. Of course, a nybble is half a byte or four bits. A nybble can have up to 16 combinations of ones and zeros, and the highest number it can represent is 15.
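Splitting a byte into its two nybbles takes only a shift and a mask. A minimal Python sketch (the function name `nybbles` is just for illustration):

```python
def nybbles(byte):
    """Split a byte into its high and low 4-bit halves."""
    return (byte >> 4) & 0xF, byte & 0xF

high, low = nybbles(0b10110101)
assert (high, low) == (0b1011, 0b0101)   # i.e., (11, 5)

# A nybble has 2**4 = 16 patterns; the highest value is 15.
assert 2 ** 4 == 16
assert (2 ** 4) - 1 == 15
```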
In modern usage, a word is a grouping of bytes. A word is two bytes (16 bits) and can have up to 65,536 combinations of ones and zeros; the highest number it can represent is 65,535. A long word is four bytes (32 bits), and a very long word is eight bytes (64 bits). The following table lists the names of the binary groupings we have discussed.
Group Name       Number of Bits   Highest Number
Bit              1                1
Nybble           4                15
Byte             8                255
Word             16               65,535
Long Word        32               4,294,967,295
Very Long Word   64               18,446,744,073,709,551,615 (about 1.84 × 10^19)
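Every entry in the "Highest Number" column follows one rule: a group of n bits can represent values from 0 up to 2^n − 1. A short Python check of the table's figures:

```python
# Highest value representable by each grouping is 2**bits - 1.
groupings = {
    "Bit": 1,
    "Nybble": 4,
    "Byte": 8,
    "Word": 16,
    "Long Word": 32,
    "Very Long Word": 64,
}

for name, bits in groupings.items():
    print(f"{name:<15} {bits:>2} bits  highest {2 ** bits - 1:,}")

assert 2 ** 16 - 1 == 65_535
assert 2 ** 32 - 1 == 4_294_967_295
assert 2 ** 64 - 1 == 18_446_744_073_709_551_615   # ≈ 1.84 × 10^19
```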