The following descriptions of IBM display adapters cover the most common operating modes. Rarely used modes are not discussed.
Early personal computers, such as CP/M-based machines and the IBM PC, could only display text: the video display was either a video terminal or a matching video card and monitor that acted as one. The IBM Monochrome Display Adapter (MDA) could display only monochrome text, using a text buffer stored in RAM and a character bitmap stored in ROM. The resolution of the MDA card was 720 x 350, with each character occupying a 9 x 14 matrix. This gave a display of 80 x 25 characters.
The character bitmaps of the MDA card were only 8 x 13 pixels per character. This left one matrix column for spacing between characters and one row for line spacing. Most characters were only seven pixels wide, but some, such as the lowercase m, used the full eight. Likewise, most characters were only 11 pixels tall, leaving three empty rows for line spacing. However, characters with descenders, like lowercase g and y, extended two pixels below the other characters.
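As a sanity check, the MDA's resolution follows directly from this character-cell arithmetic. The following minimal sketch (the variable names are mine, purely for illustration) reproduces the numbers above:

    #include <stdio.h>

    int main(void) {
        /* MDA character cell: 9 pixels wide x 14 pixels tall */
        int cell_w = 9, cell_h = 14;
        /* Text screen: 80 columns x 25 rows of characters */
        int cols = 80, rows = 25;

        printf("Horizontal resolution: %d pixels\n", cols * cell_w); /* 720 */
        printf("Vertical resolution:   %d pixels\n", rows * cell_h); /* 350 */
        return 0;
    }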
The MDA card could display pseudo-graphics using graphics characters stored in the ROM bitmap. For ordinary characters, the hardware left the 9th column and 14th row of each cell blank to provide spacing. When displaying graphics characters, however, the hardware duplicated the 8th column and 13th row to fill the 9 x 14 matrix reserved for each character. This eliminated gaps between the characters.
In 1981, IBM released the Color Graphics Adapter (CGA), which added bitmapped graphics to the PC. The CGA card had a RAM frame buffer in which any arbitrary data pattern could be stored, allowing the display of the corresponding pattern of dots (pixels) on the screen. In high-resolution mode, each byte controlled eight pixels.[2] Bits could also be combined to designate different colors for each pixel, as described under CGA RAM requirements below.
CGA had four text modes. Modes 0 and 1 displayed 40 columns and 25 rows. Modes 2 and 3 displayed 80 columns and 25 rows. Modes 0 and 2 were grayscale, and modes 1 and 3 provided 16 colors. All four modes used an 8 x 8 matrix for each character. The grayscale modes only appeared as grayscale on composite monitors. The system removed the color burst[3] from the composite video signal, telling the monitor it was a grayscale signal and causing it to ignore the color overlay information. On monitors with separate RGB inputs (those using the nine-pin D-sub connector), the grayscale modes were identical to the color modes.
Each character was stored in the text buffer as two bytes. The first byte selected which character to display. The second byte, the attribute byte, used four bits for the foreground color (the character's color) and the remaining four bits for the background (the other pixels of the 8 x 8 matrix forming the character). The foreground could be any of 16 colors (eight colors at two intensities). Only three of the background bits selected color, limiting the background to the eight low-intensity colors; the remaining bit controlled the blink attribute. Since different font faces were unavailable, applications, particularly word processors, used different foreground and background colors to indicate attributes such as bold, italic, and underlined characters.
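To make the layout concrete, here is a minimal sketch of packing an attribute byte. The function name is hypothetical, but the bit layout (bit 7 blink, bits 6-4 background, bits 3-0 foreground) is the standard CGA arrangement described above:

    #include <stdio.h>

    /* CGA attribute layout:
       bit 7      blink
       bits 6-4   background color (8 choices, no intensity bit)
       bits 3-0   foreground color (16 choices: 8 colors x 2 intensities) */
    unsigned char make_attribute(int blink, int background, int foreground) {
        return (unsigned char)(((blink & 1) << 7) |
                               ((background & 7) << 4) |
                               (foreground & 0xF));
    }

    int main(void) {
        /* Bright white text on a blue background, not blinking.
           Color numbers follow the CGA palette: 1 = blue, 15 = bright white. */
        unsigned char attr = make_attribute(0, 1, 15);
        printf("attribute byte = 0x%02X\n", attr); /* prints 0x1F */
        return 0;
    }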
The color encoding scheme used three bits to represent red, green, and blue, plus one bit to indicate the intensity of the color. Combining colors rendered cyan, magenta, yellow, and white/gray. All zeros in the color bits indicated black. Some documentation lists four shades on the black-gray-white spectrum: black, gray 1, gray 2, and white. However, other documentation describes these as white, low-intensity white (gray), black, and low-intensity black. Since low-intensity black is indistinguishable from black, only 15 distinct colors were available. Therefore, the 16-color modes, as rendered in the CGA color scheme, had only 15 real colors.
The Color Graphics Adapter did not have the high-resolution text mode of the Monochrome Display Adapter. IBM designed the Color Graphics Adapter with the home computer market in mind. Therefore, the CGA card had a composite video output for color monitors or adaptation to television sets. A composite monitor or television could not display the high-resolution text of the MDA card, so IBM saw no point in providing this mode. This left a gap in the market for a card that could display both high-resolution text and color graphics.
In 1982, Hercules Computer Technology released the Hercules Graphics Card, which combined the capability of the MDA card with the CGA card. The Hercules card was the de facto standard for some time, with other manufacturers making compatible graphics cards.
The high cost of RAM limited early video system resolution and color depth. In high-resolution CGA graphics mode, the screen was 640 x 200 pixels for a total of 128,000 pixels. One byte of RAM was required for every eight pixels, so the memory requirement was 16,000 bytes.[4] The medium-resolution graphics mode (320 x 200) had larger pixels, but only 64,000 of them. In that case, the same 16,000 bytes of memory could be utilized such that each pixel was controlled by two bits. This gave four states for each pixel, allowing four colors. Each pixel had the same four choices as the others, so only four colors were available for the entire screen.[5] In low-resolution mode, the screen was 160 x 100 for a total of 16,000 pixels. Each pixel occupied a full byte, with four of its bits designating color, so 16 colors were available.
Resolution | Pixels (X x Y) | Bits per pixel | Colors |
High | 640 x 200 | 1 | 2 |
Medium | 320 x 200 | 2 | 4 |
Low | 160 x 100 | 4 | 16 |
CGA resolution modes
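All three modes fit the same 16,000-byte frame buffer because resolution and color depth trade off exactly. A short calculation confirms this (illustrative code, not tied to any real API). Note that while the table lists four bits per pixel for the low-resolution mode, the text above says four bits of each byte were used, so the sketch charges a full byte per pixel:

    #include <stdio.h>

    /* Frame buffer size: pixels x bits-per-pixel / 8 */
    long mode_bytes(long width, long height, long bits_per_pixel) {
        return width * height * bits_per_pixel / 8;
    }

    int main(void) {
        printf("High   (640 x 200 x 1 bpp): %ld bytes\n", mode_bytes(640, 200, 1)); /* 16000 */
        printf("Medium (320 x 200 x 2 bpp): %ld bytes\n", mode_bytes(320, 200, 2)); /* 16000 */
        /* Low resolution: four bits of each byte designate color,
           so each pixel effectively occupies a full byte */
        printf("Low    (160 x 100 x 8 bpp): %ld bytes\n", mode_bytes(160, 100, 8)); /* 16000 */
        return 0;
    }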
The CGA signal to the monitor consisted of three wires for the RGB color information, one for intensity, and two for vertical and horizontal sync (plus two ground wires and one unused wire). These were TTL signals, so each color could be on or off at either of two intensities (a scheme often called RGBI), as encoded in the frame buffer. Some later monitors interpreted the signals to produce four levels on the black-to-white gradient (two levels of black and two levels of white), rendering all 16 colors.
This color and signal scheme explains why the CGA card didn't have 256 colors available in the lowest resolution mode (160 x 100). The memory was available, but 256 colors require eight bits of color information per pixel, and the four TTL wires could carry only four. Supporting 256 colors would have required additional digital signal lines, an analog signal on the intensity wire, or three analog RGB signals, as was later done with VGA.
The nature of the composite video signal produced color artifacts (interference between adjacent pixels). Some programmers exploited this to create more colors than were theoretically available. This could create 16 colors in grayscale modes or 32 colors in color modes. This only worked with the composite signal, so it didn't work with CGA monitors that used the RGBI signal.
Introduced in 1984 with the PC/AT, the Enhanced Graphics Adapter (EGA) was an incremental improvement over the CGA standard. EGA supported all the modes of the MDA and CGA cards, along with higher-resolution text and graphics modes. EGA required a compatible monitor for the high-resolution modes but also worked with CGA graphics modes. However, some software bypassed the standard communication channels[6] of the CGA card and didn't work with the EGA card.
EGA added two 640 x 350 color graphics modes, one with two colors and the other with 16 colors. It also added an 80 x 25 text mode using an 8 x 14 character matrix and an 80 x 43 text mode using an 8 x 8 matrix. Some third-party boards added higher resolution graphics modes from 600 x 400 to 800 x 560.
In the 80 x 43 text mode and the 640 x 350 graphics modes, the EGA monitor's horizontal scan rate was 21.8 kHz. This departed from the standard NTSC (North American) television scan rates (15.73 kHz horizontal and 59.94 Hz vertical[7]) and required an EGA-compatible monitor.
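The 21.8 kHz figure can be sanity-checked from the line count. Assuming roughly 14 lines of vertical blanking on top of the 350 visible lines (an illustrative assumption; the exact blanking interval varied) and a 60 Hz refresh:

    #include <stdio.h>

    int main(void) {
        int visible_lines  = 350;  /* 640 x 350 EGA mode */
        int blanking_lines = 14;   /* assumed vertical blanking interval */
        double refresh_hz  = 60.0; /* vertical refresh rate */

        /* Horizontal scan rate = total lines per frame x frames per second */
        double h_scan_hz = (visible_lines + blanking_lines) * refresh_hz;
        printf("Horizontal scan rate: %.1f kHz\n", h_scan_hz / 1000.0); /* ~21.8 */
        return 0;
    }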
EGA used the same 9-pin D-sub connector as CGA and sent compatible signals in CGA mode. The original EGA card had a light pen interface.
In 1987, IBM introduced the Video Graphics Array (VGA) with the PS/2 line of computers. VGA added two 640 x 480 graphics modes, one monochrome and one with 16 colors.
VGA used a 15-pin D-sub connector. Three wires were used for analog RGB signals, two for vertical and horizontal sync, and five for ground. Each signal had its own ground, reducing external noise and crosstalk (some cables used the ground to shield the signal wires). The remaining five pins were unused or carried monitor-identification signals.
The analog RGB signal originated from a digital signal. This means the analog signals consisted of discrete voltage levels (arguably still a digital signal, but I won't go there). Let's assume each wire had three possible voltage levels: 0 volts, +2.5 volts, and +5 volts. With three wires, there are 27 possible states, more than enough to handle the 16 colors available with VGA. However, translating the 16 colors described under CGA above, +5 and +2.5 volts would never be used simultaneously. For example, you would not have +5 volts on the green wire and +2.5 volts on the blue wire at the same time. This is not a hardware limitation but a consequence of the available colors and how they are sent to the monitor; no available color required +5 volts on one wire and +2.5 volts on another. We end up with only 15 different combinations of voltages ever presented on the three wires. Therefore, just as CGA monitors could not render dark black, VGA would not send a signal representing dark black.[8]
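The 15-combination claim can be verified by brute force. The sketch below assumes the simplified mapping implied above: each set color bit drives its wire to +5 volts at high intensity or +2.5 volts at low intensity, while a clear bit leaves it at 0 volts. (Real VGA hardware used palette registers and much finer voltage steps, so this is an illustration, not a pinout-accurate model.)

    #include <stdio.h>
    #include <string.h>

    int main(void) {
        /* Encode each wire's level as 0 (0 V), 1 (+2.5 V), or 2 (+5 V). */
        int seen[3][3][3];
        memset(seen, 0, sizeof seen);
        int distinct = 0;

        for (int color = 0; color < 16; color++) {
            int i = (color >> 3) & 1;  /* intensity bit */
            int r = (color >> 2) & 1, g = (color >> 1) & 1, b = color & 1;
            /* A set color bit drives its wire to +5 V (high intensity)
               or +2.5 V (low intensity); a clear bit leaves it at 0 V. */
            int rv = r ? (i ? 2 : 1) : 0;
            int gv = g ? (i ? 2 : 1) : 0;
            int bv = b ? (i ? 2 : 1) : 0;
            if (!seen[rv][gv][bv]++) distinct++;
        }
        /* Black (0000) and "intense black" (1000) both map to 0/0/0,
           so only 15 distinct voltage combinations ever appear. */
        printf("distinct combinations: %d\n", distinct); /* 15 */
        return 0;
    }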
After IBM released the VGA standard, third-party manufacturers began making hardware that exceeded the VGA standard. SVGA (Super VGA) is a broad term that covers any resolution greater than 640 x 480 or any color depth greater than 16 colors. The first SVGA hardware rendered a screen resolution of 800 x 600 with 256 colors.
With the advent of SVGA, manufacturers came up with new initialisms for each higher resolution, so now we have VGA, SVGA, WVGA, FWVGA, SXGA, QXGA, and the list goes on (and on). Today, however, displays are usually specified by resolution, such as 1920 x 1080 (1080p), and color depth, such as High Color (16 bits per pixel) or True Color (24 bits per pixel).
In the early days of SVGA, it was important to know the amount of RAM available on a video card. Most video cards had only enough RAM to hold the currently displayed image. For example, to display an 800 x 600 image with 256 colors (one byte per pixel), the card needed 480,000 bytes of RAM. Since RAM comes in power-of-two quantities, you needed a card with 512 kiB (524,288 bytes) of RAM to use that graphics mode.
Video cards now come with much more RAM than needed for a single image, so calculating image size compared to available RAM is no longer necessary. However, if asked,[9] do this by first calculating the number of pixels on the screen. Next, multiply that by the number of bytes required for each pixel. Here are some examples of the number of bytes required per pixel with various color modes.
Color mode | Bytes per pixel |
16 colors | Four bits or ½ byte |
256 colors | One byte |
High color (16-bit color) | Two bytes |
24-bit True Color | Three bytes |
32-bit True Color | Four bytes |
These numbers are sufficient to calculate the RAM required for different image sizes. However, they don't exactly reflect real-world color schemes. That is a subject beyond the scope of this book.
To calculate the number of bytes of RAM required for a particular image size, multiply the number of bytes required for each pixel by the number of pixels in the image. Here are some examples of the RAM required for particular graphics modes:
Screen dimensions | Number of colors | Bytes per pixel | Bytes required | Rounded up |
800 x 600 | 256 colors | 1 | 480,000 | 512 kiB |
800 x 600 | 24-bit True Color | 3 | 1,440,000 | 2 MiB |
1,280 x 1,024 | 24-bit True Color | 3 | 3,932,160 | 4 MiB |
1,920 x 1,080 | 32-bit True Color | 4 | 8,294,400 | 8 MiB |
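Here is the same arithmetic in code, including the round-up to the next power-of-two RAM size. The function names are illustrative only:

    #include <stdio.h>

    /* Bytes needed to hold one frame */
    long frame_bytes(long width, long height, long bytes_per_pixel) {
        return width * height * bytes_per_pixel;
    }

    /* Smallest power of two >= n (the RAM size you would buy) */
    long round_up_pow2(long n) {
        long p = 1;
        while (p < n) p *= 2;
        return p;
    }

    int main(void) {
        printf("800 x 600 x 1:   %ld -> %ld bytes\n",
               frame_bytes(800, 600, 1), round_up_pow2(frame_bytes(800, 600, 1)));
        /* 480000 -> 524288 (512 kiB) */
        printf("800 x 600 x 3:   %ld -> %ld bytes\n",
               frame_bytes(800, 600, 3), round_up_pow2(frame_bytes(800, 600, 3)));
        /* 1440000 -> 2097152 (2 MiB) */
        printf("1920 x 1080 x 4: %ld -> %ld bytes\n",
               frame_bytes(1920, 1080, 4), round_up_pow2(frame_bytes(1920, 1080, 4)));
        /* 8294400 -> 8388608 (8 MiB) */
        return 0;
    }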
Once computer video broke away from NTSC television standards and the cost of RAM fell, resolution and color depth increased rapidly. New methods of storing image data were also developed. Instead of storing two-dimensional information in a succession of bytes (a bitmap), modern video subsystems often store information as a database of color data and two- or three-dimensional positions. This is more efficient than memory mapping, where much of the memory holds empty or redundant information. The graphics processor manipulates the information in this database to build a two- or three-dimensional image, then processes this into a two-dimensional bitmap in a part of memory called the frame buffer for final display.[10] Video cards have sufficient memory to render one frame while displaying another.
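That last sentence describes double buffering: the next frame is drawn into one buffer while the previous one is displayed, and the two are swapped at the vertical refresh. A minimal sketch of the idea (pure illustration, not any real driver API):

    #include <stdio.h>
    #include <stdint.h>

    /* Two frame buffers: one scanned out to the monitor, one being drawn. */
    static uint32_t buffer_a[640 * 480];
    static uint32_t buffer_b[640 * 480];

    static uint32_t *front = buffer_a;  /* currently displayed */
    static uint32_t *back  = buffer_b;  /* currently being rendered */

    /* Called once per vertical refresh: the finished back buffer becomes
       the displayed front buffer, and vice versa. */
    static void swap_buffers(void) {
        uint32_t *tmp = front;
        front = back;
        back  = tmp;
    }

    int main(void) {
        back[0] = 0x00FF0000;  /* draw into the back buffer (a red pixel) */
        swap_buffers();        /* present the finished frame */
        printf("front pixel 0 = 0x%08X\n", (unsigned)front[0]); /* 0x00FF0000 */
        return 0;
    }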
Video cards and subsystems were much simpler in the early days of personal computers than they are today. Early video cards mapped into the memory subsystem and were treated as memory. The central processor performed image manipulation.
Typical video cards mapped their ROM (the video BIOS) in at address C0000 in the real-mode memory space. When the computer started, part of the cold boot routine was to look for and execute instructions in ROMs embedded in various plug-in cards. Thus, the CPU would find the video ROM at C0000 and execute the instructions there. These instructions would initialize the video card, often displaying a message in the process. Once the video card was initialized, control would return to the system BIOS, and the boot process would continue.
All video cards contain RAM. Motherboards with built-in video subsystems typically used some of the regular motherboard RAM as video RAM. The RAM on video cards may be regular RAM, as usually installed on motherboards, but may be specialized RAM suited for video operations. For example, dual-ported RAM can be written to by the CPU via one mechanism at the same time the video subsystem reads from the same RAM via another mechanism to generate the display.
Video RAM typically mapped in at A0000 to BFFFF. However, SVGA and later cards had much more memory than could be mapped into the real-mode address space. When used in real mode, the CPU could only access a portion of the available RAM at one time, switching portions (banks) in and out of this window as needed, while the video card itself could access all of its RAM to display images. How video RAM maps into protected mode depends on the card and its associated drivers.
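The real-mode arithmetic is just a divide and a remainder: the quotient selects the bank, and the remainder is the offset within the 64 kiB window. The bank-select operation itself was card-specific (later standardized by the VESA BIOS), so it is only simulated in this sketch:

    #include <stdio.h>

    #define WINDOW_SIZE 65536L  /* 64 kiB real-mode window at segment A000 */

    /* Card-specific bank select: on real hardware this programmed the
       video card through I/O ports or a VESA BIOS call. Simulated here. */
    static long current_bank = 0;
    static void select_bank(long bank) { current_bank = bank; }

    /* Split a linear offset into video RAM into a bank number plus an
       offset within the 64 kiB window the CPU can actually see. */
    static void locate_pixel(long linear_offset) {
        long bank   = linear_offset / WINDOW_SIZE;
        long offset = linear_offset % WINDOW_SIZE;
        select_bank(bank);
        printf("linear %ld -> bank %ld, window offset %ld\n",
               linear_offset, bank, offset);
    }

    int main(void) {
        /* Pixel (100, 200) in an 800 x 600, one-byte-per-pixel mode. */
        locate_pixel(200L * 800 + 100); /* linear 160100 -> bank 2, offset 29028 */
        return 0;
    }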
Modern video cards are much more sophisticated than early cards. They contain sophisticated graphics processors that do the video manipulation. The CPU now tells the GPU what to do instead of directly generating graphics itself.[11] Many CPU dies now include a GPU.
Video cards have come in all expansion bus types, including AGP, which was for video cards only. Currently, all expansion cards connect to the PCI-Express (PCIe) bus. Therefore, ensuring a video card matches the available bus on the motherboard is no longer an issue.
Many video cards require more power than the expansion bus can deliver. These cards have one or more power connectors, the six- or eight-pin so-called PCIe power connectors.
The computer power supply must have a current rating sufficient to deliver power to the video card. Be sure to check the video card's current requirements against the power supply's 12-volt rail.
Motherboards can support multiple video cards, sending different images to different monitors. Some video cards have multiple outputs and can present a different image on each. The outputs may be of the same type, such as multiple DVI or HDMI ports, or mixed, such as one DVI, one HDMI, and perhaps a VGA output. Usually, even when the output connectors are of different types, they can be used simultaneously with a separate image on each.
Windows supports up to eight monitors simultaneously, with the desktop spanning all the monitors. The monitor resolutions can be matched or mismatched. The monitors can be side-by-side, one above the other, or in an array.