
Wednesday, July 22, 2015

RAM

Random-access memory (RAM /ræm/) is a form of computer data storage. A random-access memory device allows data items to be read and written in approximately the same amount of time, regardless of the order in which data items are accessed.[1] In contrast, with other direct-access data storage media such as hard disks, CD-RWs, DVD-RWs and the older drum memory, the time required to read and write data items varies significantly depending on their physical locations on the recording medium, due to mechanical limitations such as media rotation speeds and arm movement delays.

Today, random-access memory takes the form of integrated circuits. RAM is normally associated with volatile types of memory (such as DRAM memory modules), where stored information is lost if power is removed, although many efforts have been made to develop non-volatile RAM chips.[2] Other types of non-volatile memory exist that allow random access for read operations, but either do not allow write operations or have limitations on them. These include most types of ROM and a type of flash memory called NOR flash.
Integrated-circuit RAM chips came into the market in the late 1960s, with the first commercially available DRAM chip, the Intel 1103, introduced in October 1970.[3]

Alternatively referred to as main memory, primary memory, or system memory, random-access memory (RAM) is a hardware device that allows information to be stored and retrieved on a computer. RAM is usually associated with DRAM, which is a type of memory module. Because information is accessed randomly instead of sequentially, as it is on a CD or hard drive, the computer can access the data much faster. However, unlike ROM or the hard drive, RAM is volatile memory and requires power to keep the data accessible; if power is lost, all data contained in memory is lost.

Additional information

As the computer boots, parts of the operating system and drivers are loaded into memory, which allows the CPU to process the instructions faster and speeds up the boot process. After the operating system has loaded, each program you open, such as the browser you're using to view this page, is loaded into memory while it is running. If too many programs are open, the computer will swap the data in memory between the RAM and the hard disk drive.
Over the evolution of the computer there have been different variations of RAM. Some of the more common examples are DIMM, RIMM, SIMM, SO-DIMM, and SO-RIMM. Below is an example image of a 512 MB DIMM computer memory module, a typical piece of RAM found in desktop computers. This memory module would be installed into one of the memory slots on a motherboard.
Computer DIMM, or dual in-line memory module
Tip: New users often confuse RAM with disk drive space. See our memory definition for a comparison between memory and storage.

History

These IBM tabulating machines from the 1930s used mechanical counters to store information
A portion of a core memory with a modern flash RAM SD card on top
1 Megabit chip – one of the last models developed by VEB Carl Zeiss Jena in 1989
Early computers used relays, mechanical counters[4] or delay lines for main memory functions. Ultrasonic delay lines could only reproduce data in the order it was written. Drum memory could be expanded at relatively low cost but efficient retrieval of memory items required knowledge of the physical layout of the drum to optimize speed. Latches built out of vacuum tube triodes, and later, out of discrete transistors, were used for smaller and faster memories such as registers. Such registers were relatively large and too costly to use for large amounts of data; generally only a few dozen or few hundred bits of such memory could be provided.
The first practical form of random-access memory was the Williams tube, starting in 1947. It stored data as electrically charged spots on the face of a cathode ray tube. Since the electron beam of the CRT could read and write the spots on the tube in any order, memory was random access. The capacity of the Williams tube was a few hundred to around a thousand bits, but it was much smaller, faster, and more power-efficient than using individual vacuum tube latches. Developed at the University of Manchester in England, the Williams tube provided the medium on which the first electronically stored program was implemented in the Manchester Small-Scale Experimental Machine (SSEM) computer, which first successfully ran a program on 21 June 1948.[5] In fact, rather than the Williams tube memory being designed for the SSEM, the SSEM was a testbed to demonstrate the reliability of the memory.[6][7]
Magnetic-core memory was invented in 1947 and developed up until the mid-1970s. It became a widespread form of random-access memory, relying on an array of magnetized rings. By changing the sense of each ring's magnetization, data could be stored with one bit stored per ring. Since every ring had a combination of address wires to select and read or write it, access to any memory location in any sequence was possible.
Magnetic core memory was the standard form of memory system until displaced by solid-state memory in integrated circuits, starting in the early 1970s. Robert H. Dennard invented dynamic random-access memory (DRAM) in 1968; this allowed replacement of a 4 or 6-transistor latch circuit by a single transistor for each memory bit, greatly increasing memory density at the cost of volatility. Data was stored in the tiny capacitance of each transistor, and had to be periodically refreshed every few milliseconds before the charge could leak away.
Prior to the development of integrated read-only memory (ROM) circuits, permanent (or read-only) random-access memory was often constructed using diode matrices driven by address decoders, or specially wound core rope memory planes.

Types of RAM

The two main forms of modern RAM are static RAM (SRAM) and dynamic RAM (DRAM). In SRAM, a bit of data is stored using the state of a six transistor memory cell. This form of RAM is more expensive to produce, but is generally faster and requires less power than DRAM and, in modern computers, is often used as cache memory for the CPU. DRAM stores a bit of data using a transistor and capacitor pair, which together comprise a DRAM memory cell. The capacitor holds a high or low charge (1 or 0, respectively), and the transistor acts as a switch that lets the control circuitry on the chip read the capacitor's state of charge or change it. As this form of memory is less expensive to produce than static RAM, it is the predominant form of computer memory used in modern computers.
Both static and dynamic RAM are considered volatile, as their state is lost or reset when power is removed from the system. By contrast, read-only memory (ROM) stores data by permanently enabling or disabling selected transistors, such that the memory cannot be altered. Writeable variants of ROM (such as EEPROM and flash memory) share properties of both ROM and RAM, enabling data to persist without power and to be updated without requiring special equipment. These persistent forms of semiconductor ROM include USB flash drives, memory cards for cameras and portable devices, etc. ECC memory (which can be either SRAM or DRAM) includes special circuitry to detect and/or correct random faults (memory errors) in the stored data, using parity bits or error correction code.
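To make the ECC idea concrete, here is a minimal, hypothetical Python sketch of the simplest error-detecting scheme mentioned above, a single parity bit per word. Real ECC modules use stronger SECDED (single-error-correct, double-error-detect) codes computed in hardware, so this only illustrates the principle; all names in it are made up for the example.

```python
# Minimal sketch of the parity idea behind ECC memory (illustrative only;
# real ECC DIMMs use SECDED Hamming codes over wide words, in hardware).

def parity_bit(word: int, width: int = 8) -> int:
    """Even parity: 1 if the word has an odd number of 1 bits."""
    return bin(word & ((1 << width) - 1)).count("1") & 1

def store(word: int):
    return word, parity_bit(word)               # data plus its check bit

def check(word: int, stored_parity: int) -> bool:
    return parity_bit(word) == stored_parity    # False => a bit flip was detected

data, p = store(0b1011_0010)
corrupted = data ^ 0b0000_1000                  # simulate a single-bit memory error
print(check(data, p))        # True  - no error
print(check(corrupted, p))   # False - error detected (but not correctable)
```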
In general, the term RAM refers solely to solid-state memory devices (either DRAM or SRAM), and more specifically the main memory in most computers. In optical storage, the term DVD-RAM is somewhat of a misnomer since, unlike CD-RW or DVD-RW, it does not need to be erased before reuse. Nevertheless, a DVD-RAM behaves much like a hard disk drive, if somewhat slower.

Memory hierarchy

Main article: Memory hierarchy
One can read and over-write data in RAM. Many computer systems have a memory hierarchy consisting of processor registers, on-die SRAM caches, external caches, DRAM, paging systems, and virtual memory or swap space on a hard drive. This entire pool of memory may be referred to as "RAM" by many developers, even though the various subsystems can have very different access times, violating the original concept behind the random access term in RAM. Even within a hierarchy level such as DRAM, the specific row, column, bank, rank, channel, or interleave organization of the components makes the access time variable, although not to the extent that access to rotating storage media or tape is variable. The overall goal of using a memory hierarchy is to obtain the highest possible average access performance while minimizing the total cost of the entire memory system (generally, the memory hierarchy follows the access time, with the fast CPU registers at the top and the slow hard drive at the bottom).
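As a rough illustration of why access times vary even within one level of the hierarchy, the following sketch (assuming Python with the third-party NumPy package installed) times a sequential pass over a large array against the same accesses in random order. The exact ratio depends on the machine, but the random gather is typically several times slower because it defeats caching and prefetching.

```python
# Rough illustration (not a rigorous benchmark) that access time within "RAM"
# depends on access pattern: a sequential pass is cache- and prefetch-friendly,
# while randomly ordered accesses are not, even though both touch the same bytes.
import time
import numpy as np

N = 20_000_000
data = np.arange(N, dtype=np.int64)
sequential = np.arange(N)                 # 0, 1, 2, ... in order
random_order = np.random.permutation(N)   # the same indices, shuffled

def timed_gather(indices):
    start = time.perf_counter()
    total = data[indices].sum()           # gather, then reduce
    return time.perf_counter() - start, total

t_seq, _ = timed_gather(sequential)
t_rand, _ = timed_gather(random_order)
print(f"sequential: {t_seq:.3f} s   random: {t_rand:.3f} s")
```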
In many modern personal computers, the RAM comes in an easily upgraded form of modules called memory modules or DRAM modules about the size of a few sticks of chewing gum. These can quickly be replaced should they become damaged or when changing needs demand more storage capacity. As suggested above, smaller amounts of RAM (mostly SRAM) are also integrated in the CPU and other ICs on the motherboard, as well as in hard-drives, CD-ROMs, and several other parts of the computer system.

Other uses of RAM

In addition to serving as temporary storage and working space for the operating system and applications, RAM is used in numerous other ways.

Virtual memory

Main article: virtual memory
Most modern operating systems employ a method of extending RAM capacity, known as "virtual memory". A portion of the computer's hard drive is set aside for a paging file or a scratch partition, and the combination of physical RAM and the paging file form the system's total memory. (For example, if a computer has 2 GB of RAM and a 1 GB page file, the operating system has 3 GB total memory available to it.) When the system runs low on physical memory, it can "swap" portions of RAM to the paging file to make room for new data, as well as to read previously swapped information back into RAM. Excessive use of this mechanism results in thrashing and generally hampers overall system performance, mainly because hard drives are far slower than RAM.
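The arithmetic in the example above (physical RAM plus page file equals total memory) can be checked on a running system. This is only a sketch and assumes the third-party psutil package is installed; it simply adds the two pools together.

```python
# Sketch of the "physical RAM + page file = total memory" arithmetic,
# using the third-party psutil package (pip install psutil).
import psutil

ram = psutil.virtual_memory().total      # bytes of physical RAM
swap = psutil.swap_memory().total        # bytes of page file / swap space

gib = 1024 ** 3
print(f"physical RAM   : {ram / gib:.1f} GiB")
print(f"swap / pagefile: {swap / gib:.1f} GiB")
print(f"total available to the OS: {(ram + swap) / gib:.1f} GiB")
```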

RAM disk

Main article: RAM drive
Software can "partition" a portion of a computer's RAM, allowing it to act as a much faster hard drive that is called a RAM disk. A RAM disk loses the stored data when the computer is shut down, unless memory is arranged to have a standby battery source.
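A quick way to see the speed difference a RAM disk makes is to time the same write against a RAM-backed filesystem and an ordinary disk path. The sketch below assumes a Linux system where /dev/shm is a tmpfs (RAM-backed) mount; the file names and paths are only examples, and /tmp may itself be RAM-backed on some distributions.

```python
# Compare writing the same data to a RAM-backed filesystem and to disk.
# Assumes Linux, where /dev/shm is a tmpfs mount held in RAM; substitute a
# path on a real disk for the second test if /tmp is also tmpfs on your system.
import os
import time

payload = os.urandom(200 * 1024 * 1024)   # 200 MB of data to write

def timed_write(path):
    start = time.perf_counter()
    with open(path, "wb") as f:
        f.write(payload)
        f.flush()
        os.fsync(f.fileno())               # force the data out of OS caches
    elapsed = time.perf_counter() - start
    os.remove(path)
    return elapsed

print("RAM disk (/dev/shm):", timed_write("/dev/shm/ramdisk_test.bin"))
print("disk path (/tmp)   :", timed_write("/tmp/ramdisk_test.bin"))
```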

Shadow RAM

Sometimes, the contents of a relatively slow ROM chip are copied to read/write memory to allow for shorter access times. The ROM chip is then disabled while the initialized memory locations are switched in on the same block of addresses (often write-protected). This process, sometimes called shadowing, is fairly common in both computers and embedded systems.
As a common example, the BIOS in typical personal computers often has an option called “use shadow BIOS” or similar. When enabled, functions relying on data from the BIOS’s ROM will instead use DRAM locations (most can also toggle shadowing of video card ROM or other ROM sections). Depending on the system, this may not result in increased performance, and may cause incompatibilities. For example, some hardware may be inaccessible to the operating system if shadow RAM is used. On some systems the benefit may be hypothetical because the BIOS is not used after booting in favor of direct hardware access. Free memory is reduced by the size of the shadowed ROMs.[8]

Recent developments

Several new types of non-volatile RAM, which will preserve data while powered down, are under development. The technologies used include carbon nanotubes and approaches utilizing tunnel magnetoresistance. Among first-generation MRAM, a 128 KiB (128 × 2¹⁰ bytes) chip was manufactured with 0.18 µm technology in the summer of 2003. In June 2004, Infineon Technologies unveiled a 16 MiB (16 × 2²⁰ bytes) prototype again based on 0.18 µm technology. There are two second-generation techniques currently in development: thermal-assisted switching (TAS),[9] which is being developed by Crocus Technology, and spin-transfer torque (STT), on which Crocus, Hynix, IBM, and several other companies are working.[10] Nantero built a functioning carbon nanotube memory prototype 10 GiB (10 × 2³⁰ bytes) array in 2004. Whether some of these technologies will eventually be able to take a significant market share from DRAM, SRAM, or flash-memory technology, however, remains to be seen.
Since 2006, "solid-state drives" (based on flash memory) with capacities exceeding 256 gigabytes and performance far exceeding traditional disks have become available. This development has started to blur the distinction between traditional random-access memory and "disks", dramatically reducing the difference in performance.
Some kinds of random-access memory, such as "EcoRAM", are specifically designed for server farms, where low power consumption is more important than speed.[11]

Memory wall

The "memory wall" is the growing disparity of speed between CPU and memory outside the CPU chip. An important reason for this disparity is the limited communication bandwidth beyond chip boundaries. From 1986 to 2000, CPU speed improved at an annual rate of 55% while memory speed only improved at 10%. Given these trends, it was expected that memory latency would become an overwhelming bottleneck in computer performance.[12]
CPU speed improvements slowed significantly partly due to major physical barriers and partly because current CPU designs have already hit the memory wall in some sense. Intel summarized these causes in a 2005 document.[13]
“First of all, as chip geometries shrink and clock frequencies rise, the transistor leakage current increases, leading to excess power consumption and heat... Secondly, the advantages of higher clock speeds are in part negated by memory latency, since memory access times have not been able to keep pace with increasing clock frequencies. Third, for certain applications, traditional serial architectures are becoming less efficient as processors get faster (due to the so-called Von Neumann bottleneck), further undercutting any gains that frequency increases might otherwise buy. In addition, partly due to limitations in the means of producing inductance within solid state devices, resistance-capacitance (RC) delays in signal transmission are growing as feature sizes shrink, imposing an additional bottleneck that frequency increases don't address.”
The RC delays in signal transmission were also noted in Clock Rate versus IPC: The End of the Road for Conventional Microarchitectures which projects a maximum of 12.5% average annual CPU performance improvement between 2000 and 2014.
A different concept is the processor-memory performance gap, which can be addressed by 3D computer chips that reduce the distance between the logic and memory components, which are farther apart in a 2D chip.[14] Memory subsystem design requires a focus on the gap, which is widening over time.[15] The main method of bridging the gap is the use of caches: small amounts of high-speed memory near the processor that hold recently used data and instructions, speeding up their execution when they are needed again frequently. Multiple levels of caching have been developed to deal with the widening of the gap, and the performance of high-speed modern computers relies on evolving caching techniques.[16] These prevent the loss of performance the processor would otherwise suffer, since less time is needed to complete the computations it has been set to perform.[17] The growth in processor speed has outpaced the growth in main-memory access speed by as much as 53%.[18]

Monitor

History

Early electronic computers were fitted with a panel of light bulbs where the state of each particular bulb would indicate the on/off state of a particular register bit inside the computer. This allowed the engineers operating the computer to monitor the internal state of the machine, so this panel of lights came to be known as the 'monitor'. As early monitors were only capable of displaying a very limited amount of information, and were very transient, they were rarely considered for programme output. Instead, a line printer was the primary output device, while the monitor was limited to keeping track of the programme's operation.
As technology developed it was realized that the output of a CRT display was more flexible than a panel of light bulbs and eventually, by giving control of what was displayed to the programme itself, the monitor itself became a powerful output device in its own right.

Technologies

Multiple technologies have been used for computer monitors. Until the 21st century most used cathode ray tubes but they have largely been superseded by LCD monitors.

Cathode ray tube

The first computer monitors used cathode ray tubes (CRTs). Prior to the advent of home computers in the late 1970s, it was common for a video display terminal (VDT) using a CRT to be physically integrated with a keyboard and other components of the system in a single large chassis. The display was monochrome and far less sharp and detailed than on a modern flat-panel monitor, necessitating the use of relatively large text and severely limiting the amount of information that could be displayed at one time. High-resolution CRT displays were developed for specialized military, industrial and scientific applications but they were far too costly for general use.
Some of the earliest home computers (such as the TRS-80 and Commodore PET) were limited to monochrome CRT displays, but color display capability was already a standard feature of the pioneering Apple II, introduced in 1977, and the specialty of the more graphically sophisticated Atari 800, introduced in 1979. Either computer could be connected to the antenna terminals of an ordinary color TV set or used with a purpose-made CRT color monitor for optimum resolution and color quality. Lagging several years behind, in 1981 IBM introduced the Color Graphics Adapter, which could display four colors with a resolution of 320 x 200 pixels, or it could produce 640 x 200 pixels with two colors. In 1984 IBM introduced the Enhanced Graphics Adapter which was capable of producing 16 colors and had a resolution of 640 x 350.[1]
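A small worked calculation shows why the CGA's two modes need the same amount of video memory: four colors take 2 bits per pixel and two colors take 1 bit per pixel, so 320 x 200 at 4 colors and 640 x 200 at 2 colors both occupy 16,000 bytes. The sketch below is only an illustration of that arithmetic.

```python
# Worked arithmetic for the CGA modes above: 4 colors need 2 bits per pixel,
# 2 colors need 1 bit per pixel, so both modes fit in the same framebuffer.
import math

def framebuffer_bytes(width, height, colors):
    bits_per_pixel = math.ceil(math.log2(colors))
    return width * height * bits_per_pixel // 8

print(framebuffer_bytes(320, 200, 4))   # 16000 bytes
print(framebuffer_bytes(640, 200, 2))   # 16000 bytes
```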
By the end of the 1980s color CRT monitors that could clearly display 1024 x 768 pixels were widely available and increasingly affordable. During the following decade maximum display resolutions gradually increased and prices continued to fall. CRT technology remained dominant in the PC monitor market into the new millennium partly because it was cheaper to produce and offered viewing angles close to 180 degrees.[2] CRTs still offer some image quality advantages over LCD displays but improvements to the latter have made them much less obvious. The dynamic range of early LCD panels was very poor, and although text and other motionless graphics were sharper than on a CRT, an LCD characteristic known as pixel lag caused moving graphics to appear noticeably smeared and blurry.

Liquid crystal display

There are multiple technologies that have been used to implement liquid crystal displays (LCD). Throughout the 1990s, the primary use of LCD technology as computer monitors was in laptops where the lower power consumption, lighter weight, and smaller physical size of LCDs justified the higher price versus a CRT. Commonly, the same laptop would be offered with an assortment of display options at increasing price points: (active or passive) monochrome, passive color, or active matrix color (TFT). As volume and manufacturing capability have improved, the monochrome and passive color technologies were dropped from most product lines.
TFT-LCD is a variant of LCD which is now the dominant technology used for computer monitors.[3]
The first standalone LCD displays appeared in the mid-1990s, selling for high prices. As prices declined over a period of years they became more popular, and by 1997 were competing with CRT monitors. Among the first desktop LCD computer monitors were the Eizo L66 in the mid-1990s, the Apple Studio Display in 1998, and the Apple Cinema Display in 1999. In 2003, TFT-LCDs outsold CRTs for the first time, becoming the primary technology used for computer monitors.[2] The main advantages of LCDs over CRT displays are that LCDs consume less power, take up much less space, and are considerably lighter. The now common active matrix TFT-LCD technology also has less flickering than CRTs, which reduces eye strain.[4] On the other hand, CRT monitors have superior contrast, have superior response time, are able to use multiple screen resolutions natively, and show no discernible flicker if the refresh rate is set to a sufficiently high value. LCD monitors now have very high temporal accuracy and can be used for vision research.[5]

Organic light-emitting diode

Organic light-emitting diode (OLED) monitors provide higher contrast and better viewing angles than LCDs but they require more power when displaying documents with white or bright backgrounds. In 2011, a 25-inch (64 cm) OLED monitor cost $7500, but the prices are expected to drop.[6]

Measurements of performance

The performance of a monitor is measured by the following parameters:
  • Luminance is measured in candelas per square meter (cd/m², also called a nit).
  • Aspect ratio is the ratio of the horizontal length to the vertical length. Monitors usually have an aspect ratio of 4:3, 5:4, 16:10 or 16:9.
  • Viewable image size is usually measured diagonally, but the actual widths and heights are more informative since they are not affected by the aspect ratio in the same way. For CRTs, the viewable size is typically 1 in (25 mm) smaller than the tube itself.
  • Display resolution is the number of distinct pixels in each dimension that can be displayed. For a given display size, maximum resolution is limited by dot pitch.
  • Dot pitch is the distance between sub-pixels of the same color in millimeters. In general, the smaller the dot pitch, the sharper the picture will appear (a related pixels-per-inch calculation is sketched after this list).
  • Refresh rate is the number of times in a second that a display is illuminated. Maximum refresh rate is limited by response time.
  • Response time is the time a pixel in a monitor takes to go from active (white) to inactive (black) and back to active (white) again, measured in milliseconds. Lower numbers mean faster transitions and therefore fewer visible image artifacts.
  • Contrast ratio is the ratio of the luminosity of the brightest color (white) to that of the darkest color (black) that the monitor is capable of producing.
  • Power consumption is measured in watts.
  • Delta-E: Color accuracy is measured in delta-E; the lower the delta-E, the more accurate the color representation. A delta-E of below 1 is imperceptible to the human eye. Delta-Es of 2 to 4 are considered good and require a sensitive eye to spot the difference.
  • Viewing angle is the maximum angle at which images on the monitor can be viewed, without excessive degradation to the image. It is measured in degrees horizontally and vertically.
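The pixels-per-inch sketch referenced in the dot pitch item above derives pixel density and the implied pixel spacing from a display's resolution and diagonal size. True dot pitch is a property of the panel's sub-pixel layout, so this is only an approximation, and the 24-inch 1080p monitor used here is just an example.

```python
# Pixel density (PPI) and the implied spacing between pixel centres, from
# the addressable resolution and the diagonal size. Not the same thing as
# the panel's physical dot pitch, but closely related.
import math

def pixel_density(h_pixels, v_pixels, diagonal_inches):
    diagonal_pixels = math.hypot(h_pixels, v_pixels)
    ppi = diagonal_pixels / diagonal_inches
    pitch_mm = 25.4 / ppi                  # millimetres between pixel centres
    return ppi, pitch_mm

ppi, pitch = pixel_density(1920, 1080, 24)   # a common 24-inch 1080p monitor
print(f"{ppi:.0f} PPI, {pitch:.3f} mm pixel spacing")   # ~92 PPI, ~0.277 mm
```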

Size

Main article: Display size
For any rectangular section on a round tube, the diagonal measurement is also the diameter of the tube.
The area, height and width of displays with identical diagonal measurements vary depending on the aspect ratio.
On two-dimensional display devices such as computer monitors the display size or viewable image size is the actual amount of screen space that is available to display a picture, video or working space, without obstruction from the case or other aspects of the unit's design. The main measurements for display devices are: width, height, total area and the diagonal.
The size of a display is usually given by monitor manufacturers as the diagonal, i.e. the distance between two opposite screen corners. This method of measurement is inherited from the method used for the first generation of CRT television, when picture tubes with circular faces were in common use. Being circular, only their diameter was needed to describe their size. Since these circular tubes were used to display rectangular images, the diagonal measurement of the rectangle was equivalent to the diameter of the tube's face. This method continued even when cathode ray tubes were manufactured as rounded rectangles; it had the advantage of being a single number specifying the size, and was not confusing when the aspect ratio was universally 4:3.
The estimation of the monitor size by the distance between opposite corners does not take into account the display aspect ratio, so that, for example, a 16:9 21-inch (53 cm) widescreen display has less area than a 21-inch (53 cm) 4:3 screen. The 4:3 screen has dimensions of 16.8 in × 12.6 in (43 cm × 32 cm) and an area of 211 sq in (1,360 cm²), while the widescreen is 18.3 in × 10.3 in (46 cm × 26 cm), 188 sq in (1,210 cm²).
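The 21-inch comparison above follows directly from the diagonal and the aspect ratio; the short sketch below reproduces those width, height, and area figures.

```python
# Width, height and area of a display from its diagonal and aspect ratio,
# reproducing the 21-inch 4:3 versus 16:9 comparison above.
import math

def dimensions(diagonal, aspect_w, aspect_h):
    diag_units = math.hypot(aspect_w, aspect_h)   # diagonal in aspect-ratio units
    width = diagonal * aspect_w / diag_units
    height = diagonal * aspect_h / diag_units
    return width, height, width * height

for ratio in [(4, 3), (16, 9)]:
    w, h, area = dimensions(21, *ratio)
    print(f"{ratio[0]}:{ratio[1]}  {w:.1f} in x {h:.1f} in, {area:.1f} sq in")
# 4:3  -> 16.8 in x 12.6 in, ~211.7 sq in
# 16:9 -> 18.3 in x 10.3 in, ~188.3 sq in
```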

Aspect ratio

Main article: Display aspect ratio
Until about 2003, most computer monitors had a 4:3 aspect ratio and some had 5:4. Between 2003 and 2006, monitors with 16:9 and mostly 16:10 (8:5) aspect ratios became commonly available, first in laptops and later also in standalone monitors. Reasons for this transition included productive uses for such monitors beyond widescreen computer game play and movie viewing, such as displaying two standard letter pages side by side in a word processor, or showing large CAD drawings and the CAD application's menus at the same time.[7][8] In 2008, 16:10 became the most commonly sold aspect ratio for LCD monitors, and in the same year 16:10 was the mainstream standard for laptops and notebook computers.[9]
In 2010 the computer industry started to move from 16:10 to 16:9 because 16:9 had been chosen as the standard high-definition television display format, and because 16:9 panels were cheaper to manufacture. Eventually, monitors with non-HD resolutions such as 1920x1200 were no longer produced.[10]
In 2011 non-widescreen displays with 4:3 aspect ratios were only being manufactured in small quantities. According to Samsung this was because the "Demand for the old 'Square monitors' has decreased rapidly over the last couple of years," and "I predict that by the end of 2011, production on all 4:3 or similar panels will be halted due to a lack of demand."[10]

Resolution

Main article: Display resolution
The resolution of computer monitors has increased over time, from 320x200 in the early 1980s to 800x600 in the late 1990s. Since 2009, the most commonly sold resolution for computer monitors has been 1920x1080.[11] Before 2013, top-end consumer products were limited to 2560x1600 at 30 in (76 cm), excluding Apple products.[12] Apple introduced 2880x1800 with the Retina MacBook Pro at 15.4 in (39 cm) on June 12, 2012, and introduced a 5120x2880 Retina iMac at 27 in (69 cm) on October 16, 2014. By 2015, all major display manufacturers had released 3840x2160 resolution displays.

Additional features

Power saving

Most modern monitors will switch to a power-saving mode if no video-input signal is received. This allows modern operating systems to turn off a monitor after a specified period of inactivity. This also extends the monitor's service life.
Some monitors will also switch themselves off after a time period on standby.
Most modern laptops provide a method of screen dimming after periods of inactivity or when the battery is in use. This extends battery life and reduces wear.

Integrated accessories

Many monitors have other accessories (or connections for them) integrated. This places standard ports within easy reach and eliminates the need for a separate hub, camera, microphone, or set of speakers. These monitors have microprocessors that contain codec information, Windows interface drivers, and other small pieces of software that help these features function properly.

Glossy screen

Main article: Glossy display
Some displays, especially newer LCD monitors, replace the traditional anti-glare matte finish with a glossy one. This increases color saturation and sharpness but reflections from lights and windows are very visible. Anti-reflective coatings are sometimes applied to help reduce reflections, although this only mitigates the effect.

Curved designs

In about 2009, NEC/Alienware together with Ostendo Technologies (based in Carlsbad, CA) were offering a curved (concave) 43-inch (110 cm) monitor that allows better viewing angles near the edges, covering 75% of peripheral vision. This monitor had 2880x900 resolution and an LED backlight, and was marketed as suitable both for gaming and office work, though at $6,499 it was rather expensive.[13] As of 2013, the monitor is no longer available, and Ostendo Technologies is no longer pursuing curved monitor technology.

Directional screen

Narrow viewing angle screens are used in some security conscious applications.

3D

Main article: Stereo display
Newer monitors are able to display a different image for each eye, often with the help of special glasses, giving the perception of depth.
Active shutter
Main article: Active shutter 3D system
Polarized
Main article: Polarized 3D system
Autostereoscopic
Main article: Autostereoscopy
A directional screen which generates 3D images without headgear.

Touch screen

Main article: Touchscreen
These monitors use touching of the screen as an input method. Items can be selected or moved with a finger, and finger gestures may be used to convey commands. The screen will need frequent cleaning due to image degradation from fingerprints.

Tablet screens

A combination of a monitor with a graphics tablet. Such devices are typically unresponsive to touch, reacting only to the pressure of one or more special tools. Newer models, however, are able to detect touch from any pressure and often have the ability to detect tilt and rotation as well.
Touch and tablet screens are used on LCD displays as a substitute for the light pen, which can only work on CRTs.

Mounting

Computer monitors are provided with a variety of methods for mounting them depending on the application and environment.

Desktop

A desktop monitor is typically provided with a stand from the manufacturer which lifts the monitor up to a more ergonomic viewing height. The stand may be attached to the monitor using a proprietary method or may use, or be adaptable to, a Video Electronics Standards Association (VESA) standard mount. Using a VESA standard mount allows the monitor to be used with an after-market stand once the original stand is removed. Stands may be fixed or offer a variety of features such as height adjustment, horizontal swivel, and landscape or portrait screen orientation.

VESA mount

The Flat Display Mounting Interface (FDMI), also known as the VESA Mounting Interface Standard (MIS) or colloquially as a VESA mount, is a family of standards defined by the Video Electronics Standards Association for mounting flat-panel monitors, TVs, and other displays to stands or wall mounts.[14] It is implemented on most modern flat-panel monitors and TVs.
For computer monitors, the VESA mount typically consists of four threaded holes on the rear of the display that mate with an adapter bracket.

Rack mount

Rack mount computer monitors are available in two styles and are intended to be mounted into a 19-inch rack:
A fixed 19-inch (48 cm), 4:3 rack mount LCD monitor
Fixed
A fixed rack mount monitor is mounted directly to the rack with the LCD visible at all times. The height of the unit is measured in rack units (RU) and 8U or 9U are most common to fit 17-inch or 19-inch LCD displays. The front sides of the unit are provided with flanges to mount to the rack, providing appropriately spaced holes or slots for the rack mounting screws. A 19-inch diagonal LCD is the largest size that will fit within the rails of a 19-inch rack. Larger LCDs may be accommodated but are 'mount-on-rack' and extend forward of the rack. There are smaller display units, typically used in broadcast environments, which fit multiple smaller LCD displays side by side into one rack mount.
A 1U stowable clamshell 19-inch (48 cm), 4:3 rack mount LCD monitor with keyboard
Stowable
A stowable rack mount monitor is 1U, 2U or 3U high and is mounted on rack slides allowing the display to be folded down and the unit slid into the rack for storage. The display is visible only when the display is pulled out of the rack and deployed. These units may include only a display or may be equipped with a keyboard creating a KVM (Keyboard Video Monitor). Most common are systems with a single LCD display but there are systems providing two or three displays in a single rack mount system.
A panel mount 19-inch (48 cm), 4:3 rack mount LCD monitor

Panel mount

A panel mount computer monitor is intended for mounting into a flat surface with the front of the display unit protruding just slightly. They may also be mounted to the rear of the panel. A flange is provided around the LCD display, on the sides, top and bottom, to allow mounting. This contrasts with a rack mount display where the flanges are only on the sides. The flanges are provided with holes for thru-bolts or may have studs welded to the rear surface to secure the unit in the hole in the panel. A gasket is often provided to form a water-tight seal to the panel, and the front of the LCD will be sealed to the back of the front panel to prevent water and dirt contamination.

Open frame

An open frame monitor provides the LCD monitor and enough supporting structure to hold associated electronics and to minimally support the LCD. Provision will be made for attaching the unit to some external structure for support and protection. Open frame LCD displays are intended to be built in to some other piece of equipment. An arcade video game would be a good example with the display mounted inside the cabinet. There is usually an open frame display inside all end-use displays with the end-use display simply providing an attractive protective enclosure. Some rack mount LCD display manufacturers will purchase desk-top displays, take them apart, and discard the outer plastic parts, keeping the inner open-frame LCD display for inclusion into their product.

Security vulnerabilities

According to an NSA document leaked to Der Spiegel, the NSA sometimes swaps the monitor cables on targeted computers with a bugged monitor cable in order to allow the NSA to remotely see what's displayed on the targeted computer monitor.[15]
Van Eck phreaking is the process of remotely displaying the contents of a CRT or LCD display by detecting its electromagnetic emissions. It is named after Dutch computer researcher Wim van Eck, who in 1985 published the first paper on it, including proof of concept. Phreaking is the process of exploiting telephone networks; the term is used here because of its connection to eavesdropping.