

The Clock
Timing is essential in PC operations. Without some means of synchronization, chaos would ensue. Timing allows the electronic devices in the computer to coordinate and execute all internal commands in the proper order. Timing is achieved by placing a special conductor in the CPU and pulsing it with voltage. Each pulse of voltage received by this conductor is called a "clock cycle." All the switching activity in the computer occurs while the clock is sending a pulse. This process somewhat resembles several musicians using a metronome to synchronize their playing, with all the violinists moving their bows at the same time. Thanks to this synchronization, you get musical phrasing instead of a jumble of notes. Virtually every computer command needs at least two clock cycles. Some commands might require hundreds of clock cycles to process. Figure 4.5 shows an external data bus with a CPU and two devices. Notice that the crystal or clock is attached to the CPU to generate the timing.

Figure 4.5 CPU with clock
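The arithmetic behind clock cycles is straightforward, and a small Python sketch makes it concrete: a clock frequency fixes the length of one cycle, and a command that needs N cycles takes N times that long. The 100 MHz clock and the cycle counts below are illustrative assumptions, not figures from the text.

# Rough illustration of how clock frequency and cycle counts relate to time.
# The specific numbers below are assumed example values.

def cycle_time_ns(clock_mhz):
    """One clock cycle lasts 1/frequency; 1 MHz means one million cycles per second."""
    return 1000.0 / clock_mhz          # nanoseconds per cycle

def command_time_ns(clock_mhz, cycles):
    """A command that needs N clock cycles takes N * cycle_time."""
    return cycles * cycle_time_ns(clock_mhz)

print(cycle_time_ns(100))              # 10.0 ns per cycle at 100 MHz
print(command_time_ns(100, 2))         # a simple 2-cycle command: 20.0 ns
print(command_time_ns(100, 300))       # a long 300-cycle command: 3000.0 ns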

Bus Speed
A bus is simply a circuit that connects one part of the motherboard to another. The more data a bus can handle at one time, the faster it allows information to travel; the speed of the bus, measured in megahertz (MHz), is how many times per second data can move across it. Bus speed usually refers to the speed of the front side bus (FSB), which connects the CPU to the northbridge. FSB speeds can range from 66 MHz to over 800 MHz. Since the CPU reaches the memory controller through the northbridge, FSB speed can dramatically affect a computer's performance. Here are some of the other buses found on a motherboard:

- The back side bus connects the CPU with the Level 2 (L2) cache, also known as secondary or external cache. The processor determines the speed of the back side bus.
- The memory bus connects the northbridge to the memory.
- The IDE or ATA bus connects the southbridge to the disk drives.
- The AGP bus connects the video card to the memory and the CPU. The speed of the AGP bus is usually 66 MHz.
- The PCI bus connects PCI slots to the southbridge. On most systems, the speed of the PCI bus is 33 MHz. Also compatible with PCI is PCI Express, which is much faster than PCI but is still compatible with current software and operating systems. PCI Express is likely to replace both PCI and AGP buses.


The faster a computer's bus speed, the faster it will operate -- to a point. A fast bus speed cannot make up for a slow processor or chipset.
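To see how bus width and bus speed combine, here is a minimal Python sketch estimating peak bus bandwidth. The 64-bit FSB width and the pairing of particular widths with particular clock rates are illustrative assumptions rather than figures from the text.

# Peak bandwidth of a bus = (width in bytes) * (transfers per second).
# Widths and clock rates below are assumed example values.

def peak_bandwidth_mb_s(width_bits, clock_mhz):
    """Bytes moved per second if the bus transfers once per clock cycle."""
    return (width_bits / 8) * clock_mhz    # MB/s, since MHz = millions per second

print(peak_bandwidth_mb_s(64, 66))   # ~528 MB/s for a 66 MHz, 64-bit FSB
print(peak_bandwidth_mb_s(64, 800))  # ~6400 MB/s for an 800 MHz, 64-bit FSB
print(peak_bandwidth_mb_s(32, 33))   # ~132 MB/s for a 32-bit, 33 MHz PCI bus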

The External Data Bus


The external data bus (also known as the external bus or simply the data bus) is the primary route for data in a PC. All data-handling components or optional data devices are connected to it; therefore, any information (code) placed on that bus is available to all devices connected to the computer. Figure 4.1 shows a CPU attached to its motherboard. The motherboard is the main circuit board; it contains the external data bus and connections for expansion devices that are not part of the board's basic design. The expansion slots act as "on ramps" to the external bus. Expansion cards, once commonly known as "daughter cards," are placed in slots on the motherboard. Other forms of on ramp are the slots that hold memory and the sets of pins used to attach drive cables. Connectors on the motherboard grant access to the data bus for keyboards, mouse devices, and peripheral devices such as modems and printers through COM and LPT ports.

To understand how a computer moves data between components, visualize each device on the data bus (including the CPU) connected to the bus by a collection of on/off switches. By "looking at" which conductors have power and which do not, a device can read the data as it is sent by another device. The on/off state of a line gives the value 1 (on) or 0 (off). The wires "spell out" a code of binary numbers that the computer interprets and then routes to another system component, or to the user by means of an output device such as a monitor or printer. Communication occurs when voltage is properly applied to, or read from, any of the conductors by the system. Figure 4.2 illustrates a data bus connected to a CPU and a device.

Figure 4.2 External data bus

Coded messages can be sent into or out of any device connected to the external data bus. Think of the data bus as a large highway with parallel lanes. Extending that analogy, bits are like cars traveling side by side; each carries part of a coded message. Microprocessors turn the coded messages into data that performs a meaningful task for the computer's user.
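As a rough illustration of how a pattern of powered and unpowered lines spells out a code, the sketch below packs eight assumed line states into a byte and interprets it. The particular pattern and the ASCII reading are examples only, not something taken from the text.

# Eight data lines, each either powered (1) or not (0).
# Reading the lines together yields one byte of coded information.
# The particular pattern below is an assumed example.

lines = [0, 1, 0, 0, 0, 0, 0, 1]        # most significant line first

value = 0
for state in lines:
    value = (value << 1) | state         # shift previous bits left, add this line

print(value)        # 65
print(chr(value))   # 'A', assuming the devices agree the code is ASCII text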

Address Bus
The word "location" is italicized in the last paragraph to underscore the importance of location in PC memory operations. The content of RAM is changing all the time, as programs and the computer itself use portions of it to note, calculate, and hold results of actions. It is essential for the system to know what memory is assigned to which task and when that memory is free for a new use. To do so, the system has to have a way to address segments of memory and to quickly change the holdings in that position. The portion of the PC that does this is the address bus. Think of the address bus as a large, virtual table in which the columns are individual bits (like letters) and each row contains a string of bits (making up a word). The actual lengths of these words will vary depending on the number of bits the address bus can handle in a single pass. Figure 4.6 shows a table containing 1s and 0s. Each segment is given an address, just like the one that identifies a home or post office box. The system uses this address to send data to or retrieve data from memory. Like all the other buses in a PC, this one is a collection of conductors. It links the physical memory to the system and moves signals as memory is used. The number of conductors in the address bus determines the maximum amount of memory that can be used (memory that is addressable) by the CPU. Remember that computers count in binary notation. Each binary digitin this case, a conductor-that is added to the left will double the number of possible combinations.

Figure 4.6 Memory spreadsheet

Early address buses used eight conductors and, therefore, 256 (2^8) combinations of code were possible. The maximum number of patterns a system can generate determines how much RAM the bus can address. The 8088 used 20 address conductors and could address up to 1,048,576 bytes (2^20) of memory locations. Today's PCs can address a lot more than that, and, in many cases, the actual limiting factor is not the number of patterns but the capacity of the motherboard to socket memory chips. In all cases, the total amount of addressable memory is 2^X, where X is the number of conductors.

The CPU does not connect directly to the memory bus, but sends requests and obtains results through the system's memory controller. This circuitry acts as both postmaster and translator, providing the proper strings of data in the right order, at the right time, and in a form the CPU can use. As mentioned before, any write or read action requires at least two clock cycles to execute. (It can require more clock cycles on systems whose memory is not tuned to the maximum system clock speed; in that case, the PC has to use additional clock cycles while it waits for the memory to be ready for the next part of the operation.) Figure 4.7 shows a diagram of the process with the CPU and RAM stack on the external data bus. The address bus is connected to the memory controller, which fetches and places data in memory.

Figure 4.7 CPU and RAM
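The 2^X rule can be checked with a few lines of Python. Only the 20-conductor 8088 case comes from the text above; the other line counts are illustrative.

# Addressable memory = 2 ** (number of address conductors), in bytes.

def addressable_bytes(address_lines):
    return 2 ** address_lines

for lines in (8, 20, 32):
    print(lines, "lines ->", addressable_bytes(lines), "bytes")

# 8 lines  -> 256 bytes
# 20 lines -> 1048576 bytes (1 MB, the 8088's limit)
# 32 lines -> 4294967296 bytes (4 GB)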

Cache Memory
A cache is a component that transparently stores data so that future requests for that data can be served faster. The data that is stored within a cache might be values that have been computed earlier or duplicates of original values that are stored elsewhere. If requested data is contained in the cache (cache hit), this request can be served by simply reading the cache, which is comparatively faster. Otherwise (cache miss), the data has to be recomputed or fetched from its original storage location, which is comparatively slower. Hence, the greater the number of requests that can be served from the cache, the faster the overall system performance becomes. To be cost efficient and to enable an efficient use of data, caches are relatively small. Nevertheless, caches have proven themselves in many areas of computing because access patterns in typical computer applications have locality of reference. References exhibit temporal locality if data is requested again that has been recently requested already. References exhibit spatial locality if data is requested that is physically stored close to data that has been requested already.
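A tiny lookup-before-compute routine captures the hit/miss idea described above. The capacity, the stand-in "slow" fetch, and the eviction rule are all assumptions made for this sketch, not details from the text.

# Minimal cache sketch: serve repeated requests from a small store (hits)
# and fall back to the slow source only on misses.

cache = {}            # key -> stored result
CACHE_CAPACITY = 4    # assumed small capacity, like a real cache

def slow_fetch(key):
    """Stand-in for the comparatively slow original storage or computation."""
    return key * key

def lookup(key):
    if key in cache:                      # cache hit: fast path
        return cache[key], "hit"
    value = slow_fetch(key)               # cache miss: slow path
    if len(cache) >= CACHE_CAPACITY:      # keep the cache small
        cache.pop(next(iter(cache)))      # evict the oldest entry (simplistic)
    cache[key] = value
    return value, "miss"

for k in (3, 5, 3, 7):                    # the repeated 3 shows temporal locality
    print(k, lookup(k))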

Level 1 (L1) Cache Memory


The Level 1 cache, or primary cache, is on the CPU and is used for temporary storage of instructions and data organised in blocks of 32 bytes. Primary cache is the fastest form of storage. Because it is built into the chip with a zero wait-state (delay) interface to the processor's execution unit, it is limited in size. Level 1 cache is implemented using static RAM (SRAM) and until recently was traditionally 16 KB in size. SRAM can hold data without external assistance for as long as power is supplied to the circuit. Its storage element is a flip-flop, in which one transistor controls the output of another; the circuit is so called because it has two stable states it can flip between. This contrasts with dynamic RAM (DRAM), which must be refreshed many times per second in order to hold its data contents. SRAM is manufactured in a way rather similar to processors: highly integrated transistor patterns photo-etched into silicon. Each SRAM bit comprises between four and six transistors, which is why SRAM takes up much more space than DRAM, which uses only one transistor (plus a capacitor). This, plus the fact that SRAM costs several times as much as DRAM, explains why it is not used more extensively in PC systems.
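Because L1 storage is organised in 32-byte blocks, nearby addresses land in the same block, which is what makes spatial locality pay off. The short sketch below splits an address into a block number and an offset within the block; the example addresses are arbitrary.

# L1 cache handles memory in 32-byte blocks.

BLOCK_SIZE = 32

def block_and_offset(address):
    return address // BLOCK_SIZE, address % BLOCK_SIZE

for addr in (0x1000, 0x1004, 0x101F, 0x1020):   # arbitrary example addresses
    block, offset = block_and_offset(addr)
    print(hex(addr), "-> block", block, "offset", offset)

# 0x1000, 0x1004 and 0x101F share block 128; 0x1020 starts block 129.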

Level 2 (L2) Cache Memory


Level 2 cache typically comes in two sizes, 256 KB or 512 KB, and can be found soldered onto the motherboard, in a Card Edge Low Profile (CELP) socket, or, more recently, on a COAST ("cache on a stick") module. The latter resembles a SIMM but is a little shorter and plugs into a COAST socket, which is normally located close to the processor and resembles a PCI expansion slot. The Pentium Pro deviated from this arrangement, siting the Level 2 cache on the processor chip itself.

The aim of the Level 2 cache is to supply stored information to the processor without any delay (wait-state). For this purpose, the bus interface of the processor has a special transfer protocol called burst mode. A burst cycle consists of four data transfers, where only the address of the first 64 bits is output on the address bus. The most common Level 2 cache is synchronous pipelined burst. A synchronous cache requires a chipset, such as Triton, that supports it. It can provide a 3-5% increase in PC performance because it is timed to a clock cycle. This is achieved by specialised SRAM technology developed to allow zero wait-state access for consecutive burst read cycles. Pipelined burst static RAM (PB SRAM) has an access time in the range of 4.5 to 8 nanoseconds (ns) and allows a transfer timing of 3-1-1-1 for bus speeds up to 133 MHz.
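A quick calculation shows what a 3-1-1-1 transfer timing means in practice. The 64-bit transfer width and the 66 MHz bus clock used here are illustrative assumptions, not figures from the text.

# A 3-1-1-1 burst: the first transfer takes 3 bus cycles, the next three
# take 1 cycle each, so 4 transfers complete in 6 cycles.
# The 64-bit width and 66 MHz clock are assumed example values.

BURST_TIMING = (3, 1, 1, 1)
TRANSFER_BITS = 64
BUS_MHZ = 66

cycles = sum(BURST_TIMING)                              # 6 cycles per burst
bytes_moved = len(BURST_TIMING) * TRANSFER_BITS // 8    # 32 bytes per burst
burst_time_ns = cycles * 1000 / BUS_MHZ                 # cycle time = 1000/MHz ns

print(cycles, "cycles to move", bytes_moved, "bytes")
print(round(burst_time_ns, 1), "ns per burst at", BUS_MHZ, "MHz")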

Level 3 (L3) Cache Memory

Level 3 or L3 cache is specialized memory that works hand-in-hand with L1 and L2 cache to improve computer performance. L1, L2, and L3 cache are central processing unit (CPU) caches, versus other types of caches in the system such as the hard disk cache. CPU cache caters to the needs of the microprocessor by anticipating data requests so that processing instructions are provided without delay. CPU cache is faster than random access memory (RAM) and is designed to prevent bottlenecks in performance. When a request is made of the system, the CPU requires instructions for executing that request. The CPU works many times faster than system RAM, so to cut down on delays, L1 cache keeps bits of data at the ready that it anticipates will be needed. L1 cache is very small, which allows it to be very fast. If the instructions aren't present in L1 cache, the CPU checks L2, a slightly larger pool of cache with a little longer latency. With each cache miss it looks to the next level of cache. L3 cache can be far larger than L1 and L2, and even though it's also slower, it's still a lot faster than fetching from RAM.
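The check-L1, then L2, then L3, then RAM sequence can be sketched as a simple walk down the hierarchy. The contents, sizes, and latencies below are rough illustrative figures, not values from the text.

# Walk the cache hierarchy: try each level in turn, pay its latency,
# and stop at the first level that holds the data.
# Contents and latencies are assumed, order-of-magnitude examples.

levels = [
    ("L1",  {0x10, 0x20},              1),    # smallest, fastest
    ("L2",  {0x10, 0x20, 0x30},        4),
    ("L3",  {0x10, 0x20, 0x30, 0x40}, 12),
    ("RAM", None,                    100),    # always holds the data
]

def access(address):
    total_cycles = 0
    for name, contents, latency in levels:
        total_cycles += latency
        if contents is None or address in contents:
            return name, total_cycles

print(access(0x20))   # ('L1', 1)    hit in L1
print(access(0x40))   # ('L3', 17)   misses in L1 and L2, hit in L3
print(access(0x50))   # ('RAM', 117) misses in every cache level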

Cooling Fans and Heat Sinks


A computer fan is any fan inside, or attached to, a computer case used for cooling purposes. It may draw cooler air into the case from the outside, expel warm air from inside, or move air across a heatsink to cool a particular component. The use of fans to cool a computer is an example of active cooling.

Heatsinks help guarantee the best temperature conditions for electronic components. They intensify the heat exchange between cooled elements, such as the CPU or video chipset, and their environment. A heatsink must have good thermal conductance and low thermal resistance. Thermal conductance determines the speed of heat propagation. For a heatsink, thermal conductance must be as high as possible, because the area of the cooled object is often several times smaller than the area of the heatsink base. If thermal conductance is low, the heat from the cooled object won't be distributed evenly over the volume of the heatsink, including all its fins; one part of the heatsink will be very hot while another part remains cold. With high thermal conductance, the heat is dispersed with equal efficiency from the entire heatsink area.
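One way to make "low thermal resistance" concrete is the usual steady-state estimate of component temperature from dissipated power and thermal resistance. The wattage, ambient temperature, and resistance figures below are assumptions for illustration, not values from the text.

# Rough steady-state estimate: component temperature rises above ambient by
# (power dissipated) * (total thermal resistance), in degrees C per watt.
# All numbers are assumed example values.

def cpu_temperature(power_w, ambient_c, resistance_c_per_w):
    return ambient_c + power_w * resistance_c_per_w

print(cpu_temperature(65, 25, 0.3))   # effective heatsink: about 44.5 C
print(cpu_temperature(65, 25, 1.0))   # poor heatsink: about 90.0 C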

Liquid Cooling
Over the past few years, CPU speeds have been increasing at a dramatic rate. To reach these speeds, CPUs have more transistors, draw more power, and run at higher clock rates, all of which produces more heat inside the computer. Heat sinks have been added to all modern PC CPUs to help dissipate some of the processor's heat into the surrounding environment, but as the fans get louder and larger, new solutions are being looked into, namely liquid cooling. Liquid cooling is essentially a radiator for the CPU inside the computer. Just like a radiator for a car, a liquid cooling system circulates a liquid through a heat sink attached to the processor. As the liquid passes through the heat sink, heat is transferred from the hot processor to the cooler liquid. The hot liquid then moves out to a radiator at the back of the case and transfers the heat to the ambient air outside the case. The cooled liquid then travels back through the system to the CPU to continue the process.

Thermal Compounds
Over the years, heat has become a greater problem for computer components. As processor speeds have increased, the amount of waste heat produced by the circuits has required a more active approach to dissipating it. In the early days of the Pentium CPUs from Intel, the only additional cooling required was an aluminum heatsink attached to the processor with thermal tape or epoxy. Eventually this was insufficient to properly cool the processors, so active cooling was applied by adding a fan to the heatsink to increase the rate of heat dissipation. One of the problems with transferring heat between the processor and the heatsink has to do with the thermal interface. Neither the heatsink nor the processor has a completely smooth surface, so air pockets exist between the two materials, and air has a very high thermal resistance; it is a poor heat conductor. Many modern double-paned windows use air's poor heat conductivity as an insulator, but that is the opposite of the effect you want with computer components. To help alleviate this problem, thermal compounds are used to fill in the gaps between the two surfaces. Four types of thermal compounds are used: thermal tape, thermal pads, thermal grease, and thermal epoxy.
