
Difference Between RAM and ROM

RAM and ROM are both types of memory used in a computer to make it fast and to enable it to access information stored in the computer. Every computer comes with a certain amount of physical memory, which consists of chips that hold data. This memory is referred to as Random Access Memory, or RAM. RAM is hardware that stores the operating system, application programs, and currently running processes, and it can be accessed randomly, i.e. in any order that the user desires. Data in RAM stays there only as long as the computer is running and is lost as soon as the computer is switched off. RAM usually comes in the form of modules of different sizes such as 256MB, 512MB, 1GB, 2GB etc. Computers are designed so that this RAM can be increased up to a certain capacity. ROM, on the other hand, stands for Read Only Memory. Every computer comes fitted with this memory, which holds the instructions for starting up the computer. It is memory that has data written on it permanently and is not reusable. However, there are certain kinds of read-only memory that can be erased and rewritten; these are called Erasable Programmable Read Only Memory, or EPROM. The operating system itself is typically loaded into RAM from a bootable medium such as a CD-ROM or floppy disk. RAM has a lot more flexibility than ROM due to its random-access structure: data can be read from any part of the memory, and more than one piece of data can be accessed at a time. ROM does not allow this flexibility and is mainly used for firmware. Since it is read-only, ROM is a very safe means of storage because it is protected from alteration. The similarity between RAM and ROM ends with both being types of memory; beyond that there are glaring differences between the two.

Difference between RAM and ROM

RAM stands for Random Access Memory, while ROM stands for Read Only Memory. RAM is volatile and is erased when the computer is switched off; ROM is non-volatile and generally cannot be written to. RAM is used for both reading and writing, while ROM is used only for reading. RAM needs a flow of electricity to retain information, while ROM holds its contents permanently. RAM is analogous to a blackboard on which information can be written with chalk and erased any number of times, while ROM is permanent and can only be read. One example of ROM is the BIOS (basic input output system), which runs when the computer is switched on and prepares the disk drives and processor to load the OS from disk.

Bits, Bytes, and Words: Each 0 or 1 in the binary system is called a bit, an abbreviation of binary digit. The bit is the basic unit for storing data in computer memory: 0 means off, 1 means on. Notice that since a bit is always either on or off, a bit in computer memory is always storing some kind of data. Since single bits by themselves cannot store all the numbers, letters, and special characters (such as $ and %) that a computer must process, bits are put together in a group called a byte (pronounced "bite"). There are usually 8 bits in a byte. Each byte usually represents one character: a letter, digit, or special character. Computer manufacturers express the capacity of memory and storage in terms of the number of bytes it can hold. The number of bytes can be expressed in kilobytes. Kilo represents 2 to the tenth power, or 1024. Kilobyte is abbreviated KB or simply K. (Sometimes K is used casually to mean 1000, as in "I earned $300K last year".) A kilobyte is 1024 bytes. Thus, the memory of a 640K computer can store 640 x 1024 bytes. Memory capacity may also be expressed in terms of megabytes (1024 x 1024 bytes). One megabyte, abbreviated MB, means roughly one million bytes. With storage devices, manufacturers sometimes express memory amounts in terms of gigabytes (abbreviated GB): billions of bytes.
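For readers who want to check the arithmetic, here is a minimal C sketch; the 640K figure is simply the example from the paragraph above.

```c
#include <stdio.h>

int main(void) {
    /* Powers of two used for storage units */
    unsigned long kilobyte = 1024UL;                          /* 2^10 bytes */
    unsigned long megabyte = 1024UL * 1024UL;                 /* 2^20 bytes */
    unsigned long long gigabyte = 1024ULL * 1024ULL * 1024ULL;/* 2^30 bytes */

    /* The 640K example from the text: 640 x 1024 bytes */
    unsigned long mem_640k = 640UL * kilobyte;

    printf("1 KB = %lu bytes\n", kilobyte);
    printf("1 MB = %lu bytes\n", megabyte);
    printf("1 GB = %llu bytes\n", gigabyte);
    printf("A 640K memory holds %lu bytes\n", mem_640k);
    return 0;
}
```

Running this prints 1024, 1048576, 1073741824, and 655360 bytes respectively, matching the definitions above.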

Bit: Abbreviation of binary digit (0 or 1), the smallest unit of data storage. One bit occupies one storage location. Byte: A group of 8 bits is called a byte. One byte can store one character. Word: A word is a combination of one or more bytes handled together as a single unit for processing, and may thus be 8, 16, 32, or 64 bits. The length of a word varies from machine to machine but is predetermined for each machine. A computer reads and processes all the bits of a word at a time.

Storage Units:
Bit        = 0, 1
8 bits     = 1 Byte
1024 Bytes = 1 Kilobyte
1024 KB    = 1 Megabyte
1024 MB    = 1 Gigabyte
1024 GB    = 1 Terabyte

Definitions
Bit = Binary digIT = 0 or 1
Byte = a sequence of 8 bits = 00000000, 00000001, ..., or 11111111
Word = a sequence of N bits where N = 16, 32, or 64, depending on the computer
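Since the word length varies from machine to machine, one way to see what a particular machine uses is to ask the C compiler. A small sketch follows; the exact numbers printed depend on the platform, so they are illustrative rather than fixed facts.

```c
#include <stdio.h>
#include <limits.h>

int main(void) {
    /* Bits in one byte on this machine (almost always 8) */
    printf("bits per byte : %d\n", CHAR_BIT);

    /* Sizes of common types, in bytes; the "word" handled as a unit
       by the processor is typically the size of a pointer or long */
    printf("char  : %zu byte(s)\n", sizeof(char));
    printf("int   : %zu byte(s)\n", sizeof(int));
    printf("long  : %zu byte(s)\n", sizeof(long));
    printf("void* : %zu byte(s)\n", sizeof(void *));
    return 0;
}
```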

Why Do Computers Use Binary? Computer electronics exist in only one of two states, on or off, and use voltage levels to indicate the present state. These patterns of "on" and "off" stored inside the computer are used to encode numbers using the binary number system, a method of storing ordinary numbers as patterns of 1s and 0s. Basically, binary simplifies information processing. Because there must always be at least two symbols for a processing system to be able to distinguish significance or purpose, binary is the smallest numbering system that can be used. The computer's CPU need only recognise two states, on or off, but (with just a touch of Leibniz's mysticism) from this on-off, yes-no state all things flow: in the same way as a switch must always be open or closed, or an electrical flow on or off, a binary digit must always be one or zero. If switches are then arranged along Boolean guidelines, these two simple digits can create circuits capable of performing both logical and mathematical operations. Reducing decimal to binary does increase the length of the number, a lot, but this is more than made up for by the increase in speed, memory and utilisation. Especially utilisation. Remember, computers aren't always dealing with pure numbers or logic. Pictures and sound must first be reduced to numerical equivalents that, in turn, have to be decoded again for the end result. There are many advantages to binary. Here are four (somewhat overlapping) important reasons for using binary:
1. Simple; easy to build.
2. Unambiguous signals (hence noise immunity).
3. Flawless copies can be made.
4. Anything that can be represented with some sort of pattern can be represented with patterns of bits.
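As a concrete illustration of reducing a number to binary, the short C sketch below prints the bit pattern of a small value; the 8-bit width is just an assumption chosen for the example.

```c
#include <stdio.h>

/* Print the low 'bits' bits of n, most significant first */
static void print_binary(unsigned int n, int bits) {
    for (int i = bits - 1; i >= 0; i--)
        putchar((n >> i) & 1u ? '1' : '0');
    putchar('\n');
}

int main(void) {
    print_binary(5, 8);    /* prints 00000101 */
    print_binary(201, 8);  /* prints 11001001 */
    return 0;
}
```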

CPU cache A CPU cache is a cache used by the central processing unit of a computer to reduce the average time to access memory. The cache is a smaller, faster memory which stores copies of the data from the most frequently used main memory locations. As long as most memory accesses are to cached memory locations, the average latency of memory accesses will be closer to the cache latency than to the latency of main memory. When the processor needs to read from or write to a location in main memory, it first checks whether a copy of that data is in the cache. If so, the processor immediately reads from or writes to the cache, which is much faster than reading from or writing to main memory.

Most modern desktop and server CPUs have at least three independent caches: an instruction cache to speed up executable instruction fetch, a data cache to speed up data fetch and store, and a translation lookaside buffer (TLB) used to speed up virtual-to-physical address translation for both executable instructions and data. The data cache is usually organized as a hierarchy of cache levels (L1, L2, etc.). Cache memory is random access memory (RAM) that a computer microprocessor can access more quickly than it can access regular RAM. As the microprocessor processes data, it looks first in the cache memory and, if it finds the data there (from a previous reading of data), it does not have to do the more time-consuming read from the larger main memory. Cache memory is sometimes described in levels of closeness and accessibility to the microprocessor. An L1 cache is on the same chip as the microprocessor. (For example, the PowerPC 601 processor has a 32 kilobyte level-1 cache built into its chip.) L2 is usually a separate static RAM (SRAM) chip. The main RAM is usually a dynamic RAM (DRAM) chip.

In addition to cache memory, one can think of RAM itself as a cache for hard disk storage, since the contents of RAM come from the hard disk: initially when you turn your computer on and load the operating system (you are loading it into RAM), and later as you start new applications and access new data. RAM can also contain a special area called a disk cache that holds the data most recently read in from the hard disk.
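The check-the-cache-first idea can be sketched in miniature. The C program below models a toy direct-mapped cache; the line size, number of lines, and structure fields are invented for illustration and are far simpler than a real CPU cache.

```c
#include <stdio.h>
#include <stdbool.h>
#include <stdint.h>

#define NUM_LINES 64              /* toy cache: 64 lines            */
#define LINE_SIZE 32              /* 32 bytes per line              */

struct cache_line {
    bool     valid;               /* does this line hold real data? */
    uint32_t tag;                 /* which memory block it holds    */
    uint8_t  data[LINE_SIZE];     /* copy of that block's bytes     */
};

static struct cache_line cache[NUM_LINES];

/* Returns true on a cache hit, false on a miss (on a miss the
   processor would then fetch the line from slower main memory). */
static bool cache_lookup(uint32_t address) {
    uint32_t index = (address / LINE_SIZE) % NUM_LINES;  /* which line  */
    uint32_t tag   = (address / LINE_SIZE) / NUM_LINES;  /* which block */
    return cache[index].valid && cache[index].tag == tag;
}

int main(void) {
    /* Nothing has been cached yet, so any lookup misses */
    printf("hit? %d\n", cache_lookup(0x1234));
    return 0;
}
```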

Programming language A programming language is an artificial language designed to communicate instructions to a machine, particularly a computer. Programming languages can be used to create programs that control the behavior of a machine and/or to express algorithms precisely. The earliest programming languages predate the invention of the computer and were used to direct the behavior of machines such as Jacquard looms and player pianos. Thousands of different programming languages have been created, mainly in the computer field, with many more being created every year. Most programming languages describe computation in an imperative style, i.e., as a sequence of commands, although some languages, such as those that support functional programming or logic programming, use alternative forms of description. A programming language is usually split into the two components of syntax (form) and semantics (meaning). Some languages are defined by a specification document (for example, the C programming language is specified by an ISO standard), while other languages, such as Perl, have a dominant implementation that is used as a reference.

Machine Language refers to the "ones and zeroes" that digital processors use as instructions. Give it one pattern of bits (such as 11001001) and it will add two numbers; give it a different pattern (11001010) and it will instead subtract one from the other, in as little as a billionth of a second. The instruction sets within a CPU family are usually compatible, but not between product lines. For example, Intel's x86/Pentium language and Motorola's PPC/Gx language are completely incompatible. Machine language is painfully difficult to work with and almost never worth the effort anymore. Instead, programmers use the higher-level languages described below, which are either compiled or interpreted into machine language by the computer itself.
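To make the idea of bit patterns selecting operations concrete, here is a toy C sketch. The opcodes are borrowed from the patterns quoted above purely as an illustration; real instruction sets such as x86 or PowerPC encode instructions quite differently.

```c
#include <stdio.h>

/* Toy "machine language": one-byte opcodes. 0xC9 is 11001001 and
   0xCA is 11001010, the example bit patterns from the text above. */
enum { OP_ADD = 0xC9, OP_SUB = 0xCA };

static int execute(unsigned char opcode, int a, int b) {
    switch (opcode) {
    case OP_ADD: return a + b;
    case OP_SUB: return a - b;
    default:     return 0;   /* unknown instruction */
    }
}

int main(void) {
    printf("%d\n", execute(OP_ADD, 7, 5));  /* prints 12 */
    printf("%d\n", execute(OP_SUB, 7, 5));  /* prints 2  */
    return 0;
}
```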

Assembly Language is as close as you can come to writing in machine language, but it has the advantage that it is also human-readable, using a small vocabulary of words with one syllable. Each written instruction (such as MOV A,B) typically corresponds to a single machine-language instruction (such as 11001001). An assembler makes the translation before the program is executed. Back when CPU speed was measured in kilohertz and storage space was measured in kilobytes, Assembly was the most cost-efficient way to implement a program. It is used less often now (with all those kilos replaced by megas or gigas, and even teras on the horizon, it seems no one cares about efficiency anymore), but if you need speed and/or compactness above all else, Assembly is the solution.
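The translation step an assembler performs can be pictured as a table lookup from mnemonic to machine-code byte. The C sketch below uses invented mnemonics and opcodes; a real assembler also handles operands, labels, and addressing modes.

```c
#include <stdio.h>
#include <string.h>

/* A toy "assembler" table: each mnemonic maps to one machine-code
   byte. These mnemonics and opcodes are made up for illustration. */
struct op { const char *mnemonic; unsigned char opcode; };

static const struct op table[] = {
    { "MOV", 0x01 },
    { "ADD", 0x02 },
    { "SUB", 0x03 },
};

/* Look up a mnemonic; return its opcode, or -1 if it is unknown. */
static int assemble(const char *mnemonic) {
    for (size_t i = 0; i < sizeof table / sizeof table[0]; i++)
        if (strcmp(table[i].mnemonic, mnemonic) == 0)
            return table[i].opcode;
    return -1;
}

int main(void) {
    printf("MOV -> 0x%02X\n", (unsigned)assemble("MOV"));
    return 0;
}
```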

Data structure In computer science, a data structure is a particular way of storing and organizing data in a computer so that it can be used efficiently.[1][2] Different kinds of data structures are suited to different kinds of applications, and some are highly specialized to specific tasks. For example, B-trees are particularly well suited for the implementation of databases, while compiler implementations usually use hash tables to look up identifiers. Data structures are used in almost every program or software system. They provide a means to manage huge amounts of data efficiently, such as large databases and internet indexing services. Usually, efficient data structures are a key to designing efficient algorithms. Some formal design methods and programming languages emphasize data structures, rather than algorithms, as the key organizing factor in software design. A data structure is a way of organizing data that considers not only the items stored but also their relationships to each other. Advance knowledge of the relationships between data items allows the design of efficient algorithms for the manipulation of the data.

Importance: A data structure is important since it dictates the types of operations we can perform on the data and how efficiently they can be carried out. It also dictates how dynamic we can be in dealing with our data; for example, it dictates whether we can add additional data on the fly or whether we need to know about all of the data up front. We determine which data structures to use to store our data only after we have carefully analyzed the problem and know at least what we hope to do with the data; for example, whether we will require random access, or sequential access, or the ability to move both forward and backward through the data.
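The compiler example above, looking up identifiers in a hash table, can be sketched compactly. The hash function, table size, and fixed-size entries below are arbitrary simplifications chosen for illustration; collisions are ignored to keep the sketch short, whereas a real table would chain or probe.

```c
#include <stdio.h>
#include <string.h>

#define TABLE_SIZE 101   /* arbitrary prime number of buckets */

/* A tiny fixed-size hash table mapping identifier names to integer
   values, roughly what a compiler's symbol table does. */
struct entry { char name[32]; int value; int used; };
static struct entry table[TABLE_SIZE];

static unsigned hash(const char *s) {
    unsigned h = 0;
    while (*s) h = h * 31 + (unsigned char)*s++;
    return h % TABLE_SIZE;
}

static void put(const char *name, int value) {
    unsigned i = hash(name);
    strncpy(table[i].name, name, sizeof table[i].name - 1);
    table[i].value = value;
    table[i].used = 1;
}

static int get(const char *name, int *out) {
    unsigned i = hash(name);
    if (table[i].used && strcmp(table[i].name, name) == 0) {
        *out = table[i].value;
        return 1;   /* found */
    }
    return 0;       /* not found */
}

int main(void) {
    int v;
    put("counter", 42);
    if (get("counter", &v))
        printf("counter = %d\n", v);
    return 0;
}
```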

Operating systems and the processors they run on:
MS Windows: x86 processors (desktops, laptops, workstations, and some servers)
Mac OS X: x86 processors. Older versions support PowerPC processors as well.
Linux: pretty much anything: x86 desktops and laptops, Cell systems (IBM servers and PlayStation 3s), POWER (PowerPC Macs, more IBM servers), SPARC (Sun/Oracle servers and workstations), ARM (cell phones and portable media players)
Solaris: x86 and SPARC computers
AIX: x86 and IBM POWER computers
