
Assembly language

From Wikipedia, the free encyclopedia


See the terminology section below for information regarding inconsistent
use of the terms assembly and assembler.

An assembly language is a low-level programming language for computers, microprocessors, microcontrollers, and other integrated circuits. It implements a symbolic representation of the binary machine codes and other constants needed to program a given CPU architecture. This representation is usually defined by the hardware manufacturer, and is based on mnemonics that symbolize processing steps (instructions), processor registers, memory locations, and other language features. An assembly language is thus specific to a certain physical (or virtual) computer architecture. This is in contrast to most high-level programming languages, which are ideally portable.

A utility program called an assembler is used to translate assembly language statements into the target computer's machine code. The assembler performs a more or less isomorphic translation (a one-to-one mapping) from mnemonic statements into machine instructions and data. This is in contrast with high-level languages, in which a single statement generally results in many machine instructions.

Many sophisticated assemblers offer additional mechanisms to facilitate program development, control the assembly process, and aid debugging. In particular, most modern assemblers include a macro facility (described below), and are called macro assemblers.

Contents

• 1 Key concepts
  ○ 1.1 Assembler
  ○ 1.2 Assembly language
• 2 Language design
  ○ 2.1 Basic elements
     2.1.1 Opcode mnemonics and pseudo-opcodes
     2.1.2 Data sections
     2.1.3 Assembly directives
  ○ 2.2 Macros
  ○ 2.3 Support for structured programming
• 3 Use of assembly language
  ○ 3.1 Historical perspective
  ○ 3.2 Current usage
  ○ 3.3 Typical applications
• 4 Related terminology
• 5 List of assemblers for different computer architectures
• 6 Further details
• 7 Example listing of assembly language source code
• 8 See also
• 9 References
• 10 Further reading
• 11 External links

Key concepts
Assembler

Compare with: Microassembler.

Typically a modern assembler creates object code by translating assembly instruction mnemonics into opcodes, and by resolving symbolic names for memory locations and other entities.[1] The use of symbolic references is a key feature of assemblers, saving tedious calculations and manual address updates after program modifications. Most assemblers also include macro facilities for performing textual substitution—e.g., to generate common short sequences of instructions as inline code, instead of called subroutines, or even to generate entire programs or program suites.

Assemblers are generally simpler to write than compilers for high-level languages, and have been available since the 1950s. Modern assemblers, especially for RISC-based architectures such as MIPS, Sun SPARC, and HP PA-RISC, as well as x86(-64), optimize instruction scheduling to exploit the CPU pipeline efficiently.

There are two types of assemblers, based on how many passes through the source are needed to produce the executable program.

 One-pass assemblers go through the source code once and assume that all symbols will be defined before any instruction that references them.

 Two-pass assemblers create a table with all symbols and their values in the first pass, then use the table in a second pass to generate code. The assembler must at least be able to determine the length of each instruction on the first pass so that the addresses of symbols can be calculated.

The advantage of a one-pass assembler is speed, which is not as important as it once was with advances in computer speed and abilities. The advantage of the two-pass assembler is that symbols can be defined anywhere in program source code. This lets programs be defined in more logical and meaningful ways, making two-pass assembler programs easier to read and maintain.[2]
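
For example, the following short fragment (NASM-style x86; a minimal illustrative sketch) contains a forward reference that a two-pass assembler resolves on its second pass, while a strict one-pass assembler would have to reject it or patch the address afterwards:

        jmp  done          ; forward reference: 'done' is not defined yet
        mov  ax, 1         ; code that is jumped over
done:                      ; pass 1 records this symbol, pass 2 fills in its address
        ret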

More sophisticated high-level assemblers provide language abstractions such as:

 Advanced control structures

 High-level procedure/function declarations and invocations

 High-level abstract data types, including structures/records, unions, classes, and sets

 Sophisticated macro processing (although available on ordinary assemblers since the late 1960s for the IBM/360, amongst other machines)

 Object-oriented programming features such as encapsulation, polymorphism, inheritance, and interfaces

See Language design below for more details.

Note that, in normal professional usage, the term assembler is often used ambiguously: it is frequently used to refer to an assembly language itself, rather than to the assembler utility. Thus: "CP/CMS was written in S/360 assembler" as opposed to "ASM-H was a widely-used S/370 assembler."[citation needed]

Assembly language

A program written in assembly language consists of a series of mnemonic instructions that an assembler translates into a stream of executable machine instructions, which can then be loaded into memory and executed.

For example, an x86/IA-32 processor can execute the following binary instruction ('MOV') as expressed in machine language (see x86 assembly language):

Hexadecimal: B0 61 (Binary: 10110000 01100001)

The equivalent assembly language representation is easier to remember (example in Intel syntax, more mnemonic):

MOV AL, 61h

This instruction means:

 Move (really a copy) the hexadecimal value '61' into the processor
register named "AL". (The h-suffix means hexadecimal; 61h = 97
in decimal)

The mnemonic "mov" represents the opcode 10110000 which


actually copies the value in the second operand into the register
indicated by the first operand. The mnemonic was chosen by the
designer of the instruction set to abbreviate "move", making it easier for
programmers to remember. Typical of an assembly language
statement, a comma-separated list of arguments or parameters follows
the opcode.
The mnemonic "mov" may refer to a family of numeric opcodes that do
the same thing, but imply different registers. The opcode 10110000
specifically copies an 8-bit value into the register AL. The opcode
10100001 is also denoted by the mnemonic "mov", but instead copies
a 16-bit value into the register AX, and gets it by reading from system
memory (the second operand says where) instead of copying the value
of the operand itself.
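
Both encodings can be written side by side; the following is a minimal sketch in NASM-style 16-bit Intel syntax (the memory address 0x1234 is arbitrary and purely illustrative):

        bits 16
        mov  al, 61h        ; opcode B0: load the 8-bit immediate 61h into AL
        mov  ax, [0x1234]   ; opcode A1: load the 16-bit word stored at address 0x1234 into AX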

Transforming assembly into machine language is performed by an assembler, and the (partial) reverse by a disassembler. Unlike high-level languages, there is usually a one-to-one correspondence between simple assembly statements and machine language instructions. However, in some cases, an assembler may provide pseudoinstructions (essentially macros) which expand into several machine language instructions to provide commonly needed functionality. For example, for a machine that lacks a "branch if greater or equal" instruction, an assembler may provide a pseudoinstruction that expands to the machine's "set if less than" and "branch if zero (on the result of the set instruction)". Most full-featured assemblers also provide a rich macro language (discussed below) which is used by vendors and programmers to generate more complex code and data sequences.

Each computer architecture and processor architecture usually has its own machine language. On this level, each instruction is simple enough to be executed using a relatively small number of electronic circuits. Computers differ by the number and type of operations they support. For example, a new 64-bit machine would have different circuitry from a 32-bit machine. They may also have different sizes and numbers of registers, and different representations of data types in storage. While most general-purpose computers are able to carry out essentially the same functionality, the ways they do so differ; the corresponding assembly languages reflect these differences.

Multiple sets of mnemonics or assembly-language syntax may exist for a single instruction set, typically instantiated in different assembler programs. In these cases, the most popular one is usually that supplied by the manufacturer and used in its documentation.
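
A minimal illustration of two such syntaxes for the same x86 instruction (shown purely as comments, not tied to any particular assembler):

; One instruction set, two widespread notations:
;   Intel syntax (MASM/NASM style):  mov eax, 1
;   AT&T syntax (GNU as style):      movl $1, %eax
; Both lines describe the identical machine instruction; only the notation differs.
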
Language design
Basic elements
There is a large degree of diversity in the way the authors of
assemblers categorize statements and in the nomenclature that they
use. In particular, some describe anything other than a machine
mnemonic or extended mnemonic as a pseudo-operation (pseudo-op).
A typical assembly language consists of 3 types of instruction
statements that are used to define program operations:

 Opcode mnemonics

 Data sections

 Assembly directives
Opcode mnemonics and pseudo-opcodes

Instructions (statements) in assembly language are generally very simple, unlike those in high-level languages. Generally, a mnemonic is a symbolic name for a single executable machine language instruction (an opcode), and there is at least one opcode mnemonic defined for each machine language instruction. Each instruction typically consists of an operation or opcode plus zero or more operands. Most instructions refer to a single value, or a pair of values. Operands can be immediate (typically one-byte values, coded in the instruction itself), registers specified in the instruction, implied, or the addresses of data located elsewhere in storage. This is determined by the underlying processor architecture: the assembler merely reflects how this architecture works. Extended mnemonics are often used to specify a combination of an opcode with a specific operand, e.g., the System/360 assemblers use B as an extended mnemonic for BC with a mask of 15 and NOP for BC with a mask of 0.

Pseudo-opcodes are often used within the instruction set to support alternative mnemonics for instructions that the CPU designer did not specifically include. For example, many older CPUs, such as the 8086, do not have a true nop (no operation) instruction.[citation needed] But often there is another instruction that can be used instead with the same effect as a nop. In 8086 CPUs the instruction xchg ax,ax is used for nop, with nop being a pseudo-opcode that encodes the instruction xchg ax,ax. Some disassemblers, for example OllyDbg, recognise this and will decode the xchg ax,ax instruction as nop.
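
A minimal sketch of the equivalence in NASM-style 16-bit Intel syntax (the byte value is the standard x86 encoding):

        nop                 ; conventionally assembles to the single byte 90h
        xchg ax, ax         ; the same operation written out; 90h is literally the encoding of xchg ax,ax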

Some assemblers also support pseudo-instructions, which generate two or more machine instructions. For instance, with some Z80 assemblers the instruction ld hl,bc is recognised to generate ld l,c followed by ld h,b.[3]

Data sections

There are instructions used to define data elements to hold data and
variables. They define the type of data, the length and the alignment of
data. These instructions can also define whether the data is available
to outside programs (programs assembled separately) or only to the
program in which the data section is defined. Some assemblers
classify these as pseudo-ops.
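
A minimal sketch of such data-definition statements in NASM syntax (the names are purely illustrative):

        section .data
count:  dw   20              ; a 16-bit word initialized to 20
msg:    db   "ready", 0      ; a byte string followed by a terminating zero

        section .bss
buffer: resb 64              ; reserve 64 uninitialized bytes

        global count         ; make the symbol visible to separately assembled modules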

Assembly directives

Assembly directives, also called pseudo-opcodes, pseudo-operations or pseudo-ops, are instructions that are executed by an assembler at assembly time, not by a CPU at run time. They can make the assembly of the program dependent on parameters input by a programmer, so that one program can be assembled different ways, perhaps for different applications. They also can be used to manipulate presentation of a program to make it easier to read and maintain.

(For example, directives would be used to reserve storage areas and optionally their initial contents.) The names of directives often start with a dot to distinguish them from machine instructions.
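
For instance, in NASM's flat binary output format the following directives affect only the assembly process, not the instructions that are emitted (a minimal sketch; the names and addresses are illustrative):

BUFSIZE equ  512             ; assembly-time constant, no storage emitted
        org  0x7C00          ; assemble as if the code were loaded at this address
        align 4              ; pad so the next item starts on a 4-byte boundary
table:  times BUFSIZE db 0   ; emit BUFSIZE zero bytes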

Symbolic assemblers let programmers associate arbitrary names (labels or symbols) with memory locations. Usually, every constant and variable is given a name so instructions can reference those locations by name, thus promoting self-documenting code. In executable code, the name of each subroutine is associated with its entry point, so any calls to a subroutine can use its name. Inside subroutines, GOTO destinations are given labels. Some assemblers support local symbols which are lexically distinct from normal symbols (e.g., the use of "10$" as a GOTO destination).

Most[dubious – discuss] assemblers provide flexible symbol management, letting programmers manage different namespaces, automatically calculate offsets within data structures, and assign labels that refer to literal values or the result of simple computations performed by the assembler. Labels can also be used to initialize constants and variables with relocatable addresses.

Assembly languages, like most other computer languages, allow comments to be added to assembly source code that are ignored by the assembler. Good use of comments is even more important with assembly code than with higher-level languages, as the meaning and purpose of a sequence of instructions is harder to decipher from the code itself.

Wise use of these facilities can greatly simplify the problems of coding
and maintaining low-level code. Raw assembly source code as
generated by compilers or disassemblers—code without any
comments, meaningful symbols, or data definitions—is quite difficult to
read when changes must be made.

Macros

Many assemblers support predefined macros, and others support programmer-defined (and repeatedly re-definable) macros involving sequences of text lines in which variables and constants are embedded. This sequence of text lines may include opcodes or directives. Once a macro has been defined, its name may be used in place of a mnemonic. When the assembler processes such a statement, it replaces the statement with the text lines associated with that macro, then processes them as if they existed in the source code file (including, in some assemblers, expansion of any macros existing in the replacement text).

Since macros can have 'short' names but expand to several or indeed
many lines of code, they can be used to make assembly language
programs appear to be far shorter, requiring fewer lines of source code,
as with higher level languages. They can also be used to add higher
levels of structure to assembly programs, optionally introduce
embedded debugging code via parameters and other similar features.

Many assemblers have built-in (or predefined) macros for system calls
and other special code sequences, such as the generation and storage
of data realized through advanced bitwise and booleanoperations used
in gaming, software security, data management, and cryptography.

Macro assemblers often allow macros to take parameters. Some assemblers include quite sophisticated macro languages, incorporating such high-level language elements as optional parameters, symbolic variables, conditionals, string manipulation, and arithmetic operations, all usable during the execution of a given macro, and allowing macros to save context or exchange information. Thus a macro might generate a large number of assembly language instructions or data definitions, based on the macro arguments. This could be used to generate record-style data structures or "unrolled" loops, for example, or could generate entire algorithms based on complex parameters. An organization using assembly language that has been heavily extended using such a macro suite can be considered to be working in a higher-level language, since such programmers are not working with a computer's lowest-level conceptual elements.
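
As a small sketch of assembly-time code generation, the following NASM-syntax macro unrolls a fixed number of additions; the macro name and operand pattern are illustrative and not taken from any particular code base:

%macro ADD_WORDS 1               ; %1 = number of 16-bit words to accumulate
    %assign i 0
    %rep %1
        add  ax, [bx + i*2]      ; one instruction emitted per repetition
    %assign i i+1
    %endrep
%endmacro

        ADD_WORDS 4              ; expands into four add instructions at assembly time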

Macros were used to customize large scale software systems for specific customers in the mainframe era and were also used by customer personnel to satisfy their employers' needs by making specific versions of manufacturer operating systems. This was done, for example, by systems programmers working with IBM's Conversational Monitor System / Virtual Machine (CMS/VM) and with IBM's "real time transaction processing" add-ons, CICS, Customer Information Control System, and ACP/TPF, the airline/financial system that began in the 1970s and still runs many large computer reservations systems (CRS) and credit card systems today.

It was also possible to use solely the macro processing abilities of an assembler to generate code written in completely different languages, for example, to generate a version of a program in COBOL using a pure macro assembler program containing lines of COBOL code inside assembly-time operators instructing the assembler to generate arbitrary code.

This was because, as was realized in the 1970s, the concept of "macro
processing" is independent of the concept of "assembly", the former
being in modern terms more word processing, text processing, than
generating object code. The concept of macro processing appeared,
and appears, in the C programming language, which supports
"preprocessor instructions" to set variables, and make conditional tests
on their values. Note that unlike certain previous macro processors
inside assemblers, the C preprocessor was not Turing-
complete because it lacked the ability to either loop or "go to", the latter
allowing programs to loop.

Despite the power of macro processing, it fell into disuse in many high
level languages (a major exception being C/C++) while remaining a
perennial for assemblers. This was because many programmers were
rather confused by macro parameter substitution and did not
disambiguate macro processing from assembly and
execution[dubious – discuss].

Macro parameter substitution is strictly by name: at macro processing time, the value of a parameter is textually substituted for its name. The most famous class of bugs resulting was the use of a parameter that itself was an expression and not a simple name when the macro writer expected a name. In the macro:

    foo: macro a
         load a*b

the intention was that the caller would provide the name of a variable, and the "global" variable or constant b would be used to multiply "a". If foo is called with the parameter a-c, the macro expansion of load a-c*b occurs. To avoid any possible ambiguity, users of macro processors can parenthesize formal parameters inside macro definitions, or callers can parenthesize the input parameters.[4]
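
The same pitfall can be reproduced in NASM syntax; this is a minimal hypothetical sketch (the names foo, a, c and B are illustrative) showing how parenthesizing the formal parameter restores the intended meaning:

%define B 8

%macro foo 1
        mov  ax, %1 * B          ; textual substitution: "a-c" becomes a-c*B
%endmacro

%macro foo_safe 1
        mov  ax, (%1) * B        ; parenthesized: (a-c)*B, as the caller intended
%endmacro

a       equ 10
c       equ 3

        foo      a-c             ; assembles as mov ax, a-c*8, i.e. ax receives -14
        foo_safe a-c             ; assembles as mov ax, (a-c)*8, i.e. ax receives 56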

PL/I and C/C++ feature macros, but this facility can only manipulate
text. On the other hand, homoiconic languages, such as Lisp, Prolog,
and Forth, retain the power of assembly language macros because
they are able to manipulate their own code as data.

Support for structured programming


Some assemblers have incorporated structured programming elements
to encode execution flow. The earliest example of this approach was in
the Concept-14 macro set, originally proposed by Dr. H.D. Mills
(March, 1970), and implemented by Marvin Kessler at IBM's Federal
Systems Division, which extended the S/360 macro assembler with
IF/ELSE/ENDIF and similar control flow blocks.[5] This was a way to
reduce or eliminate the use of GOTO operations in assembly code,
one of the main factors causing spaghetti code in assembly language.
This approach was widely accepted in the early 80s (the latter days of
large-scale assembly language use).
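
A flavour of this style can be sketched with assembler macros; the following is a minimal hypothetical example in NASM syntax (not the Concept-14 implementation itself), hiding the branch and label behind IF/ENDIF macros:

%macro IF 1                      ; parameter is a condition code such as z, nz or ge
    %push if
    j%-1  %$ifnot                ; jump past the block when the condition is false
%endmacro

%macro ENDIF 0
%$ifnot:
    %pop
%endmacro

; usage:
;       cmp  ax, bx
;       IF   ge
;         mov  cx, 1             ; executed only when ax >= bx
;       ENDIF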

A curious design was A-natural, a "stream-oriented" assembler for 8080/Z80 processors[citation needed] from Whitesmiths Ltd. (developers of the Unix-like Idris operating system, and what was reported to be the first commercial C compiler). The language was classified as an assembler, because it worked with raw machine elements such as opcodes, registers, and memory references; but it incorporated an expression syntax to indicate execution order. Parentheses and other special symbols, along with block-oriented structured programming constructs, controlled the sequence of the generated instructions. A-natural was built as the object language of a C compiler, rather than for hand-coding, but its logical syntax won some fans.

There has been little apparent demand for more sophisticated assemblers since the decline of large-scale assembly language development.[6] In spite of that, they are still being developed and applied in cases where resource constraints or peculiarities in the target system's architecture prevent the effective use of higher-level languages.[7]

Use of assembly language

Historical perspective
Assembly languages were first developed in the 1950s, when they were referred to as second generation programming languages. They eliminated much of the error-prone and time-consuming first-generation programming needed with the earliest computers, freeing programmers from tedium such as remembering numeric codes and calculating addresses. They were once widely used for all sorts of programming. However, by the 1980s (1990s on microcomputers), their use had largely been supplanted by high-level languages[citation needed], in the search for improved programming productivity. Today, although assembly language is almost always handled and generated by compilers, it is still used for direct hardware manipulation, access to specialized processor instructions, or to address critical performance issues. Typical uses are device drivers, low-level embedded systems, and real-time systems.

Historically, a large number of programs have been written entirely in assembly language. Operating systems were almost exclusively written in assembly language until the widespread acceptance of C in the 1970s and early 1980s. Many commercial applications were written in assembly language as well, including a large amount of the IBM mainframe software written by large corporations. COBOL and FORTRAN eventually displaced much of this work, although a number of large organizations retained assembly-language application infrastructures well into the 1990s.

Most early microcomputers relied on hand-coded assembly language, including most operating systems and large applications. This was because these systems had severe resource constraints, imposed idiosyncratic memory and display architectures, and provided limited, buggy system services. Perhaps more important was the lack of first-class high-level language compilers suitable for microcomputer use. A psychological factor may have also played a role: the first generation of microcomputer programmers retained a hobbyist, "wires and pliers" attitude.

In a more commercial context, the biggest reasons for using assembly language were minimal bloat (size), minimal overhead, greater speed, and reliability.

Typical examples of large assembly language programs from this time are IBM PC DOS operating systems and early applications such as the spreadsheet program Lotus 1-2-3, and almost all popular games for the Atari 800 family of home computers. Even into the 1990s, most console video games were written in assembly, including most games for the Mega Drive/Genesis and the Super Nintendo Entertainment System[citation needed]. According to some industry insiders, assembly language was the best computer language to use to get the best performance out of the Sega Saturn, a console that was notoriously challenging to develop and program games for.[8] The popular arcade game NBA Jam (1993) is another example. On the Commodore 64, Amiga, Atari ST, and ZX Spectrum home computers, assembler has long been the primary development language. This was in large part because BASIC dialects on these systems offered insufficient execution speed, as well as insufficient facilities to take full advantage of the available hardware. Some systems, most notably the Amiga, even have IDEs with highly advanced debugging and macro facilities, such as the freeware ASM-One assembler, comparable to those of Microsoft Visual Studio (which ASM-One predates).

The Assembler for the VIC-20 was written by Don French and published by French Silk. At 1639 bytes in length, its author believes it is the smallest symbolic assembler ever written. The assembler supported the usual symbolic addressing and the definition of character strings or hex strings. It also allowed address expressions which could be combined with addition, subtraction, multiplication, division, logical AND, logical OR, and exponentiation operators.[9]

Current usage

There have always been debates over the usefulness and performance of assembly language relative to high-level languages. Assembly language has specific niche uses where it is important; see below. But in general, modern optimizing compilers are claimed[citation needed] to render high-level languages into code that can run as fast as hand-written assembly, despite the counter-examples that can be found.[10][11][12] The complexity of modern processors and memory sub-systems makes effective optimization increasingly difficult for compilers, as well as for assembly programmers.[13][14] Moreover, and to the dismay of efficiency lovers, increasing processor performance has meant that most CPUs sit idle most of the time, with delays caused by predictable bottlenecks such as I/O operations and paging. This has made raw code execution speed a non-issue for many programmers.

There are some situations in which practitioners might choose to use assembly language, such as when:

 a stand-alone binary executable is required, i.e. one that must execute without recourse to the run-time components or libraries associated with a high-level language; this is perhaps the most common situation. These are often embedded programs, stored in a small amount of memory on a device dedicated to single-purpose tasks, such as telephones, automobile fuel and ignition systems, air-conditioning control systems, security systems, and sensors.

 interacting directly with the hardware, for example in device drivers and interrupt handlers.

 using processor-specific instructions not exploited by or available to the compiler. A common example is the bitwise rotation instruction at the core of many encryption algorithms.

 creating vectorized functions for programs in higher-level languages such as C. In the higher-level language this is sometimes aided by compiler intrinsic functions which map directly to SIMD mnemonics, but nevertheless result in a one-to-one assembly conversion specific for the given vector processor.

 extreme optimization is required, e.g., in an inner loop in a processor-intensive algorithm. Game programmers take advantage of the abilities of hardware features in systems, enabling games to run faster. Also large scientific simulations require highly optimized algorithms, e.g. linear algebra with BLAS[15][10] or discrete cosine transformation (e.g. the SIMD assembly version from x264[16]).

 a system with severe resource constraints (e.g., an embedded system) must be hand-coded to maximize the use of limited resources; but this is becoming less common as processor price decreases and performance improves.

 no high-level language exists, on a new or specialized processor, for example.

 writing real-time programs that need precise timing and responses, such as simulations, flight navigation systems, and medical equipment. For example, in a fly-by-wire system, telemetry must be interpreted and acted upon within strict time constraints. Such systems must eliminate sources of unpredictable delays, which may be created by (some) interpreted languages, automatic garbage collection, paging operations, or preemptive multitasking. However, some higher-level languages incorporate run-time components and operating system interfaces that can introduce such delays. Choosing assembly or lower-level languages for such systems gives programmers greater visibility and control over processing details.

 complete control over the environment is required, in extremely high security situations where nothing can be taken for granted.

 writing computer viruses, bootloaders, certain device drivers, or other items very close to the hardware or low-level operating system.

 writing instruction set simulators for monitoring, tracing and debugging where additional overhead is kept to a minimum.

 reverse-engineering existing binaries that may or may not have originally been written in a high-level language, for example when cracking copy protection of proprietary software.

 reverse engineering and modifying video games (also termed ROM hacking), which is possible via several methods. The most widely employed is altering program code at the assembly language level.

 writing self-modifying code, to which assembly language lends itself well.

 writing games and other software for graphing calculators.[17]

 writing compiler software that generates assembly code; the writers should therefore be expert assembly language programmers themselves.

 writing cryptographic algorithms that must always take strictly the same time to execute, preventing timing attacks.

Nevertheless, assembly language is still taught in most computer science and electronic engineering programs. Although few programmers today regularly work with assembly language as a tool, the underlying concepts remain very important. Such fundamental topics as binary arithmetic, memory allocation, stack processing, character set encoding, interrupt processing, and compiler design would be hard to study in detail without a grasp of how a computer operates at the hardware level. Since a computer's behavior is fundamentally defined by its instruction set, the logical way to learn such concepts is to study an assembly language. Most modern computers have similar instruction sets. Therefore, studying a single assembly language is sufficient to learn: i) the basic concepts; ii) to recognize situations where the use of assembly language might be appropriate; and iii) to see how efficient executable code can be created from high-level languages.[18]

Typical applications
Hard-coded assembly language is typically used in a system's boot
ROM (BIOS on IBM-compatible PC systems). This low-level code is
used, among other things, to initialize and test the system hardware
prior to booting the OS, and is stored in ROM. Once a certain level of
hardware initialization has taken place, execution transfers to other
code, typically written in higher level languages; but the code running
immediately after power is applied is usually written in assembly
language. The same is true of most boot loaders.

Many compilers render high-level languages into assembly first before fully compiling, allowing the assembly code to be viewed for debugging and optimization purposes. Relatively low-level languages, such as C, often provide special syntax to embed assembly language directly in the source code. Programs using such facilities, such as the Linux kernel, can then construct abstractions using different assembly language on each hardware platform. The system's portable code can then use these processor-specific components through a uniform interface.

Assembly language is also valuable in reverse engineering, since many programs are distributed only in machine code form, and machine code is usually easy to translate into assembly language and carefully examine in this form, but very difficult to translate into a higher-level language. Tools such as the Interactive Disassembler make extensive use of disassembly for such a purpose.

One niche that makes use of assembly language is the demoscene. Certain competitions require contestants to restrict their creations to a very small size (e.g. 256 B, 1 KB, 4 KB or 64 KB), and assembly language is the language of choice to achieve this goal.[19] When resources are a concern, especially on CPU-constrained systems like the earlier Amiga models and the Commodore 64, assembler coding is a must. Optimized assembler code is written "by hand" and instructions are sequenced manually by programmers in an attempt to minimize the number of CPU cycles used. The CPU constraints are so great that every CPU cycle counts. However, using such methods has enabled systems like the Commodore 64 to produce real-time 3D graphics with advanced effects, a feat which might be considered unlikely or even impossible for a system with a 0.99 MHz processor.[citation needed]

Related terminology

 Assembly language or assembler language is commonly called assembly, assembler, ASM, or symbolic machine code. A generation of IBM mainframe programmers called it BAL, for Basic Assembly Language.

Note: Calling the language assembler is of course potentially confusing and ambiguous, since this is also the name of the utility program that translates assembly language statements into machine code. Some may regard this as imprecision or error. However, this usage has been common among professionals and in the literature for decades.[20] Similarly, some early computers called their assembler their assembly program.[21]

 The computational step where an assembler is run, including all macro processing, is termed assembly time.

 The use of the word assembly dates from the early years of
computers (cf. short code, speedcode).

 A cross assembler (see cross compiler) is functionally just an assembler. This term is used to stress that the assembler is run on a different computer than the target system, the system on which the resulting code is run. Because nowadays assemblers are written portably in a high-level language like C, this is largely irrelevant. Cross assembling may be necessary if the target system lacks the capacity to run an assembler itself. This is typically the case for small embedded systems. The most important distinguishing feature of a cross assembler is that it provides for or interfaces to facilities to transport the code to the target processor, e.g. to reside in flash or EPROM. It generates a binary image, or Intel HEX file, rather than an object file.

 An assembler directive or pseudo-opcode is a command given to an assembler. These directives may do anything from telling the assembler to include other source files, to telling it to allocate memory for constant data.
List of assemblers for different computer architectures

The following page has a list of different assemblers for the different computer architectures, along with any associated information for that specific assembler:

 List of assemblers
Further details

For any given personal computer, mainframe, embedded system, and game console, both past and present, at least one — possibly dozens — of assemblers have been written. For some examples, see the list of assemblers.

On Unix systems, the assembler is traditionally called as, although it is not a single body of code, being typically written anew for each port. A number of Unix variants use GAS.

Within processor groups, each assembler has its own dialect. Sometimes, some assemblers can read another assembler's dialect; for example, TASM can read old MASM code, but not the reverse. FASM and NASM have similar syntax, but each supports different macros that could make them difficult to translate to each other. The basics are all the same, but the advanced features will differ.[22]

Also, assembly can sometimes be portable across different operating systems on the same type of CPU. Calling conventions between operating systems often differ slightly or not at all, and with care it is possible to gain some portability in assembly language, usually by linking with a C library that does not change between operating systems. An instruction set simulator (which would ideally be written in an assembler language) can, in theory, process the object code/binary of any assembler to achieve portability even across platforms (with an overhead no greater than a typical bytecode interpreter). This is essentially what microcode achieves when a hardware platform changes internally.

For example, many things in libc depend on the preprocessor to do OS-specific, C-specific things to the program before compiling. In fact, some functions and symbols are not even guaranteed to exist outside of the preprocessor. Worse, the size and field order of structs, as well as the size of certain typedefs such as off_t, are entirely unavailable in assembly language without help from a configure script, and differ even between versions of Linux, making it impossible to portably call functions in libc other than ones that only take simple integers and pointers as parameters. To address this issue, the FASMLIB project provides a portable assembly library for Win32 and Linux platforms, but it is as yet very incomplete.[23]

Some higher-level computer languages, such as C and Borland Pascal, support inline assembly, where relatively brief sections of assembly code can be embedded into the high-level language code. The Forth language commonly contains an assembler used in CODE words.

Many people use an emulator to debug assembly-language programs.

Example listing of assembly language source code

Address  Label     Instruction (AT&T syntax)   Object code[24]

                   .begin
                   .org 2048
         a_start   .equ 3000
2048               ld length, %r1
2064               be done                     00000010 10000000 00000000 00000110
2068               addcc %r1, -4, %r1          10000010 10000000 01111111 11111100
2072               addcc %r1, %r2, %r4         10001000 10000000 01000000 00000010
2076               ld %r4, %r5                 11001010 00000001 00000000 00000000
2080               ba loop                     00010000 10111111 11111111 11111011
2084               addcc %r3, %r5, %r3         10000110 10000000 11000000 00000101
2088     done:     jmpl %r15+4, %r0            10000001 11000011 11100000 00000100
2092     length:   20                          00000000 00000000 00000000 00010100
2096     address:  a_start                     00000000 00000000 00001011 10111000
                   .org a_start
3000     a:

Example of a selection of instructions (for a virtual computer[25]) with the corresponding address in memory where each instruction will be placed. These addresses are not static; see memory management. Accompanying each instruction is the object code, generated by the assembler, that coincides with the virtual computer's architecture (or ISA).

See also
 Compiler

 Disassembler

 Instruction set

 Little man computer – an educational computer model with a base-10 assembly language
 Microassembler

 Typed assembly language


References

1. ^ David Salomon (1993). Assemblers and Loaders.
2. ^ Beck, Leland L. (1996). "2". System Software: An Introduction to Systems Programming. Addison Wesley.
3. ^ http://www.z80.de/z80/z80code.htm
4. ^ "Macros (C/C++), MSDN Library for Visual Studio 2008". Microsoft Corp. Retrieved 2010-06-22.
5. ^ "Concept 14 Macros". MVS Software. Retrieved May 25, 2009.
6. ^ Answers.com. "assembly language: Definition and Much More from Answers.com". Retrieved 2008-06-19.
7. ^ NESHLA: The High Level, Open Source, 6502 Assembler for the Nintendo Entertainment System.
8. ^ Eidolon's Inn: SegaBase Saturn.
9. ^ Jim Lawless (2004-05-21). "Speaking with Don French: The Man Behind the French Silk Assembler Tools". Retrieved 2008-07-25.
10. ^ a b "Writing the Fastest Code, by Hand, for Fun: A Human Computer Keeps Speeding Up Chips". New York Times, John Markoff. 2005-11-28. Retrieved 2010-03-04.
11. ^ "Bit-field-badness". hardwarebug.org. 2010-01-30. Retrieved 2010-03-04.
12. ^ "GCC makes a mess". hardwarebug.org. 2009-05-13. Retrieved 2010-03-04.
13. ^ Randall Hyde. "The Great Debate". Retrieved 2008-07-03.
14. ^ "Code sourcery fails again". hardwarebug.org. 2010-01-30. Retrieved 2010-03-04.
15. ^ "BLAS Benchmark - August 2008". eigen.tuxfamily.org. 2008-08-01. Retrieved 2010-03-04.
16. ^ "x264.git/common/x86/dct-32.asm". git.videolan.org. 2010-09-29. Retrieved 2010-09-29.
17. ^ "68K Programming in Fargo II". Retrieved 2008-07-03.
18. ^ Hyde, Randall (1996-09-30). "Foreword ("Why would anyone learn this stuff?"), op. cit.". Retrieved 2010-03-05.
19. ^ "256bytes demos archives". Retrieved 2008-07-03.
20. ^ Stroustrup, Bjarne, The C++ Programming Language, Addison-Wesley, 1986, ISBN 0-201-12078-X: "C++ was primarily designed so that the author and his friends would not have to program in assembler, C, or various modern high-level languages." [use of the term assembler to mean assembly language]
21. ^ Saxon, James, and Plette, William, Programming the IBM 1401, Prentice-Hall, 1962, LoC 62-20615. [use of the term assembly program]
22. ^ Randall Hyde. "Which Assembler is the Best?". Retrieved 2007-10-19.
23. ^ "vid". "FASMLIB: Features". Retrieved 2007-10-19.
24. ^ Murdocca, Miles J.; Vincent P. Heuring (2000). Principles of Computer Architecture. Prentice-Hall. ISBN 0-201-43664-7.
25. ^ Principles of Computer Architecture (POCA) – ARCTools virtual computer available for download to execute referenced code, accessed August 24, 2005.

Further reading
 ASM Community Book. "An online book full of helpful ASM info, tutorials and code examples" by the ASM Community.

 Jonathan Bartlett: Programming from the Ground Up. Bartlett Publishing, 2004. ISBN 0-9752838-4-7. Also available online as PDF.

 Robert Britton: MIPS Assembly Language Programming. Prentice Hall, 2003. ISBN 0-13-142044-5.

 Paul Carter: PC Assembly Language. Free ebook, 2001. Website.

 Jeff Duntemann: Assembly Language Step-by-Step. Wiley, 2000. ISBN 0-471-37523-3.

 Randall Hyde: The Art of Assembly Language. No Starch Press, 2003. ISBN 1-886411-97-2. Draft versions available online as PDF and HTML.

 Peter Norton, John Socha: Peter Norton's Assembly Language Book for the IBM PC. Brady Books, NY: 1986.

 Michael Singer: PDP-11. Assembler Language Programming and Machine Organization. John Wiley & Sons, NY: 1980.

 Dominic Sweetman: See MIPS Run. Morgan Kaufmann Publishers, 1999. ISBN 1-55860-410-3.

 John Waldron: Introduction to RISC Assembly Language Programming. Addison Wesley, 1998. ISBN 0-201-39828-1.
External links

Look up assembly language in Wiktionary, the free dictionary.

Wikibooks has a book on the topic of Subject:Assembly languages.

 FASMARM 1.13 - FASM for ARM processors 04-Nov-2008

 Randall Hyde's The Art of Assembly Language as HTML and PDF version

 Machine language for beginners

 Introduction to assembly language

 The ASM Community, a programming resource about assembly including an ASM Book

 Intel Assembly 80x86 CodeTable (a cheat sheet reference)

 Unix Assembly Language Programming

 IBM z/Architecture Principles of Operation. IBM manuals on mainframe machine language and internals.

 IBM High Level Assembler. IBM manuals on mainframe assembler language.

 PPR: Learning Assembly Language

 An Introduction to Writing 32-bit Applications Using the x86 Assembly Language

 Assembly Language Programming Examples

 Authoring Windows Applications In Assembly Language

 Information on Linux assembly programming

 x86 Instruction Set Reference

 Iczelion's Win32 Assembly Tutorial

 Assembly Optimization Tips by Mark Larson

 NASM Manual

 8086 assembly coding by F.A. Smit

 Microchip PIC assembly coding basics

 Z80/Z180/8085 Assembler



Flat Assembler 1.56
Added: August 05, 2008 | Visits: 505
The flat assembler is a fast and efficient self-assembling 80x86 assembler for DOS, Windows and Linux
operating systems. Currently it supports all 8086-80486/Pentium instructions with MMX, SSE, SSE2,
SSE3 and 3DNow! extensions, can produce output in binary, MZ, PE, COFF or ELF format. It includes...

Platforms: Windows XP, Unix, Linux

License: Freeware Cost: $0.00 USD Size: 143 KB Download (104): Flat Assembler Download

B::Assembler 5.8.8
Added: January 19, 2010 | Visits: 60
B::Assembler is a Perl module created to assemble Perl bytecode. SYNOPSIS use B::Assembler
qw(newasm endasm assemble); newasm(&printsub); # sets up for assembly assemble($buf); # assembles
one line endasm(); # closes down use B::Assembler qw(assemble_fh); assemble_fh($fh, &printsub); #...

Platforms: Linux

License: Freeware Cost: $0.00 USD Size: 12 MB Download (4): B::Assembler Download


GoAsm 0.43
Added: August 07, 2008 | Visits: 555
Win32+assembler is becoming more and more popular, you can use a low level language (assembler)
together with a very high level language (the Windows API) - a perfect combination! GoAsm is a fast, free,
assembler for producing Win32 files. It has a particularly clean syntax with a number of...

Platforms: Windows XP

License: Freeware Cost: $0.00 USD Size: 200 KB Download (117): GoAsm Download

Pyastra 0.0.4.1-preview
Added: January 19, 2010 | Visits: 114
Pyastra is a Python-to-assembler translator. The project takes a source file written in Python and, if the code
contains no errors, generates an assembler file. Then you may compile it to a hex file using your favourite
PIC assembler (gpasm, mpasm, or any other compatible with them). Goals: · to...

Platforms: Linux

License: Freeware Cost: $0.00 USD Size: 194 KB Download (8): Pyastra Download
Advanced Assembler 0.9.0
Added: January 19, 2010 | Visits: 90
Aasm is an advanced assembler designed to support several target architectures. It has been designed to
be easily extended and, should be considered as a good alternative to monolithic assembler development
for each new target CPUs and binary file formats. Aasm should make assembly programming...

Platforms: Linux

License: Freeware Cost: $0.00 USD Size: 30 KB Download (8): Advanced Assembler Download

binfmtc 0.10
Added: January 19, 2010 | Visits: 85
binfmtc implements handlers for C and other languages, which are usually compiled. The program utilizes
the Linux binfmt-misc feature to dynamically compile and execute C programs as if they were scripts.
binfmtc project supports C, C++, Java, Pascal, Fortran, and assembler. Whats New in...

Platforms: Linux

License: Freeware Cost: $0.00 USD Size: 90 KB Download (5): binfmtc Download
emu8086 2.02
Released: August 08, 2002 | Added: December 22, 2006 | Visits: 4.501
Everything for learning assembly language in one pack! emu8086 combines an advanced source editor,
assembler, disassembler, software emulator (Virtual PC) with debugger, and step by step tutorials.

Platforms: Windows 95, Windows 98, Windows Me, Windows 2000, Windows XP

License: Shareware Cost: $12.50 USD Size: 2 MB Download (624): emu8086 Download

GNU 8085 Simulator 1.3


Added: January 19, 2010 | Visits: 520
GNUSim8085 is a graphical simulator for the Intel 8085 microprocessor. GNUSim8085 is a simulator and
assembler for the Intel 8085 Microprocessor, in GNOME environment. GNU 8085 Simulator contains an
inline assembler and a debugger. Whats New in This Release: · New: Use gtksourceview as...

Platforms: Linux

License: Freeware Cost: $0.00 USD Size: 80 KB Download (49): GNU 8085 Simulator Download

Yasm 0.6.1
Added: January 19, 2010 | Visits: 67
Yasm is a complete rewrite of the NASM assembler under the "new" BSD License (some portions are
under other licenses, see COPYING for details). Yasm project is designed from the ground up to allow for
multiple assembler syntaxes to be supported (eg, TASM, GAS, NASM etc.) in addition to multiple...

Platforms: Linux

License: Freeware Cost: $0.00 USD Size: 1 MB Download (8): Yasm Download
asfpga 1.00e
Added: January 19, 2010 | Visits: 62
asfpga is an assembler written for use in FPGA design. It can be easily modified for your instruction set.
The ultimate goal of this software is to allow a FPGA designer to easily write assembly code for a custom
instruction set. The current version allows to create a listing file, a memory...

Platforms: Linux

License: Freeware Cost: $0.00 USD Size: 7 KB Download (3): asfpga Download
SX-IDE 0.08
Added: January 19, 2010 | Visits: 79
This is an application to compile assembler files and transfer them from Linux to the XGS (SX28/52
microcontrollers) with the SX-Key. It requires QT 4, WINE and the SASM assembler. The 0.02 version
contains just the transfer part. The newer versions also contain the rest of the IDE to compile...

Platforms: Linux

License: Freeware Cost: $0.00 USD Size: 86 KB Download (6): SX-IDE Download



nonpareil 0.16
Added: January 25, 2010 | Visits: 62
nonpareil is a micro-assembler and simulator package for the calculators written originally for Linux by Eric
Smith.nonpareil for Mac OS X is pre-release software (beta version).nonpareil is made available under the
terms of the Free Software Foundation's General Public License, Version 2.

Platforms: Macintosh

License: Freeware Cost: $0.00 USD Size: 46 KB Download (4): nonpareil Download
nwbintools 0.1.1
Added: January 19, 2010 | Visits: 53
nwbintools is a machine code toolchain containing an assembler and various related development tools.
The project will thus be similar to GNU binutils, but no attempts are made to duplicate its functionality,
interfaces, or organization. nwbintools has been under development (on and off) since...

Platforms: Linux

License: Freeware Cost: $0.00 USD Size: 94 KB Download (2): nwbintools Download


Assembler Pseudocode.
2 pass assembler for SIC/XE
Pass 1:
BEGIN
initialize Scnt, Locctr, ENDval, and Errorflag to 0
WHILE Sourceline[Scnt] is a comment
BEGIN
increment Scnt
END {while}
Breakup Sourceline[Scnt]
IF Opcode = 'START' THEN
BEGIN
convert Operand from hex and save in Locctr and ENDval
IF Label not NULL THEN
Insert (Label, Locctr) into Symtab
ENDIF
increment Scnt
Breakup Sourceline[Scnt]
END
ENDIF
WHILE Opcode <> 'END'
BEGIN
IF Sourceline[Scnt] is not a comment THEN
BEGIN
IF Label not NULL THEN
Xsearch Symtab for Label
IF not found
Insert (Label, Locctr) into Symtab
ELSE
set errors flag in Errors[Scnt]
ENDIF
ENDIF
Xsearch Opcodetab for Opcode
IF found THEN
DO CASE
1. Opcode is 'RESW' or 'RESB'
BEGIN
increment Locctr by Storageincr
IF error THEN
set errors flag in Errors[Scnt]
ENDIF
END {case 1 (RESW or RESB)}
2. Opcode is 'WORD' or 'BYTE'
BEGIN
increment Locctr by Storageincr
IF error THEN
set errors flag in Errors[Scnt]
ENDIF
END {case 2 (WORD or BYTE)}
3. OTHERWISE
BEGIN
increment Locctr by Opcodeincr
IF error THEN
set errors flag in Errors[Scnt]
ENDIF
END {case 3 (default)}
ENDCASE
ELSE
/* directives such as BASE handled here or */
set errors flag in Errors[Scnt]
ENDIF
END {IF block}
ENDIF
increment Scnt
Breakup Sourceline[Scnt]
END {while}
IF Label not NULL THEN
Xsearch Symtab for Label
IF not found
Insert (Label, Locctr) into Symtab
ELSE
set errors flag in Errors[Scnt]
ENDIF
ENDIF
IF Operand not NULL
Xsearch Symtab for Operand
IF found
install in ENDval
ENDIF
ENDIF
END {of Pass 1}

Pass 2:
BEGIN
  initialize Scnt, Locctr, Skip, and Errorflag to 0
  write assembler report headings
  WHILE Sourceline[Scnt] is a comment
  BEGIN
    append to assembler report
    increment Scnt
  END {while}
  Breakup Sourceline[Scnt]
  IF Opcode = 'START' THEN
  BEGIN
    convert Operand from hex and save in Locctr
    append to assembler report
    increment Scnt
    Breakup Sourceline[Scnt]
  END
  ENDIF
  format and place the load point on object code array
  format and place ENDval on object code array, index ENDloc
  WHILE Opcode <> 'END'
  BEGIN
    IF Sourceline[Scnt] is not a comment THEN
    BEGIN
      Xsearch Opcodetab for Opcode
      IF found THEN
        DO CASE
        1. Opcode is 'RESW' or 'RESB'
          BEGIN
            increment Locctr by Storageincr
            place '!' on object code array
            replace the value at index ENDloc with loader address
            format and place Locctr on object code array
            format and place ENDval on object code array, index ENDloc
            set Skip to 1
          END
        2. Opcode is 'WORD' or 'BYTE'
          BEGIN
            increment Locctr by Storageincr
            Dostorage to get Objline
            IF error THEN
              set errors flag in Errors[Scnt]
            ENDIF
          END
        3. OTHERWISE
          BEGIN
            increment Locctr by Opcodeincr
            Doinstruct to get Objline
            IF error THEN
              set errors flag in Errors[Scnt]
            ENDIF
          END
        ENDCASE
      ELSE
        /* directives such as BASE handled here or */
        set errors flag in Errors[Scnt]
      ENDIF
    END
    ENDIF
    append to assembler report
    IF Errors[Scnt] <> 0 THEN
    BEGIN
      set Errorflag to 1
      append error report to assembler report
    END
    ENDIF
    IF Errorflag = 0 and Skip = 0 THEN
    BEGIN
      place Objline on object code array
    END
    ENDIF
    IF Skip = 1 THEN
      set Skip to 0
    ENDIF
    increment Scnt
    Breakup Sourceline[Scnt]
  END {while}
  place '!' on object code array
  IF Errorflag = 0 THEN
    transfer object code array to file
  ENDIF
END {of Pass 2}
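
Continuing the Pass 1 sketch above (and reusing its OPCODE_TABLE and storage_incr helpers), a much-simplified Pass 2 can resolve operands against the symbol table and emit (address, encoded word) pairs. The numeric opcode values and the encoding are purely illustrative; the real SIC/XE object program records, relocation, and the assembler report are omitted.

OPCODE_VALUES = {"LDA": 0x00, "STA": 0x0C, "ADD": 0x18, "RSUB": 0x4C}   # illustrative opcodes

def pass2(lines, symtab):
    """Generate simplified object records using the Symtab built by pass 1."""
    objrecords, errors = [], []
    locctr = 0
    for scnt, (label, opcode, operand) in enumerate(lines, 1):
        if opcode == "START":
            locctr = int(operand, 16)
        elif opcode in OPCODE_VALUES:
            addr = 0
            if operand:
                if operand in symtab:
                    addr = symtab[operand]          # resolve the symbolic operand
                else:
                    errors.append((scnt, "undefined symbol " + operand))
            objrecords.append((locctr, (OPCODE_VALUES[opcode] << 16) | addr))
            locctr += OPCODE_TABLE[opcode]
        elif opcode in ("WORD", "BYTE", "RESW", "RESB"):
            locctr += storage_incr(opcode, operand)  # RES* only reserve space
    return objrecords, errors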
How the assembler works
Meta commands
S2 Assembly language
S2 instruction format
Pass1
Pass2
Extended instructions
Syntax of the assembly language
Meta commands (.s .a .c .w .e)
There are three sections, which can occur in any sequence: symbol definitions, a code
section, and a data section. Each section starts with a meta command: .s for the symbol section,
.c for the code section, .w for the data section. Each section ends at the next meta command.
Other meta commands are: .a, which sets the current address, and .e, which ends the assembly file. .e
must be the last line. A ';;' starts a comment that extends to the end of the current line.
Comments are not interpreted by the assembler.
In the symbol definition section, symbols are defined with their associated values. The
data section defines constant values. Labels can be defined in any section and they
can be referred to by other assembly instructions.
;; comment
.s ;; define symbol
symbol n ;; n is value
. . .
.a n ;; set address to n
.c ;; code segment
:label op opr1 opr2 ...
. . .
.w ;; data segment
v v ... ;; v is number or sym
.e ;; end of program

.s .a .c .w can occur in any sequence. .e is the last line of the program.


S2 Assembly language
op opr1 opr2 ...
where
opr -> v #v @v +v
v -> n | sym

The convention for operand ordering is: op dest source. Each operand is written with a
prefix that identifies its addressing mode, which simplifies the assembler.
ld r1, 10(r2) is written as ld r1 @10 r2 ;; displacement
ld r1, (r2+r3) " ld r1 +r2 r3 ;; index
ld r1, #200 " ld r1 #200 ;; immediate
add r1, r2, r3 " add r1 r2 r3 ;; reg-reg
add r1, r2, #20 add r1 r2 #20 ;; reg-immediate
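
A small sketch of how the prefix convention can be decoded; the Python helper name and the mode labels are illustrative, mirroring the modes listed above.

def operand_mode(opr):
    # Classify an S2 operand by its prefix character.
    if opr.startswith("#"):
        return "immediate", opr[1:]
    if opr.startswith("@"):
        return "displacement", opr[1:]
    if opr.startswith("+"):
        return "index", opr[1:]
    return "absolute_or_register", opr      # bare number, symbol, or register name

# e.g. operand_mode("#200") -> ("immediate", "200")
#      operand_mode("@10")  -> ("displacement", "10")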

The assembler does not check for all possible illegal combinations of opcode,
addressing mode, and operands. The forms of assembly language for each S2
instruction are:
ld rd source
st source rd
aop rd rs1 rs2
aop rd rs #n
sop rd rs
jmp cond dest
jal rd dest
jr rs
trap num rs

where
rd is r1..r31
rs is r0..r31
source -> absolute | disp | index | immediate (as shown above)
aop -> add | sub | mul | div | and | or | xor (ALU op)
sop -> shl | shr (shift op)
cond -> always | eq | neq | le | lt | gt | ge (conditional)
dest -> label | number

S2 instruction format (field:length)


L-format op:5 r1:5 ads:22
D-format op:5 r1:5 r2:5 disp:17
X-format op:5 r1:5 r2:5 r3:5 xop:12

The object code:


l op num num
d op num num num
x op num num num xop

ads and disp will be sign extended to 32-bit.
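
As an illustration, the three formats can be packed into 32-bit words with shifts and masks. The Python helpers below are hypothetical, and placing op in the top five bits is an assumption about field order; sign extension of ads/disp is shown as it would be done on decode.

def encode_L(op, r1, ads):
    # op:5 r1:5 ads:22
    return (op & 0x1F) << 27 | (r1 & 0x1F) << 22 | (ads & 0x3FFFFF)

def encode_D(op, r1, r2, disp):
    # op:5 r1:5 r2:5 disp:17
    return (op & 0x1F) << 27 | (r1 & 0x1F) << 22 | (r2 & 0x1F) << 17 | (disp & 0x1FFFF)

def encode_X(op, r1, r2, r3, xop):
    # op:5 r1:5 r2:5 r3:5 xop:12
    return ((op & 0x1F) << 27 | (r1 & 0x1F) << 22 | (r2 & 0x1F) << 17
            | (r3 & 0x1F) << 12 | (xop & 0xFFF))

def sign_extend(value, bits):
    # e.g. sign_extend(disp, 17) when the simulator decodes a D-format word
    sign = 1 << (bits - 1)
    return (value & (sign - 1)) - (value & sign)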

The assembler
The assembler works in two passes:
pass1
input scanning, collect symbols, generate token list
pass2
generate object code from the token list
input scanning
symbol table
The predefined symbols are: the opcodes, r0..r31, and the conditionals. The opcodes are ld st jmp jal jr
add sub mul div and or xor shl shr trap. The conditionals are: always eq neq lt le ge gt.
pass 1
collect symbols and resolve reference
build symbol table
store token list
The token list is an array of tokens. Each token stores type, mode, reference and line
number (referring to the source code line number). The line number is used in reporting errors.
Type is one of: sym num op dot. Mode is the addressing mode: absolute, displacement, index,
immediate, reg-reg, reg-imm, special.
For example ld r1 @lv1 base will generate the list of four tokens:
( notation : {type,mode,ref} )
{ {op,disp,ld}, {sym,reg,r1}, {sym,disp,lv1}, {sym,reg,base} }
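
For illustration only, such a token could be modelled as a small record; the Python below is a hypothetical sketch of the structure described above, not the actual implementation.

from collections import namedtuple

Token = namedtuple("Token", "type mode ref line")   # type, addressing mode, reference, source line

# ld r1 @lv1 base  ->  four tokens (the line number 7 is illustrative)
tokens = [
    Token("op",  "disp", "ld",   7),
    Token("sym", "reg",  "r1",   7),
    Token("sym", "disp", "lv1",  7),
    Token("sym", "reg",  "base", 7),
]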

pass 2
generate code from token list
output format is suitable for a loader of the simulator
a num set address
{l,d,x} num+ instruction
w num defined word
e end of file

4 December 2001

Extended instructions
To enable creation of new instructions, three extended instructions are provided: xl,
xd, xx, associated with the three instruction formats: L, D, X. The assembly language
cannot use the usual addressing-mode notation here, because the meaning of the instruction is
defined by the user. Therefore the operands of the instruction have to be written out
without any decoration:
XL op r1 disp:22
XD op r1 r2 disp:17
XX op r1 r2 r3 xop:12

where op/xop are user defined, disp can be a symbol.


If the new instruction has a different format than the existing three, then users can
use .w to put a 32-bit value directly into the code section.
Example: To add a new instruction "inc r1 r2 value" using the D-format, where inc is
assigned the opcode number 14, it can be written:
.s
inc 14
value 1
.c
xd inc r1 r2 value
.e

The generated object code will be:


d 14 1 2 1

The simulator must be extended accordingly to interpret this new instruction. See
more examples of the assembly form of extended instructions in the file "as2\testx.txt":
;; test extended instruction
.s
inc 14
ldd 15
addx 16
addx2 17
.a 10
.c
xd inc r1 r2 1 ;; new instruction D-format
xl ldd r7 data ;; L-format
xx addx r3 r4 r5 addx2 ;; X-format
.w 48230 ;; raw 32-bit
.c ;; back to code
add r1 r3 #4
add r1 r2 r3
:data ;; data segment
.w 11 22 33
.e

Compiler
From Wikipedia, the free encyclopedia
This article is about the computing term. For the anime, see Compiler (anime).

A diagram of the operation of a typical multi-language, multi-target compiler.

A compiler is a computer program (or set of programs) that transforms source code written in a programming
language (the source language) into another computer language (the target language, often having a binary
form known as object code). The most common reason for wanting to transform source code is to create
an executable program.

The name "compiler" is primarily used for programs that translate source code from a high-level programming
language to a lower level language (e.g., assembly language or machine code). If the compiled program can
only run on a computer whose CPU or operating system is different from the one on which the compiler runs,
the compiler is known as a cross-compiler. A program that translates from a low level language to a higher
level one is a decompiler. A program that translates between high-level languages is usually called a language
translator, source to source translator, or language converter. A language rewriter is usually a program that
translates the form of expressions without a change of language.

A compiler is likely to perform many or all of the following operations: lexical analysis, preprocessing, parsing,
semantic analysis (syntax-directed translation), code generation, and code optimization.
Program faults caused by incorrect compiler behavior can be very difficult to track down and work around, so
compiler implementors invest a lot of time ensuring the correctness of their software.

The term compiler-compiler is sometimes used to refer to a parser generator, a tool often used to help create
the lexer and parser.

Contents
[hide]

• 1 History
○ 1.1 Compilers in education
• 2 Compilation
○ 2.1 Structure of compiler
• 3 Compiler output
○ 3.1 Compiled versus interpreted languages
○ 3.2 Hardware compilation
• 4 Compiler design
○ 4.1 One-pass versus multi-pass compilers
○ 4.2 Front end
○ 4.3 Back end
• 5 Compiler correctness
• 6 Related techniques
• 7 International conferences and organizations
• 8 See also
• 9 Notes
• 10 References
• 11 External links
Structure of compiler
The following gives a simplified view of what a compiler does.

Compilers bridge source programs in high-level languages and the underlying hardware. A compiler
must 1) recognize whether programs are legitimate, 2) generate correct and efficient code, 3) manage run-time
organization, and 4) format output according to assembler or linker conventions. A compiler consists of
three main parts: the frontend, the middle-end, and the backend.

The frontend checks whether the program is correctly written in terms of the programming language syntax
and semantics. Here legal and illegal programs are recognized. Errors are reported, if any, in a useful
way. Type checking is also performed by collecting type information. The frontend generates IR (intermediate
representation) for the middle-end. This part is well understood and much of it is already
automated; there are efficient algorithms, typically in O(n) or O(n log n).

Middle-end is where the optimizations for performance take place. Typical transformations for
optimization are removal of useless or unreachable code, discovering and propagating constant values,
relocation of computation to a less frequently executed place (e.g., out of a loop), or specializing a
computation based on the context. Middle-end generates IR for the following backend. Most optimization
efforts are focused on this part.
The backend is responsible for translating IR into the target assembly code. The target instruction(s) are
chosen for each IR instruction, and registers are allocated to variables. The backend utilizes the
hardware by figuring out how to keep parallel functional units busy, filling delay slots, and so on. Although most
optimization problems are NP-hard, the heuristic techniques used to solve them are well-developed.

Compiler design


In the early days, the approach taken to compiler design used to be directly affected by the complexity of
the processing, the experience of the person(s) designing it, and the resources available.

A compiler for a relatively simple language written by one person might be a single, monolithic piece of
software. When the source language is large and complex, and high quality output is required the design
may be split into a number of relatively independent phases. Having separate phases means
development can be parceled up into small parts and given to different people. It also becomes much
easier to replace a single phase by an improved one, or to insert new phases later (e.g., additional
optimizations).

The division of the compilation processes into phases was championed by the Production Quality
Compiler-Compiler Project (PQCC) at Carnegie Mellon University. This project introduced the terms front
end, middle end, and back end.

All but the smallest of compilers have more than two phases. However, these phases are usually
regarded as being part of the front end or the back end. The point at which these two ends meet is open
to debate. The front end is generally considered to be where syntactic and semantic processing takes
place, along with translation to a lower level of representation (than source code).

The middle end is usually designed to perform optimizations on a form other than the source code or
machine code. This source code/machine code independence is intended to enable generic optimizations
to be shared between versions of the compiler supporting different languages and target processors.

The back end takes the output from the middle end. It may perform more analysis, transformations and
optimizations that are for a particular computer. Then, it generates code for a particular processor and
OS.
This front-end/middle/back-end approach makes it possible to combine front ends for
different languages with back ends for different CPUs. Practical examples of this approach are the GNU
Compiler Collection, LLVM, and the Amsterdam Compiler Kit, which have multiple front-ends, shared
analysis and multiple back-ends.
[edit]One-pass versus multi-pass compilers
Classifying compilers by number of passes has its background in the hardware resource limitations of
computers. Compiling involves performing lots of work and early computers did not have enough memory
to contain one program that did all of this work. So compilers were split up into smaller programs which
each made a pass over the source (or some representation of it) performing some of the required
analysis and translations.

The ability to compile in a single pass has classically been seen as a benefit because it simplifies the job
of writing a compiler and one pass compilers generally compile faster than multi-pass compilers. Thus,
partly driven by the resource limitations of early systems, many early languages were specifically
designed so that they could be compiled in a single pass (e.g., Pascal).

In some cases the design of a language feature may require a compiler to perform more than one pass
over the source. For instance, consider a declaration appearing on line 20 of the source which affects the
translation of a statement appearing on line 10. In this case, the first pass needs to gather information
about declarations appearing after statements that they affect, with the actual translation happening
during a subsequent pass.

The disadvantage of compiling in a single pass is that it is not possible to perform many of the
sophisticated optimizations needed to generate high quality code. It can be difficult to count exactly how
many passes an optimizing compiler makes. For instance, different phases of optimization may analyse
one expression many times but only analyse another expression once.

Splitting a compiler up into small programs is a technique used by researchers interested in producing
provably correct compilers. Proving the correctness of a set of small programs often requires less effort
than proving the correctness of a larger, single, equivalent program.

While the typical multi-pass compiler outputs machine code from its final pass, there are several other
types:

 A "source-to-source compiler" is a type of compiler that takes a high level language as its input and
outputs a high level language. For example, an automatic parallelizing compiler will frequently take in
a high level language program as an input and then transform the code and annotate it with parallel
code annotations (e.g. OpenMP) or language constructs (e.g. Fortran's DOALL statements).
 A stage compiler that compiles to the assembly language of a theoretical machine, like
some Prolog implementations

 This Prolog machine is also known as the Warren Abstract Machine (or WAM).

 Bytecode compilers for Java, Python, and many more are also a subtype of this.

 Just-in-time compiler, used by Smalltalk and Java systems, and also by Microsoft .NET's Common
Intermediate Language (CIL)

 Applications are delivered in bytecode, which is compiled to native machine code just prior to
execution.
[edit]Front end
The front end analyzes the source code to build an internal representation of the program, called
the intermediate representation or IR. It also manages the symbol table, a data structure mapping each
symbol in the source code to associated information such as location, type and scope. This is done over
several phases, which includes some of the following:

1. Line reconstruction. Languages which strop their keywords or allow arbitrary spaces within
identifiers require a phase before parsing, which converts the input character sequence to a
canonical form ready for the parser. The top-down, recursive-descent, table-driven parsers used
in the 1960s typically read the source one character at a time and did not require a separate
tokenizing phase. Atlas Autocode, and Imp (and some implementations of Algol and Coral66) are
examples of stropped languages whose compilers would have a Line Reconstruction phase.

2. Lexical analysis breaks the source code text into small pieces called tokens. Each token is a
single atomic unit of the language, for instance a keyword, identifier or symbol name. The token
syntax is typically a regular language, so a finite state automaton constructed from a regular
expression can be used to recognize it. This phase is also called lexing or scanning, and the
software doing lexical analysis is called a lexical analyzer or scanner.

3. Preprocessing. Some languages, e.g., C, require a preprocessing phase which
supports macro substitution and conditional compilation. Typically the preprocessing phase
occurs before syntactic or semantic analysis; e.g. in the case of C, the preprocessor manipulates
lexical tokens rather than syntactic forms. However, some languages such as Scheme support
macro substitutions based on syntactic forms.

4. Syntax analysis involves parsing the token sequence to identify the syntactic structure of the
program. This phase typically builds a parse tree, which replaces the linear sequence of tokens
with a tree structure built according to the rules of a formal grammar which define the language's
syntax. The parse tree is often analyzed, augmented, and transformed by later phases in the
compiler.

5. Semantic analysis is the phase in which the compiler adds semantic information to the parse
tree and builds the symbol table. This phase performs semantic checks such as type
checking (checking for type errors), or object binding (associating variable and function
references with their definitions), or definite assignment (requiring all local variables to be
initialized before use), rejecting incorrect programs or issuing warnings. Semantic analysis
usually requires a complete parse tree, meaning that this phase logically follows
the parsing phase, and logically precedes the code generation phase, though it is often possible
to fold multiple phases into one pass over the code in a compiler implementation.
[edit]Back end
The term back end is sometimes confused with code generator because of the overlapped functionality of
generating assembly code. Some literature uses middle end to distinguish the generic analysis and
optimization phases in the back end from the machine-dependent code generators.

The main phases of the back end include the following:

1. Analysis: This is the gathering of program information from the intermediate representation
derived from the input. Typical analyses are data flow analysis to build use-define
chains, dependence analysis, alias analysis, pointer analysis, escape analysis etc. Accurate
analysis is the basis for any compiler optimization. The call graph and control flow graph are
usually also built during the analysis phase.

2. Optimization: the intermediate language representation is transformed into functionally equivalent
but faster (or smaller) forms. Popular optimizations are inline expansion, dead code
elimination, constant propagation, loop transformation, register allocation and even automatic
parallelization.

3. Code generation: the transformed intermediate language is translated into the output language,
usually the native machine language of the system. This involves resource and storage
decisions, such as deciding which variables to fit into registers and memory and the selection
and scheduling of appropriate machine instructions along with their associated addressing
modes (see also the Sethi-Ullman algorithm).

Compiler analysis is the prerequisite for any compiler optimization, and the two work tightly together. For
example, dependence analysis is crucial for loop transformation.

In addition, the scope of compiler analysis and optimizations vary greatly, from as small as a basic
block to the procedure/function level, or even over the whole program (interprocedural optimization).
Obviously, a compiler can potentially do a better job using a broader view. But that broad view is not free:
large scope analysis and optimizations are very costly in terms of compilation time and memory space;
this is especially true for interprocedural analysis and optimizations.

Interprocedural analysis and optimizations are common in modern commercial compilers
from HP, IBM, SGI, Intel, Microsoft, and Sun Microsystems. The open source GCC was criticized for a
long time for lacking powerful interprocedural optimizations, but it is changing in this respect. Another
open source compiler with full analysis and optimization infrastructure is Open64, which is used by many
organizations for research and commercial purposes.

Due to the extra time and space needed for compiler analysis and optimizations, some compilers skip
them by default. Users have to use compilation options to explicitly tell the compiler which optimizations
should be enabled.

Lexical analysis
From Wikipedia, the free encyclopedia

In computer science, lexical analysis is the process of converting a sequence of characters into a sequence of
tokens. A program or function which performs lexical analysis is called a lexical analyzer, lexer or scanner. A
lexer often exists as a single function which is called by a parser or another function.

Contents
[hide]

• 1 Lexical grammar
• 2 Token
• 3 Scanner
• 4 Tokenizer
• 5 Lexer generator
• 6 Lexical analyzer generators
• 7 See also
• 8 References

[edit]Lexical grammar
The specification of a programming language will often include a set of rules which defines the lexer. These
rules are usually called regular expressions and they define the set of possible character sequences that are
used to form tokens or lexemes. Whitespace (i.e. characters that are ignored) is also defined in the regular
expressions.

[edit]Token
A token is a string of characters, categorized according to the rules as a symbol (e.g. IDENTIFIER, NUMBER,
COMMA, etc.). The process of forming tokens from an input stream of characters is called tokenization and
the lexer categorizes them according to a symbol type. A token can look like anything that is useful for
processing an input text stream or text file.

A lexical analyzer generally does nothing with combinations of tokens, a task left for a parser. For example, a
typical lexical analyzer recognizes parenthesis as tokens, but does nothing to ensure that each '(' is matched
with a ')'.

Consider this expression in the C programming language:

sum=3+2;

Tokenized in the following table:

lexeme   token type
sum      Identifier
=        Assignment operator
3        Number
+        Addition operator
2        Number
;        End of statement

Tokens are frequently defined by regular expressions, which are understood by a lexical analyzer
generator such as lex. The lexical analyzer (either generated automatically by a tool like lex, or hand-
crafted) reads in a stream of characters, identifies the lexemes in the stream, and categorizes them into
tokens. This is called "tokenizing." If the lexer finds an invalid token, it will report an error.

Following tokenizing is parsing. From there, the interpreted data may be loaded into data structures for
general use, interpretation, or compiling.

[edit]Scanner
The first stage, the scanner, is usually based on a finite state machine. It has encoded within it
information on the possible sequences of characters that can be contained within any of the tokens it
handles (individual instances of these character sequences are known as lexemes). For instance,
an integer token may contain any sequence of numerical digit characters. In many cases, the first non-
whitespace character can be used to deduce the kind of token that follows and subsequent input
characters are then processed one at a time until reaching a character that is not in the set of characters
acceptable for that token (this is known as the maximal munch rule, or longest match rule). In some
languages the lexeme creation rules are more complicated and may involve backtracking over
previously read characters.

[edit]Tokenizer
Tokenization is the process of demarcating and possibly classifying sections of a string of input
characters. The resulting tokens are then passed on to some other form of processing. The process can
be considered a sub-task of parsing input.

Take, for example,

The quick brown fox jumps over the lazy dog

The string isn't implicitly segmented on spaces, as an English speaker would do. The raw input, the
43 characters, must be explicitly split into the 9 tokens with a given space delimiter (i.e. matching
the string " " or the regular expression /\s{1}/).
The tokens could be represented in XML,

<sentence>
<word>The</word>
<word>quick</word>
<word>brown</word>
<word>fox</word>
<word>jumps</word>
<word>over</word>
<word>the</word>
<word>lazy</word>
<word>dog</word>
</sentence>

Or an s-expression,

(sentence . (The quick brown fox jumps over the lazy dog))

A lexeme, however, is only a string of characters known to be of a certain kind (e.g., a string literal,
a sequence of letters). In order to construct a token, the lexical analyzer needs a second stage,
the evaluator, which goes over the characters of the lexeme to produce a value. The lexeme's type
combined with its value is what properly constitutes a token, which can be given to a parser. (Some
tokens such as parentheses do not really have values, and so the evaluator function for these can
return nothing. The evaluators for integers, identifiers, and strings can be considerably more
complex. Sometimes evaluators can suppress a lexeme entirely, concealing it from the parser,
which is useful for whitespace and comments.)

For example, in the source code of a computer program the string

net_worth_future = (assets - liabilities);

might be converted (with whitespace suppressed) into the lexical token stream:

NAME "net_worth_future"
EQUALS
OPEN_PARENTHESIS
NAME "assets"
MINUS
NAME "liabilities"
CLOSE_PARENTHESIS
SEMICOLON
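
A hand-written lexer along these lines can be sketched in a few lines of Python. The token names follow the stream above, while the regular expressions and the tokenize helper are illustrative assumptions, not part of any particular library.

import re

# Token specification as (name, regex) pairs; more specific patterns come first.
TOKEN_SPEC = [
    ("NAME",              r"[A-Za-z_][A-Za-z_0-9]*"),
    ("EQUALS",            r"="),
    ("OPEN_PARENTHESIS",  r"\("),
    ("CLOSE_PARENTHESIS", r"\)"),
    ("MINUS",             r"-"),
    ("SEMICOLON",         r";"),
    ("WHITESPACE",        r"\s+"),
]
MASTER_RE = re.compile("|".join(f"(?P<{name}>{rx})" for name, rx in TOKEN_SPEC))

def tokenize(text):
    for m in MASTER_RE.finditer(text):
        kind, lexeme = m.lastgroup, m.group()
        if kind == "WHITESPACE":                 # the evaluator suppresses whitespace
            continue
        # Only NAME tokens carry a value; punctuation tokens are just their kind.
        yield (kind, lexeme) if kind == "NAME" else (kind,)

print(list(tokenize("net_worth_future = (assets - liabilities);")))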

Though it is possible and sometimes necessary to write a lexer by hand, lexers are often
generated by automated tools. These tools generally accept regular expressions that describe
the tokens allowed in the input stream. Each regular expression is associated with a
production in the lexical grammar of the programming language that evaluates the lexemes
matching the regular expression. These tools may generate source code that can be compiled
and executed or construct a state table for a finite state machine (which is plugged into
template code for compilation and execution).

Regular expressions compactly represent patterns that the characters in lexemes might
follow. For example, for an English-based language, a NAME token might be any English
alphabetical character or an underscore, followed by any number of instances of any ASCII
alphanumeric character or an underscore. This could be represented compactly by the
string [a-zA-Z_][a-zA-Z_0-9]*. This means "any character a-z, A-Z or _, followed by 0

or more of a-z, A-Z, _ or 0-9".

Regular expressions and the finite state machines they generate are not powerful enough to
handle recursive patterns, such as "n opening parentheses, followed by a statement, followed
by n closing parentheses." They are not capable of keeping count, and verifying that n is the
same on both sides — unless you have a finite set of permissible values for n. It takes a full-
fledged parser to recognize such patterns in their full generality. A parser can push
parentheses on a stack and then try to pop them off and see if the stack is empty at the end.
(see example in the SICP book)

The Lex programming tool and its compiler is designed to generate code for fast lexical
analysers based on a formal description of the lexical syntax. It is not generally considered
sufficient for applications with a complicated set of lexical rules and severe performance
requirements; for instance, the GNU Compiler Collection uses hand-written lexers.

[edit]Lexer generator
Lexical analysis can often be performed in a single pass if reading is done a character at a
time. Single-pass lexers can be generated by tools such as the classic flex.

The lex/flex family of generators uses a table-driven approach which is much less efficient
than the directly coded approach. With the latter approach the generator produces
an engine that directly jumps to follow-up states via goto statements. Tools
like re2c and Quex have proven (e.g. RE2C - A More Versatile Scanner Generator (1994)) to
produce engines that are two to three times faster than flex-produced engines.
It is in general difficult to hand-write analyzers that perform better than engines
generated by these latter tools.

The simple utility of using a scanner generator should not be discounted, especially in the
developmental phase, when a language specification might change daily. The ability to
express lexical constructs as regular expressions facilitates the description of a lexical
analyzer. Some tools offer the specification of pre- and post-conditions which are hard to
program by hand. In that case, using a scanner generator may save a lot of development
time.

[edit]Lexical analyzer generators


 ANTLR - ANTLR generates predicated-LL(k) lexers.

 Flex - Alternative variant of the classic 'lex' (C/C++).

 JFlex - a rewrite of JLex.

 Ragel - A state machine and lexical scanner generator with output support for C, C++,
Objective-C, D, Java and Ruby source code.

The following lexical analysers can handle Unicode:

 JLex - A Lexical Analyzer Generator for Java.

 Quex - (or 'Queχ') A Fast Universal Lexical Analyzer Generator for C++.

Preprocessor
From Wikipedia, the free encyclopedia
(Redirected from Preprocessing)

In computer science, a preprocessor is a program that processes its input data to produce output that is used
as input to another program. The output is said to be a preprocessed form of the input data, which is often
used by some subsequent programs like compilers. The amount and kind of processing done depends on the
nature of the preprocessor; some preprocessors are only capable of performing relatively simple textual
substitutions and macro expansions, while others have the power of fully-fledged programming languages.

A common example from computer programming is the processing performed on source code before the next
step of compilation. In some computer languages (e.g., C and PL/I ) there is a phase oftranslation known
as preprocessing.

Contents
[hide]

• 1 Lexical preprocessors
○ 1.1 C preprocessor
○ 1.2 Other lexical preprocessors
• 2 Syntactic preprocessors
○ 2.1 Customizing syntax
○ 2.2 Extending a language
○ 2.3 Specializing a language
• 3 General purpose preprocessor
• 4 See also
• 5 References
• 6 External links

[edit]Lexical preprocessors

Lexical preprocessors are the lowest-level of preprocessors, insofar as they only require lexical analysis, that
is, they operate on the source text, prior to any parsing, by performing simple substitution
of tokenized character sequences for other tokenized character sequences, according to user-defined rules.
They typically perform macro substitution, textual inclusion of other files, and conditional compilation or
inclusion.
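
As an illustration, a toy lexical preprocessor that performs whole-identifier macro substitution might look like the following Python sketch; file inclusion and conditional compilation are omitted, and the function name is hypothetical.

import re

def preprocess(text, macros):
    # Replace every identifier that matches a user-defined macro, before any parsing.
    def substitute(match):
        word = match.group()
        return macros.get(word, word)
    return re.sub(r"[A-Za-z_][A-Za-z_0-9]*", substitute, text)

print(preprocess("area = PI * r * r;", {"PI": "3.14159"}))
# -> area = 3.14159 * r * r;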

[edit]C preprocessor
The most common example of this is the C preprocessor, which takes lines beginning with '#' as directives.
Because it knows nothing about the underlying language, its use has been criticized, and many of its features have
been built directly into other languages. For example, macros are replaced with aggressive inlining and templates,
includes with compile-time imports (this requires the preservation of type information in the object code, making
this feature impossible to retrofit into a language), and conditional compilation is effectively accomplished with if-
then-else and dead code elimination in some languages.

[edit]Other lexical preprocessors


Other lexical preprocessors include the general-purpose m4, most commonly used in cross-platform build
systems such as autoconf, and GEMA, an open source macro processor which operates on patterns of
context.
[edit]Syntactic preprocessors

Syntactic preprocessors were introduced with the Lisp family of languages. Their role is to transform syntax
trees according to a number of user-defined rules. For some programming languages, the rules are written in
the same language as the program (compile-time reflection). This is the case with Lisp and OCaml. Some other
languages rely on a fully external language to define the transformations, such as the XSLT preprocessor
for XML, or its statically typed counterpart CDuce.

Syntactic preprocessors are typically used to customize the syntax of a language, extend a language by adding
new primitives, or embed a Domain-Specific Programming Language inside a general purpose language.

[edit]Customizing syntax
A good example of syntax customization is the existence of two different syntaxes in the Objective
Caml programming language.[1] Programs may be written indifferently using the "normal syntax" or the "revised
syntax", and may be pretty-printed with either syntax on demand.

Similarly, a number of programs written in OCaml customize the syntax of the language by the addition of new
operators.

[edit]Extending a language
The best examples of language extension through macros are found in the Lisp family of languages. While the
languages, by themselves, are simple dynamically-typed functional cores, the standard distributions
of Scheme or Common Lisp permit imperative or object-oriented programming, as well as static typing. Almost
all of these features are implemented by syntactic preprocessing, although it bears noting that the "macro
expansion" phase of compilation is handled by the compiler in Lisp. This can still be considered a form of
preprocessing, since it takes place before other phases of compilation.

Similarly, statically-checked, type-safe regular expressions or code generation may be added to the syntax and
semantics of OCaml through macros, as well as micro-threads (also known ascoroutines or fibers), monads or
transparent XML manipulation.

[edit]Specializing a language
One of the unusual features of the Lisp family of languages is the possibility of using macros to create an
internal Domain-Specific Programming Language. Typically, in a large Lisp-based project, a module may be
written in a variety of such minilanguages, one perhaps using a SQL-based dialect of Lisp, another written in a
dialect specialized for GUIs or pretty-printing, etc. Common Lisp's standard library contains an example of this
level of syntactic abstraction in the form of the LOOP macro, which implements an Algol-like minilanguage to
describe complex iteration, while still enabling the use of standard Lisp operators.
The MetaOCaml preprocessor/language provides similar features for external Domain-Specific Programming
Languages. This preprocessor takes the description of the semantics of a language (i.e. an interpreter) and, by
combining compile-time interpretation and code generation, turns that definition into a compiler to
the OCaml programming language—and from that language, either to bytecode or to native code.

[edit]General purpose preprocessor

Most preprocessors are specific to a particular data processing task (e.g., compiling the C language). A
preprocessor may be promoted as being general purpose, meaning that it is not aimed at a specific usage or
programming language, and is intended to be used for a wide variety of text processing tasks.

M4 is probably the most well known example of such a general purpose preprocessor, although
the C preprocessor is sometimes used in a non-C specific role. Examples:

 using C preprocessor for Javascript preprocessing [2].

 using M4 (see on-article example) or C preprocessor [3] as a template engine, to HTML generation.

 imake, a make interface using the C preprocessor, used in the X Window System but now deprecated in
favour of automake.

 grompp, a preprocessor for simulation input files for GROMACS (a fast, free, open-source code for some
problems in computational chemistry), which calls the system C preprocessor (or another preprocessor, as
determined by the simulation input file) to parse the topology, using mostly the #define and #include
mechanisms to determine the effective topology at grompp run time.

Parsing
From Wikipedia, the free encyclopedia
"Parse" redirects here. For the ice hockey player, see Scott Parse. For the company, see Parsé Semiconductor
Co..

"Parser" redirects here. For the computer programming language, see Parser (CGI language).


In computer science and linguistics, parsing, or, more formally, syntactic analysis, is the process of analyzing
a text, made of a sequence of tokens (for example, words), to determine its grammatical structure with respect
to a given (more or less) formal grammar. Parsing can also be used as a linguistic term, especially in reference
to how phrases are divided up in garden path sentences.
Parsing is also an earlier term for the diagramming of sentences of natural languages, and is still used for the
diagramming of inflected languages, such as the Romance languages or Latin. The term parsing comes from
Latin pars (ōrātiōnis), meaning part (of speech).[1][2]

Parsing is a common term used in psycholinguistics when describing language comprehension. In this context,
parsing refers to the way that human beings, rather than computers, analyze a sentence or phrase (in spoken
language or text) "in terms of grammatical constituents, identifying the parts of speech, syntactic relations,
etc." [3] This term is especially common when discussing what linguistic cues help speakers to parse garden-
path sentences.

Parser
In computing, a parser is one of the components in an interpreter or compiler, which checks for correct
syntax and builds a data structure (often some kind of parse tree, abstract syntax tree or other
hierarchical structure) implicit in the input tokens. The parser often uses a separate lexical analyser to
create tokens from the sequence of input characters. Parsers may be programmed by hand or may be
(semi-)automatically generated (in some programming languages) by a tool.
Overview of process
The following example demonstrates the common case of parsing a computer language with two levels of
grammar: lexical and syntactic.

The first stage is the token generation, or lexical analysis, by which the input character stream is split into
meaningful symbols defined by a grammar of regular expressions. For example, a calculator program
would look at an input such as "12*(3+4)^2" and split it into the tokens 12, *, (, 3, +, 4, ), ^, and 2,
each of which is a meaningful symbol in the context of an arithmetic expression. The lexer would contain
rules to tell it that the characters *, +, ^, ( and ) mark the start of a new token, so meaningless tokens
like "12*" or "(3" will not be generated.

The next stage is parsing or syntactic analysis, which is checking that the tokens form an allowable
expression. This is usually done with reference to a context-free grammarwhich recursively defines
components that can make up an expression and the order in which they must appear. However, not all
rules defining programming languages can be expressed by context-free grammars alone, for example
type validity and proper declaration of identifiers. These rules can be formally expressed with attribute
grammars.

The final phase is semantic parsing or analysis, which is working out the implications of the expression
just validated and taking the appropriate action. In the case of a calculator or interpreter, the action is to
evaluate the expression or program; a compiler, on the other hand, would generate some kind of code.
Attribute grammars can also be used to define these actions.
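
The three stages can be sketched for the calculator example above. The Python below folds the semantic action (evaluation) into a small recursive-descent parser; it is only an illustration of the process, not a production parser, and the grammar is an assumption that gives '^' the tightest binding.

import re

# Stage 1: token generation with a regular-expression grammar.
TOKENS = re.compile(r"\d+|[*+()^]")

def lex(text):
    return TOKENS.findall(text)       # "12*(3+4)^2" -> ['12','*','(','3','+','4',')','^','2']

# Stage 2 and 3: syntactic analysis against a small context-free grammar, with
# evaluation folded in (each rule returns the value of what it parsed):
#   expr   -> term (('+') term)*
#   term   -> power (('*') power)*
#   power  -> factor ('^' power)?     right-associative exponentiation
#   factor -> NUMBER | '(' expr ')'
def parse(tokens):
    pos = 0
    def peek():
        return tokens[pos] if pos < len(tokens) else None
    def advance():
        nonlocal pos
        tok = tokens[pos]; pos += 1
        return tok
    def expr():
        value = term()
        while peek() == "+":
            advance(); value += term()
        return value
    def term():
        value = power()
        while peek() == "*":
            advance(); value *= power()
        return value
    def power():
        value = factor()
        if peek() == "^":
            advance(); value **= power()
        return value
    def factor():
        if peek() == "(":
            advance(); value = expr()
            assert advance() == ")", "unbalanced parentheses"
            return value
        return int(advance())
    result = expr()
    assert pos == len(tokens), "trailing input"
    return result

print(parse(lex("12*(3+4)^2")))       # 588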

Examples of parsers
[edit]Top-down parsers
Some of the parsers that use top-down parsing include:

 Recursive descent parser

 LL parser (Left-to-right, Leftmost derivation)

 Earley parser

 X-SAIGA - eXecutable SpecificAtIons of GrAmmars. Contains publications related to a top-down
parsing algorithm that supports left-recursion and ambiguity in polynomial time and space.
[edit]Bottom-up parsers
Some of the parsers that use bottom-up parsing include:

 Precedence parser

 Operator-precedence parser

 Simple precedence parser

 BC (bounded context) parsing

 LR parser (Left-to-right, Rightmost derivation)

 Simple LR (SLR) parser

 LALR parser

 Canonical LR (LR(1)) parser

 GLR parser

 CYK parser
Syntax-directed translation
From Wikipedia, the free encyclopedia


In computer programming, Syntax-directed translation (SDT) is a method of translating a string into a
sequence of actions by attaching one such action to each rule of a grammar[1]. Thus, parsing a string of the
grammar produces a sequence of rule applications. And SDT provides a simple way to attach semantics to any
such syntax.

[edit]General Explanation

Syntax-directed translation fundamentally works by adding actions to the productions in a context-free
grammar. Actions are steps or procedures that will be carried out when that production is used in a derivation.
A grammar specification embedded with actions to be performed is called a syntax-directed translation
scheme[2] (sometimes simply called a 'translation scheme'.)

Each symbol in the grammar can have an attribute, which is a value that is to be associated with the symbol.
Common attributes could include a variable type, the value of an expression, etc. Given a symbol X, with an
attribute t, that attribute is referred to as X.t

Thus, given actions and attributes, the grammar can be used for translating strings from its language by
applying the actions and carrying information through each symbol's attribute.
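
As a small illustration, the classic infix-to-postfix translation can be written as a translation scheme with one action per production. The Python sketch below assumes the token list has already been produced by a lexer; the attribute synthesized for each nonterminal is the postfix form of the text it derives.

# Translation scheme (actions in braces):
#   expr -> expr1 '+' term   { expr.post = expr1.post + term.post + '+ ' }
#   expr -> term             { expr.post = term.post }
#   term -> NUMBER           { term.post = NUMBER.lexeme + ' ' }
def translate(tokens):
    pos = 0
    def term():
        nonlocal pos
        lexeme = tokens[pos]; pos += 1
        return lexeme + " "                   # action for term -> NUMBER
    def expr():
        nonlocal pos
        post = term()
        while pos < len(tokens) and tokens[pos] == "+":
            pos += 1
            post = post + term() + "+ "       # action for expr -> expr '+' term
        return post
    return expr().strip()

print(translate(["9", "+", "5", "+", "2"]))   # "9 5 + 2 +"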

Code generation (compiler)


From Wikipedia, the free encyclopedia


This article is about machine code generation with a compiler. For other uses, see Code generation
(disambiguation).
In computer science, code generation is the process by which a compiler's code generator converts
some intermediate representation of source code into a form (e.g., machine code) that can be readily executed
by a machine (often a computer).

Sophisticated compilers typically perform multiple passes over various intermediate forms. This multi-stage
process is used because many algorithms for code optimization are easier to apply one at a time, or because
the input to one optimization relies on the processing performed by another optimization. This organization also
facilitates the creation of a single compiler that can target multiple architectures, as only the last of the code
generation stages (the backend) needs to change from target to target. (For more information on compiler
design, see Compiler.)

The input to the code generator typically consists of a parse tree or an abstract syntax tree. The tree is
converted into a linear sequence of instructions, usually in an intermediate language such asthree address
code. Further stages of compilation may or may not be referred to as "code generation", depending on whether
they involve a significant change in the representation of the program. (For example, a peephole
optimization pass would not likely be called "code generation", although a code generator might incorporate a
peephole optimization pass.)

Contents
[hide]

• 1 Major tasks in code generation
• 2 Runtime code generation
• 3 Related concepts
○ 3.1 Reflection

[edit]Major tasks in code generation

In addition to the basic conversion from an intermediate representation into a linear sequence of machine
instructions, a typical code generator tries to optimize the generated code in some way. The generator may try
to use faster instructions, use fewer instructions, exploit available registers, and avoid redundant computations.

Tasks which are typically part of a sophisticated compiler's "code generation" phase include:

 Instruction selection: which instructions to use.

 Instruction scheduling: in which order to put those instructions. Scheduling is a speed optimization that can
have a critical effect on pipelined machines.

 Register allocation: the allocation of variables to processor registers.


Instruction selection is typically carried out by doing a recursive postorder traversal on the abstract syntax tree,
matching particular tree configurations against templates; for example, the tree W :=
ADD(X,MUL(Y,Z)) might be transformed into a linear sequence of instructions by recursively generating the
sequences for t1 := X and t2 := MUL(Y,Z), and then emitting the instruction ADD W, t1, t2.
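
A sketch of this recursive, postorder selection in Python follows. The tuple-based tree, temporary naming, and textual "instructions" are illustrative assumptions; unlike the description above, leaf variables are used directly rather than first copied into temporaries.

counter = 0

def new_temp():
    # Invent a fresh temporary name t1, t2, ...
    global counter
    counter += 1
    return f"t{counter}"

def select(node, code):
    """Return the name holding node's value, appending instructions to code."""
    if isinstance(node, str):            # a leaf: value is already in a variable
        return node
    op, left, right = node
    l = select(left, code)               # postorder: generate code for children first...
    r = select(right, code)
    t = new_temp()
    code.append(f"{op} {t}, {l}, {r}")   # ...then emit the parent's instruction
    return t

code = []
result = select(("ADD", "X", ("MUL", "Y", "Z")), code)
code.append(f"MOV W, {result}")          # W := ADD(X, MUL(Y, Z))
print("\n".join(code))
# MUL t1, Y, Z
# ADD t2, X, t1
# MOV W, t2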

In a compiler that uses an intermediate language, there may be two instruction selection stages — one to
convert the parse tree into intermediate code, and a second phase much later to convert the intermediate code
into instructions from the instruction set of the target machine. This second phase does not require a tree
traversal; it can be done linearly, and typically involves a simple replacement of intermediate-language
operations with their corresponding opcodes. However, if the compiler is actually a language translator (for
example, one that converts Eiffel to C), then the second code-generation phase may involve building a tree
from the linear intermediate code.

[edit]Runtime code generation

When code generation occurs at runtime, as in just-in-time compilation (JIT), it is important that the entire
process be efficient with respect to space and time. For example, when regular expressionsare interpreted and
used to generate code at runtime, a non-deterministic finite state machine is often generated instead of a
deterministic one, because usually the former can be created more quickly and occupies less memory space
than the latter. Despite its generally generating less efficient code, JIT code generation can take advantage
of profiling information that is available only at runtime.

[edit]Related concepts

The fundamental task of taking input in one language and producing output in a non-trivially different language
can be understood in terms of the core transformational operations of formal language theory. Consequently,
some techniques that were originally developed for use in compilers have come to be employed in other ways
as well. For example, YACC (Yet Another Compiler Compiler) takes input in Backus-Naur form and converts it
to a parser in C. Though it was originally created for automatic generation of a parser for a compiler, yacc is
also often used to automate writing code that needs to be modified each time specifications are changed. (For
example, see [1].)

Many integrated development environments (IDEs) support some form of automatic source code generation,
often using algorithms in common with compiler code generators, although commonly less complicated. (See
also: Program transformation, Data transformation.)

[edit]Reflection

In general, a syntax and semantic analyzer tries to retrieve the structure of the program from the source code,
while a code generator uses this structural information (e.g., data types) to produce code. In other words, the
former adds information while the latter loses some of the information. One consequence of this information
loss is that reflection becomes difficult or even impossible. To counter this problem, code generators often
embed syntactic and semantic information in addition to the code necessary for execution.

Program optimization
From Wikipedia, the free encyclopedia
(Redirected from Code optimization)

For algorithms to solve other optimization problems, see Optimization (mathematics).


In computer science, program optimization or software optimization is the process of modifying a software
system to make some aspect of it work more efficiently or use fewer resources.[1] In general, a computer
program may be optimized so that it executes more rapidly, is capable of operating with less memory
storage or other resources, or draws less power.

Although the word "optimization" shares the same root as "optimal," it is rare for the process of
optimization to produce a truly optimal system. The optimized system will typically only be optimal in one
application or for one audience. One might reduce the amount of time that a program takes to perform
some task at the price of making it consume more memory. In an application where memory space is at a
premium, one might deliberately choose a slower algorithm in order to use less memory. Often there is no
“one size fits all” design which works well in all cases, so engineers make trade-offs to optimize the
attributes of greatest interest. Additionally, the effort required to make a piece of software completely
optimal—incapable of any further improvement— is almost always more than is reasonable for the
benefits that would be accrued; so the process of optimization may be halted before a completely optimal
solution has been reached. Fortunately, it is often the case that the greatest improvements come early in
the process.
[edit]"Levels" of optimization
Optimization can occur at a number of "levels":

 Design level

At the highest level, the design may be optimized to make best use of the available resources. The
implementation of this design will benefit from a good choice of efficient algorithms and the
implementation of these algorithms will benefit from writing good quality code. The architectural design of
a system overwhelmingly affects its performance. The choice of algorithm affects efficiency more than
any other item of the design and, since the choice of algorithm usually is the first thing that must be
decided, arguments against early or "premature optimization" may be hard to justify.

In some cases, however, optimization relies on using more elaborate algorithms, making use of 'special
cases' and special 'tricks' and performing complex trade-offs. A 'fully optimized' program might be more
difficult to comprehend and hence may contain more faults than unoptimized versions.

 Source code level

Avoiding poor quality coding can also improve performance, by avoiding obvious 'slowdowns'. After that,
however, some optimizations are possible that actually decrease maintainability. Some, but not all,
optimizations can nowadays be performed by optimizing compilers.

 Compile level

Use of an optimizing compiler tends to ensure that the executable program is optimized at least as much
as the compiler can predict.

 Assembly level

At the lowest level, writing code using an assembly language designed for a particular hardware platform
can produce the most efficient and compact code if the programmer takes advantage of the full repertoire
of machine instructions. Many operating systems used on embedded systems have been traditionally
written in assembler code for this reason; when efficiency and size are less important large parts may be
written in a high-level language.

With more modern optimizing compilers and the greater complexity of recent CPUs, it is more difficult to
write code that is optimized better than the compiler itself generates, and few projects need resort to this
'ultimate' optimization step.

However, a large amount of code written today is still compiled with the intent to run on the greatest
percentage of machines possible. As a consequence, programmers and compilers don't always take
advantage of the more efficient instructions provided by newer CPUs or quirks of older models.
Additionally, assembly code tuned for a particular processor without using such instructions might still be
suboptimal on a different processor, expecting a different tuning of the code.

 Run time

Just in time compilers and Assembler programmers may be able to perform run time optimization
exceeding the capability of static compilers by dynamically adjusting parameters according to the actual
input or other factors.
Platform dependent and independent optimizations
Code optimization can be also broadly categorized as platform-dependent and platform-independent
techniques. While the latter ones are effective on most or all platforms, platform-dependent techniques
use specific properties of one platform, or rely on parameters depending on the single platform or even on
the single processor. Writing or producing different versions of the same code for different processors
might therefore be needed. For instance, in the case of compile-level optimization, platform-independent
techniques are generic techniques (such as loop unrolling, reduction in function calls, memory efficient
routines, reduction in conditions, etc.), that impact most CPU architectures in a similar way. Generally,
these serve to reduce the total Instruction path length required to complete the program and/or reduce
total memory usage during the process. On the other hand, platform-dependent techniques involve
instruction scheduling, instruction-level parallelism, data-level parallelism, and cache optimization techniques
(i.e. parameters that differ among various platforms); the optimal instruction scheduling might be
different even on different processors of the same architecture.

Interpreter (computing)
From Wikipedia, the free encyclopedia
In computer science, an interpreter normally means a computer program that executes, i.e. performs,
instructions written in a programming language. An interpreter may be a program that either

1. executes the source code directly

2. translates source code into some efficient intermediate representation (code) and immediately
executes this

3. explicitly executes stored precompiled code[1] made by a compiler which is part of the interpreter
system

Perl, Python, MATLAB, and Ruby are examples of type 2, while UCSD Pascal and Java are type 3: Source
programs are compiled ahead of time and stored as machine independent code, which is then linked at run-
time and executed by an interpreter and/or compiler (for JIT systems). Some systems, such
as Smalltalk, BASIC and others, may also combine 2 and 3.

While interpreting and compiling are the two main means by which programming languages are implemented,
these are not fully distinct categories, one of the reasons being that most interpreting systems also perform
some translation work, just like compilers. The terms "interpreted language" or "compiled language" merely
mean that the canonical implementation of that language is an interpreter or a compiler; a high level language
is basically an abstraction which is (ideally) independent of particular implementations.
Bytecode interpreters
Main article: Bytecode

There is a spectrum of possibilities between interpreting and compiling, depending on the amount of
analysis performed before the program is executed. For example, Emacs Lisp is compiled to bytecode,
which is a highly compressed and optimized representation of the Lisp source, but is not machine code
(and therefore not tied to any particular hardware). This "compiled" code is then interpreted by a bytecode
interpreter (itself written in C). The compiled code in this case is machine code for a virtual machine,
which is implemented not in hardware, but in the bytecode interpreter. The same approach is used with
the Forth code used in Open Firmware systems: the source language is compiled into "F code" (a
bytecode), which is then interpreted by a virtual machine.

Control tables - that do not necessarily ever need to pass through a compiling phase - dictate appropriate
algorithmic control flow via customized interpreters in similar fashion to bytecode interpreters.
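
As an illustration, the core of a bytecode interpreter is a fetch-decode-execute loop over a virtual instruction set. The stack machine and opcodes below (PUSH/ADD/MUL/PRINT/HALT) are invented for the example and do not correspond to any particular virtual machine.

def run(program):
    stack, pc = [], 0
    while True:
        op = program[pc]; pc += 1                # fetch and advance the program counter
        if op == "PUSH":
            stack.append(program[pc]); pc += 1   # the operand follows the opcode
        elif op == "ADD":
            b, a = stack.pop(), stack.pop(); stack.append(a + b)
        elif op == "MUL":
            b, a = stack.pop(), stack.pop(); stack.append(a * b)
        elif op == "PRINT":
            print(stack[-1])
        elif op == "HALT":
            return

# (3 + 4) * 12, expressed as bytecode for the virtual machine above
run(["PUSH", 3, "PUSH", 4, "ADD", "PUSH", 12, "MUL", "PRINT", "HALT"])   # prints 84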

Advantages and disadvantages of using interpreters


Programmers usually write programs in high level code which the CPU cannot execute. So this source
code has to be converted into machine code. This conversion is done by a compiler or an interpreter. A
compiler makes the conversion just once, while an interpreter typically converts it every time a program is
executed (or in some languages like early versions of BASIC, every time a single instruction is executed).

[edit]Development cycle
During program development, the programmer makes frequent changes to source code. A compiler
needs to make a compilation of the altered source files, and link the whole binary code before the
program can be executed. An interpreter usually just needs to translate to an intermediate representation
or not translate at all, thus requiring less time before the changes can be tested.

This often makes it easier to learn an interpreted language, and to find and correct bugs in interpreted programs.
Thus simple interpreted languages tend to have a friendlier environment for beginners.
[edit]Distribution

A compiler converts source code into binary instruction for a specific processor's architecture, thus
making it less portable. This conversion is made just once, on the developer's environment, and after that
the same binary can be distributed to the user's machines where it can be executed without further
translation.

An interpreted program can be distributed as source code. It needs to be translated on each final machine,
which takes more time but makes the program distribution independent of the machine's architecture.
[edit]Execution environment
An interpreter makes source translations at runtime. This means every line has to be converted
each time the program runs. This process slows down the program execution and is a major
disadvantage of interpreters compared to compilers. Another main disadvantage of an interpreter is that it must be
present on the target machine as additional software in order to run the program.
