
CHAPTER 1

INTRODUCTION TO VLSI
1.1 GENERAL
VLSI stands for "Very Large Scale Integration", the field concerned with packing more and more
logic devices into smaller and smaller areas. With VLSI, circuits that would once have taken boardfuls
of space can now fit into a space a few millimeters across. This has opened up a big
opportunity to do things that were not possible before. VLSI circuits are everywhere: your computer,
your car, your brand new state-of-the-art digital camera, your cell-phone, and so on. All this
involves a lot of expertise on many fronts within the same field, which we will look at in later sections.
VLSI has been around for a long time, but as a side effect of advances in the world of computers, there
has been a dramatic proliferation of tools that can be used to design VLSI circuits. Alongside, obeying
Moore's law, the capability of an IC has increased exponentially over the years in terms of computation
power, utilisation of available area, and yield. The combined effect of these two advances is that people can
now put diverse functionality into ICs, opening up new frontiers. Examples are embedded systems,
where intelligent devices are put inside everyday objects, and ubiquitous computing, where small
computing devices proliferate to such an extent that even the shoes you wear may actually do
something useful like monitoring your heartbeats. Integrated circuit (IC) technology is the enabling
technology for a whole host of innovative devices and systems that have changed the way we live.
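The exponential growth described by Moore's law can be illustrated with a small model. The base values (the 1971 Intel 4004 with roughly 2,300 transistors) and the two-year doubling period are assumptions chosen for illustration, not data taken from this text:

```python
def transistor_count(year, base_year=1971, base_count=2300, doubling_years=2):
    """Estimate transistors per chip under Moore's law.

    Illustrative only: assumes the Intel 4004 (1971, ~2,300 transistors)
    as a starting point and a doubling period of two years.
    """
    return base_count * 2 ** ((year - base_year) / doubling_years)

# e.g. transistor_count(1981) -> 73600.0 (five doublings after 1971)
```

Real scaling has not followed one fixed doubling period, so the model is only a sketch of the exponential trend the text refers to.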

Jack Kilby received the 2000 Nobel Prize in Physics for his part in the invention of the
integrated circuit (which Robert Noyce independently co-invented); without the integrated circuit,
neither transistors nor computers would be as important as they are today. VLSI systems are much
smaller and consume less power than the discrete components used to build electronic systems before
the 1960s. Integration allows us to build systems with many more transistors, allowing much more
computing power to be applied to solving a problem. Integrated circuits are also much easier to design
and manufacture and are more reliable than discrete systems; that makes it possible to develop
special-purpose systems that are more efficient than general-purpose computers for the task at hand.


Three categories:
1. Analog:
Small-transistor-count precision circuits such as amplifiers, data converters, filters, phase-locked
loops, and sensors.

2. ASICs or Application-Specific Integrated Circuits:


Progress in the fabrication of ICs has enabled us to create fast and powerful circuits in smaller
and smaller devices. This also means that we can pack much more functionality into the same area.
The biggest application of this ability is found in the design of ASICs. These are ICs created
for specific purposes: each device is created to do a particular job, and do it well. The most common
application area is DSP: signal filters, image compression, etc. To go to extremes, consider the
fact that a digital wristwatch normally consists of a single IC doing all the time-keeping jobs as well
as extra features like games, a calendar, etc.

3. SoC or Systems on a chip:


These are highly complex mixed signal circuits (digital and analog all on the same chip). A
network processor chip or a wireless radio chip is an example of an SoC.

1.2 APPLICATIONS OF VLSI


Electronic systems now perform a wide variety of tasks in daily life. Electronic systems in some
cases have replaced mechanisms that operated mechanically, hydraulically, or by other means;

electronics are usually smaller, more flexible, and easier to service. In other cases electronic systems
have created totally new applications. Electronic systems perform a variety of tasks, some of them
visible, some more hidden:
Personal entertainment systems such as portable MP3 players and DVD players perform
sophisticated algorithms with remarkably little energy.
Electronic systems in cars operate stereo systems and displays; they also control fuel injection
systems, adjust suspensions to varying terrain, and perform the control functions required for
anti-lock braking (ABS) systems.
Digital electronics compress and decompress video, even at high-definition data rates, on-the-fly in consumer electronics.
Low-cost terminals for Web browsing still require sophisticated electronics, despite their
dedicated function.
Personal computers and workstations provide word-processing, financial analysis, and games.
Computers include both central processing units (CPUs) and special-purpose hardware for disk
access, faster screen display, etc.
Medical electronic systems measure bodily functions and perform complex processing algorithms to
warn about unusual conditions. The availability of these complex systems, far from overwhelming
consumers, only creates demand for even more complex systems. The growing sophistication of
applications continually pushes the design and manufacturing of integrated circuits and electronic
systems to new levels of complexity. And perhaps the most amazing characteristic of this collection of
systems is its variety: as systems become more complex, we build not a few general-purpose computers
but an ever wider range of special-purpose systems. Our ability to do so is a testament to our growing
mastery of both integrated circuit manufacturing and design, but the increasing demands of customers
continue to test the limits of design and manufacturing.

1.3 ADVANTAGES OF VLSI


While we will concentrate on integrated circuits in this book, the properties of integrated
circuits (what we can and cannot efficiently put in an integrated circuit) largely determine the
architecture of the entire system. Integrated circuits improve system characteristics in several critical
ways. ICs have three key advantages over digital circuits built from discrete components:
Size. Integrated circuits are much smaller; both transistors and wires are shrunk to micrometer sizes,
compared to the millimeter or centimeter scales of discrete components. Small size leads to advantages
in speed and power consumption, since smaller components have smaller parasitic resistances,
capacitances, and inductances.
Speed. Signals can be switched between logic 0 and logic 1 much more quickly within a chip than they can
between chips. Communication within a chip can occur hundreds of times faster than communication
between chips on a printed circuit board. The high speed of circuits on-chip is due to their small size:
smaller components and wires have smaller parasitic capacitances to slow down the signal.
Power consumption. Logic operations within a chip also take much less power. Once again, lower
power consumption is largely due to the small size of circuits on the chip: smaller parasitic
capacitances and resistances require less power to drive them.
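The link between parasitic capacitance and power can be made concrete with the standard first-order CMOS dynamic-power relation P = a * C * V^2 * f. This formula is not stated in the text above, but it is the conventional model behind the claim that smaller capacitances need less power:

```python
def dynamic_power(c_load, v_dd, freq, activity=1.0):
    """Dynamic switching power of a CMOS node: P = a * C * V^2 * f,
    where a is the switching activity factor, C the load capacitance,
    V the supply voltage and f the switching frequency."""
    return activity * c_load * v_dd ** 2 * freq

# Shrinking the load from 1 pF (board-level wiring) to 10 fF (on-chip)
# cuts the switching power by the same factor of 100.
board = dynamic_power(1e-12, 5.0, 1e6)   # 1 pF, 5 V, 1 MHz
chip = dynamic_power(10e-15, 5.0, 1e6)   # 10 fF, 5 V, 1 MHz
```

The example capacitance, voltage and frequency values are illustrative, not measurements.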

1.4 VLSI AND EMBEDDED SYSTEMS


These advantages of integrated circuits translate into advantages at the system level:
Smaller physical size. Smallness is often an advantage in itself; consider portable televisions or
handheld cellular telephones.
Lower power consumption. Replacing a handful of standard parts with a single chip reduces total
power consumption. Reducing power consumption has a ripple effect on the rest of the system: a
smaller, cheaper power supply can be used; since less power consumption means less heat, a fan may
no longer be necessary; and a simpler cabinet with less electromagnetic shielding may be
feasible, too.

Reduced cost. Reducing the number of components, the power supply requirements, cabinet costs,
and so on, will inevitably reduce system cost. The ripple effect of integration is such that the cost of a
system built from custom ICs can be less, even though the individual ICs cost more than the standard
parts they replace. Understanding why integrated circuit technology has such profound influence on the
design of digital systems requires understanding both the technology of IC manufacturing and the
economics of ICs and digital systems.

1.5 FIELD-PROGRAMMABLE GATE ARRAY (FPGA)


A field-programmable gate array (FPGA) is an integrated circuit designed to be configured by
the customer or designer after manufacturing, hence "field-programmable". The FPGA configuration is
generally specified using a hardware description language (HDL), similar to that used for an
application-specific integrated circuit (ASIC) (circuit diagrams were previously used to specify the
configuration, as they were for ASICs, but this is increasingly rare). FPGAs can be used to implement
any logical function that an ASIC could perform. The ability to update the functionality after shipping,
partial re-configuration of a portion of the design, and the low non-recurring engineering costs relative
to an ASIC design (notwithstanding the generally higher unit cost) offer advantages for many
applications. FPGAs contain programmable logic components called "logic blocks", and a hierarchy of
reconfigurable interconnects that allow the blocks to be "wired together" somewhat like many
(changeable) logic gates that can be inter-wired in (many) different configurations. Logic blocks can be
configured to perform complex combinational functions, or merely simple logic gates like AND and
XOR. In most FPGAs, the logic blocks also include memory elements, which may be simple flip-flops
or more complete blocks of memory. In addition to digital functions, some FPGAs have analog
features. The most common analog feature is programmable slew rate and drive strength on each output
pin, allowing the engineer to set slow slew rates on lightly loaded pins that would otherwise ring
unacceptably, and to set stronger, faster slew rates on heavily loaded pins on high-speed channels that would
otherwise run too slowly. Another relatively common analog feature is differential comparators on input

pins designed to be connected to differential signaling channels. A few "mixed signal FPGAs" have
integrated peripheral Analog-to-Digital Converters (ADCs) and Digital-to-Analog Converters (DACs)
with analog signal conditioning blocks allowing them to operate as a system-on-a-chip.[5] Such devices
blur the line between an FPGA, which carries digital ones and zeros on its internal programmable
interconnect fabric, and field-programmable analog array (FPAA), which carries analog values on its
internal programmable interconnect fabric.
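The way an FPGA logic block's lookup table (LUT) realises an arbitrary function can be sketched as follows: the configuration bits are simply the function's truth table, and the inputs address into it. The majority function used here is just an arbitrary example, not something from the text:

```python
class LUT:
    """A k-input lookup table: the configuration bits ARE the truth table."""

    def __init__(self, k, config_bits):
        assert len(config_bits) == 2 ** k
        self.k = k
        self.config = config_bits  # config[i] = output for input pattern i

    def evaluate(self, *inputs):
        assert len(inputs) == self.k
        index = 0
        for bit in inputs:  # the input bits form the address into the table
            index = (index << 1) | bit
        return self.config[index]

# Configure a 3-input LUT as a majority-of-three function.
maj = LUT(3, [0, 0, 0, 1, 0, 1, 1, 1])
```

Reprogramming the FPGA amounts to loading different configuration bits; the hardware itself is unchanged, which is the essence of field programmability.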

1.5.1 FOUNDATION OF FPGA


The FPGA industry sprouted from programmable read-only memory (PROM) and
programmable logic devices (PLDs). PROMs and PLDs both had the option of being programmed in
batches in a factory or in the field (field-programmable); however, their programmable logic was hard-wired
between logic gates. In the late 1980s the Naval Surface Warfare Department funded an experiment
proposed by Steve Casselman to develop a computer that would implement 600,000 reprogrammable
gates. Casselman was successful, and a patent related to the system was issued in 1992. Some of the
industry's foundational concepts and technologies for programmable logic arrays, gates, and logic
blocks are founded in patents awarded to David W. Page and LuVerne R. Peterson in 1985. Xilinx
co-founders Ross Freeman and Bernard Vonderschmitt invented the first commercially viable
field-programmable gate array in 1985: the XC2064. The XC2064 had programmable gates and
programmable interconnects between gates, the beginnings of a new technology and market. The
XC2064 boasted a mere 64 configurable logic blocks (CLBs), with two 3-input lookup tables (LUTs).
More than 20 years later, Freeman was entered into the National Inventors Hall of Fame for his
invention. Xilinx grew quickly and unchallenged from 1985 to the mid-1990s, when
competitors sprouted up, eroding significant market share. By 1993, Actel was serving about 18
percent of the market. The 1990s were an explosive period for FPGAs, both in sophistication
and in volume of production. In the early 1990s, FPGAs were primarily used in telecommunications
and networking. By the end of the decade, FPGAs found their way into consumer, automotive, and
industrial applications. FPGAs got a glimpse of fame in 1997, when Adrian Thompson, a researcher
working at the University of Sussex, merged genetic algorithm technology and FPGAs to create a
sound recognition device. Thompson's algorithm configured an array of 10 x 10 cells in a Xilinx FPGA

chip to discriminate between two tones, utilising analogue features of the digital chip. The application
of genetic algorithms to the configuration of devices like FPGAs is now referred to as evolvable
hardware.

1.5.2 MODERN DEVELOPMENTS


A recent trend has been to take the coarse-grained architectural approach a step further by
combining the logic blocks and interconnects of traditional FPGAs with embedded microprocessors
and related peripherals to form a complete "system on a programmable chip". This work mirrors the
1982 architecture by Ron Perlof and Hana Potash of Burroughs Advanced Systems Group, which combined a
reconfigurable CPU architecture on a single chip called the SB24.
Examples of such hybrid technologies can be found in the Xilinx Virtex-II PRO and Virtex-4 devices,
which include one or more PowerPC processors embedded within the FPGA's logic fabric. The Atmel
FPSLIC is another such device, which uses an AVR processor in combination with Atmel's
programmable logic architecture. The Actel SmartFusion devices add an ARM Cortex-M3
hard processor core (with up to 512 kB of flash and 64 kB of RAM) and analog peripherals
such as a multi-channel ADC and DACs to their flash-based FPGA fabric. In 2010, an extensible
processing platform was introduced for FPGAs that fused features of an ARM high-end microcontroller
(hard-core implementations of a 32-bit processor, memory, and I/O) with an FPGA fabric to make
FPGAs easier for embedded designers to use. By incorporating the ARM processor-based platform into
a 28 nm FPGA family, the extensible processing platform enables system architects and embedded
software developers to apply a combination of serial and parallel processing to address the challenges
they face in designing today's embedded systems, which must meet ever-growing demands to perform
highly complex functions. By allowing them to design in a familiar ARM environment, embedded
designers can benefit from the time-to-market advantages of an FPGA platform compared to the more
traditional design cycles associated with ASICs. An alternate approach to using hard-macro processors
is to make use of soft processor cores that are implemented within the FPGA logic; MicroBlaze and
Nios II are examples of popular soft-core processors. As previously mentioned, many modern FPGAs
can be reprogrammed at run time, and this is leading to the idea of reconfigurable
computing or reconfigurable systems: CPUs that reconfigure themselves to suit the task at

hand. Additionally, new, non-FPGA architectures are beginning to emerge. Software-configurable
microprocessors such as the Stretch S5000 adopt a hybrid approach by providing an array of processor
cores and FPGA-like programmable cores on the same chip.

1.5.3 FPGA COMPARISONS


Historically, FPGAs have been slower, less energy efficient and generally achieved less
functionality than their fixed ASIC counterparts. A study has shown that designs implemented on
FPGAs need on average 18 times as much area, draw 7 times as much dynamic power, and are 3 times
slower than the corresponding ASIC implementations. Advantages of FPGAs include the ability to
re-program in the field to fix bugs, and may include a shorter time to market and lower non-recurring
engineering costs. Vendors can also take a middle road by developing their hardware on ordinary
FPGAs, but manufacture their final version so it can no longer be modified after the design has been
committed.
Xilinx claims that several market and technology dynamics are changing the ASIC/FPGA paradigm:

Integrated circuit costs are rising aggressively

ASIC complexity has lengthened development time

R&D resources and headcount are decreasing

Revenue losses for slow time-to-market are increasing

Financial constraints in a poor economy are driving low-cost technologies


These trends make FPGAs a better alternative than ASICs for a larger number of higher-volume
applications than they have historically been used for, to which the company attributes the growing
number of FPGA design starts. Some FPGAs have the capability of partial re-configuration, which
lets one portion of the device be re-programmed while other portions continue running.

1.6 COMPLEX PROGRAMMABLE LOGIC DEVICES


The primary differences between CPLDs (Complex Programmable Logic Devices) and FPGAs
are architectural. A CPLD has a somewhat restrictive structure consisting of one or more programmable
sum-of-products logic arrays feeding a relatively small number of clocked registers. The result of this is
less flexibility, with the advantage of more predictable timing delays and a higher logic-to-interconnect
ratio. The FPGA architectures, on the other hand, are dominated by interconnect. This makes them far
more flexible (in terms of the range of designs that are practical for implementation within them) but
also far more complex to design for. In practice, the distinction between FPGAs and CPLDs is often
one of size, as FPGAs are usually much larger in terms of resources than CPLDs. Typically only
FPGAs contain more advanced embedded functions such as adders, multipliers, memory, SerDes and
other hardened functions. Another common distinction is that CPLDs contain embedded flash to store
their configuration, while FPGAs usually, but not always, require an external flash or other device to
store their configuration.

1.7 APPLICATIONS
Applications of FPGAs include digital signal processing, software-defined radio, aerospace and
defense systems, ASIC prototyping, medical imaging, computer vision, speech recognition,
cryptography, bioinformatics, computer hardware emulation, radio astronomy, metal detection and a
growing range of other areas. FPGAs originally began as competitors to CPLDs and competed in a
similar space, that of glue logic for PCBs. As their size, capabilities, and speed increased, they began to
take over larger and larger functions, to the point where some are now marketed as full systems on chips
(SoC). Particularly with the introduction of dedicated multipliers into FPGA architectures in the late
1990s, applications which had traditionally been the sole reserve of DSPs began to incorporate FPGAs
instead. Traditionally, FPGAs have been reserved for specific vertical applications where the volume of
production is small. For these low-volume applications, the premium that companies pay in hardware
costs per unit for a programmable chip is more affordable than the development resources spent on
creating an ASIC for a low-volume application. Today, new cost and performance dynamics have
broadened the range of viable applications.

1.8 APPLICATION-SPECIFIC INTEGRATED CIRCUIT (ASIC)


An Application-Specific Integrated Circuit (ASIC) is an integrated circuit (IC) customized for a
particular use, rather than intended for general-purpose use. For example, a chip designed solely to run
a cell phone is an ASIC. Application-specific standard products (ASSPs) are intermediate between
ASICs and industry-standard integrated circuits like the 7400 or the 4000 series. As feature sizes have
shrunk and design tools improved over the years, the maximum complexity (and hence functionality)
possible in an ASIC has grown from 5,000 gates to over 100 million. Modern ASICs often include
entire 32-bit processors, memory blocks including ROM, RAM, EEPROM, Flash and other large
building blocks. Such an ASIC is often termed a SoC (system-on-chip). Designers of digital ASICs use
a hardware description language (HDL), such as Verilog or VHDL, to describe the functionality of
ASICs. Field-programmable gate arrays (FPGA) are the modern-day technology for building a
breadboard or prototype from standard parts; programmable logic blocks and programmable
interconnects allow the same FPGA to be used in many different applications. For smaller designs
and/or lower production volumes, FPGAs may be more cost effective than an ASIC design even in
production. The non-recurring engineering (NRE) cost of an ASIC can run into the millions of dollars.

1.8.1 FOUNDATION OF ASIC


The initial ASICs used gate array technology. Ferranti produced perhaps the first gate-array, the
ULA (Uncommitted Logic Array), around 1980. An early successful commercial application was the
ULA circuitry found in the 8-bit ZX81 and ZX Spectrum low-end personal computers, introduced in
1981 and 1982. These were used by Sinclair Research (UK) essentially as a low-cost I/O solution
aimed at handling the computer's graphics. Some versions of ZX81/Timex Sinclair 1000 used just four
chips (ULA, 2Kx8 RAM, 8Kx8 ROM, Z80A CPU) to implement an entire mass-market personal
computer with built-in BASIC interpreter. Customization occurred by varying the metal interconnect
mask. ULAs had complexities of up to a few thousand gates. Later versions became more generalized,


with different base dies customised by both metal and polysilicon layers. Some base dies include RAM
elements.

1.8.2 STANDARD-CELL DESIGN


In the mid-1980s, a designer would choose an ASIC manufacturer and implement their design
using the design tools available from the manufacturer. While third-party design tools were available,
there was not an effective link from the third-party design tools to the layout and actual semiconductor
process performance characteristics of the various ASIC manufacturers. Most designers ended up using
factory-specific tools to complete the implementation of their designs. A solution to this problem,
which also yielded a much higher density device, was the implementation of standard cells. Every
ASIC manufacturer could create functional blocks with known electrical characteristics, such as
propagation delay, capacitance and inductance, that could also be represented in third-party tools.
Standard-cell design is the utilization of these functional blocks to achieve very high gate density and
good electrical performance.
Standard-cell design fits between gate-array and full-custom design in terms of both its non-recurring engineering and recurring component cost. By the late 1990s, logic synthesis tools became
available. Such tools could compile HDL descriptions into a gate-level netlist. Standard-cell Integrated
Circuits (ICs) are designed in the following conceptual stages, although these stages overlap
significantly in practice.
A team of design engineers starts with a non-formal understanding of the required functions for
a new ASIC, usually derived from Requirements analysis.
The design team constructs a description of an ASIC to achieve these goals using an HDL. This
process is analogous to writing a computer program in a high-level language. This is usually
called the RTL (Register transfer level) design.
Suitability for purpose is verified by functional verification. This may include such techniques
as logic simulation, formal verification, emulation, or creating an equivalent pure software

model (see Simics, for example). Each technique has advantages and disadvantages, and often
several methods are used.
Logic synthesis transforms the RTL design into a large collection of lower-level constructs
called standard cells. These constructs are taken from a standard-cell library consisting of
pre-characterized collections of gates (such as 2-input NOR, 2-input NAND, inverters, etc.). The
standard cells are typically specific to the planned manufacturer of the ASIC. The resulting
collection of standard cells, plus the needed electrical connections between them, is called a
gate-level netlist.
The gate-level netlist is next processed by a placement tool which places the standard cells onto
a region representing the final ASIC. It attempts to find a placement of the standard cells,
subject to a variety of specified constraints.
The routing tool takes the physical placement of the standard cells and uses the netlist to create
the electrical connections between them. Since the search space is large, this process will
produce a sufficient rather than globally optimal solution. The output is a file which can be
used to create a set of photomasks enabling a semiconductor fabrication facility (commonly
called a 'fab') to produce physical ICs.
Given the final layout, circuit extraction computes the parasitic resistances and capacitances. In
the case of a digital circuit, this will then be further mapped into delay information, from which
the circuit performance can be estimated, usually by static timing analysis. This and other final
tests, such as design rule checking and power analysis (collectively called signoff), are intended
to ensure that the device will function correctly over all extremes of process, voltage and
temperature. When this testing is complete the photomask information is released for chip
fabrication.
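The static timing analysis mentioned above can be sketched as a longest-path search over the gate-level netlist. The gate names and delay values below are invented for illustration; a real timing analyzer also models rise/fall times, wire delays and clock constraints:

```python
# A toy gate-level netlist as a DAG: gate -> (gate delay, fan-in gates).
# All names and delay values here are made up for illustration.
netlist = {
    "in_a": (0.0, []),
    "in_b": (0.0, []),
    "nand1": (1.2, ["in_a", "in_b"]),
    "inv1": (0.7, ["nand1"]),
    "nand2": (1.2, ["nand1", "in_b"]),
    "out": (0.5, ["inv1", "nand2"]),
}

def arrival_time(gate):
    """Latest arrival time at a gate's output: its own delay plus the
    slowest of its fan-in arrival times (longest path through the DAG)."""
    delay, fanin = netlist[gate]
    return delay + max((arrival_time(g) for g in fanin), default=0.0)
```

Here `arrival_time("out")` follows the critical path in_b -> nand1 -> nand2 -> out, which is how the estimated maximum clock frequency of the chip would be derived.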
These steps, implemented with a level of skill common in the industry, almost always produce a
final device that correctly implements the original design, unless flaws are later introduced by the
physical fabrication process. The design steps (or flow) are also common to standard product design.

The significant difference is that standard-cell design uses the manufacturer's cell libraries that have
been used in potentially hundreds of other design implementations and therefore are of much lower risk
than full custom design. Standard cells produce a design density that is cost effective, and they can also
integrate IP cores and SRAM (Static Random Access Memory) effectively, unlike Gate Arrays.


CHAPTER - 2
LITERATURE REVIEW
A. 7-Segment Decoder
A 7-segment decoder converts the logic states of its inputs into seven output bits that drive a
7-segment display. It is used widely in devices whose main function is to display numbers from
digital circuitry. Examples of such devices include calculators, elevator displays, digital timers and
digital clocks.
There are many types of decoders, such as the 2-4 decoder, 3-8 decoder and 4-16 decoder. Since there are
ten decimal numerals (0-9) to be displayed on the 7-segment display, a 4-16 decoder was used.
The structure of a 7-segment display is shown in Fig. 1. It displays decimal numerals using seven
segments, each labelled with a letter from a to g. By setting the required
segments to be turned on, the desired decimal numeral can be displayed on the 7-segment display.
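The decoder's truth table can be sketched as a simple lookup using the conventional segment patterns for the digits 0-9. The segment ordering (a, b, c, d, e, f, g) is an assumption here; the actual bit assignment depends on how the display is wired:

```python
# Segment patterns for digits 0-9, as (a, b, c, d, e, f, g), 1 = segment lit.
SEGMENTS = {
    0: (1, 1, 1, 1, 1, 1, 0),
    1: (0, 1, 1, 0, 0, 0, 0),
    2: (1, 1, 0, 1, 1, 0, 1),
    3: (1, 1, 1, 1, 0, 0, 1),
    4: (0, 1, 1, 0, 0, 1, 1),
    5: (1, 0, 1, 1, 0, 1, 1),
    6: (1, 0, 1, 1, 1, 1, 1),
    7: (1, 1, 1, 0, 0, 0, 0),
    8: (1, 1, 1, 1, 1, 1, 1),
    9: (1, 1, 1, 1, 0, 1, 1),
}

def decode(bcd):
    """Map a BCD input (0-9) to the seven segment outputs a-g."""
    return SEGMENTS[bcd]
```

In hardware, each of the seven output columns of this table becomes one combinational logic function of the four input bits.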

B. IC Design
IC layouts are built from three basic components: transistors, wires and vias. During the
design of a layout, the design rules have to be considered.
Design rules govern the layout of individual components and the interaction between those
components. When designing an IC, designers tend to make the components as small as possible,
enabling implementation of as many functions as possible onto a single chip. However, since the
transistors and wires are extremely small, errors may happen during the fabrication process. Hence,

design rules are created and formulated to minimise problems during the fabrication process and help to
increase the yield of correct chips to a suitable level. Therefore, it is important to adhere to the design
rules during layout design.

C. Physical Verification of Design


Physical verification is a process in which an IC layout design is checked using EDA tools to ensure it
meets the design criteria and rules. The verification process used in this project involves DRC (Design
Rule Check), LVS (Layout Versus Schematic) and ERC (Electrical Rule Check). These are important
procedures in IC layout design and cannot be treated lightly.

i) Design Rule Check (DRC)


DRC is a verification process that determines whether the physical layout of a chip design satisfies the
Design Rules or not. It ensures that all the polygons and layers meet the manufacturing process rules
that define the limits of a manufacturing design such as the width and space rules. DRC is the first level
of verification once the layout is ready. In this verification stage, the connectivity and guidelines rules
will be checked as well. DRC will not only check the designs that are created by the designers, but also
the design placed within the context in which it is going to be used. Therefore, the possibility of errors
in the design will be greatly reduced and a high overall yield and reliability of design will be achieved.
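As an illustration, a minimum-width check of the kind DRC performs can be sketched in a few lines. The layer names and width values below are made up for the example, not taken from any real process design kit:

```python
# Minimum width per layer, in arbitrary grid units (illustrative values).
MIN_WIDTH = {"metal1": 2, "poly": 1}

def drc_width(shapes):
    """Return a violation message for every rectangle narrower than its
    layer's minimum width. Shapes are (layer, x1, y1, x2, y2) rectangles;
    the shorter side of each rectangle is taken as its width."""
    errors = []
    for layer, x1, y1, x2, y2 in shapes:
        width = min(abs(x2 - x1), abs(y2 - y1))
        if width < MIN_WIDTH.get(layer, 0):
            errors.append(f"{layer}: width {width} < {MIN_WIDTH[layer]}")
    return errors
```

A production DRC deck contains hundreds of such rules (spacing, enclosure, density and so on), all run over the full polygon database rather than simple rectangles.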

ii) Layout Versus Schematic (LVS)


LVS is a process to check if a particular IC layout corresponds to the original schematic circuit of the
design.
The schematic acts as the reference circuit and the layout will be checked against it. In this process, the
electrical connectivity of all signals, including the input, output and power signals to their

corresponding devices are checked. Besides that, the sizes of the devices will also be checked, including
the width and length of transistors, and the sizes of resistors and capacitors.
The LVS will also identify the extra components and signals that have not been included in the
schematic, for example, floating nodes.
In the Electric VLSI Design System, this type of checking is known as Network Consistency Checking
(NCC), as it is able to compare any two circuits, including two layouts or two schematics.
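The comparison LVS/NCC performs can be sketched, in much simplified form, as a device-by-device netlist diff. Unlike a real LVS tool, which matches devices and nets up to renaming, this sketch assumes the names already agree between schematic and layout:

```python
def lvs_compare(schematic, layout):
    """Report devices that differ between two netlists.

    Each netlist maps a device name to (device type, tuple of connected
    nets). A device appears in the result if it is missing from one
    netlist, extra in the other, or wired differently.
    """
    mismatches = []
    for dev in schematic.keys() | layout.keys():
        if schematic.get(dev) != layout.get(dev):
            mismatches.append(dev)
    return sorted(mismatches)
```

Extending this idea with device-size checks and floating-net detection gives the extra checks described above.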

iii) Electrical Rules Check (ERC)


ERC is usually used to check for errors in connectivity or device connection. It is an optional check and
is seldom used as an independent verification step.
ERC is usually used to check for any unconnected, partly connected or redundant devices. It will also
check for any disabled transistors, floating nodes and short circuits.
ERC is very useful in accelerating the debugging of problems such as short circuits, and can speed up the
design process.


CHAPTER - 3
TASKS PERFORMED
3.1: Fundamentals in Digital Abstraction
3.1.1: Why digital?
In classical physics, measurable quantities that we might use to represent information, such as
position, voltage, frequency and force, have values that vary continuously over some
range of possibilities. The values of such variables are real numbers: even over a restricted range,
for example the interval between 0 and 1, the number of possible values of a real variable is infinite,
implying that such a variable might carry arbitrarily large amounts of information. This may appear
to be an advantage of continuous variables, but it comes at a serious cost: in practice, noise and
limited precision obscure the actual information content of the variable. This limitation of the
engineering discipline surrounding analog systems motivates representing information in digital form.


3.2: Digital Abstraction


Digital systems send and store data by using ranges of voltages to represent a value.
Typically a wire holds one of two values: 0 or 1.
Digital abstraction is the practice of reducing complex analog circuit behaviour down to 1s and 0s.
Digital abstraction allows one to build digital circuits without understanding the actual electronics
of the circuits.
In modern computers, information is generally represented by voltages.
For example, when a component needs to communicate with another component, they are
interconnected by one or more signal wires and a common ground, as shown in Figure 3.1.
One component sets the voltage and the other component reads the voltage.
Since voltage is a continuous value, we could encode a large amount of information on a signal wire by
encoding every possible message as a different voltage value.

Figure 3.1: Digital Abstraction


However, this method has two drawbacks:
1. Sources of noise can cause the received voltage to differ from the one set by the sending component. For example, if the sent voltage is 1.74 V, noise may cause the received voltage to become 1.78 V.
2. There is limited precision in setting the sending voltage.

Therefore, in order to overcome these drawbacks, we use standard voltage ranges over a 0-5 V supply: 0-2.5 V is considered a 0 and 2.5-5 V is considered a 1.
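This thresholding scheme can be sketched in a few lines of Python; the 0-5 V supply and the 1.74 V noise example follow the text, everything else is illustrative:

```python
# Sketch of the digital abstraction described above: a continuous
# voltage on a wire is mapped to a bit using the 2.5 V midpoint of
# the 0-5 V supply.
V_DD = 5.0
THRESHOLD = V_DD / 2  # 0-2.5 V reads as 0, 2.5-5 V reads as 1

def encode(bit: int) -> float:
    """Drive the wire to a nominal voltage for the given bit."""
    return V_DD if bit else 0.0

def decode(voltage: float) -> int:
    """Read the wire: any voltage above the threshold is a 1."""
    return 1 if voltage > THRESHOLD else 0

# Noise of +0.04 V (the 1.74 V -> 1.78 V example in the text)
# no longer corrupts the message: both voltages decode to 0.
assert decode(1.74) == 0 and decode(1.78) == 0
assert decode(encode(1) - 0.3) == 1  # a noisy 4.7 V still reads as 1
```

This is exactly why the drawbacks above disappear: small noise and limited precision cannot move a voltage across the wide gap between the two ranges.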

3.3: The MOSFET as A Switch


The MOSFET is a voltage-controlled field-effect transistor. It has a metal-oxide gate electrode which is electrically insulated from the main semiconductor n-channel or p-channel by a very thin layer of insulating material, usually silicon dioxide (commonly known as glass).
This ultra-thin insulated gate electrode can be thought of as one plate of a capacitor. The isolation of the controlling gate makes the input resistance of the MOSFET extremely high, way up in the mega-ohm (MΩ) region, making it almost infinite.
As the gate terminal is isolated from the main current-carrying channel, no current flows into the gate. The MOSFET also acts like a voltage-controlled resistor where the current flowing through the main channel between the drain and source is proportional to the input voltage. MOSFETs are three-terminal devices with a gate, drain and source, and both p-channel (PMOS) and n-channel (NMOS) MOSFETs are available.

3.3.1: MOSFET Characteristics Curves


Figure 3.3.1.1: MOSFET characteristic curves.

The minimum ON-state gate voltage required to ensure that the MOSFET remains ON when
carrying the selected drain current can be determined from the V-I transfer curves above in figure
3.3.1.1. When VIN is HIGH or equal to VDD, the MOSFET Q-point moves to point A along the load line.
The drain current ID increases to its maximum value due to the reduction in channel resistance. ID becomes a constant value, independent of VDD and dependent only on VGS. The transistor therefore behaves like a closed switch, although the channel ON-resistance does not fall fully to zero; it drops to the small RDS(on) value.
Likewise, when VIN is LOW or reduced to zero, the MOSFET Q-point moves from point A to
point B along the load line. The channel resistance is very high so the transistor acts like an open
circuit and no current flows through the channel. So if the gate voltage of the MOSFET toggles between two values, HIGH and LOW, the MOSFET behaves as a single-pole single-throw (SPST) solid-state switch. The MOSFET basically has two regions of operation: the cut-off region and the saturation region.
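The switch behaviour described above can be sketched as a two-state resistance model; all component values (threshold voltage, on-resistance, load) are illustrative, not taken from the text:

```python
# Minimal sketch of the MOSFET-as-switch behaviour: gate voltage
# toggles the channel between a small on-resistance RDS(on) and an
# effectively open circuit. Values are illustrative only.
V_T = 0.7        # assumed threshold voltage (V)
R_DS_ON = 0.05   # assumed on-resistance (ohms)
R_OFF = 1e9      # channel resistance when cut off (ohms)

def channel_resistance(v_gs: float) -> float:
    """Return the drain-source resistance for a given gate voltage."""
    return R_DS_ON if v_gs >= V_T else R_OFF

def drain_current(v_gs: float, v_dd: float, r_load: float) -> float:
    """Current through the switch in series with a load resistor."""
    return v_dd / (r_load + channel_resistance(v_gs))

# VIN HIGH: Q-point at A, current near its maximum VDD / R_load.
# VIN LOW: Q-point at B, essentially no current flows.
assert drain_current(5.0, 5.0, 100.0) > 0.049
assert drain_current(0.0, 5.0, 100.0) < 1e-8
```

The two asserts mirror points A and B on the load line: a closed switch with a tiny residual RDS(on), and an open circuit.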

3.4: Digital-Circuit Speed and Power Consumption


The essential figures of merit of a digital circuit or system are speed and power consumption. The
usual measure for speed is a (reciprocal) delay time td or a maximum clock frequency fc,max. Power
efficiency can be determined as the total power Ptot or in terms of a switching energy Es, i.e., the
average energy consumed for one switching transition of a device.
Consider a digital circuit as a network of switches with parasitic capacitances, which can be described statistically in terms of average values. Thus, one representative pair of switches charges and discharges a representative load capacitance CL to the supply voltage VDD with a current Ion. Each switch in the off-state draws a leakage current Ioff. For simplicity, any short-circuit current flowing across the switches is assumed to be negligible (which is usually acceptable), and a pair of switches is regarded as one device.
The total power consumption per device is the sum of a dynamic component from charging and
discharging the capacitance and a static component from the leakage current:
Ptot = Pdyn + Pstat = fcαCL(VDD)² + IoffVDD

(3.4.1)

In this expression fc is the clock frequency and α is the switching probability, the so-called activity ratio. A more universal measure is the switching energy Es, which is related to the power consumption as seen in equation 3.4.2.
Es = αCL(VDD)² + (1/fc)IoffVDD

(3.4.2)

Therefore

Ptot = fcEs

(3.4.3)

The advantage of using the switching energy rather than the total power consumption is
that Es is independent of the system throughput. For example, when pipelined architectures are
considered the power consumption varies with the architectural changes, but the switching energy
does not and is therefore the better candidate to optimize.
The speed of a digital circuit can be characterized in two ways: by the delay time td, which is approximated as in equation 3.4.4, and by the maximum clock frequency fc,max, which is given by equation 3.4.5.
td = CLVDD / Ion

(3.4.4)

fc,max = 1 / (td ld)

(3.4.5)

In the equation 3.4.5 the logic depth ld is the number of stages through which a switching event must
propagate during one clock cycle.
Clearly, a system is most efficient when operated at the maximum clock frequency. Combining
(3.4.2), (3.4.4), and (3.4.5) assuming that fc = fc,max yields the following fundamental equation:
Es = αCL(VDD)²[1 + (ld/α)(Ioff/Ion)]

(3.4.6)

The following two conclusions can be drawn from (3.4.6):


1. The switching energy increases quadratically with the supply voltage.
2. The leakage current Ioff does not increase the switching energy as long as Ioff/Ion << α/ld. Thus, the key to reducing the power consumption is to reduce the supply voltage. Typical values of α/ld for a microprocessor are 0.1/7, i.e., the leakage current can be almost as large as Ion/70, which is several orders of magnitude above conventional leakage constraints.
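The relations in this section (with α denoting the activity ratio) can be checked numerically; the device values below are illustrative, not taken from the text:

```python
# Numeric sketch of the speed/energy relations of this section.
# All component values are invented for illustration.
C_L = 1e-15      # load capacitance (F)
V_DD = 1.0       # supply voltage (V)
alpha = 0.1      # activity ratio
l_d = 7          # logic depth
I_on = 1e-4      # on-current (A)
I_off = 1e-9     # leakage current (A)

# td = CL*VDD/Ion and fc,max = 1/(td*ld), equations (3.4.4)-(3.4.5)
t_d = C_L * V_DD / I_on
f_c_max = 1.0 / (t_d * l_d)

# Es = alpha*CL*VDD^2 * (1 + (ld/alpha)*(Ioff/Ion)), equation (3.4.6)
E_s = alpha * C_L * V_DD**2 * (1 + (l_d / alpha) * (I_off / I_on))

# Cross-check against the definition Es = alpha*CL*VDD^2 + Ioff*VDD/fc
# evaluated at fc = fc,max, equation (3.4.2)
E_s_direct = alpha * C_L * V_DD**2 + I_off * V_DD / f_c_max
assert abs(E_s - E_s_direct) / E_s < 1e-12
```

The cross-check confirms that (3.4.6) is just (3.4.2) evaluated at the maximum clock frequency, which is why Es is the supply-voltage-dominated quantity the conclusions describe.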

3.5: MOS Logic


MOS: Metal Oxide Semiconductor


Transistors are built on a Silicon (semiconductor) substrate. Pure silicon has no free carriers
and conducts poorly. Dopants are added to increase conductivity: extra electrons (n-type) or extra holes
(p-type).
MOS structure is created by superimposing several layers of conducting, insulating and
transistor forming materials. Metal gate has been replaced by polysilicon or poly in modern
technologies.
There are two types of MOS transistors:
nMOS: negatively doped silicon, rich in electrons.
pMOS: positively doped silicon, rich in holes.
CMOS: both types of transistor are used to construct any gate.

3.5.1: Signal Strengths


Signals such as 1 and 0 have strengths; strength measures the ability to sink or source current. The VDD and GND rails are the strongest 1 and 0. Under the switch abstraction, G has complete control, and S and D have no effect.
In reality, the gate can turn the switch ON only if a potential difference of at least Vt exists between G and S. Signal strengths are therefore related to Vt, and p- and n-transistors produce signals with different strengths:
Strong 1: VDD,
Strong 0: GND,
Weak 1: (~VDD − Vt) and
Weak 0: (~GND + Vt).
The different signal strengths of nMOS and pMOS are shown in figure 3.5.1


Figure 3.5.1: Different signal strengths seen in nmos and pmos
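The listed strengths can be sketched as a simple pass-transistor model; the VDD and Vt values below are illustrative:

```python
# Sketch of the signal strengths listed above: an NMOS passes a
# strong 0 but only a weak 1 (degraded by Vt), while a PMOS passes
# a strong 1 but only a weak 0. VDD and Vt are illustrative.
V_DD, GND, V_T = 5.0, 0.0, 0.7

def nmos_pass(v_in: float) -> float:
    """Output of an NMOS pass transistor with VDD on its gate."""
    return min(v_in, V_DD - V_T)  # cannot pull above VDD - Vt

def pmos_pass(v_in: float) -> float:
    """Output of a PMOS pass transistor with GND on its gate."""
    return max(v_in, GND + V_T)   # cannot pull below GND + Vt

assert nmos_pass(GND) == 0.0           # strong 0
assert nmos_pass(V_DD) == V_DD - V_T   # weak 1
assert pmos_pass(V_DD) == V_DD         # strong 1
assert pmos_pass(GND) == GND + V_T     # weak 0
```

This degradation is the reason CMOS gates use pMOS networks to pull up and nMOS networks to pull down: each device type is used only where it passes a strong value.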

3.6: Combinational Logic


A combinational circuit is a circuit in which different gates are combined, for example an encoder, decoder, multiplexer or demultiplexer. Some of the characteristics of combinational circuits are the following:

The output of combinational circuit at any instant of time depends only on the levels present at
input terminals.

The combinational circuits do not use any memory. The previous state of input does not have
any effect on the present state of the circuit.

A combinational circuit can have an n number of inputs and m number of outputs as shown in
figure 3.6.1 and its classification can be seen in figure 3.6.2.


Figure 3.6.1: Block diagram of combinational circuit

Figure 3.6.2: Classification of combinational logic circuits

Examples of combinational circuits are the half adder, full adder, multiplexer, demultiplexer, decoder, encoder, etc.
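As a minimal example of the above, a half adder computes outputs that depend only on the present inputs, with no memory:

```python
# A half adder, one of the combinational examples listed above:
# sum = a XOR b, carry = a AND b.
def half_adder(a: int, b: int) -> tuple:
    """Return (sum, carry) for two input bits."""
    return a ^ b, a & b

# Exhaustive truth table: 0+0=00, 0+1=01, 1+0=01, 1+1=10
assert half_adder(0, 0) == (0, 0)
assert half_adder(0, 1) == (1, 0)
assert half_adder(1, 0) == (1, 0)
assert half_adder(1, 1) == (0, 1)
```

Because the truth table is the complete specification, the same four checks verify the circuit for every possible input history, which is exactly the memoryless property described above.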

3.7: Sequential Logic


The output state of a sequential logic circuit is a function of the following three states: the present input, the past input and/or the past output. Sequential logic circuits remember these conditions and stay fixed in their current state until the next clock signal changes one of the states, giving sequential logic circuits memory.
Sequential logic circuits are generally termed two-state or bistable devices, which can have their output or outputs set in one of two basic states, logic level 1 or logic level 0, and will remain latched (hence the name latch) indefinitely in this state until some other input trigger pulse or signal is applied which causes the bistable to change its state once again.
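The bistable behaviour described above can be sketched with a cross-coupled NOR (SR) latch, iterated in Python until its feedback loop settles:

```python
# Sketch of a bistable element: a cross-coupled NOR (SR) latch.
# q = NOR(r, qn) and qn = NOR(s, q); iterating these equations
# shows how the circuit latches and remembers its state after
# the trigger pulse is removed.
def nor(a: int, b: int) -> int:
    return 0 if (a or b) else 1

def sr_latch(s: int, r: int, q: int) -> int:
    """Apply S/R to a NOR latch holding state q; return the new q."""
    qn = 1 - q
    for _ in range(4):        # a few passes settle the feedback loop
        q = nor(r, qn)
        qn = nor(s, q)
    return q

q = sr_latch(s=1, r=0, q=0)   # set pulse flips the latch
assert q == 1
q = sr_latch(s=0, r=0, q=q)   # inputs removed: state is remembered
assert q == 1
q = sr_latch(s=0, r=1, q=q)   # reset pulse clears it
assert q == 0
```

The middle step is the memory property: with both inputs inactive, the feedback alone holds the previous state indefinitely.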


3.7.1: Sequential Logic Representation

Figure 3.7.1: Representation of sequential logic circuit

The word Sequential means that things happen in a sequence, one after another and in
Sequential Logic circuits, the actual clock signal determines when things will happen next. Simple
sequential logic circuits can be constructed from standard Bistable circuits such as: Flip-flops,
Latches and Counters and which themselves can be made by simply connecting together universal
NAND Gates and/or NOR Gates in a particular combinational way to produce the required sequential
circuit. Figure 3.7.1 shows the representation of sequential logic circuit.

3.7.2: Classification of Sequential Logic


As standard logic gates are the building blocks of combinational circuits, bistable latches and flip-flops are the basic building blocks of sequential logic circuits. Sequential logic circuits can be constructed to produce either simple edge-triggered flip-flops or more complex sequential circuits such as storage registers, shift registers, memory devices or counters. Either way, sequential logic circuits can be divided into the following three main categories:
1. Event driven: asynchronous circuits that change state immediately when enabled.

2. Clock driven: synchronous circuits that are synchronized to a specific clock signal.
3. Pulse driven: a combination of the two that responds to triggering pulses.

Figure 3.7.2: Classification of sequential logic circuits

Sequential logic circuits return to their original steady state once reset, and sequential circuits with loops or feedback paths are said to be cyclic in nature. If state changes occur only on the application of a clock signal, the circuit is synchronous; otherwise it is asynchronous and depends upon an external input. To retain their current state, sequential circuits rely on feedback, which occurs when a fraction of the output is fed back to the input. The classification of sequential circuits is seen in figure 3.7.2.

3.7.3: Synchronous sequential circuit


A master clock generator is used to generate a periodic train of clock pulses, which are distributed throughout the system. Clocked sequential circuits are the most commonly used; their memory elements are flip-flops, binary cells capable of storing one bit of information. A flip-flop generates two outputs, one for the normal value and one for the complement value, and maintains a binary state indefinitely until directed by an input signal to switch states.

3.8: Synchronization

Data synchronization is the process of establishing consistency among systems, followed by continuous updates to maintain that consistency. It should not be considered a one-time task; it is a process which needs to be planned, owned, managed, scheduled and controlled.

3.8.1: Process of synchronization


1) Planning
Requirements on the data synchronization should be gathered in the planning phase. It covers the
data content, data formats, initial load and frequency of the updates. Non-functional requirements like
performance, timing and security are covered as well.
2) Ownership
Although the data synchronization idea may come from the IT organization of the company, an
owner or champion from the company business is necessary to provide a continuity of the initiative.
It is business who will benefit from the data synchronization initiative in the end.
3) Scheduling
The scheduling and frequency of updates is one of the items which needs to be investigated during the initial planning phase. Often the requirements change during this time and the schedule of updates needs to be revised. Obviously, the granularity of the schedule on which the updates/synchronization is performed cannot be finer than the source system is able to provide. However, the scheduling also needs to take into account performance aspects.
4) Monitoring
The synchronization process should be monitored to evaluate whether the update schedule and
frequency meets the company's needs.
From the technical point of view, the synchronization may be implemented on any level:

System/Application level

File level

Record level synchronization

3.8.2: Challenges of synchronization


1) Data Formats Complexity
As the enterprise grows and evolves, new systems from different vendors are implemented. The data formats for employees, products, suppliers and customers vary among different industries, which results not only in the need to build an interface between the two applications (source and target), but also in a need to transform the data while passing them to the target application. The data formats vary from proprietary formats through plain text to XML. Some applications provide an API to push the data directly. ETL tools can be helpful here.
2) Real-timeliness
The requirement today is that systems operate in real time. Customers want to see the status of their order in the e-shop, the status of a parcel delivery (real-time parcel tracking), the current balance of their account, etc. Enterprises need their systems updated in real time as well to

enable a smooth manufacturing process, e.g. ordering material when the enterprise is running out of stock, synchronizing customer orders with the manufacturing process, etc. There are thousands of real-life examples where real-time operation is becoming either an advantage or a must in order to be successful and competitive.
3) Security
Different systems may have different policies to enforce data security and access levels. Even
though the security is maintained correctly in the source system which captures the data, the security
and information access privileges must be enforced on the target systems as well to prevent any
potential misuse of the information. This is particularly an issue when handling personal information
or any piece of confidential information under Non Disclosure Agreement (NDA). Any intermediate
results of the data transfer as well as the data transfer itself must be encrypted.

4) Data Quality
Maintaining data in one place and sharing it with other applications is best practice in managing and improving data quality. It prevents the inconsistencies that arise when the same data is updated separately in more than one system.
5) Performance
The data synchronization process consists basically of five phases:
1. Data extraction from the source/master system
2. Data transfer
3. Data transformation
4. Data transfer
5. Data load to the target system

In case of large data, each of these steps may impact performance. Therefore, the synchronization
needs to be carefully planned to avoid any negative impact e.g. during peak processing hours.
6) Maintenance
Like any other process, synchronization needs to be monitored to ensure that it is running as scheduled and that any errors during synchronization, such as rejected records or malformed data, are properly handled.

3.9: Setup & Hold Time


Every flip-flop has a restrictive time region around the active clock edge in which the input should not change. If the input changes in this region, the output may be derived from either the old input, the new input, or something in between the two. Here we define two very important terms in digital clocking: setup and hold time.

The setup time is the interval before the clock edge during which the data must be held stable.

The hold time is the interval after the clock edge during which the data must be held stable. Hold time can be negative, which means the data can change slightly before the clock edge and still be properly captured. Most present-day flip-flops have zero or negative hold time.
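The two checks can be sketched as simple slack computations; the relations below are the standard single-cycle ones, and all delay values are illustrative:

```python
# Sketch of the setup/hold checks described above: data launched by
# one flip-flop must arrive t_setup before the next capture edge,
# and stay stable t_hold after it. Delay values are illustrative.
def setup_slack(t_clk: float, t_cq: float, t_comb: float,
                t_setup: float) -> float:
    """Positive when data arrives t_setup before the capture edge."""
    return t_clk - (t_cq + t_comb + t_setup)

def hold_slack(t_cq: float, t_comb_min: float, t_hold: float) -> float:
    """Positive when data stays stable t_hold after the capture edge."""
    return (t_cq + t_comb_min) - t_hold

assert setup_slack(t_clk=10.0, t_cq=1.0, t_comb=6.0, t_setup=2.0) == 1.0
assert setup_slack(t_clk=10.0, t_cq=1.0, t_comb=8.0, t_setup=2.0) < 0  # violation
# A negative hold time (as noted above) relaxes the hold check:
assert abs(hold_slack(t_cq=0.5, t_comb_min=0.0, t_hold=-0.2) - 0.7) < 1e-9
```

A negative slack from either function is exactly the setup or hold violation discussed in the rest of this section.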


Figure 3.9.1: Waveform Demonstrating Set-Up and Hold Time


In the above figure, the shaded region is the restricted region, divided into two parts by the dashed line. The left-hand part of the shaded region is the setup time period and the right-hand part is the hold time period. If the data changes in this region, as shown in figure 3.9.1, the output may follow the input, may not follow the input, or may go to a metastable state (where the output cannot be recognized as either logic low or logic high; the entire process is known as metastability).

Figure 3.9.2: Waveform Demonstrating Set-Up time


Figure 3.9.2 shows the restricted region (shaded region) for a flip-flop whose hold time is negative. The diagram in figure 3.9.3 illustrates the restricted region of a D flip-flop, where D is the input, Q is the output, and clock is the clock signal. If D changes in the restricted region, the flip-flop may not behave as expected, i.e., Q is unpredictable.


Figure 3.9.3: Timing Diagram Of D-Flip Flop


To avoid setup time violations:

Optimize the combinational logic between the flip-flops for minimum delay.

Redesign the flip-flops to get a smaller setup time.

Tweak the launch flip-flop to have a better slew at the clock pin; this makes the launch flip-flop faster, thereby helping to fix setup violations.

Play with clock skew (useful skew).

To avoid hold time violations:

Add delays (using buffers).

Add lockup latches (in cases where the hold time requirement is very large, basically to avoid data slip).

3.10: Metastability

Whenever there are setup and hold time violations in any flip-flop, it enters a state where its
output is unpredictable: this state is known as metastable state (quasi stable state); at the end of
metastable state, the flip-flop settles down to either '1' or '0'. This whole process is known as
metastability. In the figure 3.10.1 below Tsu is the setup time and Th is the hold time. Whenever the
input signal D does not meet the Tsu and Th of the given D flip-flop, metastability occurs.

Figure 3.10.1: Example Demonstrating Timing Constraints.


When a flip-flop is in the metastable state, its output oscillates between '0' and '1' as shown in figure 3.10.2 (here the flip-flop output settles down to '0'). The time taken to settle down depends on the technology of the flip-flop.


Figure 3.10.2: Metastable State.


If we look deep inside the flip-flop we see that the quasi-stable state is reached when the flip-flop setup and hold times are violated. Assuming the use of a positive-edge-triggered D-type flip-flop, when the rising edge of the flip-flop clock occurs at a point in time when the D input is causing its master latch to transition, the flip-flop is highly likely to end up in a quasi-stable state. The rising clock causes the master latch to try to capture its current value while the slave latch is opened, allowing the Q output to follow the "latched" value of the master. The most perfectly "caught" quasi-stable state (on the very top of the hill) results in the longest time required for the flip-flop to resolve itself to one of the stable states, as shown in figure 3.10.3.

Figure 3.10.3: Quasi Stable State.


The relative stability of states in figure 3.10.3 above shows that the logic 0 and logic 1 states (being at the base of the hill) are much more stable than the somewhat stable state at the top of the hill. In theory, a flip-flop in this quasi-stable hilltop state could remain there indefinitely, but in reality it won't. Just as the slightest air current would eventually cause a ball on the illustrated hill to roll down one side or the other, thermal and induced noise will jostle the state of the flip-flop, causing it to move from the quasi-stable state into either the logic 0 or the logic 1 state.

Causes of metastability:
Metastability occurs when a setup or hold time violation occurs, so we have to see when signals violate these timing requirements:

When the input signal is an asynchronous signal.

When the clock skew/slew is too much (rise and fall time are more than the tolerable values).

When interfacing two domains operating at two different frequencies or at the same frequency
but with different phase.

When the combinational delay is such that flip-flop data input changes in the critical window
(setup + hold window).

3.10.1: Measures to avoid metastability


In the simplest case, designers can tolerate metastability by making sure the clock period is long
enough to allow for the resolution of quasi-stable states and for the delay of whatever logic may be in
the path to the next flip-flop. This approach, while simple, is rarely practical given the performance
requirements of most modern designs.
The most common way to tolerate metastability is to add one or more successive synchronizing
flip-flops to the synchronizer. This approach allows for an entire clock period (except for the setup
time of the second flip-flop) for metastable events in the first synchronizing flip-flop to resolve
themselves. This does, however, increase the latency in the synchronous logic's observation of input
changes.
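Why the extra flip-flop helps can be illustrated with the commonly used synchronizer MTBF model, MTBF = exp(t_r/τ) / (T0 · f_clk · f_data), where t_r is the time allowed for a metastable state to resolve. The constants τ and T0 below are invented for illustration only, not taken from this text:

```python
# Hedged sketch of synchronizer reliability: each added flip-flop
# grants almost a full extra clock period of resolution time, and
# MTBF grows exponentially with that time. All constants invented.
import math

def mtbf(t_resolve: float, tau: float = 50e-12, t0: float = 1e-9,
         f_clk: float = 100e6, f_data: float = 1e6) -> float:
    """Mean time between synchronizer failures, in seconds."""
    return math.exp(t_resolve / tau) / (t0 * f_clk * f_data)

# One flop leaves only the downstream logic's slack (say 2 ns) for
# resolution; a second flop adds nearly a whole 10 ns clock period.
one_flop = mtbf(t_resolve=2e-9)
two_flops = mtbf(t_resolve=2e-9 + 10e-9)
assert two_flops > one_flop * 1e10  # latency buys enormous reliability
```

This is the quantitative version of the trade-off stated above: one extra cycle of latency in observing input changes buys many orders of magnitude in failure rate.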

3.11 VLSI Design Methodology


Figure 3.11.1 Flowchart of design methodology

3.12: Floor Planning


Floorplanning means taking layout information into account at the early stages of the design process.

3.12.1: The floorplan-based design methodology


Planning the layout at early stages may suggest valuable architectural modifications. Floorplanning fits very well into a top-down design strategy; this stepwise refinement strategy is also advocated in software design. Floorplanning gives flexibility in layout design through the existence of cells that can adapt their shapes and terminal locations to the environment.


This is the first major step in getting the layout done, and the most important one: the floorplan determines the chip quality. At this step we decide the size of the chip/block, allocate power routing resources, place the hard macros, and reserve space for the standard cells. Every subsequent stage, such as placement, routing and timing closure, depends on how good the floorplan is. In a real design, many iterations are performed before arriving at an optimum floorplan.
1. Core Boundary
Floorplan defines the size and shape of the chip/block. A top level digital design will have a
rectangular/square shape, whereas a sub block may have rectangular or rectilinear shapes. Core
boundary refers to the area where one will be placing standard cells and other IP blocks. One may
have power routing spaces allocated outside the core boundary. For a full chip, we also have IO
buffers and IO pads placed outside the core boundary.
In PnR tool, floorplanning can be controlled by various parameters:
Aspect ratio: This is the ratio of height divided by width and determines whether we get a square or
rectangular floorplan. An aspect ratio of 1 gives a square floorplan.
Core utilization: Core utilization = (standard cell area + macro cell area) / total core area.
A core utilization of 0.8 means that 80% of the core area is used for the placement of cells, whereas 20% is left free for routing.
Boundary: We need to specify a boundary and the tool can honour it. This comes in handy when we
have an existing boundary from a previous version. When we specify Boundary as the control
parameter, both aspect ratio and core utilization are irrelevant. The tool gives a report of the
utilization for the current boundary specified.
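The aspect-ratio and core-utilization controls above can be turned into a small calculation; the areas below are illustrative (square microns):

```python
# Sketch of the floorplan parameters described above: given a target
# utilization and aspect ratio, derive the core dimensions.
def core_dimensions(cell_area: float, macro_area: float,
                    utilization: float, aspect_ratio: float):
    """Return (height, width) of a core meeting the constraints."""
    core_area = (cell_area + macro_area) / utilization
    width = (core_area / aspect_ratio) ** 0.5
    return aspect_ratio * width, width   # height = AR * width

h, w = core_dimensions(cell_area=600_000.0, macro_area=200_000.0,
                       utilization=0.8, aspect_ratio=1.0)
assert abs(h - w) < 1e-9                  # aspect ratio 1: a square
assert abs(h * w - 1_000_000.0) < 1e-6    # 0.8 util -> 1 mm^2 core
```

The inverse relation is the one the tool reports: for a fixed boundary, utilization is simply (cell_area + macro_area) / core_area.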
2. IO Placement/Pin placement
In a digital-top design, we need to place the IO pads and IO buffers of the chip. Take a rectangular or square chip that has pads on four sides. To start with, one may get the sides and relative positions of the pads from the designers. We also get a maximum and minimum die size according to the selected package. A Perl script can be used to place the IOs once the chip size is decided.

3. Macro placement
Once the size and shape of the floorplan is ready and initialized, thereby creating the standard cell rows, place the macros. Do not use auto-placement. Flylines in the tool show the connections between the macros and the standard cells or IOs.
i. Use flylines and make sure blocks that connect to each other are placed close together.
ii. For a full chip, if hard macros connect to IOs, place them near the respective IOs.
iii. Consider the power straps while placing macros.

4. Creating Power Rings and Straps


Generating the power rings using IC Compiler can be done as follows: first decide the trunks that supply power to the core, then make sure that all the hard macros have sufficient rings/straps around them to hook into the PG trunks. As usual, a robust power structure will take iterations and IR-drop analysis at a later stage, but a close approximation can be arrived at in the initial stages.

3.12.2 Floorplanning concepts


3.12.2.1 Abutment: Establishing connections between cells by placing them directly next to each other, without the need for routing, as shown in figure 3.12.2.1.

Page
39

Figure 3.12.2.1: Abutment


3.12.2.2 Leaf cell: A cell at the lowest level of the hierarchy; it does not contain any other cell.
3.12.2.3 Composite cell: A cell that is composed of either leaf cells or composite cells. The whole
chip is the highest-level composite cell.
3.12.2.4 Slicing floorplans: A floorplan with the property that a composite cell's subcells are obtained by a horizontal or vertical bisection of the composite cell. Slicing floorplans can be represented by a slicing tree.
In a slicing tree, all cells (except for the top-level cell) have a parent, and all composite cells have children. Not all floorplans are slicing. Limiting floorplans to those that have the slicing property is reasonable, and it certainly facilitates floorplanning algorithms. A sliced floorplan is shown in figure 3.12.2.2.

Figure 3.12.2.2: Sliced floorplan

3.13: PLACEMENT
Placement does not just place the standard cells of the synthesized netlist; it also optimizes the design, thereby removing any timing violations created by the relative placement on the die.

3.13.1:-Important things in placement


1. High fanout net synthesis
High fanout nets other than clocks are synthesized at the placement stage; in logic synthesis, high fanout nets like reset, scan enable, etc. are not synthesized. The SDC used for PnR should not have any set_ideal_network or set_dont_touch commands on these signals. Also make sure to set an appropriate fanout limit for your library using the command set_max_fanout.
If a driver has too many loads, it will negatively affect the delay and transition values. After placement, check the timing report for any fanout violations.
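The effect of a max-fanout limit can be sketched as follows; the load counts and limit are illustrative, and the helper below is a back-of-the-envelope model, not a tool command:

```python
# Sketch of high-fanout net synthesis: if a driver exceeds the
# max-fanout limit, the tool builds a buffer tree. The number of
# buffer levels grows logarithmically with the load count.
import math

def buffer_levels(n_loads: int, max_fanout: int) -> int:
    """Levels of buffering so no driver exceeds max_fanout loads."""
    levels = 0
    while n_loads > max_fanout:
        # One buffer per max_fanout loads forms the next level up.
        n_loads = math.ceil(n_loads / max_fanout)
        levels += 1
    return levels

assert buffer_levels(30, 32) == 0      # within limit: no buffering
assert buffer_levels(2000, 32) == 2    # e.g. a reset net, 2000 loads
```

Each added level costs delay, which is why a sane set_max_fanout value (and letting the PnR tool do this synthesis with real placement data) matters for timing.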
2. Use ideal clocks
Synthesize the clock later in the design flow, and define the clocks as ideal until then. If not, HFN synthesis will be performed on the clock: clock constraints like skew and clock buffers will not be honoured, and effectively your clock tree will be messed up.
3. Control congestion
Congestion needs to be analyzed after placement, and the routing results depend on how congested the design is. Routing congestion may be localized. Some of the things that can be done to make sure routing is hassle-free are:
4. Macro padding
Macro padding, or placement halos around the macros, are placement blockages around the edges of the macros. These make sure that no standard cells are placed near the pin-outs of the macros, thereby giving extra breathing space for the macro pin connections to standard cells.
5. Maximum utilization constraint
Some tools let you specify maximum core utilization numbers for specific regions. If any region has routing congestion, its utilization can be reduced, thus freeing up more area for routing.

6. Placement blockages:
The utilization constraint is not a hard rule, and if you want to specifically avoid placement in
certain areas, use placement blockages.
7. Scan chain reordering:
In a less complex design, scan reordering is usually carried out. However, sometimes it may become difficult to meet scan timing constraints once placement is done. The scan flip-flop placement may create lengthy routes if consecutive flops in the scan chain are placed far apart due to a functional requirement. In this case, the PnR tool can reconnect the scan chains to make routing easier. A prerequisite for this option is a scan DEF for the tool to recognize the chains.
8. TIE cells
In your netlist, some unused inputs are tied to either VDD/VSS (logic 1/logic 0). It is not recommended to connect a gate directly to the power network, so one can use the TIEHI and TIELO cells, if available in the library, for this purpose. These are single-pin cells which effectively tie the pins they connect to high or low. After placement, dump out a netlist and search for direct pin connections to the PG rails (other than power pins).
3.14: CLOCK TREE SYNTHESIS
Clock tree synthesis (CTS) is the process which makes sure that the clock is distributed evenly to all sequential elements in a design. The goal of CTS is to minimize skew and latency. The placement data is given as input to CTS, along with the clock tree constraints: latency, skew, maximum transition, maximum capacitance, maximum fan-out, the list of usable buffers and inverters, etc.

Clock tree synthesis comprises clock tree building and clock tree balancing. The clock tree can be built with clock tree inverters so as to maintain the exact transition (duty cycle), and clock tree balancing is done with clock tree buffers (CTBs) to meet the skew and latency requirements. As few clock tree inverters and buffers as possible should be used to meet the area and power constraints.

Standard Clock Tree Synthesis engines are driven by timing closure and, hence, are not PVT
(process/voltage/temperature) variation aware. They are used to fix setup/hold violations by adjusting
the clock skew, adding, removing, and swapping buffers, or exploiting different clock wire lengths
and levels and so on. As a result, the skew sensitivity with respect to PVT variations cannot be kept
low, since there are several contributors originating from different physical phenomena.
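The skew and latency targets that drive CTS can be illustrated with a small computation over per-sink clock arrival times; the values below are invented:

```python
# Sketch of the CTS metrics named above: latency is the source-to-
# sink insertion delay, and skew is the spread of those delays
# across the sequential elements. Arrival times in ns, illustrative.
arrival = {"ff1": 1.20, "ff2": 1.25, "ff3": 1.18, "ff4": 1.31}

latency = max(arrival.values())               # worst insertion delay
skew = max(arrival.values()) - min(arrival.values())

assert latency == 1.31
assert abs(skew - 0.13) < 1e-9                # 1.31 - 1.18 ns
```

Balancing the tree means shrinking that max-minus-min spread; PVT variation matters because each sink's arrival time moves differently across corners, which a single-corner computation like this cannot see.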

Figure 3.14.1: Randomly connected Buffer


One approach to overcoming the impairment of clock skew in chip design, mostly in high-speed design, is the clock mesh. The major difference between this method and standard clock tree synthesis (CTS) is that at a certain level of the tree, the driver outputs are connected to the same metal net, called the mesh net. Shorting several clock drivers in this way enables an averaging and spatial smoothing effect, which reduces the clock skew between the different clock drivers.


Figure 3.14.2: Orderly Connected Buffers.


Clock mesh technology produces a much lower clock skew compared to a conventional clock tree.
Unfortunately, one issue is how to take full advantage of this technique in the standard design flow.
Typically a full set of analog simulations are needed to evaluate the residual clock uncertainty on the
mesh net before continuing with the timing analysis. A correct skew evaluation needs the layout to be
frozen and every alteration of the pre-mesh structure obliges this out-of-the-flow characterization to
be run again, making the overall design flow very long.

Another problem with a clock mesh, especially when conceived at top level, is the large amount of power consumed, which means that dynamic power drop is likely.

Other techniques found in the literature, such as PLL/DLL de-skewing, are not suited for high-precision, low-uncertainty clock distribution, mainly due to the added jitter of the PLL/DLL circuitry.

3.14.1: Setup and Hold Fixing


Setup Fixing:
I. Upsize the cells (increase the drive strength) in the data path.
II. Pull the launch clock (reduce the launch clock latency).
III. Push the capture clock (increase the capture clock latency).
IV. Remove buffers from the data path.
V. Replace buffers with two inverters placed farther apart so that the delay can be adjusted.
VI. Reduce any larger-than-normal capacitance on a cell output pin.
VII. Upsize the cells to decrease the delay through the cell.
VIII. Swap in LVT cells (lower threshold voltage, hence faster).

Hold Fixing:
A hold violation occurs when the data path delay is too small, so the fix is to add delay to the data path.
I. Downsize the cells (decrease the drive strength) in the data path.
II. Pull the capture clock.
III. Push the launch clock.
IV. Add buffers/inverter pairs/delay cells to the data path.
V. Decrease the size of certain cells in the data path. It is better to downsize cells closer to the capture flip-flop, because there is less chance of affecting other paths and causing new errors.
VI. Increase the wire load model; this can also fix hold violations.
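The setup and hold fixes listed above all act on two slack equations. The sketch below shows the direction each knob moves the slack; the function and parameter names are this example's own, not tool commands, and the delay numbers are invented.

```python
def setup_slack(clk_period, launch_delay, capture_delay, data_delay,
                t_setup, uncertainty=0.0):
    """Setup slack: data must arrive one period after launch, minus setup."""
    required = capture_delay + clk_period - t_setup - uncertainty
    arrival = launch_delay + data_delay
    return required - arrival  # negative => setup violation

def hold_slack(launch_delay, capture_delay, data_delay, t_hold):
    """Hold slack: data must not change before the capture edge's hold window ends."""
    arrival = launch_delay + data_delay
    required = capture_delay + t_hold
    return arrival - required  # negative => hold violation

# Pushing the capture clock (raising capture_delay) improves setup slack...
assert setup_slack(2.0, 0.3, 0.5, 1.5, 0.1) > setup_slack(2.0, 0.3, 0.2, 1.5, 0.1)
# ...while adding data-path delay improves hold slack.
assert hold_slack(0.3, 0.3, 0.6, 0.1) > hold_slack(0.3, 0.3, 0.4, 0.1)
print("both fixes move slack in the expected direction")
```

Upsizing cells (less data delay) helps setup; downsizing (more data delay) helps hold, which is why the two fix lists mirror each other.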

3.15: ROUTING


The routing process determines the precise paths for interconnections. These include connections to standard cell and macro pins, pins on block boundaries, and pads at the chip boundary. After placement and CTS, the tool has information about the exact locations of blocks, pins of blocks, and I/O pads at chip boundaries. The logical connectivity as defined by the netlist is also available to the tool. In the routing stage, metal and vias are used to create the electrical connections in the layout so as to complete all connections defined by the netlist. To do the actual interconnections, the tool relies on design rules.

Most of the routers available are grid-based routers. Routing grids are defined for the entire layout; consider it like a graph, as below. For grid-based routers, there are also preferred routing directions defined for each metal layer, e.g. metal1 has a preferred direction of horizontal, metal2 has a preferred direction of vertical, and so on. So, across the whole layout, metal1 routing grids will be drawn (superimposed) horizontally with the metal1 wire pitch between them, and metal2 grids will be drawn vertically with the metal2 wire pitch between them.

Figure 3.15.1 Routing Grids


The above figure shows how routing grids are drawn. Here only two metals are considered
for now, but in a process with more metals, similar grids will be superimposed on the layout for all
available metals. Pitch is calculated by determining the minimum spacing required between grid lines of the same metal. This can be the minimum spacing of the metal itself, but is usually a value greater than the minimum spacing. It is calculated by taking the via dimension into account as well, so that no two adjacent wires on the grid create a DRC violation even when vias are present.
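The pitch calculation described above can be sketched as follows; all dimensions are hypothetical and not from any real rule deck.

```python
def routing_pitch(wire_width, min_spacing, via_size, via_enclosure):
    """Grid pitch so two adjacent tracks stay DRC-clean even with vias.

    A via landing on a track can be wider than the wire itself (via plus
    its metal enclosure on each side), so the effective track width is
    the larger of the two. All values are illustrative.
    """
    effective_width = max(wire_width, via_size + 2 * via_enclosure)
    return effective_width + min_spacing

# Hypothetical metal1 numbers (um): 0.10 wire, 0.10 spacing,
# 0.12 via with 0.02 enclosure per side -> 0.16 effective width.
print(f"{routing_pitch(0.10, 0.10, 0.12, 0.02):.2f}")  # 0.26
```

Here the via-aware pitch (0.26 um) exceeds the wire-only pitch (0.20 um), which is exactly why the text says the grid pitch is usually larger than the minimum metal spacing.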

3.15.1: Routing Congestion


It is difficult to route a highly congested design. Some not-so-congested designs may have pockets of high congestion, which will again create routing issues. It is important that congestion is analyzed and fixed before detailed routing. After CTS, the tool can produce a congestion map from trial-route/global-route values.

3.15.2: Routing Order


It is recommended that sensitive nets such as clocks be routed before the rest of the signal routes. The assumption here is that power routing was completed after the floorplan stage. For this discussion a traditional routing approach is followed, without considering signal integrity issues. The order of routing is:
3.15.2.1: Power Routing:
Connect the macro and standard cell power pins to the power rings and straps created for the design. Robust power routing keeps IR drop within limits.
3.15.2.2: Clock Routing:
We do not want to upset the skew and delay values of the clock nets any more than necessary, so the clocks are given higher priority in using routing resources and are routed before any other nets. Clock routing can be limited to higher metal layers for lower RC values.
3.15.2.3: Signal Routing:
The rest of the nets are routed. We can also route groups of nets, and non-default routing rules can
also be applied to select nets.

3.16: Signal Integrity


By definition, integrity means complete and unimpaired. Likewise, a digital signal with good integrity has clean, fast transitions; stable and valid logic levels; accurate placement in time; and freedom from transients.
Evolving technology makes it increasingly difficult for system developers to produce and maintain
complete, unimpaired signals in digital systems.
The term Signal Integrity (SI) addresses two concerns of electrical design: the timing and the quality of the signal. The goal of signal integrity analysis is to ensure reliable high-speed data
transmission. In a digital system, a signal is transmitted from one component to another in the form of
logic 1 or 0, which is actually at certain reference voltage levels. At the input gate of a receiver,
voltage above the reference value VIH(high input voltage) is considered as logic high, while voltage
below the reference value VIL(low input voltage) is considered as logic low.
Figure 3.16.1 shows the ideal voltage waveform in a perfect logic world, whereas Figure 3.16.2 shows how a signal looks in a real system. More complex data, composed of strings of 1s and 0s, are actually continuous voltage waveforms.
The receiving component needs to sample the waveform in order to obtain the binary
encoded information. The data sampling process is usually triggered by the rising edge or the falling
edge of a clock signal as shown in the Figure 3.16.3. It is clear from the diagram that the data must
arrive at the receiving gate on time and settle down to a non-ambiguous logic state when the
receiving component starts to latch in. Any delay of the data or distortion of the data waveform will
result in a failure of the data transmission. If the signal waveform in Figure 3.16.2 exhibits excessive
ringing into the logic gray zone while the sampling occurs, then the logic level cannot be reliably
detected.

Figure 3.16.1: Ideal Waveform at the Receiving Gate.

Figure 3.16.2: Real Waveform at the Receiving Gate.

Figure 3.16.3: Data Sampling Process and Timing Conventions.



3.17: IR Drop Analysis


Voltage drop in the power network (and bounce in the ground network) is due to electrical parameters such as the resistance, capacitance and inductance of the power (ground) network.
The impact of IR drop is that it decreases the power supply voltage across the cells, leading to increased cell delay and degraded performance. Severe IR drop can lead to functionality failures and reduced yield.
For example, consider figure 3.17.1: the voltage supplied to the circuit element (an NMOS transistor) is 1.5 V, but only part of the supply voltage, 1.2 V, reaches the circuit element; the remaining 0.3 V is dissipated in the resistive wire AB.

Figure 3.17.1 IR drop Analysis.


We can mitigate IR drop with robust power and ground routing and by using a sufficient number of decoupling capacitors, as shown below:

3.17.1: Types of IR Drop


3.17.1.1: Static IR drop:
Static IR drop is the average voltage, Vavg drop across the power/ground network, based on
average power or current. Static IR drop uses total power consumption to calculate the constant
current drawn. This current is then multiplied with the equivalent resistance of the network to get the
average voltage drop.
For the chip it is given by:
Vavg = Iavg × R
Static IR drop is caused by the resistance of the metal wires comprising the power distribution network. It occurs due to the current that flows when the circuit is in a steady state, i.e. when no inputs are switching.
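The Iavg × R relation can be checked against the 1.5 V example above; the 1 A / 0.3 ohm values are chosen only to reproduce the 0.3 V drop in figure 3.17.1 and are not real characterization data.

```python
def static_ir_drop(i_avg, r_eq):
    """Average static IR drop: constant current times equivalent grid resistance."""
    return i_avg * r_eq

# Numbers chosen to match the 1.5 V example above: a 0.3 ohm wire AB
# carrying 1 A of average current drops 0.3 V, leaving 1.2 V at the cell.
supply = 1.5
drop = static_ir_drop(i_avg=1.0, r_eq=0.3)
print(f"voltage at cell = {supply - drop:.1f} V")  # 1.2 V
```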
3.17.1.2: Dynamic IR drop
Dynamic IR drop is the time-varying voltage drop V(t) on the power/ground network. Dynamic IR drop analysis additionally models the following:

Package inductance

Decoupling capacitance

Dynamic current demand (di/dt).


Dynamic IR drop evaluates the drop caused when large amounts of circuitry switch simultaneously, causing peak current demand. For instance, simultaneous switching of on-chip components such as clocks, clocked elements, bus drivers and memory decoder drivers causes a sudden dip or spike in the power network. Dynamic IR drop depends on the switching time of the logic and is less dependent on the clock period. Since the current flowing through the metal interconnect is greater when the logic is switching than when the network is stable, dynamic IR drop is greater than static IR drop.

3.17.2: IR drop reduction techniques


Decap Insertion: The most common method of IR drop reduction is inserting decoupling capacitors (decaps), as in figure 3.17.2.1. Decaps hold a reservoir of charge and are placed around regions with high current demand, which may be due to high switching activity of cells. Decaps supply current whenever a large driver switches and keep the average as well as the peak voltages within their tolerable DC noise margins.


(a)

(b)
Figure 3.17.2.1 (a) Insertion of decoupling capacitor (b) supply voltage within tolerance
level after decap insertion
There are 2 types of decaps:
1. White-space decaps figure 3.17.2.2(a): consists of NMOS transistors placed between the logic blocks
in the open area on a chip.
2. Standard cell decaps figure 3.17.2.2(b): consists of cross coupled PMOS and NMOS transistors
placed within the logic blocks.


Figure 3.17.2.2. (a) White Space Decaps. (b) Standard Cell Decaps.
Selective Glitch Reduction: Whenever there is a transition at the output of any logic, it is due to one of two reasons:
1. A transition of the input signals resulting in the desired output.
2. A transition of unnecessary logic resulting in an undesired output (a spurious transition).
These unnecessary transitions at the output of logic are known as glitches.
Dynamic power consumption = NT × PT, where
NT = number of switching transitions through the logic,
PT = average power consumption per switching transition.
The motive of the selective glitch reduction technique is to reduce NT through glitch elimination in selected combinational cells that contribute to peak IR drop.
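The NT × PT relation can be sketched numerically; the transition counts and per-transition power below are invented for illustration only.

```python
def dynamic_power(n_transitions, p_per_transition):
    """Dynamic power = NT * PT: transition count times average cost per transition."""
    return n_transitions * p_per_transition

# Hypothetical cell: 1000 transitions per interval at 0.5 uW each;
# eliminating 200 glitch transitions cuts NT, so power drops proportionally.
before = dynamic_power(1000, 0.5)
after = dynamic_power(1000 - 200, 0.5)
print(before, after)  # 500.0 400.0
```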

3.18 Static Timing Analysis


Static Timing Analysis (STA) is one of the techniques to verify design in terms of timing. This
kind of analysis does not depend on the data or logic values applied at the input pins. The inputs to an STA tool are the routed netlist, the clock definitions (or clock frequency) and the external environment definitions.
Figure 3.18.1 shows the basic functionality of static timing analysis. Given a design along with a
set of input clock definitions and the definitions of external environment of the design, the purpose of
static timing analysis is to validate that the design can operate safely at the specified frequency of
clocks without any timing violations.

Figure 3.18.1: Static Timing Analysis.


DUA stands for design under analysis. Some examples of timing checks are setup and hold checks. A setup check ensures that the data can arrive at a flip-flop within the given clock period. A hold check ensures that the data is held stable for at least a minimum time, so that the flip-flop captures the intended data correctly.

The entire design is analyzed once and required timing checks are performed for all possible
paths of the design. Thus STA is a complete and exhaustive method for verifying the timing of a
design.

3.18.1: How is STA performed on a given circuit?


1. The design is broken down into sets of timing paths.
2. The signal propagation delay along each path is calculated.
3. Violations of timing constraints are checked for, both inside the design and at the input/output interface.

3.18.2: Types of paths for timing analysis


1. Data paths
Start Point:
Input port of the design (because the input data can be launched from some external source).
Clock pin of the flip-flop/latch/memory (sequential cell).
End point:
Data input pin of the flip-flop/latch/memory (sequential cell).
Output port of the design (because the output data can be captured by some external sink).
If we use all combinations of the two types of start point and the two types of end point, there are four types of timing paths on the basis of start and end point, as shown in figure 3.18.2.
1. Input pin/port to Register (flip-flop).
2. Input pin/port to Output pin/port.

3. Register (flip-flop) to Register (flip-flop)


4. Register (flip-flop) to Output pin/port

Figure 3.18.2: Data paths.


PATH1- starts at an input port and ends at the data input of a sequential element. (Input port to
Register)
PATH2- starts at the clock pin of a sequential element and ends at the data input of a sequential
element. (Register to Register)
PATH3- starts at the clock pin of a sequential element and ends at an output port. (Register to Output
port)
PATH4- starts at an input port and ends at an output port. (Input port to Output port)
2. Clock paths
Start point: Clock input port.
End point: Clock pin of the flip-flop/latch/memory (sequential cell).


From figure 3.18.3 it is clear that the clock path starts at the input port/pin of the design that is specific to the clock input, and that the end point is the clock pin of a sequential element. Between the start point and the end point there may be many buffers, inverters and clock dividers.

Figure 3.18.3: Clock Paths.


3. Clock gating paths
Start point: Input port of the design.
End point: Input port of clock-gating element.
A clock path may pass through a gating element to achieve additional advantages. In this case, the characteristics and definitions of the clock change accordingly. We call this type of clock path a gated clock path (figure 3.18.4).


Figure 3.18.4: Clock Gating Path


The LD pin is not part of any clock, but it is used for gating the original CLK signal. Such paths are neither part of a clock path nor of a data path, because their start and end points fit neither definition; they therefore form the clock gating path category.
4. Asynchronous paths
Start point: Input Port of the design.
End point: Set/Reset/Clear pin of the flip-flop/latch/memory (sequential cell).
A path from an input port to an asynchronous set or clear pin of a sequential element is referred to as an asynchronous path (figure 3.18.5).


Figure 3.18.5: Asynchronous Path.
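The four path categories above are determined purely by where a path starts and ends. The sketch below encodes that classification; the label strings are this example's own, not tool terminology.

```python
def classify_timing_path(start, end):
    """Classify a timing path by its start/end point, per the four categories.

    start: "input_port" or "clk_pin"; end: "output_port", "data_pin",
    "async_pin", or "clock_gate_en" (names invented for this sketch).
    """
    if end == "async_pin":
        return "asynchronous path"
    if end == "clock_gate_en":
        return "clock gating path"
    if start == "clk_pin" and end == "data_pin":
        return "register to register"
    if start == "clk_pin" and end == "output_port":
        return "register to output port"
    if start == "input_port" and end == "data_pin":
        return "input port to register"
    return "input port to output port"

print(classify_timing_path("clk_pin", "data_pin"))  # register to register
```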

3.18.3: Advantages and Limitations of STA


Advantages

It is faster because it does not need to simulate multiple test vectors.

All timing paths are considered for the timing analysis.


Limitations

Not all paths in the design always run at worst-case delay, so the analysis can be pessimistic.

All clock-related information has to be fed to the tool in the form of constraints.

STA does not check for logical correctness of the design.

STA is not suitable for asynchronous circuits.


3.19: Advanced Low Power Techniques


Power consumption can be divided into two components:
1) Dynamic power: the power consumed by a device when it is actively switching from one state to another. Dynamic power consists of switching power, consumed while charging and discharging the loads on a device, and internal power (also referred to as short-circuit power), consumed internal to the device while it is changing state.
2) Leakage power: the power consumed by a device that is not related to state changes (also referred to as static power). Leakage power is actually consumed both when a device is static and when it is switching, but generally the main concern is when the device is in its inactive state, as all the power consumed in that state is wasted.

Figure 3.19.1: Total Power Consumption.
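The two components above follow the usual first-order models: dynamic power = alpha × C × Vdd² × f and leakage power = I_leak × Vdd. The sketch below evaluates both; every number is illustrative, not measured.

```python
def switching_power(alpha, cap_farads, vdd, freq_hz):
    """Dynamic switching power: P = alpha * C * Vdd^2 * f."""
    return alpha * cap_farads * vdd ** 2 * freq_hz

def leakage_power(i_leak, vdd):
    """Static power: leakage current times supply voltage, burned continuously."""
    return i_leak * vdd

# Illustrative totals for a block: 10 pF effective capacitance, 20% activity
# factor, 0.9 V supply, 1 GHz clock, 2 mA leakage current.
p_dyn = switching_power(0.2, 10e-12, 0.9, 1e9)
p_leak = leakage_power(2e-3, 0.9)
print(f"dynamic {p_dyn * 1e3:.2f} mW, leakage {p_leak * 1e3:.2f} mW")
```

The model makes the levers visible: clock gating attacks alpha, multi-voltage supply attacks Vdd, and power gating drives I_leak toward zero.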

3.19.1: Techniques to reduce dynamic and leakage power


Various techniques have been developed to reduce both dynamic and leakage power. The two
most common traditional techniques are:


1) Clock gating: involves disconnecting the clock from the device it drives when the data going into the device is not changing. This technique is used to minimize dynamic power.

Figure 3.19.2: Clock Gating.


2) Multi-Vth optimization: involves replacement of faster Low-Vth cells, which consume more
leakage power, with slower High-Vth cells, which consume less leakage power. Since the High-Vth
cells are slower, this swapping only occurs on timing paths that have positive slack and thus can be
allowed to slow down.

Figure 3.19.3: Multi Vth Optimization.
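The multi-Vth swap can be sketched as a greedy pass over cells with positive slack. The cell names, delay penalty and leakage-saving factor below are all made-up illustrative values, not characterization data.

```python
def multi_vth_swap(cells, hvt_delay_penalty=0.05, hvt_leak_saving=0.7):
    """Greedy sketch: swap LVT cells to HVT wherever slack can absorb the
    extra delay. `cells` maps name -> (slack_ns, leakage_uw); the penalty
    and saving factors are invented for illustration."""
    saved = 0.0
    for name, (slack, leak) in cells.items():
        if slack > hvt_delay_penalty:        # path can afford a slower cell
            saved += leak * hvt_leak_saving  # assume HVT leaks ~70% less
    return saved

cells = {"u1": (0.30, 2.0), "u2": (0.01, 2.0), "u3": (0.12, 1.0)}
# u1 and u3 have enough slack to absorb the HVT penalty; u2 does not.
print(f"{multi_vth_swap(cells):.1f} uW saved")  # 2.1 uW
```

This mirrors the text: only positive-slack paths are slowed down, so timing closure is preserved while leakage falls.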


As technologies have shrunk, leakage power consumption has grown exponentially, requiring more aggressive power reduction techniques. Similarly, increases in clock frequency have caused the dynamic power consumption of devices to outstrip the capacity of the power networks that supply them; this becomes especially acute when high power consumption occurs in very small geometries, as it is then a power density issue as well as a power consumption issue.

Several advanced low power techniques have been developed to address these needs. The most
commonly adopted techniques today are:
1) Multi-voltage supply (MVS): The operation of different areas of a design at different voltage
levels. Only specific areas that require a higher voltage to meet performance targets are connected to
the higher voltage supplies. Other portions of the design operate at a lower voltage, allowing for
significant power savings. Multi-voltage supply is generally a technique used to reduce dynamic
power, but the lower voltage values also cause leakage power to be reduced.
2) Power gating: The complete shut off of supply nets to different areas of a design when they are
not needed (also known as MTCMOS or power shutdown). Since the power has been completely
removed from these shutdown areas, the power for these areas is reduced essentially to zero. This
technique is used to reduce leakage power.
It is very common to see multi-voltage and power gating used together on the same design, whereby
different regions operate at different voltages, and one or more of those regions can also be shutdown.
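Since dynamic power scales with the square of the supply voltage, the saving from multi-voltage supply can be estimated with a one-line model; the 0.8 V / 1.0 V split below is purely illustrative.

```python
def dynamic_power_ratio(v_low, v_high):
    """Dynamic power scales with Vdd^2, so lowering the supply on
    non-critical regions saves quadratically."""
    return (v_low / v_high) ** 2

# Hypothetical split: critical region kept at 1.0 V, the rest dropped to 0.8 V.
ratio = dynamic_power_ratio(0.8, 1.0)
print(f"low-voltage region burns {ratio:.0%} of its 1.0 V dynamic power")  # 64%
```

A power-gated region goes further still: its supply is removed entirely, so both terms drop essentially to zero while it is shut down.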


CHAPTER - 4
REFLECTION NOTES (Specific Outcomes)
Experience and Assessment:
The experience at the company was satisfactory; the people work in coordination, and the company environment is very safe and studious. The reason for choosing this company was that it offered an internship in VLSI, which is my core specialization in my PG degree, and I wanted to benefit from this experience; I also got to learn new tools like Electric, Symica DE and Microwind.
I spent nearly 5 to 6 hours daily in the company trying out different circuits and making their layouts manually. I thank my guide, who was always by my side throughout my internship, giving me advice, feedback and tips on how people work in an industry environment.
Some technical outcomes are:
The main application of VLSI chips is in mobile devices, which are expected to have long battery life. Additionally, new electronic products require increased functionality, high performance and the integration of a large number of components within a single chip, which leads to power-hungry designs. So making the chip low power is very important for the company.

Goals of a chip manufacturer are:


To reduce power consumption.
To reduce area.
To increase speed and performance of the chip.
Some non-technical outcomes are:

My confidence level has increased.

My stage fear has almost vanished.
I have developed the ability to explain my point of view clearly to the person in front of me.
I got a brief idea of how things are carried out in a real industry.

4.1 CONTRIBUTION TO THE ORGANIZATION


During my internship I wanted to actively contribute to the company's projects by applying the knowledge I gained during my internship learning process. I contributed by designing a 7-segment decoder in 12nm technology using the tool Microwind2.
The main objective of my contribution in this internship was to design an IC layout of a 7-segment decoder with the MicroWind VLSI Design System. It is a free, open-source EDA system that supports IC layout, schematic drawing, a textual hardware description language, and other features. Using this software, a micrometer-sized IC can easily be designed thanks to the various features available for designing and checking the IC layout. Moreover, the MicroWind VLSI Design System allows the schematic and layout design to be done in a systematic and efficient manner, thus saving time and reducing the production cost of the IC chip.
There are different technologies for constructing integrated circuits, such as bipolar technology, NMOS technology and CMOS technology. In my implementation, CMOS technology is used, mainly because of its scalability, high noise immunity and low power consumption. CMOS technology uses both NMOS and PMOS transistors, and only one of the two types is ON at a time during operation. Thus, a CMOS IC consumes less power, as significant power is drawn only when the NMOS and PMOS transistors are switching between on and off states [2].

4. 2 BASICS OF 7-SEGMENTS DECODER



A 7-segment decoder converts the logic states of its inputs into seven output bits and displays them on a 7-segment display. It is used widely in devices whose main function is to display numbers from digital circuitry. Examples of such devices include calculators, elevator displays, digital timers and digital clocks. There are many types of decoders, such as the 2:4 decoder, 3:8 decoder and 4:16 decoder. Since there are ten decimal numerals (0-9) to be displayed on the 7-segment display, a 4:16 decoder was used.
The structure of a 7-segment display is shown in Fig. 4.2.1. It is used to display decimal numerals in seven segments, and each segment is labelled with a letter a to g. By setting the required segments to be turned on, the desired decimal numeral can be displayed. The logic diagram of the 7-segment decoder is shown in Fig. 4.2.2.

Figure 4.2.1 Structure of a 7-segments display


Figure 4.2.2 Logic Diagram of 7-Segment Decoder

4.3 IC DESIGN OF COMPONENTS OF 7-SEGMENT DECODER


IC layouts are built from three basic components which are the transistors, wires and vias.
During the design of the layouts, the design rule has to be considered. Design rules govern the layout
of individual components and the interaction between those components. When designing an IC,
designers tend to make the components as small as possible enabling implementation of as many
functions as possible onto a single chip.

However, since the transistors and wires are extremely small, errors may occur during the fabrication process. Hence, design rules are created and formulated to minimize problems during fabrication and to help increase the yield of correct chips to a suitable level.

FINDINGS
In my implementation project, an IC layout of a decoder that displays decimal numerals on a 7-segment display was designed. It consists of NOT gates, 2-input NAND gates, 3-input NAND gates, 4-input NAND gates, 2-input AND gates and 3-input AND gates. The schematic circuits and layouts of all these gates were drawn and simulated using the MicroWind VLSI Design System.

4.3.1 2-Input NAND Gate


Figure 4.3.1.1 and Figure 4.3.1.2 show the schematic diagrams, Figure 4.3.1.3 shows the layout design, and Figure 4.3.1.4 shows the simulation waveforms of a 2-input NAND gate in the MicroWind software.

Figure 4.3.1.1 Schematic diagram of a 2-input NAND gate.


Figure 4.3.1.2 Schematic diagram of 2-Input NAND Gate in MicroWind.


Figure 4.3.1.3 Layout Design of a 2-input NAND gate in Microwind


Figure 4.3.1.4 Simulation waveforms of 2-Input NAND Gate in MicroWind

4.3.2 3-Input NAND Gate


Figure 4.3.2.1 and Figure 4.3.2.2 show the schematic diagrams, Figure 4.3.2.3 shows the layout design, and Figure 4.3.2.4 shows the simulation waveforms of a 3-input NAND gate in the MicroWind software.

Figure 4.3.2.1 Schematic diagram of a 3-input NAND gate.


Figure 4.3.2.2 Schematic diagram of a 3-input NAND gate in MicroWind.


Figure 4.3.2.3 Layout Design of a 3-input NAND gate in MicroWind


Figure 4.3.2.4 Simulation Waveforms of a 3-input NAND gate in MicroWind.

4.3.3 4 Input NAND Gate


Figure 4.3.3.1 and Figure 4.3.3.2 show the schematic diagrams, Figure 4.3.3.3 shows the layout design, and Figure 4.3.3.4 shows the simulation waveforms of a 4-input NAND gate in the MicroWind software.

Figure 4.3.3.1. Schematic diagram of 4-input NAND gate.


Figure 4.3.3.2. Schematic diagram of 4-input NAND gate in MicroWind.


Figure 4.3.3.3. Layout Design of 4-input NAND gate in MicroWind.

Figure 4.3.3.4. Simulation Waveforms of 4-input NAND gate in MicroWind.



Figure 4.3.3.5 shows the 4-input NAND gate symbol and SPICE code. The code was required and used to complete the simulation process. The input wires are named a, b, c and d, whereas the output wire is named y. If all the inputs of the 4-input NAND gate are 1, the output will be 0; whenever there is a 0 among the inputs, the output will be 1. In addition, due to the capacitance, the fall in the waveform indicates a logic 0, whereas the rise indicates a logic 1. The waveforms generated (Figure 4.3.3.4) match the theoretical 4-input NAND gate, so it can be deduced that the 4-input NAND gate drawn using the MicroWind VLSI Design System operates correctly.

Figure 4.3.3.5. Icon of 4-input NAND gate and spice code.
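The behaviour described above can be checked exhaustively in a few lines of plain Python (not SPICE; this only models the logic function, not the analog waveforms):

```python
from itertools import product

def nand4(a, b, c, d):
    """4-input NAND: output is 0 only when all inputs are 1."""
    return 0 if (a and b and c and d) else 1

# Exhaustive check of all 16 input combinations against the stated rule.
for a, b, c, d in product((0, 1), repeat=4):
    expected = 0 if (a, b, c, d) == (1, 1, 1, 1) else 1
    assert nand4(a, b, c, d) == expected

print("all 16 input combinations match")
```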

4.4 7-SEGMENT DECODER DESIGN


All the schematic diagrams and layouts of the basic gates have to go through the physical verification process before simulation. These are essential procedures to ensure the validity of the system's output results. Based on the logic diagram of the 7-segment decoder, the schematic circuit and its IC layout were designed using the MicroWind VLSI Design System tool. The left side of Figure 4.4.1 shows the schematic diagram, while the right side shows the icon view of the decoder, ready for simulation, with the input and output ports of the decoder. There are five inputs: A, B, C, D and E, where A is the least significant bit and D is the most significant bit. The input E acts as an enable signal: when E is 1 (high), the 7 outputs are driven according to the inputs; otherwise all 7 outputs remain 0 (low).

Since it is a 7-segment decoder, seven outputs are needed. These outputs are labelled a, b, c, d, e, f and g.

Figure 4.4.1 Schematic diagram and Icon View of 7-Segment Decoder in MicroWind.
With reference to the binary-to-decimal truth table, shown in Table 4.4.1, the designed decoder matches the theoretical work and can be said to function as expected.


Table 4.4.1 Truth Table of 7-Segment Decoder.
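The decoder's behaviour can be sketched in software. The segment patterns below are the standard active-high encodings for digits 0-9 and are assumed to match Table 4.4.1 (which is not reproduced here); inputs are passed MSB-first (D down to A), with the enable E defaulting to high.

```python
# Standard active-high segment patterns (a..g) for digits 0-9; assumed
# to match the report's Table 4.4.1.
SEGMENTS = {
    0: "1111110", 1: "0110000", 2: "1101101", 3: "1111001", 4: "0110011",
    5: "1011011", 6: "1011111", 7: "1110000", 8: "1111111", 9: "1111011",
}

def decode_7seg(d, c, b, a, enable=1):
    """Return (a, b, c, d, e, f, g) segment outputs; all 0 when enable is low."""
    if not enable:
        return (0,) * 7
    digit = d * 8 + c * 4 + b * 2 + a  # A is the LSB, D the MSB
    return tuple(int(bit) for bit in SEGMENTS[digit])

print(decode_7seg(1, 0, 0, 0))           # digit 8: every segment on
print(decode_7seg(0, 1, 1, 1, enable=0))  # enable low: all segments off
```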


Figure 4.4.2 shows the layout diagram and figure 4.4.3 shows the simulation waveforms of the entire 7-segment decoder in the MicroWind VLSI Design System with different inputs. As long as E is off, the outputs will be 0; the figure shows the simulation waveform with E off in the first cycle. When E is on, all 7 outputs are driven depending on the input combinations.


Figure 4.4.2 Layout Design of 7-Segment Decoder in MicroWind.


Figure 4.4.3 Simulation Waveforms of 7-Segment Decoder in MicroWind.


5. APPLICATIONS
As technology advances, the time factor becomes an important element in many fields. Time is grasped most quickly when displayed in digital format. Hence the 7-segment decoder is used in many areas to display time and other field-related information. A few of them are listed below:
1) Railway stations: 7-segment-decoder-based LED displays are very useful for showing the arrival and departure times of trains and the train numbers.
2) Big organizations: for example, in a cricket match, to display runs scored, wickets lost, the start time of the match, etc.
3) Wrist watches: when time needs to be displayed in digital format, wrist watches use a 7-segment decoder.
4) Calculators: since all calculated values are numbers from 0 to 9, calculators also use 7-segment-decoder-based LED displays.

5) Bus stands: the departure and arrival times of each bus can be displayed using a 7-segment-decoder-based LED display.
6) Lifts: to display floor numbers, a 7-segment-decoder-based LED display is widely used.

6. CONCLUSION
In conclusion, the 7-segment decoder IC displays numbers on a 7-segment display: it converts the binary input into 7 output bits according to the input. The IC layout of the decoder was designed, and the generated output waveforms were successfully shown to match those of the theoretical decoder.
In addition, the open-source MicroWind VLSI Design System is user-friendly software for designing the layout of a 7-segment decoder. It is expected that the software can cope with more complex digital IC designs with its suite of verification and design tools.
The internship program was quite beneficial to me. I learned various technical aspects that were included in our curriculum; apart from these, there were non-technical areas in which I developed a lot. It was a great experience working with professionals and interacting with them, learning from their experience and working under their guidance. I also enjoyed complying with the protocols of the organization. I learned professionalism as a whole.

7. BIBLIOGRAPHY
[1] About Electric. (May 2013). [Online]. Available: www.staticfreesoft.com/electric.html
[2] R. J. Baker, CMOS: Circuit Design, Layout, and Simulation, John Wiley & Sons,
2010, pp. 6-12.
[3] M. M. Forrest, Understanding Digital Computers, Radio Shack, 1987.
[4] S. H. Teen and S. Y. Lee, "CMOS IC layout design: 7-segments counter," Lecture Notes on Photonics and Optoelectronics, vol. 1, no. 2, pp. 52-55, December 2013.
[5] A. P. Douglas and E. Kamran, Basic VLSI Design, 3rd ed., Prentice Hall, 1994, pp. 72-76.

