
INTRODUCTION

Intelligence is the ability to comprehend, understand and profit from experience. Humans are
the most intelligent organisms on this planet. The human brain is impressive in many ways, but
it has certain limitations. The brain's parallelism (one hundred trillion interneuronal connections
working simultaneously) can be used to quickly recognize subtle patterns. However, because neural
transactions are slow compared to electronic circuits, our thinking process is slow. This
makes our ability to process information limited compared to the exponential growth of human
knowledge.
Machines were invented by man to assist him in performing human tasks. As the years passed,
new and more improved machines were invented. One among them was named the robot, a
mechanical machine controlled by a computer program or an electronic circuit. Robotics
developed and autonomous robots emerged. In 1955, John McCarthy coined a new term
called artificial intelligence (AI). The main aims of AI are reasoning, knowledge, planning,
learning, natural language processing, perception and the ability to move and manipulate
objects.
Now, what is the Singularity? It can be described as a future era in which the
pace of technological change will be so rapid, and its impact so deep, that human life will be
irreversibly transformed. Nowadays researchers are, with some success, building machines that
are more intelligent and responsive in solving real-world problems. Robotics departments are
trying to bring out robots that understand their environment better and act according to the
situation. There have been quantum leaps in producing artificial balancing for these robots,
though nothing yet matches the human balancing system. Above all, artificial
intelligence is growing day by day and is coursing through the blood of embodied science.
But we are still a very long way from understanding how consciousness arises in the human
brain. We are even a long way from the much simpler goal of creating autonomous, self-organizing and perhaps even self-replicating machines.

Singularity: The Origin


In 1982, Vernor Vinge proposed that the creation of an intelligence smarter than human
intelligence represented a breakdown in humans' ability to model their future. Vinge's argument
was that authors cannot write realistic characters who surpass the human intellect, as the
thoughts of such an intellect would be beyond the ability of humans to express. Vinge named this
event "the Singularity". He compared it to the breakdown of the then-current model of physics
when it was used to model the gravitational singularity beyond the event horizon of a black hole.
In 1993, Vernor Vinge associated the Singularity more explicitly with I. J. Good's intelligence
explosion, and tried to project the arrival time of artificial intelligence (AI) using Moore's law,
which thereafter came to be associated with the "Singularity" concept. Futurist Ray Kurzweil
generalizes the singularity to apply to the sudden growth of any technology, not just intelligence; he
argues that a singularity in the sense of sharply accelerating technological change is inevitably
implied by a long-term pattern of accelerating change that generalizes Moore's law to
technologies predating the integrated circuit, including material technology, medical
technology, and others. Aubrey de Grey has applied the term "Methuselarity" to the point at
which medical technology improves so fast that expected human lifespan increases by more than
one year per year.
Robin Hanson, taking "singularity" to refer to sharp increases in the exponent of economic
growth, lists the agricultural and industrial revolutions as past "singularities". Extrapolating from
such past events, Hanson proposes that the next economic singularity should increase economic
growth between 60 and 250 times. An innovation that allowed for the replacement of virtually all
human labor could trigger this event.

A graph from Ray Kurzweil's Nearing Technological Singularity plots many significant
technological and biological developments. It essentially shows how rapidly things are changing
now: in the early years of life, things evolved slowly, whereas now they change quickly. The
curve is exponential, which suggests that more developments will occur in the next two decades
than occurred in the past two decades.

The Six Epochs


Evolution is the process by which something passes by degrees to a different stage (especially a
more advanced or mature stage). We can categorize evolution into six epochs or stages:

1. Physics and Chemistry - The origin of life can be traced back to a state that represents information
in its basic structures: patterns of matter and energy.
2. Biology and DNA - Carbon-based compounds became more and more intricate until complex
aggregations of molecules formed self-replicating mechanisms, and life originated. Molecules
called DNA were used to store information.
3. Brains - DNA-guided evolution produced organisms that could detect information with their
own sensory organs and process and store that information in their own brains and nervous
systems.
4. Technology - Humans started creating technology to ease their work. This started out with
simple mechanisms and developed into automated machines.
5. Merger of technology with human intelligence - The merger of the vast human knowledge with the
vastly greater capacity, speed, and knowledge-sharing ability of our technology.
6. The universe wakes up.

SINGULARITY SCENARIO
According to Vernor Vinge, the Singularity is expected to come about as a combination of the following scenarios:
3.1 AI scenario: This involves creating superhuman artificial intelligence in computers, where
databases and computers become sufficiently effective to be considered a superhuman
being. AI research is highly technical and specialized, and is deeply divided into subfields
that often fail to communicate with each other. The main aims of AI researchers are reasoning,
knowledge, planning, learning, natural language processing, perception and the ability to move
and manipulate objects.

Moore's law - It is the observation that the number of transistors in an integrated circuit doubles
roughly every two years; in other words, the processing capability of ICs keeps increasing year by year.

Advancements in digital electronics are strongly linked to Moore's law: quality-adjusted
microprocessor prices, memory capacity, sensors and even the number and size of pixels in
digital cameras are all improving at exponential rates.
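
As a rough numerical illustration (not part of the original report), the doubling pattern can be sketched in a few lines of Python. The starting year and transistor count below are only an assumed reference point (the figure commonly quoted for the Intel 4004) chosen to show the shape of the curve, not an authoritative dataset.

    # Hedged sketch: project transistor counts under Moore's law
    # (doubling roughly every two years). The base values are assumed,
    # illustrative numbers, not measured data.
    def transistors(year, base_year=1971, base_count=2300, doubling_years=2.0):
        """Return the projected transistor count for a given year."""
        return base_count * 2 ** ((year - base_year) / doubling_years)

    if __name__ == "__main__":
        for year in (1971, 1991, 2011, 2021):
            print(year, f"{transistors(year):.3e}")

Running this prints counts that grow from thousands to tens of billions over fifty years, which is the exponential trend the scenario above relies on.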
3.2 IA scenario: This is the path of improving human intelligence through intelligence
amplification, i.e., the efficient employment of information technology to enhance human
intelligence. The term was first put forward in the 1950s by cybernetics and early computer
pioneers. IA is sometimes contrasted with AI, that is, the project of building a human-like
intelligence in the form of an autonomous technological system such as a computer or robot.
3.3 Biomedical Scenario: We directly increase our brainpower by improving the neurological
actions of our brains.
3.4 The Internet Scenario: Humanity, its networks, computers, and databases become
sufficiently effective to be considered a superhuman being.
3.5 The Digital Gaia Scenario: The network of embedded microprocessors becomes
sufficiently effective to be considered a superhuman being.

AI SEED AI SINGULARITY
Software capable of improving itself has been a dream of computer scientists since the inception
of the field. Since the early days of computer science, visionaries in the field anticipated the
creation of self-improving intelligent systems, frequently as an easier pathway to the creation of true
artificial intelligence. As early as the 1950s, Alan Turing wrote: "Instead of trying to produce a
programme to simulate the adult mind, why not rather try to produce one which simulates the
child's? If this were then subjected to an appropriate course of education one would obtain the
adult brain."
Let an ultra-intelligent machine be defined as a machine that can far surpass all the intellectual
activities of any man, however clever. Since the design of machines is one of these intellectual
activities, an ultra-intelligent machine could design even better machines; there would then
unquestionably be an "intelligence explosion," and the intelligence of man would be left far
behind. Thus the first ultra-intelligent machine is the last invention that man ever needs to make.
Once a program with a genuine capacity for self-improvement has been devised, a rapid
revolutionary process will begin. As the machine improves both itself and its model of itself,
there begins a phenomenon associated with the terms "consciousness," "intuition" and
"intelligence" itself.
Self-improving software can be classified by the degree of self-modification it entails. In general
we distinguish three levels of improvement: modification, improvement (weak self-improvement) and recursive improvement (strong self-improvement).

Self-modification does not produce improvement and is typically employed for code
obfuscation, to protect software from being reverse engineered or to disguise self-replicating
computer viruses from detection software. While a number of obfuscation
techniques are known to exist, e.g. self-modifying code, polymorphic code, metamorphic
code and diversion code, none of them are intended to modify the underlying algorithm.

Self-improvement or self-adaptation is a desirable property of many types of software
products and typically allows for some optimization or customization of the product to
the environment and users it is deployed with. Common examples of such software
include Genetic Algorithms or Genetic Programming, which optimize software
parameters with respect to some well-understood fitness function and perhaps work over
some highly modular programming language to ensure that all modifications result in
software which can be compiled and evaluated. Omohundro proposed the concept of
efficiency drives in self-improving software. Because of one of these drives, the balance
drive, self-improving systems will tend to balance the allocation of resources between
their different subsystems. While the performance of the software may improve as a result of such
optimization, the overall algorithm is unlikely to be modified into a
fundamentally more capable one.
Recursive Self-Improvement is the only type of improvement which has the potential to
completely replace the original algorithm with a completely different approach and, more
importantly, to do so multiple times. [2] At each stage the newly created software should be
better at optimizing future versions of the software than the original algorithm was. As
of the time of this writing it is a purely theoretical concept, with no working RSI software
known to exist. However, as many have predicted that such software might become a
reality in the 21st century, it is important to provide some analysis of the properties such
software would exhibit.
Self-modifying and self-improving software systems are already well understood and are quite
common. Consequently, we will concentrate exclusively on RSI systems. In practice, the
performance of almost any system can be trivially improved by the allocation of additional
computational resources such as more memory, higher sensor resolution, a faster processor or
greater network bandwidth for access to information. This linear scaling doesn't fit the
definition of recursive improvement, as the system doesn't become better at improving itself. To
fit the definition, the system would have to engineer a faster type of memory, not just purchase
more memory units of the type it already has access to. In general, hardware improvements are
likely to speed up the system, while software improvements (novel algorithms) are necessary
for achieving meta-improvements.
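
Since no working RSI system exists, the distinction above can only be illustrated schematically. The toy Python sketch below is purely conceptual: the mutate step and the fitness score are invented stand-ins, and the recursive case is noted only in a comment, because it is exactly the part that remains theoretical.

    import random

    # Toy, purely illustrative sketch of "weak" self-improvement: a fixed
    # optimizer tunes parameters, but the optimization algorithm itself
    # never changes.
    def mutate(params):
        """Randomly perturb a parameter vector (illustrative only)."""
        return [p + random.gauss(0, 0.1) for p in params]

    def weak_self_improvement(params, fitness, steps=200):
        """Hill-climb the parameters against a fixed fitness function."""
        best = params
        for _ in range(steps):
            candidate = mutate(best)
            if fitness(candidate) > fitness(best):
                best = candidate
        return best

    if __name__ == "__main__":
        # Maximize an arbitrary toy fitness function (peak at the origin).
        fitness = lambda p: -sum(x * x for x in p)
        print(weak_self_improvement([3.0, -2.0], fitness))
        # Recursive self-improvement would additionally require the system
        # to replace mutate() and the loop above with a better optimizer of
        # its own design, repeatedly -- which today is purely theoretical.
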
It is believed that AI systems will have a number of advantages
over human programmers, making it possible for them to succeed where we have so far failed.
Such advantages include: longer work spans (no breaks, sleep, vacation, etc.), omniscience
(expert-level knowledge in all fields of science, absorbed knowledge of all published works),
superior computational resources (brain vs. processor, human memory vs. RAM),
communication speed (neurons vs. wires), increased serial depth (ability to perform far more
sequential operations than a human brain can manage), duplicability (intelligent software
can be instantaneously copied), editability (source code, unlike DNA, can be quickly modified),
goal coordination (AI copies can work towards a common goal without much overhead),
improved rationality (AIs are likely to be free from human cognitive biases), new sensory
modalities (native sensory hardware for source code), blending of deliberative and
automatic processes (management of computational resources over multiple tasks),
introspective perception and manipulation (ability to analyze low-level hardware, e.g. individual
neurons), addition of hardware (ability to add new memory, sensors, etc.), and advanced
communication (ability to share underlying cognitive representations for memories and skills).

PRIMARY BUILDING BLOCKS OF SINGULARITY


The Singularity will unfold through these three overlapping revolutions:
G, N & R.
Genetics (G)
Nanotechnology (N)
Robotics (R)
These are the primary building blocks of the impending Singularity as Ray Kurzweil sees them.
He calls them "the three overlapping revolutions," and he says they will characterize the first half
of the twenty-first century, which we are in now. He goes on to say, "These (GNR) will usher in
the beginning of the Singularity." We are in the early stages of the 'G' revolution today. By
understanding the information processes underlying life, we are starting to learn to reprogram
our biology to achieve the virtual elimination of disease, dramatic expansion of human potential,
and radical life extension. [4]
Ray Kurzweil then says regarding nanotechnology, "The 'N' revolution will enable us to redesign
and rebuild - molecule by molecule - our bodies and brains and the world with which we interact,
going far beyond the limitations of biology."

Of the three (GNR), Ray Kurzweil believes that the most powerful impending revolution is the
'R' revolution. He says, "Human-level robots with their intelligence derived from our own but
redesigned to far exceed human capabilities represent the most significant transformation,
because intelligence is the most powerful 'force' in the universe. Intelligence, if sufficiently
advanced, is, well, smart enough to anticipate and overcome any obstacles that stand in its path."

TECHNICAL ASPECTS OF SINGULARITY


A reliable and long-lasting power source
Solar cells are well known for their use as power sources for satellites, in environmentalist green-energy
campaigns and in pocket calculators. In robotics, solar cells are used mainly in BEAM
robots (Biology, Electronics, Aesthetics and Mechanics). Commonly these consist of a solar cell
which charges a capacitor and a small circuit which allows the capacitor to be charged up to a set
voltage level and then be discharged through the motor(s), making the robot move. For a larger robot,
solar cells can be used to charge its batteries. Such robots have to be designed around energy
efficiency as they have little energy to spare. [2]
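
As a back-of-the-envelope illustration of such a "solar engine" (the component values below are assumed for the sketch, not taken from the report), the behaviour can be estimated from the capacitor energy E = 1/2 C V^2 and the average charging current supplied by the cell:

    # Rough sketch of a BEAM-style solar engine: a small solar cell charges
    # a capacitor, which is dumped through a motor at a trigger voltage.
    # All component values are assumed, illustrative numbers.
    C = 4700e-6      # storage capacitor, farads (assumed)
    V_TRIGGER = 2.7  # discharge trigger voltage, volts (assumed)
    I_SOLAR = 5e-3   # average solar-cell current, amperes (assumed)

    energy_per_burst = 0.5 * C * V_TRIGGER ** 2   # joules, E = 1/2 * C * V^2
    charge_time = C * V_TRIGGER / I_SOLAR         # seconds, t ~ C * V / I

    print(f"Energy per motor burst: {energy_per_burst * 1e3:.2f} mJ")
    print(f"Approximate time between bursts: {charge_time:.1f} s")

The tiny energy budget per burst is exactly why such robots must be designed around energy efficiency.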

Faster and more efficient chips


Carbon-based transistors have attracted significant interest due to their versatility and high
intrinsic mobility. Carrier mobility in graphitic forms of carbon such as nanotubes and thin
graphite sheets can be very high. An alternative form of carbon is graphene, a
horizontally extended single atomic layer of graphite. Recently, graphene devices have been built
on thin exfoliated sheets of highly oriented pyrolytic graphite.
Graphene has many extraordinary properties. It is about 207 times stronger than steel by
weight, conducts heat and electricity efficiently and is nearly transparent. Graphene holds great
promise for future electronic technology. It has excellent thin-film properties: films as thin
as 0.4 nm have been shown to have high mobility. This is in contrast to silicon, where mobility
rapidly degrades as a function of thickness at the nanometer scale.
Memory back up
The search for new non-volatile universal memories is propelled by the need to push power-efficient
nano-computing to the next level. As a potential next memory technology
of choice, the recently found missing fourth circuit element, the memristor, has drawn a great
deal of research interest. The basic circuit elements, resistance, capacitance, and inductance,
describe the relations between the fundamental electrical quantities: voltage, current, charge and
flux. Resistance relates voltage and current (dv = R di), capacitance relates charge and voltage
(dq = C dv), and inductance relates flux and current (dφ = L di). However, there was a
missing link between flux and charge, which scientist Leon Chua called memristance.
In the linear case memristance is constant and simply acts like resistance; however, if the
φ-q relation is nonlinear, the element is referred to as a memristor, which can be charge-controlled:
Eq. 1: M(q) = dφ/dq
Prototyped memristor devices can be scaled down to 10 nm or below, and memristor
memories can achieve an integration density of 1000 Gbits/cm3, a few times higher than today's
advanced flash memory technologies. In addition, the non-volatile nature of memristor memory
makes it an attractive candidate for the next generation of memory technology. The switching
power consumption of a memristor can be 20 times smaller than that of flash. Because memristor
memories are non-volatile, computers could start up without rebooting. Moreover, the memristor has
unique characteristics that can be used for self-programming: it can vary its value according to the
current passing through it and can remember that value even after the current has disappeared.
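
To make Eq. 1 concrete, the short sketch below simulates a toy charge-controlled memristor driven by a sinusoidal current. The linear M(q) model and all numerical values are assumptions chosen only for illustration, not a model of any particular device; the point is simply that the resistance depends on the accumulated charge, i.e. the element "remembers" the current that has flowed through it.

    import math

    # Toy charge-controlled memristor: M(q) = dphi/dq modelled as a value
    # that moves between R_ON and R_OFF with accumulated charge (assumed).
    R_ON, R_OFF = 100.0, 16000.0   # ohms (assumed)
    Q_MAX = 1e-4                   # charge needed to switch fully (assumed)

    def memristance(q):
        """Resistance as a function of accumulated charge, clamped to range."""
        x = min(max(q / Q_MAX, 0.0), 1.0)
        return R_OFF - (R_OFF - R_ON) * x

    # Drive with a 5 Hz, 1 mA sinusoidal current and integrate the charge.
    dt, q = 1e-4, 0.0
    for step in range(2000):
        t = step * dt
        i = 1e-3 * math.sin(2 * math.pi * 5 * t)   # drive current, amperes
        q += i * dt                                # q(t) = integral of i dt
        v = memristance(q) * i                     # v = M(q) * i
        if step % 500 == 0:
            print(f"t={t:.3f}s  i={i*1e3:+.2f}mA  M={memristance(q):8.1f}ohm  v={v:+.3f}V")
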

NEARING SINGULARITY?
Fueled by creative imagination coupled with technological expertise, wearable robotic
applications like exoskeletons are moving out of the realm of science fiction and into the real
world. Military applications can turn ordinary people into super-soldiers with the ability to carry
far heavier loads faster, farther and for longer periods of time than is possible for humans alone.
Exoskeletons can protect wearers from enemy fire and chemical attack. By increasing speed,
strength and protection, these wearable robots can help rescue workers dig people out from under
rubble after earthquakes more effectively, or carry them from burning buildings, while
protecting the rescuers from falling debris and collapsing structures. [3]
Exo Hiker

A recent force driving exoskeleton development has been a U.S. Defense Advanced Research Projects
Agency (DARPA) program known as Exoskeleton for Human Performance Augmentation
(EHPA). One example is the ExoHiker, which weighs 31 pounds including the power unit, batteries,
and onboard computer, and operates with virtually imperceptible noise. With lithium polymer
batteries, the device can travel 42 miles per pound of battery at a speed of 2.5 miles per hour;
with a small pack-mounted solar panel its mission time would be unlimited. It enables wearers to
carry 150 pounds without feeling the load on their shoulders, and it features retractable legs and
allows unfettered movement while using the device.
Japan's HAL 5
A research team led by a professor in the Department of Intelligent Interaction Technologies has
developed the Robot Suit Hybrid Assistive Limb (HAL) exoskeleton for applications in physical
training support, activities of daily living, heavy labor support for workers, and rescue support
for emergency disaster personnel. HAL can magnify a person's strength by a factor of two or more.
The suit detects faint bio-signals on the surface of the skin when the human brain tries to move
the wearer's limbs. When the robot suit detects these signals, it helps the user move, and this
information is then relayed back to the brain.
MIT Exoskeleton
The Massachusetts Institute of Technology (MIT) Media Lab Biomechatronics Group has
developed an exoskeleton that can support a load of up to 80 pounds while requiring only two
watts of electrical power during loaded walking. The quasi-passive design does not use any
actuators for adding power at the joints. Instead the design relies completely on the controlled
release of energy stored in springs during the (negative power) phases of the walking gait. The
quasi-passive elements in the exoskeleton were chosen based on analysis of the kinetics and
kinematics of human walking.
Big Dog
Big Dog is a dynamically stable robot funded by DARPA in the hopes that it will be able to serve
as a robotic pack mule to accompany soldiers in terrain too rough for conventional vehicles.
Instead of wheels or treads, Big Dog uses four legs for movement, allowing it to move across
surfaces that would defeat wheels. The legs contain a variety of sensors including joint position
and ground contact. Its walking pattern is controlled with four low-friction hydraulic cylinder
actuators that power the joints.

Fig. 10.1: Big dog

Fig. 10.2: Exo-hiker

CONCLUSION
When greater-than-human intelligence drives progress, that progress will be much more rapid. In
fact, there seems to be no reason why progress itself would not involve the creation of still more
intelligent entities on a still shorter timescale. The best analogy is with our evolutionary past.
Animals can adapt to problems and make inventions, but often no faster than natural selection can
do its work. We humans have the ability to internalize the world and conduct "what ifs" in our heads; we can
solve many problems thousands of times faster than natural selection.

Smarter-than-human intelligence, faster-than-human intelligence, and self-improving intelligence


are all interrelated. If you're smarter that makes it easier to figure out how to build fast brains or
improve your own mind. In turn, being able to reshape your own mind isn't just a way of starting
up a slope of recursive self-improvement; having full access to your own source code is, in itself,
a kind of smartness that humans don't have. Self-improvement is far harder than optimizing
code; nonetheless, a mind with the ability to rewrite its own source code can potentially make
itself faster as well. And faster brains also relate to smarter minds; speeding up a whole mind
doesn't make it smarter, but adding more processing power to the cognitive processes underlying
intelligence is a different matter.
Who would have believed 100 years ago that the following technological advances would be
possible?

- Moving pictures of events around the world
- Instantaneous wireless global communication
- Portable computing devices that can store trillions of words and execute billions of instructions
- Humans landing on the Moon and an international manned space station

Similarly, who knows: in the next 50 years an intelligence superior to human intelligence may come
into existence, one that could even question the very existence of humans on this planet.

How the Singularity transcends our biology


Augmentation - There are many people who are born disabled or become disabled due to accidents. With
the help of robotics we could create prostheses that resolve the problems caused by
deficiencies of the human body. We could engineer our way around these deficiencies, making
such people's lives easier.

Controlling our body - If we could understand how cancer works and get a real molecular handle on it,
we could turn things off when they start to go wrong.

Backing Up - All of our functions are controlled by the brain, so we could back up our brain every
day to computers or machines that can simulate brain function. If we back ourselves up every
morning, it would not matter if we died later that day. In other words, humans could
become immortal. A TIME magazine cover from February 2013 illustrated
this possibility.

Leaving the human body - This is another possibility of the technological Singularity. If our body
becomes unsuitable for life, for example because it carries a deadly disease, one could leave the
human body and continue living in some other substrate. This substrate could be a machine, or
even a human body made from one's own DNA.

Fears of Technological Singularity


Extinction - It is the most feared aspect of the technological Singularity: these highly intelligent
machines could overthrow the human race.
Slavery - Another possibility is that humans become slaves of these machines, much as animals
are subservient to humans.
War - The First and Second World Wars were fought by humans. There might be a war in the future
in which humans and machines fight each other.

Economic Collapse - Machines would replace humans in jobs, thereby creating unemployment.
Higher rates of production could also result in economic collapse.
Moving away from nature - When we live in a global society where everything is mass-produced
by robots, our manufactured civilization will sever the last connection to the natural world. We
will lose the very last bit of respect for Mother Nature.
Matrioshka Brains - A Matrioshka brain is a hypothetical megastructure of immense computational
capacity. Based on the Dyson sphere, the concept derives its name from the Russian matryoshka doll and
is an example of a planet-size solar-powered computer, capturing the entire energy output of a star. To
form a Matrioshka brain, all the planets of the solar system are dismantled and a vast computational device,
inhabited by uploaded or virtual minds inconceivably more advanced and complex than us, is created.

So the idea is that eventually, one way or another, all matter in the universe will be smart. All
dust will be smart dust, and all resources will be utilized to their optimum computing potential.
There will be nothing left but Matrioshka Brains and/or computronium.

Achieving the Computational Capacity of the Human Brain


Technologies that will help to achieve the Singularity include molecular three-dimensional computing
with nanotubes and nanotube circuitry, molecular computing, self-assembly in nanotube
circuits, biological systems emulating circuit assembly, computing with DNA, spintronics
(computing with the spin of electrons), computing with light, and quantum computing. Many of
these independent technologies can be incorporated into computational systems that will in the
long run approach the theoretical maximum capacity of matter and energy to perform
computation and will far outpace the computational capacity of the human brain.

Nanotubes - Carbon nanotubes are allotropes of carbon with a cylindrical structure. They use
molecules organized in three dimensions to store memory bits and to act as logic gates, and are
the most likely technology to usher in the era of three-dimensional molecular computing. The chip-design
company Nantero's technology provides random access as well as non-volatility (data is retained
when the power is off), meaning that it could potentially replace all of the primary forms of
memory: RAM, flash, and disk. These memories are ultra-fast compared to the conventional ones in use.
Nantero is producing RAM chips named NRAM (Nano-RAM) using this carbon nanotube
technology. Chips based on this super-fast and dense technology can be used in a wide array
of markets such as mobile computing, wearables, consumer electronics, space and military
applications, enterprise systems, automobiles, the Internet of Things, and industrial markets. In
the future, Nantero expects to be able to store terabits of data on a single memory chip, enabling
that chip to store hundreds of movies or millions of songs on a mobile device.

Computing with Molecules - In addition to nanotubes, major progress has been made in recent
years in computing with just one or a few molecules. The idea of computing with molecules was
first suggested in the early 1970s by IBM's Avi Aviram and Northwestern University's Mark A.
Ratner. At that time, we did not have the enabling technologies, which required concurrent
advances in electronics, physics, chemistry, and even the reverse engineering of biological
processes, for the idea to gain traction. One type of molecule that researchers have found to have
desirable properties for computing is called a "rotaxane," which can switch states by changing the
energy level of a ring-like structure contained within the molecule. Rotaxane memory and
electronic switching devices have been demonstrated, and they show the potential of storing one
hundred gigabits (10^11 bits) per square inch. The potential would be even greater if organized in
three dimensions. Rotaxanes are mechanically interlocked molecular architectures consisting of a
dumbbell-shaped molecule, the axle, that threads through a ring called a macrocycle. Because
the rings can spin around and slide along the axle, rotaxanes are promising components of
molecular machines. While most rotaxanes have been entirely organic, the physical properties
desirable in molecular machines are mostly found in inorganic compounds. Working together,
two British groups at the University of Edinburgh and the University of Manchester have bridged
this gap with hybrid rotaxanes, in which inorganic rings encircle the organic axles.

Self-Assembly - Self-assembly of nanoscale circuits is another key enabling technique for
effective nanoelectronics. Self-assembly allows improperly formed components to be discarded
automatically and makes it possible for the potentially trillions of circuit components to organize
themselves, rather than being painstakingly assembled in a top-down process. Conventional
assembly technology has been adapted to pick-and-place devices by picking microchips from a
wafer and placing them on the substrate, but this technique encounters speed and cost constraints.
In addition, at the micro scale chips suffer serious sticking problems due to
electrostatic forces, van der Waals forces, and surface forces. It is also important that nanocircuits
be self-configuring. The large number of circuit components and their inherent fragility (due to
their small size) make it inevitable that some portions of a circuit will not function correctly. It
will not be economically feasible to discard an entire circuit simply because a small number of
transistors out of a trillion are non-functioning.

Emulating Biology - The idea of building electronic or mechanical systems that are self-replicating and self-organizing is inspired by biology, which relies on these properties. There are
self-replicating proteins, like prions, which could be used to construct nanowires.
DNA Computing - The term refers to computation using DNA, not computing on DNA. The
field was initially developed by Leonard Adleman of the University of Southern California in
1994. DNA is nature's own nanoengineered computer, and its ability to store information and
conduct logical manipulations at the molecular level has already been exploited in specialized
"DNA computers." Instead of using electrical signals to perform logical operations, these DNA
logic gates rely on DNA code. They detect fragments of genetic material as input. Each such
strand is replicated trillions of times using a process called the polymerase chain reaction (PCR).
These pools of DNA are then put into a test tube. Because DNA has an affinity to link strands
together, long strands form automatically, with sequences of the strands representing the different
symbols, each of them a possible solution to the problem. Since there will be many trillions of
such strands, there are multiple strands for each possible answer. The next step of the process is
to test all of the strands simultaneously. This is done by using specially designed enzymes that
destroy strands that do not meet certain criteria. The enzymes are applied to the test tube
sequentially, and by designing a precise series of enzymes the procedure will eventually
obliterate all the incorrect strands, leaving only the ones with the correct answer. There is a
limitation, however, to DNA computing: each of the many trillions of computers has to perform
the same operation at the same time (although on different data), so the device is a "single
instruction multiple data" (SIMD) architecture. A gram of DNA can hold about 1x10^14 MB of
data. With bases spaced 0.35 nm apart along the DNA, the data density is over a million Gbits per
square inch, compared to about 7 Gbits per square inch in a typical high-performance HDD.
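
The generate-and-filter logic described above can be mimicked in a few lines of conventional code. The sketch below is only an analogy under assumed, toy constraints: every candidate "strand" is enumerated up front, and a series of filters plays the role of the enzymes that destroy strands failing each criterion. A real DNA computer evaluates all strands massively in parallel; this simulation does it serially.

    from itertools import product

    # Toy analogy of Adleman-style DNA computing: enumerate every candidate
    # "strand" (here, 8-bit strings), then apply filters ("enzymes") that
    # discard strands violating each constraint, leaving only solutions.
    strands = list(product((0, 1), repeat=8))

    # Assumed, illustrative constraints standing in for enzyme steps.
    filters = [
        lambda s: sum(s) == 4,                                  # exactly four 1s
        lambda s: s[0] != s[-1],                                # first and last bit differ
        lambda s: all(s[i] + s[i + 1] < 2 for i in range(7)),   # no adjacent 1s
    ]

    for enzyme in filters:
        strands = [s for s in strands if enzyme(s)]

    print(f"{len(strands)} surviving strands:")
    for s in strands:
        print("".join(map(str, s)))
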
Computing with Spin (Spintronics or Fluxtronics) - In addition to their negative electrical
charge, electrons have another property that can be exploited for memory and computation: spin.
According to quantum mechanics, electrons spin on an axis, similar to the way the Earth rotates
on its axis. This is a theoretical notion, because an electron is considered to occupy a point in
space, and it is difficult to imagine a point with no size that nonetheless spins. However, when an
electrical charge moves, it causes a magnetic field, which is real and measurable. An electron can
spin in one of two directions, described as "up" and "down," so this property can be exploited for
logic switching or to encode a bit of memory. The spin of an electron can be transported without any
loss of energy, or dissipation. Furthermore, this effect occurs at room temperature in materials
already widely used in the semiconductor industry, such as gallium arsenide. That is important
because it could enable a new generation of computing devices. The potential, then, is to achieve
the efficiencies of superconducting (that is, moving information at or close to the speed of light
without any loss of information) at room temperature. Spintronics also allows multiple properties of each
electron to be used for computing, thereby increasing the potential for memory and
computational density.

Computing with Light (optical or photonic computing) - Another approach to SIMD
computing is to use multiple beams of laser light in which information is encoded in each stream
of photons. Optical components can then be used to perform logical and arithmetic functions on
the encoded information streams. SIMD technologies such as DNA computers and optical
computers will have important specialized roles to play in the future of computation. The
replication of certain aspects of the functionality of the human brain, such as processing sensory
data, can use SIMD architectures. For other brain regions, such as those dealing with learning
and reasoning, general-purpose computing with its "multiple instruction multiple data" (MIMD)
architectures will be required. For high-performance MIMD computing, we will need to apply
the three-dimensional molecular-computing paradigms described above. Optical fibres will be
used in these computers: instead of the voltage pulses used as signals in today's computers, they
use light pulses, and processors convert binary code into light pulses using lasers. One of the major
limiting factors of this technology is that the optical fibres on a chip are wider than electrical
traces.

Quantum Computing - Quantum computing is an even more radical form of SIMD parallel
processing. A quantum computer contains a series of qubits, which essentially are zero and one at
the same time. The qubit is based on the fundamental ambiguity inherent in quantum mechanics.
In a quantum computer, the qubits are represented by a quantum property of particles - for
example, the spin state of individual electrons. There are a number of physical objects that can be used
as a qubit: a single photon, a nucleus, an electron, etc. When the qubits are in an "entangled" state,
each one is simultaneously in both states. In a process called "quantum decoherence" the
ambiguity of each qubit is resolved, leaving an unambiguous sequence of ones and zeroes. If the
quantum computer is set up in the right way, that decohered sequence will represent the solution
to a problem. Essentially, only the correct sequence survives the process of decoherence. In
quantum mechanics, the state of a qubit is a superposition (weighted sum) of all its possible
states. Consider two qubits: there are four combinations of basis states - 00, 01, 10 and 11 - and the
state of the qubits is a superposition of these four states. In other words, N qubits are equivalent to 2^N
bits in a classical computer. As in the case of the DNA computer described in the previous point,
a key to successful quantum computing is a careful statement of the problem, including a precise
way to test possible answers. The quantum computer effectively tests every possible combination
of values for the qubits, so a quantum computer with one thousand qubits would test 2^1000 combinations.
D-Wave is the main company in this field; it produced the first commercially available quantum
computer in 2011. Quantum computers cannot replace classical computers: they
reduce the number of steps considerably in a complex operation, but they do not reduce the execution
time of a single step. Therefore, for simple tasks like playing a video or browsing the internet,
classical computers are better than quantum computers.
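
The exponential state space mentioned above is easy to see in a tiny state-vector view. The sketch below is illustrative only, written in plain Python without any quantum library: it builds an equal superposition of N qubits and samples one measurement outcome. The length of the amplitude list, 2^N, is exactly the quantity that grows out of reach of classical simulation as N increases.

    import random

    # Illustrative state-vector view of N qubits: 2**N complex amplitudes.
    N = 3
    dim = 2 ** N

    # Equal superposition over all basis states: every basis state has
    # amplitude 1 / sqrt(2**N).
    amplitudes = [1 / dim ** 0.5] * dim
    print(f"{N} qubits -> {dim} amplitudes")

    # Measurement collapses the superposition; outcome k is observed with
    # probability |amplitude[k]|**2.
    probs = [abs(a) ** 2 for a in amplitudes]
    outcome = random.choices(range(dim), weights=probs, k=1)[0]
    print("measured basis state:", format(outcome, f"0{N}b"))
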

Limits of Computation
Energy Requirement - The power required per MIPS (million instructions per second) has been
falling. However, we also know that the number of MIPS in computing devices has been growing
exponentially. The degree to which improvements in power usage have kept pace with processor
speed depends on the extent to which we use parallel processing. A larger number of less-powerful
computers can inherently run cooler because the computation is spread out over a larger area.
Processor speed is related to voltage, and the power required is proportional to the square of the
voltage, so running a processor at a slower speed significantly reduces power consumption.
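
A small worked example makes this concrete. It uses the standard CMOS dynamic-power approximation P ~ C * V^2 * f with assumed, illustrative values; the idea is that spreading the same instruction rate over slower cores, which can also run at a lower supply voltage, reduces total power.

    # Dynamic power of CMOS logic scales roughly as P ~ C * V^2 * f.
    # All values below are assumed and purely illustrative.
    C = 1e-9          # switched capacitance per core, farads (assumed)

    def power(voltage, frequency):
        return C * voltage ** 2 * frequency

    # One fast core vs. two cores at half the clock (and a lower voltage,
    # since slower circuits tolerate a reduced supply voltage).
    fast = power(voltage=1.2, frequency=3e9)
    slow_pair = 2 * power(voltage=0.9, frequency=1.5e9)

    print(f"single fast core : {fast:.2f} W")
    print(f"two slower cores : {slow_pair:.2f} W  (same total instruction rate)")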

Reversible Computing - Ultimately, organizing computation with massive parallel processing, as
is done in the human brain, will not by itself be sufficient to keep energy levels and the resulting
thermal dissipation at reasonable levels. The current computer paradigm relies on what is known
as irreversible computing, meaning that we are unable in principle to run software programs
backward. At each step in the progression of a program, the input data is discarded (erased)
and the results of the computation pass to the next step.
intermediate results, as that would use up large amounts of memory unnecessarily. This selective
erasure of input information is particularly true for pattern-recognition systems. Vision systems,
for example, whether human or machine, receive very high rates of input (from the eyes or visual
sensors) yet produce relatively compact outputs (such as identification of recognized patterns).
This act of erasing data generates heat and therefore requires energy. When a bit of information is
erased, that information has to go somewhere. According to the laws of thermodynamics, the
erased bit is essentially released into the surrounding environment, thereby increasing its entropy,
which can be viewed as a measure of information (including apparently disordered information)
in an environment. This results in a higher temperature for the environment (because temperature
is a measure of entropy).
Landauer's principle asserts that there is a minimum possible amount of energy required to erase
one bit of information, known as the Landauer limit:
E = kT ln 2 ≈ 2.75 zJ ≈ 0.0172 eV (at room temperature)
There is ongoing research in this field trying to make computation a reversible process so that
it becomes more energy efficient.
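
For concreteness, the sketch below evaluates the Landauer limit at an assumed room temperature of 300 K and scales it up to erasing one gigabyte, which shows how far today's hardware sits above this theoretical floor.

    import math

    # Landauer limit: minimum energy to erase one bit, E = k * T * ln(2).
    k = 1.380649e-23        # Boltzmann constant, J/K
    T = 300.0               # assumed room temperature, kelvin
    # (the ~2.75 zJ figure quoted above corresponds to a slightly lower
    #  assumed temperature; the value scales linearly with T)

    e_bit = k * T * math.log(2)
    e_gigabyte = e_bit * 8e9          # 8e9 bits in a gigabyte

    print(f"per bit      : {e_bit:.3e} J  (~{e_bit / 1e-21:.2f} zJ)")
    print(f"per gigabyte : {e_gigabyte:.3e} J")
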
Memory and Computational Efficiency - With the limits of matter and energy to perform
computation in mind, two useful metrics are the memory efficiency and the computational efficiency
of an object. Our brains have evolved significantly in their memory and computational efficiency
compared with pre-biological objects. Matching their memory and computational efficiency is going
to be a difficult task.

Lee, Chin-Fa, et al. "Hybrid organic-inorganic rotaxanes and molecular shuttles." Nature 458.7236 (2009): 314-318.
Chang, Chia-Shou, et al. "Self-assembly of microchips on substrates." Electronic Components and Technology Conference, 2006. Proceedings. 56th. IEEE, 2006.
