Intelligence is the ability to comprehend, understand, and profit from experience. Humans are
the most intelligent organisms on this planet. The human brain is impressive in many ways, but
it has certain limitations. The brain's parallelism (one hundred trillion interneuronal connections
working simultaneously) lets it quickly recognize subtle patterns. But because neural
transactions are slow compared to electronic circuits, our thinking process is slow. This
limits our ability to process information relative to the exponential growth of human
knowledge.
Machines were invented by man to assist him in performing human tasks. As the years passed,
newer and more capable machines were invented. One of them was the robot, a
mechanical machine controlled by a computer program or an electronic circuit. Robotics
developed, and autonomous robots followed. In 1955, John McCarthy coined the term
artificial intelligence (AI). The main aims of AI are reasoning, knowledge, planning,
learning, natural language processing, perception, and the ability to move and manipulate
objects.
Now, what is the singularity? It can be described as a future era in which the
pace of technological change will be so rapid, and its impact so deep, that human life will be
irreversibly transformed. Researchers today are, with some success, building machines that
are more intelligent and better at solving real-world problems. Robotics departments are
trying to produce robots that understand their environment better and act according to the
situation. Great strides have been made in artificial balancing for these robots,
though none yet matches the human balance system. Above all, artificial
intelligence is growing day by day and is coursing through the veins of embodied science.
But we are still a very long way from understanding how consciousness arises in the human
brain. We are even a long way from the much simpler goal of creating autonomous, self-organizing, and perhaps even self-replicating machines.
This graph is from Ray Kurzweil's work on the nearing technological singularity. Many significant
technological and biological developments are plotted on it. Essentially, it shows how
rapidly things are changing now: in the early years of life, things evolved slowly; now they
change quickly. The graph is exponential, which suggests that more developments will
occur in the next two decades than in the past two.
1. Physics and Chemistry - The origin of life can be traced back to a state that represents information
in its basic structures: patterns of matter and energy.
2. Biology and DNA - Carbon-based compounds became more and more intricate until complex
aggregations of molecules formed self-replicating mechanisms, and life originated. Molecules
of DNA were used to store information.
3. Brains - DNA-guided evolution produced organisms that could detect information with their
own sensory organs and process and store that information in their own brains and nervous
systems.
4. Technology- Humans started creating technology to ease their work. This started out with
simple mechanisms and developed into automated machines.
5. Merging of technology with human intelligence - the merger of vast human knowledge with the
vastly greater capacity, speed, and knowledge-sharing ability of our technology.
6. The universe wakes up.
SINGULARITY SCENARIO
According to Vernor Vinge, the Singularity is expected to come about through a combination of the following:
3.1 AI scenario: This involves creating superhuman artificial intelligence in computers, where
databases and computers become effective enough to be considered a superhuman
being. AI research is highly technical and specialized, and is deeply divided into subfields
that often fail to communicate with each other. The main aims of AI researchers are reasoning,
knowledge, planning, learning, natural language processing, perception, and the ability to move
and manipulate objects.
Moore's law - This is the observation that the number of transistors in an integrated circuit doubles
roughly every two years; in other words, the speed of ICs keeps increasing year after year.
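The doubling rule above is easy to turn into arithmetic. The sketch below is purely illustrative; the starting count and years are hypothetical round numbers, not historical data.

```python
def transistors(start_count, start_year, year, doubling_period=2):
    """Projected transistor count, assuming a doubling every two years."""
    return start_count * 2 ** ((year - start_year) / doubling_period)

# e.g. a hypothetical chip with 1 million transistors in 1990,
# projected 20 years ahead: ten doublings, i.e. 1024x growth
print(int(transistors(1_000_000, 1990, 2010)))  # 1024000000
```

Twenty years of doubling every two years gives a factor of 2^10 = 1024, which is the kind of exponential growth the graph in the previous section depicts.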
AI - SEED AI - SINGULARITY
Software capable of improving itself has been a dream of computer scientists since the inception
of the field. From the early days of computer science, visionaries in the field anticipated the
creation of self-improving intelligent systems, frequently as an easier pathway to true
artificial intelligence. As early as the 1950s, Alan Turing wrote: "Instead of trying to produce a
programme to simulate the adult mind, why not rather try to produce one which simulates the
child's? If this were then subjected to an appropriate course of education one would obtain the
adult brain."
Let an ultra-intelligent machine be defined as a machine that can far surpass all the intellectual
activities of any man, however clever. Since the design of machines is one of these intellectual
activities, an ultra-intelligent machine could design even better machines; there would then
unquestionably be an "intelligence explosion," and the intelligence of man would be left far
behind. Thus the first ultra-intelligent machine is the last invention that man ever needs to make.
Once a program with a genuine capacity for self-improvement has been devised, a rapid
evolutionary process will begin. As the machine improves both itself and its model of itself,
we begin to see phenomena associated with the terms "consciousness," "intuition," and
"intelligence" itself.
Self-improving software can be classified by the degree of self-modification it entails. In general
we distinguish three levels: self-modification, self-improvement (weak self-improvement), and recursive self-improvement (strong self-improvement).
Self-modification does not produce improvement and is typically employed for code
obfuscation, to protect software from being reverse-engineered or to disguise self-replicating computer viruses from detection software. While a number of obfuscation
techniques are known to exist (e.g., self-modifying code, polymorphic code, metamorphic
code, diversion code), none of them are intended to modify the underlying algorithm.
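The "weak self-improvement" level can be illustrated with a toy sketch: a program that replaces one of its own components with a faster one while leaving the computed function unchanged. The function names and benchmark setup below are hypothetical, chosen only for the illustration.

```python
import timeit

def fib_recursive(n):
    # naive exponential-time Fibonacci
    return n if n < 2 else fib_recursive(n - 1) + fib_recursive(n - 2)

def fib_iterative(n):
    # linear-time Fibonacci computing the same function
    a, b = 0, 1
    for _ in range(n):
        a, b = b, a + b
    return a

def self_improve(current, candidate, test_input=20):
    """Adopt the candidate only if it matches the current implementation's
    output and runs faster on a benchmark: the program's code changes,
    but the function it computes does not (weak self-improvement)."""
    if current(test_input) != candidate(test_input):
        return current  # reject: behaviour changed
    t_cur = timeit.timeit(lambda: current(test_input), number=5)
    t_new = timeit.timeit(lambda: candidate(test_input), number=5)
    return candidate if t_new < t_cur else current

fib = self_improve(fib_recursive, fib_iterative)
print(fib(30))  # 832040, whichever implementation won the benchmark
```

Recursive self-improvement, by contrast, would require the program to generate the candidate implementations itself, rather than choosing among human-written ones.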
Digital minds would also enjoy advantages unavailable to humans:
goal coordination (AI copies can work towards a common goal without much overhead),
improved rationality (AIs are likely to be free from human cognitive biases), new sensory
modalities (native sensory hardware for source code), blending of deliberative and
automatic processes (management of computational resources over multiple tasks),
introspective perception and manipulation (the ability to analyze low-level hardware, e.g. individual
neurons), addition of hardware (the ability to add new memory, sensors, etc.), and advanced
communication (the ability to share underlying cognitive representations for memories and skills).
Of the three GNR revolutions (genetics, nanotechnology, and robotics), Ray Kurzweil believes that the
most powerful impending one is the 'R' revolution. He says: "Human-level robots with their
intelligence derived from our own but redesigned to far exceed human capabilities represent
the most significant transformation, because intelligence is the most powerful 'force' in the
universe. Intelligence, if sufficiently advanced, is, well, smart enough to anticipate and
overcome any obstacles that stand in its path."
Capacitance relates charge and voltage (dq = C dv), and inductance relates flux and current
(dφ = L di). However, there was a missing link between flux and charge, which the scientist
Leon Chua called memristance.
In the linear case, memristance is constant and the element simply acts like a resistance. However,
if the φ-q relation is nonlinear, the element is referred to as a memristor, whose memristance
can be charge-controlled:
Eq. 1: M(q) = dφ/dq
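The relation M(q) = dφ/dq can be checked numerically. The flux-charge curve below is a hypothetical nonlinear example chosen for illustration, not a model of any physical device.

```python
def phi(q, a=1.0, b=0.5):
    # hypothetical nonlinear flux-charge curve: phi(q) = a*q + b*q**3
    return a * q + b * q ** 3

def memristance(q, a=1.0, b=0.5):
    # Eq. 1 applied analytically: M(q) = dphi/dq = a + 3*b*q**2
    return a + 3.0 * b * q ** 2

# finite-difference check that M(q) really is the slope of the phi-q curve
q, h = 2.0, 1e-6
numeric = (phi(q + h) - phi(q - h)) / (2 * h)
print(abs(numeric - memristance(q)) < 1e-6)  # True
```

Because M depends on q, the accumulated charge, the element "remembers" how much current has passed through it, which is the behaviour described in the next paragraph.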
Prototype memristor devices can be scaled down to 10 nm or below, and memristor
memories can achieve an integration density of 1,000 Gbits/cm³, a few times higher than today's
advanced flash memory technologies. In addition, the non-volatile nature of memristor memory
makes it an attractive candidate for the next generation of memory technology. The switching
power consumption of a memristor can be 20 times smaller than that of flash. Because memristor
memories are non-volatile, computers could start instantly without rebooting. Moreover, the
memristor has unique characteristics that can be used for self-programming: it can vary its value
according to the current passing through it, and remember that value even after the current has disappeared.
NEARING SINGULARITY?
Fueled by creative imagination coupled with technological expertise, wearable robotic
applications like exoskeletons are moving out of the realm of science fiction and into the real
world. Military applications can turn ordinary people into super-soldiers with the ability to carry
far heavier loads faster, farther, and for longer periods of time than is possible for humans alone.
Exoskeletons can protect wearers from enemy fire and chemical attack. By increasing speed,
strength, and protection, these wearable robots can help rescue workers dig people out from
under rubble after earthquakes more effectively, or carry them from burning buildings while
protecting the rescuers from falling debris and collapsing structures. [3]
ExoHiker
A recent force driving exoskeleton development has been a U.S. Defense Advanced Research
Projects Agency (DARPA) program known as Exoskeleton for Human Performance Augmentation
(EHPA). One example is its ExoHiker, which weighs 31 pounds including the power unit, batteries,
and onboard computer, and operates with virtually imperceptible noise. With lithium-polymer
batteries, the device can travel 42 miles per pound of battery at a speed of 2.5 miles per hour,
and with a small pack-mounted solar panel its mission time is effectively unlimited. It enables
wearers to carry 150 pounds without feeling the load on their shoulders, and features retractable
legs, allowing unfettered movement while using the device.
Japan's HAL-5
A research team led by a professor in the Department of Intelligent Interaction Technologies has
developed the Robot Suit Hybrid Assistive Limb (HAL) exoskeleton for applications in physical
training support, activities of daily living, heavy-labor support for workers, and rescue support
for emergency disaster personnel. HAL can multiply a person's strength by a factor of two or more.
The suit detects faint bio-signals on the surface of the skin when the human brain tries to move
a limb. When the robot suit detects the signal, it helps the user move, and this
information is then relayed back to the brain.
MIT Exoskeleton
The Massachusetts Institute of Technology (MIT) Media Lab Biomechatronics Group has
developed an exoskeleton that can support an 80-pound load while requiring only two
watts of electrical power during loaded walking. The quasi-passive design does not use any
actuators for adding power at the joints. Instead the design relies completely on the controlled
release of energy stored in springs during the (negative power) phases of the walking gait. The
quasi-passive elements in the exoskeleton were chosen based on analysis of the kinetics and
kinematics of human walking.
Big Dog
Big Dog is a dynamically stable robot funded by DARPA in the hopes that it will be able to serve
as a robotic pack mule to accompany soldiers in terrain too rough for conventional vehicles.
Instead of wheels or treads, Big Dog uses four legs for movement, allowing it to move across
surfaces that would defeat wheels. The legs contain a variety of sensors including joint position
and ground contact. Its walking pattern is controlled with four low-friction hydraulic cylinder
actuators that power the joints.
CONCLUSION
When greater-than-human intelligence drives progress, that progress will be much more rapid. In
fact, there seems no reason why progress itself would not involve the creation of still more
intelligent entities on a still shorter timescale. The best analogy is with our evolutionary past.
Animals can adapt to problems and make inventions, but often no faster than natural selection
can do its work. We humans have the ability to internalize the world and run "what ifs" in our
heads; we can solve many problems thousands of times faster than natural selection.
Portable computing devices that can store trillions of words and execute billions of
instructions already exist.
Similarly, who knows whether in the next 50 years an intelligence superior to human intelligence
will come into existence, one that can even question the very existence of humans on this planet.
Backing Up - If all of our functions are controlled by the brain, we could back up our brains every
day to computers or machines that could simulate brain function. If we backed ourselves up every
morning, it would not matter if we died later that day. In other words, humans could
become immortal. A TIME magazine cover from February 2013 illustrated this possibility.
Leaving the human body - This is another possibility of the technological singularity. If the body
becomes unsuitable for life, for example because of a deadly disease, one could leave the
human body and continue living on some other substrate. This substrate could be a machine, or
even a new human body grown from one's own DNA.
Economic Collapse - Machines would replace humans in jobs, thereby creating unemployment.
Higher rates of production could also result in economic collapse.
Moving away from nature - When we live in a global society where everything is mass-produced
by robots, our manufactured civilization may sever our last connection to the natural world, and
we may lose the very last bit of respect for Mother Nature.
Matrioshka Brains - A Matrioshka brain is a hypothetical megastructure of immense computational
capacity. Based on the Dyson sphere, the concept derives its name from the Russian matrioshka doll and
is an example of a planet-sized, solar-powered computer capturing the entire energy output of a star. To
form a Matrioshka brain, all the planets of the solar system are dismantled and a vast computational device,
inhabited by uploaded or virtual minds inconceivably more advanced and complex than us, is created.
The idea is that eventually, one way or another, all matter in the universe will be smart: all
dust will be smart dust, and all resources will be utilized to their optimum computing potential.
There would be nothing left but Matrioshka brains and/or computronium.
Nanotubes - Carbon nanotubes are allotropes of carbon with a cylindrical structure. They use
molecules organized in three dimensions to store memory bits and to act as logic gates, and they
are the most likely technology to usher in the era of three-dimensional molecular computing. The
chip-design company Nantero's nanotube memory provides random access as well as non-volatility
(data is retained when the power is off), meaning that it could potentially replace all of the primary
forms of memory: RAM, flash, and disk. It is also ultra-fast compared to the conventional memories in use today.
Nantero is producing RAMs named NRAMs (Nano RAMs) using this carbon nanotube
technology. The chips based on this super-fast and dense technology can be used in a wide array
of markets such as mobile computing, wearables, consumer electronics, space and military
applications, enterprise systems, automobiles, the Internet of Things, and industrial markets. In
the future, Nantero expects to be able to store terabits of data on a single memory chip, enabling
that chip to store hundreds of movies on a mobile device, or millions of songs.
Computing with Molecules - In addition to nanotubes, major progress has been made in recent
years in computing with just one or a few molecules. The idea of computing with molecules was
first suggested in the early 1970s by IBM's Avi Aviram and Northwestern University's Mark A.
Ratner. At that time we did not have the enabling technologies; concurrent
advances in electronics, physics, chemistry, and even the reverse engineering of biological
processes were required for the idea to gain traction. One type of molecule that researchers have
found to have desirable properties for computing is called a "rotaxane," which can switch states
by changing the energy level of a ring-like structure contained within the molecule. Rotaxane
memory and electronic switching devices have been demonstrated, and they show the potential
of storing one hundred gigabits (10^11 bits) per square inch. The potential would be even greater
if organized in three dimensions.
Emulating Biology - The idea of building electronic or mechanical systems that are self-replicating
and self-organizing is inspired by biology, which relies on these properties. There are
self-replicating proteins, such as prions, which can be used to construct nanowires.
DNA Computing - The term refers to computation using DNA, not computing on DNA. The
field was initially developed by Leonard Adleman of the University of Southern California in
1994. DNA is nature's own nanoengineered computer, and its ability to store information and
conduct logical manipulations at the molecular level has already been exploited in specialized
"DNA computers." Instead of using electrical signals to perform logical operations, these DNA
logic gates rely on DNA code. They detect fragments of genetic material as input. Each such
strand is replicated trillions of times using a process called the polymerase chain reaction (PCR).
These pools of DNA are then put into a test tube. Because DNA has an affinity to link strands
together, long strands form automatically, with sequences of the strands representing the different
symbols, each of them a possible solution to the problem. Since there will be many trillions of
such strands, there are multiple strands for each possible answer.
The next step of the process is to test all of the strands simultaneously. This is done using
specially designed enzymes that destroy strands that do not meet certain criteria. The enzymes
are applied to the test tube sequentially, and by designing a precise series of enzymes the
procedure will eventually obliterate all the incorrect strands, leaving only the ones with the
correct answer. There is a limitation, however, to DNA computing: each of the many trillions of
computers has to perform the same operation at the same time (although on different data), so
the device is a "single instruction, multiple data" (SIMD) architecture. A gram of DNA can hold
about 1×10^14 MB of data. With bases spaced 0.35 nm apart along the DNA, the data density is
over a million Gbits/inch, compared to about 7 Gbits/inch in a typical high-performance HDD.
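The generate-then-filter strategy described above can be sketched in software. The example below is an in-silico analogue only: a tiny Boolean-satisfiability problem whose clauses are hypothetical, where every candidate "strand" is a bit-assignment and each "enzyme" pass destroys the strands violating one clause, all at once.

```python
from itertools import product

# clauses for (x0 OR x1) AND (NOT x0 OR x2) AND (NOT x1 OR NOT x2)
clauses = [
    lambda s: s[0] or s[1],
    lambda s: (not s[0]) or s[2],
    lambda s: (not s[1]) or (not s[2]),
]

# the "test tube": all possible strands, generated massively in parallel
# in a real DNA computer
strands = set(product([False, True], repeat=3))

# apply each "enzyme" sequentially; each pass filters the whole pool at
# once, which is the SIMD character noted above
for enzyme in clauses:
    strands = {s for s in strands if enzyme(s)}

print(sorted(strands))  # only the satisfying assignments survive
```

The point of the analogy is that the per-step work is a single uniform operation applied to every strand; the power comes from the sheer number of strands, not from per-strand cleverness.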
Computing with Spin (Spintronics or Fluxtronics) - In addition to their negative electrical
charge, electrons have another property that can be exploited for memory and computation: spin.
According to quantum mechanics, electrons spin on an axis, similar to the way the Earth rotates
on its axis. This is a theoretical notion, because an electron is considered to occupy a point in
space, so it is difficult to imagine a point with no size that nonetheless spins. However, when an
electrical charge moves, it causes a magnetic field, which is real and measurable. An electron can
spin in one of two directions, described as "up" and "down," so this property can be exploited for
logic switching or to encode a bit of memory. The spin of an electron can be transported without
any loss of energy, or dissipation. Furthermore, this effect occurs at room temperature in materials
already widely used in the semiconductor industry, such as gallium arsenide. That's important
because it could enable a new generation of computing devices. The potential, then, is to achieve
the efficiencies of superconducting (that is, moving information at or close to the speed of light
without any loss of information) at room temperature. It also allows multiple properties of each
electron to be used for computing, thereby increasing the potential for memory and
computational density.
If a quantum computer is set up in the right way, the decohered sequence will represent the solution
to a problem; essentially, only the correct sequence survives the process of decoherence. In
quantum mechanics, the state of a qubit is a superposition (weighted sum) of all its possible
states. Consider two qubits: there are four combinations of states (00, 01, 10, 11), and the state
of the qubits will be a superposition of these four. In other words, N qubits can represent a
superposition of 2^N states, where N classical bits hold only one. As in the case of the DNA
computer described in the previous point, a key to successful quantum computing is a careful
statement of the problem, including a precise way to test possible answers. The quantum computer
effectively tests every possible combination of values for the qubits, so a quantum computer with
one thousand qubits would test 2^1,000 possibilities. D-Wave is the main company in this field;
it produced the first commercially available quantum computer in 2011. Quantum computers
cannot replace classical computers: they can reduce the number of steps in a complex operation
considerably, but they do not speed up the execution of a single step. Therefore, for simple tasks
like playing a video or browsing the internet, classical computers are better than quantum computers.
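The 2^N scaling above can be made concrete with a minimal state-vector sketch (a classical simulation, of course, and only for illustration): putting every qubit into an equal superposition yields one amplitude per classical bit pattern, all 2^N of them.

```python
import itertools
import math

N = 3
amp = 1 / math.sqrt(2 ** N)  # equal amplitude for each basis state

# state vector as a mapping from classical bit pattern to amplitude
state = {bits: amp for bits in itertools.product("01", repeat=N)}

print(len(state))  # 2**N = 8 basis states for 3 qubits

# a valid quantum state: the probabilities (|amplitude|^2) sum to 1
total_prob = sum(a * a for a in state.values())
print(round(total_prob, 10))  # 1.0
```

This also shows why classical simulation of quantum computers breaks down quickly: the dictionary doubles in size with every added qubit.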
Limits of Computation
Energy Requirement - The power consumed per MIPS (millions of instructions per second) has
been falling steadily. However, we also know that the number of MIPS in computing devices has
been growing exponentially. The degree to which improvements in power usage keep pace with
processor speed depends on the extent to which we use parallel processing. A larger number of
less powerful computers can inherently run cooler because the computation is spread out over a
larger area. Processor speed is related to voltage, and the power required is proportional to the
square of the voltage, so running a processor at a slower speed significantly reduces power consumption.
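The parallelism argument above can be sketched with back-of-envelope numbers. The model below is the standard first-order CMOS approximation (dynamic power proportional to f·V², usable frequency roughly proportional to V); the specific values are illustrative, not measurements.

```python
def dynamic_power(freq, voltage, c=1.0):
    # simplified CMOS dynamic-power model: P = C * f * V^2 (arbitrary units)
    return c * freq * voltage ** 2

# one core at full speed and voltage...
single = dynamic_power(freq=1.0, voltage=1.0)

# ...versus two cores at half speed and half voltage, delivering the
# same aggregate throughput
parallel = 2 * dynamic_power(freq=0.5, voltage=0.5)

print(single, parallel)  # 1.0 0.25: same work, a quarter of the power
```

Under these assumptions, spreading the same workload across slower cores cuts power by 4x, which is why parallelism lets large aggregates of processors run cooler.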
Conventional computation is irreversible: it destroys information as it runs, and that lost
information must ultimately be dissipated as heat into the environment, raising the environment's
entropy and hence its temperature.
Landauer's principle asserts that there is a minimum possible amount of energy required to erase
one bit of information, known as the Landauer limit, given by
E = kT ln 2 ≈ 2.75 zJ ≈ 0.0172 eV
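The quoted figures can be checked directly. They correspond to a temperature of roughly 288 K (about 15 °C); at higher room temperatures the limit is proportionally larger, since E scales linearly with T.

```python
import math

k = 1.380649e-23          # Boltzmann constant, J/K
T = 288.15                # ~15 degrees C, the temperature the quoted figures imply
E = k * T * math.log(2)   # Landauer limit: minimum energy to erase one bit

print(round(E * 1e21, 2), "zJ")             # 2.76 zJ
print(round(E / 1.602176634e-19, 4), "eV")  # 0.0172 eV
```

Even at this tiny scale the limit matters: a processor erasing 10^18 bits per second would dissipate on the order of milliwatts from erasure alone, which motivates the reversible-computing research mentioned next.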
Ongoing research in this field is trying to make computation a reversible process, so that
it becomes more energy-efficient.
Memory and Computational Efficiency - With the limits of matter and energy to perform
computation in mind, two useful metrics are the memory efficiency and the computational
efficiency of an object. Our brains have evolved significant memory and computational efficiency
compared with pre-biological objects; matching that efficiency will be a difficult task.