Technological Singularity
Seminar Report
Submitted By
Ansil S Shajil
B120205EC
In Partial Fulfillment of the Requirements
for the Award of the Degree of Bachelor of Technology
CERTIFICATE
This is to certify that the report titled Technological Singularity is a
bona fide record of the seminar presentation made by Ansil S Shajil (Roll
No. B120205EC), under my supervision and guidance, in partial fulfillment
of the requirements for the award of the Degree of Bachelor of Technology in
Electronics and Communication Engineering from the National Institute of
Technology Calicut.
Dr. G. Abhilash
Associate Professor
Acknowledgement
Abstract
Contents

1 Acknowledgement
2 Introduction
5 Singularity Scenario
  5.1 AI scenario
    5.1.1 Moore's law
  5.2 IA scenario
  5.3 Biomedical Scenario
  5.4 Internet Scenario
  5.5 The Digital Gaia Scenario
6 AI Seed
  6.1 Self-modification
  6.2 Self-improvement
  6.3 Recursive Self-Improvement
9 Limits of Computation
  9.1 Energy Requirement
  9.2 Reversible Computing
  9.3 Memory and Computational Efficiency
12 Nearing singularity?
  12.1 Exo Hiker
  12.2 Japan's HAL 5
  12.3 MIT Exoskeleton
  12.4 Big Dog
14 Conclusion
15 References
2 Introduction
Intelligence is the ability to comprehend, understand, and profit from experience. Humans are the most intelligent organisms on this planet. The human brain is impressive in many ways, but it has certain limitations. The brain's parallelism (one hundred trillion interneuronal connections working simultaneously) can be used to quickly recognize subtle patterns. But because neural transactions are slow compared to electronic circuits, our thinking process is slow, which leaves our ability to process information limited compared to the exponential growth of human knowledge.
Machines were invented by man to assist in performing human tasks. As the years passed, newer and more capable machines were invented. One among them was the robot, a mechanical machine controlled by a computer program or an electronic circuit. Robotics developed, and autonomous robots followed. In 1955, John McCarthy coined the term artificial intelligence (AI). The main aims of AI are reasoning, knowledge, planning, learning, natural language processing, perception, and the ability to move and manipulate objects.
Now, what is the singularity? It can be described as a future era in which the pace of technological change will be so rapid, and its impact so deep, that human life will be irreversibly transformed. Researchers today are, with some success, building machines that are more intelligent and more responsive in solving real-world problems. Robotics departments are trying to bring out robots that understand their environment better and act according to the situation. There have been quantum leaps in artificial balancing for these robots, though nothing yet complete compared to the human balancing system. Above all, artificial intelligence is growing day by day and is coursing through the blood of embodied science. But we are still a very long way from understanding how consciousness arises in a human brain. We are even a long way from the much simpler goal of creating autonomous, self-organizing, and perhaps even self-replicating machines.
This graph, taken from Ray Kurzweil's The Singularity Is Near, shows many significant technological and biological developments. It essentially shows how rapidly things are changing now. In the early years of life, things evolved slowly; the curve is exponential. It suggests that more developments will occur in the next two decades than happened in the past two decades.
1. Physics and chemistry- Information is represented in atomic structures.
2. Biology and DNA- Organisms evolved, with information represented in DNA.
3. Brains- Organisms evolved the ability to gather information with their own sensory organs and to process and store that information in their own brains and nervous systems.
4. Technology- Humans started creating technology to ease their work. This started out with simple mechanisms and developed into automated machines.
5. Merger of technology with human intelligence- The merger of the vast human knowledge with the vastly greater capacity, speed, and knowledge-sharing ability of our technology.
6. The universe wakes up- The universe becomes saturated with intelligent processes and knowledge.
5 Singularity Scenario
5.1 AI scenario
5.1.1 Moore's law
5.2 IA scenario
IA (intelligence amplification) is the improvement of human intelligence through technological amplification. It implies the efficient use of information technology to enhance human intelligence. The term was first put forward in the 1950s by cybernetics and early computer pioneers. IA is sometimes contrasted with AI, that is, the project of building a human-like intelligence in the form of an autonomous technological system such as a computer or robot.
5.3 Biomedical Scenario
5.4 Internet Scenario
Humanity, its networks, computers, and databases become sufficiently effective to be considered a superhuman being.
5.5 The Digital Gaia Scenario
6 AI Seed
6.1 Self-modification
6.2 Self-improvement
Self-improving software optimizes the adaptation of the product to the environment and users it is deployed with. Common examples of such software include Genetic Algorithms or Genetic Programming, which optimize software parameters with respect to some well-understood fitness function, and which perhaps work over some highly modular programming language to ensure that all modifications result in software that can be compiled and evaluated. Omohundro proposed the concept of efficiency drives in self-improving software. Because of one such drive, the balance drive, self-improving systems will tend to balance the allocation of resources between their different subsystems. While the performance of the software may improve as a result of such optimization, the overall algorithm is unlikely to be modified into a fundamentally more capable one.
6.3 Recursive Self-Improvement
Recursive Self-Improvement is the only type of improvement which has the potential to completely replace the original algorithm with a completely different approach and, more importantly, to do so multiple times. At each stage, the newly created software should be better at optimizing future versions of the software than the original algorithm was. As of the time of this writing, it is a purely theoretical concept, with no working RSI software known to exist. However, as many have predicted that such software might become a reality in the 21st century, it is important to provide some analysis of the properties such software would exhibit.
Self-modifying and self-improving software systems are already well understood and quite common. Consequently, we will concentrate exclusively on RSI systems. In practice, the performance of almost any system can be trivially improved by the allocation of additional computational resources such as more memory, higher sensor resolution, a faster processor, or greater network bandwidth for access to information. This linear scaling doesn't fit the definition of recursive improvement, as the system doesn't become better at improving itself. To fit the definition, the system would have to engineer a faster type of memory, not just purchase more memory units of the type it already has access to. In general, hardware improvements are likely to speed up the system, while software improvements (novel algorithms) are necessary for the achievement of meta-improvements. A toy model of this distinction is sketched below.
It is believed that AI systems will have a number of advantages over human programmers, making it possible for them to succeed where we have so far failed. Such advantages include: longer work spans (no breaks, sleep, vacation, etc.), omniscience (expert-level knowledge in all fields of science, absorbed knowledge of all published works), superior computational resources (brain vs. processor, human memory vs. RAM), communication speed (neurons vs. wires), increased serial depth (the ability to perform sequential operations far in excess of what a human brain can manage), duplicability (intelligent software can be instantaneously copied), editability (source code, unlike DNA, can be quickly modified), goal coordination (AI copies can work towards a common goal without much overhead), improved rationality (AIs are likely to be free from human cognitive biases), new sensory modalities (native sensory hardware for source code), blending of deliberative and automatic processes (management of computational resources over multiple tasks), introspective perception and manipulation (the ability to analyze low-level hardware, e.g., individual neurons), addition of hardware (the ability to add new memory, sensors, etc.), and advanced communication (the ability to share underlying cognitive representations for memories and skills).
The human genome contains the complete genetic information of the organism as DNA sequences stored in 23 chromosomes, structures that are organized from DNA and protein.
The Singularity will unfold through three overlapping revolutions:
1. Genetics (G)
2. Nanotechnology (N)
3. Robotics (R)
These are the primary building blocks of the impending Singularity as Ray Kurzweil sees them. He calls them the three overlapping revolutions, and he says they will characterize the first half of the twenty-first century, which we are in now. He goes on to say that these (GNR) will usher in the beginning of the Singularity. We are in the early stages of the 'G' revolution today. By understanding the information processes underlying life, we are starting to learn to reprogram our biology to achieve the virtual elimination of disease, dramatic expansion of human potential, and radical life extension.
Regarding nanotechnology, Kurzweil says the 'N' revolution will enable us to redesign and rebuild, molecule by molecule, our bodies and brains and the world with which we interact, going far beyond the limitations of biology.
Of the three (GNR), Kurzweil believes the most powerful impending revolution is the 'R' revolution. Human-level robots, with their intelligence derived from our own but redesigned to far exceed human capabilities, represent the most significant transformation, because intelligence is the most powerful force in the universe. Intelligence, if sufficiently advanced, is, well, smart enough to anticipate and overcome any obstacles that stand in its path.
8.1 Carbon Nanotubes
In the last few decades, there has been a nearly constant exponential growth in the capabilities of silicon-based microelectronics (Moore's law). But when silicon layers become as thin as five atoms, thermodynamic and quantum-mechanical effects take over. The fundamental physical limitations of silicon, which prevent current designs from functioning reliably at the nanometer scale, will be reached, while at the same time exponentially rising fabrication costs will make it prohibitive to raise integration levels. This is where the importance of nanotubes comes in. Carbon nanotubes, allotropes of carbon with a cylindrical structure, have molecules organized in three dimensions to store memory bits and to act as logic gates, and they are the most likely technology to lead in the era of three-dimensional molecular computing. The chip-design company Nantero builds memory from them that provides random access as well as non-volatility (data is retained when the power is off), meaning it could potentially replace all of the primary forms of memory: RAM, flash, and disk. These memories are ultrafast compared to conventional ones. Nantero produces RAMs named NRAMs (Nano RAMs) using this carbon nanotube technology. Chips based on this super-fast and dense technology can be used in a wide array of markets such as mobile computing, wearables, consumer electronics, space and military applications, enterprise systems, automobiles, the Internet of Things, and industrial markets. In the future, Nantero expects to be able to store terabits of data on a single memory chip, enabling that chip to store hundreds of movies, or millions of songs, on a mobile device.
8.2
8.3 Self-Assembly
Self-assembly of nanoscale circuits is another key enabling technique for effective nanoelectronics. Self-assembly allows improperly formed components to be discarded automatically and makes it possible for the potentially trillions of circuit components to organize themselves, rather than be painstakingly assembled in a top-down process. Conventional assembly technology has been adopted to pick and place devices by picking microchips from a wafer and placing them on the substrate, but the technique runs into speed and cost constraints. In addition, when chip sizes are at the microscale, there is a serious sticking problem due to electrostatic forces, van der Waals forces, and surface forces. It is also important that nanocircuits be self-configuring. The large number of circuit components and their inherent fragility (due to their small size) make it inevitable that some portions of a circuit will not function correctly. It will not be economically feasible to discard an entire circuit simply because a small number of transistors, out of a trillion, are non-functioning.
8.4 Emulating Biology
8.5 DNA Computing
The term refers to computation using DNA, not computing on DNA. The field was initially developed by Leonard Adleman of the University of Southern California in 1994. DNA is nature's own nanoengineered computer, and its ability to store information and conduct logical manipulations at the molecular level has already been exploited in specialized DNA computers. Instead of using electrical signals to perform logical operations, these DNA logic gates rely on DNA code. They detect fragments of genetic material as input. Each such strand is replicated trillions of times using a process called polymerase chain reaction (PCR). These pools of DNA are then put into a test tube. Because DNA has an affinity for linking strands together, long strands form automatically, with sequences of the strands representing the possible answers to the problem. A sketch of this generate-and-filter idea follows.
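Adleman's 1994 experiment encoded the directed Hamiltonian path problem in DNA. The generate-and-filter logic can be sketched in ordinary Python; the graph below is a made-up example, with exhaustive enumeration standing in for massively parallel strand hybridization and the filter standing in for the PCR and gel-electrophoresis steps.

    import itertools

    # A small made-up directed graph: vertices play the role of DNA "city"
    # strands, edges the role of linker strands, as in Adleman's experiment.
    edges = {(0, 1), (0, 2), (1, 2), (2, 3), (1, 3)}
    n = 4   # the path must visit all vertices, starting at 0, ending at 3

    # "Hybridization": in the test tube every possible strand forms at once;
    # in code we enumerate candidate vertex orderings instead.
    candidates = itertools.permutations(range(n))

    # "Filtering" (PCR and gel electrophoresis in the lab): keep only the
    # orderings that start and end correctly and whose steps are real edges.
    solutions = [p for p in candidates
                 if p[0] == 0 and p[-1] == n - 1
                 and all((a, b) in edges for a, b in zip(p, p[1:]))]
    print(solutions)    # [(0, 1, 2, 3)] for this particular graph

The point of the biological version is that the generation step happens massively in parallel: trillions of strands assemble simultaneously, and the chemistry does the filtering.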
8.6 Spintronics
In addition to their negative electrical charge, electrons have another property that can be exploited for memory and computation: spin. According to quantum mechanics, electrons spin on an axis, similar to the way the Earth rotates on its axis. This is a theoretical notion, because an electron is considered to occupy a point in space, and it is difficult to imagine a point with no size that nonetheless spins. However, when an electrical charge moves, it causes a magnetic field, which is real and measurable. An electron can spin in one of two directions, described as 'up' and 'down', so this property can be exploited for logic switching or to encode a bit of memory. The spin of an electron can be transported without any loss of energy, or dissipation. Furthermore, this effect occurs at room temperature in materials already widely used in the semiconductor industry, such as gallium arsenide. That is important because it could enable a new generation of computing devices. The potential, then, is to achieve the efficiencies of superconducting (that is, moving information at or close to the speed of light without any loss of information) at room temperature. It also allows multiple properties of each electron to be used for computing, thereby increasing the potential for memory and computational density.
8.7 Optical Computing
Conventional computers use transistors, which rely on the motion of electrons. But as the size of transistors decreases (to integrate more transistors in an IC), quantum-mechanical effects come into play. So instead of electrons we can use photons. Photons travel roughly 1,000 times faster than electrons move through a semiconductor, and since light does not encounter electrical resistance, there is little power dissipation and hence less heating. This is another approach to SIMD (single instruction, multiple data) computing: multiple beams of laser light are used, with information encoded in each stream of photons. Optical components can then be used to perform logical and arithmetic functions on the encoded information streams. SIMD technologies such as DNA computers and optical computers will have important specialized roles to play in the future of computation. The replication of certain aspects of the functionality of the human brain, such as processing sensory data, can use SIMD architectures. For other brain regions, such as those dealing with learning and reasoning, general-purpose computing with its multiple instruction, multiple data (MIMD) architectures will be required. For high-performance MIMD computing, we will need to apply the three-dimensional molecular-computing paradigms described above. Optical fibres will be used in these computers. Instead of the voltage packets used as signals in our computers, they use light pulses; processors convert binary code to light pulses using lasers. The figure below shows a simple building block of an optical computer. It is like a transistor that emits light when it is on and does not emit light when it is off. HP has introduced something similar, the HP silicon microring resonator, which absorbs light when a beam of light passes near the ring. The absorption can be turned off by applying a small voltage. The smallest ring that can be made is about 3 microns, and ring size increases with speed. A solution to this is to use nanoscience. A metal nanoparticle acts like an antenna: it resonates with a specific frequency of light, i.e., the electrons in the metal oscillate in resonance with the frequency of the light. This can be used to control light and channel it well below the diffraction limit. If we put nanoparticles in a row and shine light at one end, photons travel along this path, much faster than an electron diffusing through a semiconductor. A small sketch of the SIMD model these designs target appears below.
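The SIMD model referenced here, one instruction applied simultaneously to many encoded data streams, is the same model that vectorized array libraries expose. A minimal NumPy sketch, with stream counts chosen arbitrarily:

    import numpy as np

    # SIMD style: a single instruction is applied to many data elements at
    # once, analogous to one optical component acting on many photon
    # streams in parallel.
    streams = np.random.rand(8, 100_000)   # 8 encoded information streams
    result = np.sqrt(streams * 2.0 + 1.0)  # one operation, all streams at once

    # MIMD style, by contrast, runs different instructions on different
    # data, as general-purpose multicore processors do.
    mixed = [np.sin(streams[0]), np.log1p(streams[1]), streams[2] ** 2]

This is why sensory-style workloads (the same filter over every pixel) map well onto SIMD hardware, while learning and reasoning need the flexibility of MIMD.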
8.8 Quantum Computing
Quantum computing is an even more radical form of SIMD parallel processing. A quantum computer contains a series of qubits, which essentially are zero and one at the same time. The qubit is based on the fundamental ambiguity inherent in quantum mechanics. A number of physical objects can be used as a qubit: a single photon, a nucleus, an electron, and so on. In a quantum computer, the qubits are represented by a quantum property of particles, for example, the spin state of individual electrons. When the qubits are in an entangled state, each one is simultaneously in both states. In a process called quantum decoherence, the ambiguity of each qubit is resolved, leaving an unambiguous sequence of ones and zeroes. If the quantum computer is set up in the right way, that decohered sequence will represent the solution to the problem. Essentially, only the correct sequence survives the process of decoherence. In quantum mechanics, the state of a qubit is a superposition (weighted sum) of all the possible states. Consider two qubits: there are four combinations of states (00, 01, 10, 11), and the state of the qubits is a superposition of these four. In other words, N qubits are equivalent to 2^N bits in a classical computer. As in the case of the DNA computer described above, a key to successful quantum computing is a careful statement of the problem, including a precise way to test possible answers. The quantum computer effectively tests every possible combination of values for the qubits, so a quantum computer with one thousand qubits would test 2^1000 possible solutions simultaneously. D-Wave is the main company in this field; it produced the first commercially available quantum computer in 2011. Quantum computers cannot replace classical computers. They reduce the number of steps considerably in a complex operation, but they do not reduce the execution time of a single step. Therefore, for simple tasks like playing a video or browsing the internet, classical computers are better than quantum computers. There is a joint initiative by Google and NASA called the Quantum Artificial Intelligence Lab (QuAIL) which researches how quantum computing can solve complex computational problems. The superposition claim can be made concrete with the small simulation below.
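The claim that N qubits hold a superposition over 2^N basis states can be illustrated with a tiny classical state-vector simulation. It is, of course, only a simulation, and its memory cost doubles with every added qubit, which is exactly why large quantum computers cannot be emulated directly.

    import numpy as np

    # A Hadamard gate puts a single qubit into an equal superposition.
    H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)

    n = 3                         # number of qubits
    state = np.zeros(2 ** n)      # 2^N amplitudes for N qubits
    state[0] = 1.0                # start in the basis state |000>

    # Apply H to every qubit: the full-register operator is the Kronecker
    # product of single-qubit gates, so it doubles in size per qubit.
    U = H
    for _ in range(n - 1):
        U = np.kron(U, H)
    state = U @ state

    print(state)                  # 8 equal amplitudes, each 1/sqrt(8)
    print(np.abs(state) ** 2)     # uniform probabilities over the 2^3 states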
9 Limits of Computation
9.1 Energy Requirement
From the graph below we can see that the power per MIPS (million instructions per second) is falling. However, we also know that the number of MIPS in computing devices has been growing exponentially. The degree to which improvements in power usage keep pace with processor speed depends on the extent to which we use parallel processing. A larger number of less-powerful computers can inherently run cooler because the computation is spread out over a larger area. Processor speed is related to voltage, and the power required is proportional to the square of the voltage, so running a processor at a slower speed significantly reduces power consumption; a first-order model is sketched below.
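The voltage-squared statement corresponds to the standard first-order model of dynamic (switching) power in CMOS; the following derivation is that textbook model, not a formula from the report. With C the switched capacitance, V the supply voltage, and f the clock frequency,

\[
P = C V^{2} f .
\]

Since achievable clock frequency scales roughly linearly with supply voltage (f proportional to V), running a processor k times slower allows V to be reduced by about the same factor:

\[
P' = C \left(\frac{V}{k}\right)^{2} \frac{f}{k} = \frac{P}{k^{3}},
\]

so halving the clock can cut switching power by roughly a factor of eight. This cubic payoff is why many slower processors running in parallel can do the same work far cooler than one fast processor.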
9.2 Reversible Computing
9.3 Memory and Computational Efficiency
With the limits of matter and energy to perform computation in mind, two useful metrics are the memory efficiency and computational efficiency of an object. Our brains have evolved significantly in their memory and computational efficiency from pre-biology objects. Matching the brain's memory and computational efficiency is going to be a difficult task.
10
10.1 Solar Cells
Solar cells are well known for their use as power sources for satellites, in environmentalist green-energy campaigns, and in pocket calculators. In robotics, solar cells are used mainly in BEAM robots (Biology, Electronics, Aesthetics and Mechanics). Commonly these consist of a solar cell that charges a capacitor, and a small circuit that allows the capacitor to charge up to a set voltage level and then discharge through the motor(s), making the robot move. For a larger robot, solar cells can be used to charge its batteries. Such robots have to be designed around energy efficiency, as they have little energy to spare. A toy simulation of this charge-and-dump cycle is given below.
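The charge-then-dump cycle described above can be simulated in a few lines of Python; all component values here are invented for illustration, not taken from any real BEAM circuit.

    # Toy simulation of a BEAM-style solar engine: a solar cell trickle-
    # charges a capacitor; when a trigger voltage is reached, the capacitor
    # dumps its energy through the motor. Component values are guesses.
    V_SUPPLY = 5.0        # open-circuit solar-cell voltage (V)
    R_CHARGE = 10_000.0   # effective charging resistance (ohms)
    C = 0.1               # storage capacitor (farads)
    V_TRIGGER = 4.0       # voltage at which the circuit fires the motor
    V_RESET = 1.0         # capacitor voltage left after the motor pulse

    v, t, dt = 0.0, 0.0, 0.01
    pulses = 0
    while pulses < 3:
        # RC charging: dv/dt = (V_SUPPLY - v) / (R * C)
        v += (V_SUPPLY - v) / (R_CHARGE * C) * dt
        t += dt
        if v >= V_TRIGGER:
            pulses += 1
            print(f"motor pulse {pulses} at t = {t:.0f} s")
            v = V_RESET   # capacitor dumped through the motor

The slow charge and brief discharge is exactly why such robots must be designed around energy efficiency: the motor sees power only in short bursts.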
10.2
10.3 Memory Backup
The search for new nonvolatile universal memories is propelled by the need to push power-efficient nano-computing to the next level. As a potential next memory technology of choice, the recently found missing fourth circuit element, the memristor, has drawn a great deal of research interest. The basic circuit elements, resistance, capacitance, and inductance, describe the relations between the fundamental electrical quantities: voltage, current, charge, and flux. Resistance relates voltage and current (dv = R di), capacitance relates charge and voltage (dq = C dv), and inductance relates flux and current (dφ = L di). However, there was a missing link between flux and charge, which the scientist Leon Chua called memristance. In the linear case, memristance is constant and simply acts like resistance; if the φ-q relation is nonlinear, however, the element is a genuine memristor, and it can be charge-controlled. Memristance is given by

M(q) = dφ/dq

Prototyped memristor devices can be scaled down to 10 nm or below, and memristor memories can achieve an integration density of 1000 Gbits/cm³, a few times higher than today's advanced flash memory technologies. In addition, the nonvolatile nature of memristor memory makes it an attractive candidate for the next generation of memory technology. The switching power consumption of a memristor can be 20 times smaller than that of flash. Memristor memories are non-volatile, so computers could start without a boot sequence. Moreover, the memristor has unique characteristics that can be used for self-programming: its value varies according to the current passing through it, and it remembers that value even after the current has disappeared. A numerical sketch of this behavior follows.
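The charge-controlled behavior can be sketched numerically using the linear ion-drift model published by the HP Labs group that built the TiO2 device (Strukov et al., 2008); the parameter values below are illustrative assumptions rather than measured device data.

    import numpy as np

    # Linear ion-drift memristor model. The state w is the width of the
    # doped (low-resistance) region; memristance interpolates between the
    # fully doped and fully undoped resistances as w moves.
    R_ON, R_OFF = 100.0, 16_000.0   # limiting resistances (ohms), assumed
    D = 10e-9                        # device thickness (m)
    MU = 1e-14                       # dopant mobility (m^2 / (V*s))

    dt = 1e-4
    t = np.arange(0.0, 2.0, dt)
    v = np.sin(2 * np.pi * 1.0 * t)  # 1 Hz sinusoidal drive voltage

    w = 0.5 * D                      # start half doped
    currents = []
    for vk in v:
        R = R_ON * (w / D) + R_OFF * (1 - w / D)   # memristance M(w)
        i = vk / R
        w += MU * R_ON / D * i * dt   # dw/dt is proportional to current,
        w = min(max(w, 0.0), D)       # so the state tracks the charge q
        currents.append(i)

    # Plotting currents against v traces the pinched hysteresis loop that
    # is the signature of memristance: the resistance at any moment depends
    # on how much charge has already passed through the device.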
11 Augmentation
There are many people who are born disabled or become disabled through accidents. With the help of robotics we could create prostheses that resolve the problems caused by deficiencies of the human body; we could engineer our way around these limitations, making such lives easier.
Control over our bodies
If we could understand how cancer works at the molecular level, we could turn things off when they start to go wrong.
Backing up the human brain
If all of our functions are controlled by the brain, we could back up our brains every day to computers or machines that can simulate brain function. If we back ourselves up every morning, it does not matter if we die later that day; in other words, humans could become immortal. The cover of TIME magazine in February 2013, shown below, illustrated this possibility.
12 Nearing singularity?
12.1 Exo Hiker
12.2 Japan's HAL 5
12.3 MIT Exoskeleton
12.4 Big Dog
Big Dog is a dynamically stable robot funded by DARPA in the hopes that
it will be able to serve as a robotic pack mule to accompany soldiers in
terrain too rough for conventional vehicles. Instead of wheels or treads, Big
Dog uses four legs for movement, allowing it to move across surfaces that
would defeat wheels. The legs contain a variety of sensors including joint
position and ground contact. Its walking pattern is controlled with four
low-friction hydraulic cylinder actuators that power the joints.
13
Extinction
This is the most feared aspect of the technological singularity: highly intelligent machines could overthrow the human race.
Slavery
Another possibility is that humans become slaves of these machines, just as animals are slaves of humans.
War
The First and Second World Wars were fought among humans. There might be a war in the future in which humans and machines fight each other.
Economic Collapse
Machines could replace humans in jobs, thereby creating unemployment. Higher rates of production could also trigger economic collapse.
Moving away from nature
When we live in a global society where everything is mass produced by
robots, our manufactured civilization will sever the last connection to the
natural world. We will lose the very last bit of respect for Mother Nature.
Matrioshka Brains
A Matrioshka brain is a hypothetical megastructure of immense
computational capacity. Based on the Dyson sphere, the concept derives its
name from the Russian Matrioshka doll and is an example of a planet-size
solar-powered computer, capturing the entire energy output of a star. To
form the Matrioshka brain all planets of the solar system are dismantled
and a vast computational device inhabited by uploaded or virtual minds,
inconceivably more advanced and complex than us, is created. So the idea
is that eventually, one way or another, all matter in the universe will be
smart. All dust will be smart dust, and all resources will be utilized to their
optimum computing potential. There will be nothing else left but
Matrioshka Brains and/or computronium
14 Conclusion
When greater-than-human intelligence drives progress, that progress will be much more rapid. In fact, there seems no reason why progress itself would not involve the creation of still more intelligent entities on a still shorter timescale. The best analogy is with our evolutionary past. Animals can adapt to problems and make inventions, but often no faster than natural selection can do its work. We humans have the ability to internalize the world and conduct 'what ifs' in our heads; we can solve many problems thousands of times faster than natural selection.
Smarter-than-human intelligence, faster-than-human intelligence, and self-improving intelligence are all interrelated. If you are smarter, it is easier to figure out how to build faster brains or improve your own mind. In turn, being able to reshape your own mind is not just a way of starting up the slope of recursive self-improvement; having full access to your own source code is, in itself, a kind of smartness that humans don't have. Self-improvement is far harder than optimizing code; nonetheless, a mind with the ability to rewrite its own source code can potentially make itself faster as well. And faster brains also relate to smarter minds: speeding up a whole mind doesn't make it smarter, but adding more processing power to the cognitive processes underlying intelligence is a different matter. Who would have believed, 100 years ago, that the following technological advances would be possible?
-Moving pictures of events around the world
-Instantaneous wireless global communication
-Portable computing devices that can store trillions of words and execute billions of instructions
-Humans landing on the Moon and an international manned space station
Similarly, who knows whether in the next 50 years an intelligence superior to human intelligence will come into existence, one that may even question the very existence of humans on this planet.
15 References