
A

REPORT ON
GEOPHYSICAL FIELD TRAINING
AT
NATIONAL GEOPHYSICAL RESEARCH INSTITUTE
HYDERABAD

From: 25th January 2010 to 8th February 2010

Under the guidance of:                      Submitted by:

Prof. S.S. Teotia                           Renu Yadav
Chairman                                    3rd Sem.
Department of Geophysics                    Roll No: GP-23

Department Of Geophysics
Kurukshetra University, Kurukshetra
Acknowledgement

There are some people whose continuous support and guidance make the way to one's goal much easier. I would like to express my sincere gratitude to all those people without whom my field training at India's leading research institutes, NGRI and IIG Mumbai, would not have been possible.

Firstly, I express my veneration to Prof. S.S. Teotia, Chairman, Department of Geophysics, KUK, for making the necessary arrangements for the field training and for guiding me with his experience during it.

I forward my sincere thanks to Dr. V. P. Dimri, Director, NGRI Hyderabad, and Dr. Archana Bhattacharya, Director, IIG, who allowed us to work at their institutes.

I render my deep sense of gratitude and sincere thanks to Dr. Devender Kumar and Dr. Abhay Ram Bansal, scientists at NGRI Hyderabad, and Dr. Gautam Gupta, scientist at IIG, for their meticulous guidance in completing my field training, for devoting their valuable time, and for the help they provided in bringing clarity to the topic at every stage of my work.

I would like to thank my friends, without whose co-operation it would not have been possible to come up with my field report, and of course for their wonderful support during the field training.

Last but not the least, I wish to express my deep sense of respect and gratitude towards my parents for the affection, care, support, encouragement and blessings which I continuously receive from them.


(Renu Yadav)
Contents

Chapter-1: Introduction
Chapter-2: Seismology
Chapter-3: Magnetic Prospecting
Chapter-4: Electrical Method
Chapter-5: Gravity Method
Chapter-6: Gas Hydrate
Chapter-7: Magnetotelluric Method
Chapter-8: Hydrology
Chapter-9: Paleomagnetism
Chapter-10: Tsunami
Chapter-11: Refraction Method
Chapter-12: Tomography
References
Chapter 1: Introduction
1.1 Introduction to Geophysics
The term Geophysics, meaning the physics of the Earth or the study of the physical properties of the Earth, is used comprehensively to include the different but interrelated subjects covered in the Earth Sciences, such as Meteorology, dealing with the phenomena and properties of the atmosphere; Hydrology and Oceanography, dealing with the water-covered portions; Geodesy, dealing with the shape, size and other aspects of the figure of the Earth; Geomagnetism and Geoelectricity, dealing with the Earth's magnetic and electrical phenomena; and Seismology and Volcanology, dealing with the phenomena of earthquakes and volcanoes respectively. Another subject, Tectonophysics, dealing with the physical analysis of stresses and strains in the structures of the Earth and their relationship to crustal structures, has been recognized in recent years as a branch of Geophysics.
The branch of Geophysics devoted to the exploration of mineral deposits by appropriate use of the physical properties of materials in the Earth is termed variously Applied Geophysics, Exploration Geophysics, Geophysical Exploration, or Geophysical Prospecting.
Depending on the physical properties on which they are based, the various methods in vogue in geophysical prospecting may be classified under the following heads:
1. Electrical Methods,
2. Magnetic Methods,
3. Gravity Methods,
4. Seismic Methods, and
5. Radioactivity Methods.
In addition, the various techniques of measurement in boreholes may be grouped under Well Logging Methods.
The art of mineral exploration is fairly ancient. After learning the use of metals, man began searching for mineral deposits from which he could win the ores. He was not content with taking what he discovered accidentally. The old-time prospector was not merely a man of adventure; he had good powers of observation and was intelligent enough to note the correlations between soil, vegetation, topography and rock formations favorable for the occurrence of mineral deposits. Even where surface indications were lacking, attempts have been made for a long time to gain knowledge of hidden mineral deposits by using instrumental devices.
1.2 Purpose of Geophysical Field Training
Geophysical field training is very important and essential for students of Earth Science, being a part of our degree course. In my view, the main purpose of geophysical field training is to give us good practical exposure, in the field, to the various geophysical methods of subsurface exploration that we study in our course.
Such training also develops our skills for working in a team, and the interaction with experienced scientists gives us good knowledge of the subjects.
1.3 About the National Geophysical Research Institute (NGRI), Hyderabad
National Geophysical Research Institute (N.G.R.I) was established in 1962 at
Hyderabad, with the aim to be the premier geoscientific organization in India undertaking
world class research and development in Geophysics. Over the last decade the institute
has grown into a large research organization having highly skilled technical staff. It has
built up an enviable record of scientific excellence and technical competence in all the
crucial areas of Earth sciences. This is demonstrated by the impressive output of original
publications, collaborations with major Earth Science Institutions around the world and
transfer of knowledge from basic research to its practical application.
Research and Development Programs :
The Institute undertakes basic and applied research in the field of solid earth
geophysics. Major research and development programmes of NGRI include:
• Geophysical Exploration
• Lithosphere Structure
• Natural Hazard Assessments
• Assessment and Management of Groundwater Resources
• Geophysical Instrumentation
Major Capabilities and Services
Expertise and capabilities are available in the areas of magnetotellurics, controlled-source seismics, gravity, airborne geophysics and deep resistivity sounding, to undertake geophysical surveys both on land and in water for hydrocarbon and gas hydrate exploration and for geophysical data interpretation. There are state-of-the-art laboratory facilities for geochemical, geochronological, mineral physics and high pressure and temperature studies.
National and International Affiliations
International Union of Geodesy and Geophysics (IUGG)
International Association of Seismology and Physics of the Earth's Interior (IASPEI)
International Association of Geomagnetism and Aeronomy (IAGA)
Third World Academy of Sciences (TWAS).

Chapter 2: Seismology
2.1 Earthquake
Vibrations within the Earth caused by the rupture and sudden movement of rocks
that have been strained beyond their elastic limit.
2.2 Earthquake Seismology
The study of vibrations within the Earth caused by the sudden movement along
faults or other natural processes.
The study of earthquakes is important for scientific, social, and economic reasons.
Earthquakes attest to the fact that dynamic forces are operating within the Earth. Stress
builds up through time, storing strain energy; earthquakes represent sudden release of the
strain energy.
Plate tectonic theory relies heavily on observations from earthquakes. Most
tectonic activity occurs due to interaction between plates; the distribution of earthquakes
thus dramatically outlines lithospheric plate boundaries. Locations of earthquakes in three
dimensions reveal the depths at which stresses build up as a result of plate interaction. There are only shallow earthquakes at divergent and transform plate boundaries, but earthquakes occur over a broad range from shallow to deep where plates converge. The type of earthquake faulting shows the relative motion between plates; rocks are generally subjected to normal faulting at divergent plate boundaries, strike-slip faulting at transform boundaries, and reverse faulting (with significant normal and strike-slip faulting) at convergent boundaries.
Earthquakes also provide crucial data on the deep interior of the Earth , because
seismic waves travel through the entire earth and are recorded by a world wide network
of seismometers. Interpretations of the thickness, structure and composition of the crust,
mantle, and core can be made from the types and speeds of waves that travel through
each zone.
Earthquakes are important from a human and economic point of view. In some
years, earthquakes kill thousands of people and cause damage totaling billions of dollars.
It is useful to understand how Earthquakes occur , where they are likely to occur and
when they might occur. We can minimize earthquake effects by designing buildings that
will withstand Earthquakes and by not building in areas prone to intense shaking.

2.3 Elastic Rebound Theory


When Earth material is stressed beyond its elastic limit, failure could be through
ductile flow or brittle fracture. The latter situation results in earthquakes. For earthquakes
to occur, two factors are thus necessary :
1) there must be some sort of movement that will stress the material
beyond its elastic limit.
2) And material must fail by brittle fracture.
The region of the Earth that fits the above criteria is the lithosphere. Other regions, such as the asthenosphere and outer core, behave ductilely and fluidly, respectively, when large stresses are applied over long periods of time. The lower mantle and the inner core are solid, but they are not subjected to large differential stresses. Earthquakes are, therefore, almost exclusively confined to the moving, rigid lithosphere, particularly where stresses are concentrated near the boundaries of plates.
Elastic Rebound Theory states that rock can be stressed , obeying Hooke’s law, until it
reaches its elastic limit. If the rock fails in the brittle fashion, it rebounds (snaps) into a
new position as the stored strain energy is released. The sudden release of strain energy is
an earthquake, which sends off vibrations as seismic waves.

Fig (2.1) Elastic rebound. The figure can represent either a map view of a strike-slip fault, or a cross-sectional view of a dip-slip (normal or reverse) fault. a) Sequence of rocks in undeformed state. b) Rocks initially behave elastically as stress is applied. c) Elastic limit of the rocks is reached. If failure occurs, stored energy is released as an earthquake; the rocks rebound to new positions across the fault, as seismic waves radiate from the rupture zone (earthquake focus).
Fig (2.2) Cross section of a ruptured fault, illustrating terminology used to describe the location and depth of an earthquake.

Seismic Waves
Seismic waves are mechanical vibrations that occur inside the Earth. Seismic waves are
the waves of energy caused by the sudden breaking of rock within the earth or an
explosion. They are the energy that travels through the earth and is recorded on
seismographs.
2.4 Types of Seismic Waves
There are several different kinds of seismic waves, and they all move in different
ways. The two types of waves are body waves and surface waves.
Body waves: travel through the interior of the Earth. They follow curved paths because
of the varying density and composition of the Earth’s interior.
This effect is similar to the refraction of light waves . Body waves transmit the
preliminary tremors of an earthquake but have little destructive effect. Body waves are
divided into two types: Primary (P-waves) and Secondary (S-waves).
Surface waves : are analogous to water waves and travel over the Earth’s surface.
Surface waves can only move along the surface of the planet like ripples on water. They
travel more slowly than body waves. Because of their low frequency, they are more likely
than body waves to stimulate resonance in buildings, and are therefore the most
destructive type of seismic wave. There are two types of surface waves: Rayleigh waves and Love waves.
P- waves :

P waves are longitudinal or compressional waves, which means that the ground is alternately compressed and dilated in the direction of propagation; in other words, the medium vibrates parallel to the direction in which the wave energy is traveling. These waves generally travel almost twice as fast as S waves and can travel through any type of material. As pressure waves they travel at the speed of sound. Typical speeds are 330 m/s in air, 1450 m/s in water and about 5000 m/s in granite.
A P wave travels fastest and arrives first at a detector; for this reason, these waves are called primary waves (hence the letter "P"). A P wave can travel through liquid and gas.
P-wave velocity is given by:
Vp = √[(K + 4μ/3)/ρ] ….(2.1)
Where, Vp is the velocity of the P-wave, K is the incompressibility (bulk modulus) of the material, μ is the rigidity of the material, and ρ is the density of the material.
S waves: S waves are transverse or shear waves, which means that the ground is displaced perpendicularly to the direction of propagation, alternately to one side and then the other. S waves can travel only through solids, as fluids (liquids and gases) do not support shear stresses. Their speed is about 58% of that of P waves in a given material.

An S wave is slower and arrives at the detector second; for this reason, S waves are called secondary waves (because they arrive second!). These are transverse waves: the medium vibrates perpendicularly to the direction of energy travel.
S-wave velocity is given by:

Vs = √(μ/ρ) ….(2.2)
Where, Vs is the velocity of the S-wave, μ is the rigidity and ρ is the density of the material. Because liquids respond to changes in volume but not shape, they will not transmit S waves.
Love Waves:
The first kind of surface wave is called a Love wave, named after A.E.H. Love, a British mathematician who worked out the mathematical model for this kind of wave in 1911. It is the fastest surface wave and moves the ground from side to side.
Rayleigh Waves :
The other kind of surface wave is the Rayleigh wave, named for John William Strutt,
Lord Rayleigh, who mathematically predicted the existence of this kind of wave in 1885 .
A Rayleigh wave rolls along the ground just like a wave rolls across a lake or an ocean.
Because it rolls, it moves the ground up and down , and side to side in the same direction
that the wave is moving . Most of the shaking felt from an earthquake is due to the
Rayleigh wave, which can be much larger than the other waves.
2.5 Seismographs and Seismograms :
Sensitive seismographs are the principal tool of scientists who study earthquakes.
Fundamentally, a seismograph is a simple pendulum. When the ground shakes, the base and frame of the instrument move with it, but inertia keeps the pendulum bob in place. The bob then appears to move relative to the shaking ground, and the instrument records the pendulum displacements as they change with time, tracing out a record called a seismogram. The record of an earthquake, a seismogram, as recorded by a seismometer, will be a plot of vibrations versus time. On the seismogram, time is marked at regular intervals, so that we can determine the time of arrival of the first P wave and the time of arrival of the first S wave.
One seismograph station , having three different pendulums sensitive to the north-
south , east-west, and vertical motions of the ground , will record seismograms that allow
scientists to estimate the distance, direction , Richter Magnitude, and type of faulting of
the earthquake . Seismologists use networks of seismograph stations to determine the
location of an earthquake.
2.6 Locating Earthquakes :
Focus and Epicenter: The location of an earthquake can be described by the latitude, longitude, and depth of the zone of rupture. The focus (or hypocenter) is the actual "point" (relatively small volume) within the Earth where the earthquake energy is released. The epicenter is the point on Earth's surface directly above the focus. The focal depth is the distance from the epicenter to the focus.

In order to determine the location of an Earthquake , we need to have a recorded


seismogram of the Earthquake from at least three seismographic stations at different
distances from the epicenter of the quake. In addition, we need one further piece of
information – that is the time it takes for P waves and S waves to travel through the Earth
and arrive at a seismographic station. Such information has been collected over the last
80 or so years, and is available as travel time curves.
For the seismogram at each station one determines the S-P interval (the difference between the time of arrival of the first S wave and the time of arrival of the first P wave).
Note that on the travel time curves, the S-P interval increases with increasing distance from the epicenter. Thus the S-P interval tells us the distance to the epicenter from the seismographic station where the earthquake was recorded. At each station we can then draw a circle on a map with a radius equal to the distance from the epicenter. Three such circles will intersect in a point that locates the epicenter of the earthquake.
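The circle-intersection idea above can be sketched numerically. This is a simplification: it assumes constant crustal velocities and flat (x, y) geometry instead of real travel-time curves, and the station coordinates and velocities below are hypothetical round numbers.

```python
import math

VP, VS = 6.0, 3.5   # assumed constant crustal velocities, km/s (flat-Earth sketch)

def distance_from_sp(dt):
    """Epicentral distance (km) from the S-P interval dt (s): d = dt/(1/Vs - 1/Vp)."""
    return dt / (1.0 / VS - 1.0 / VP)

def locate(stations, sp_intervals):
    """Intersect three circles (station, distance). Subtracting the circle
    equations pairwise cancels the quadratic terms, leaving a 2x2 linear
    system in the epicenter coordinates (x, y)."""
    d = [distance_from_sp(dt) for dt in sp_intervals]
    (x1, y1), (x2, y2), (x3, y3) = stations
    a11, a12 = 2 * (x2 - x1), 2 * (y2 - y1)
    b1 = d[0] ** 2 - d[1] ** 2 + x2 ** 2 - x1 ** 2 + y2 ** 2 - y1 ** 2
    a21, a22 = 2 * (x3 - x1), 2 * (y3 - y1)
    b2 = d[0] ** 2 - d[2] ** 2 + x3 ** 2 - x1 ** 2 + y3 ** 2 - y1 ** 2
    det = a11 * a22 - a12 * a21
    return ((b1 * a22 - b2 * a12) / det, (a11 * b2 - a21 * b1) / det)

# Hypothetical test: S-P intervals consistent with an epicenter at (10, 20) km.
stations = [(0.0, 0.0), (50.0, 0.0), (0.0, 50.0)]
k = 1.0 / VS - 1.0 / VP
dts = [math.hypot(10.0 - sx, 20.0 - sy) * k for sx, sy in stations]
print(locate(stations, dts))   # recovers approximately (10.0, 20.0)
```

With exact synthetic data the three circles meet in a single point; with noisy real data the circles only approximately intersect, which is why practical location uses more stations and a least-squares fit.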
2.7 The Earth’s structure :
Seismic waves also tell us a great deal about the core. Recall that P waves can travel through liquids whereas S waves cannot. When an earthquake occurs, both S and P waves radiate from the focus. Because the rocks get denser as depth increases, the paths of the waves bend (see the diagram).
The S waves are detected over a little more than one quarter of the Earth's surface (out to 103° from the epicenter, to be exact). Beyond that, no S waves are seen. This tells us that the S waves do not travel through the core; hence, the outer core must be liquid. A large, quiet S-wave shadow zone is created on the other side of the Earth.
In contrast, the P waves are detected on the opposite side of the Earth from the focus. A shadow zone from 103° to 142° does exist for P waves, though. Since the waves are detected, then vanish, then reappear again, something inside the Earth must be bending the P waves towards the normal. From this evidence we can tell that part of the core is liquid (the S-wave shadow zone) and part (the inner part) must be solid, with a different density than the rest of the surrounding material (the P-wave shadow zone being due to refraction). In actuality, the inner core is thought to be made of solid iron and nickel.
2.8 Characteristics of Earthquakes :
Strength (or size) of Earthquake:
There are two terms used to describe the strength of an earthquake :
i) Earthquake intensity ii) Earthquake magnitude
Magnitude is quantitative, related to the amount of energy released by the earthquake; intensity is qualitative, describing the severity of ground motion at a given location.
Earthquake intensity :
Earthquakes cause deformation of natural surface features and damage to man-made structures like buildings, bridges etc. The 'intensity' of an earthquake is classified on the basis of the local character of the visible effects it produces, i.e. intensity represents the effect of an earthquake on man-made and natural structures. Different intensity scales have been developed. The Italian scientist M.S. Rossi and the Swiss scientist F. Forel developed the first intensity scale, known as the 'Rossi-Forel Scale' (I–X). In 1902, the Italian scientist G. Mercalli developed a new 12-point scale (I–XII), known as the 'Mercalli scale'. It was modified in 1931 and then again in 1956. The modified form of the Mercalli scale is known as the 'Modified Mercalli scale'.
Once the intensity has been estimated at different places, a contour map of intensities can be drawn. The lines joining points of equal intensity are called 'Isoseismals'.
Limitations :
The intensity of an earthquake is an indirect measure of its size. A very shallow earthquake can produce very high intensities in a limited region, although its size may not be large. Thus, earthquake intensity is not a correct measure of the size of an earthquake.

Earthquake magnitude :
The size of an earthquake should be measured in terms of the amount of energy released at the focus. This is independent of the damage caused. For this, the concept of earthquake magnitude was introduced by K. Wadati and C.F. Richter in the 1930s.
Magnitude scales are based on two simple assumptions. The first is that, given the same source-receiver geometry and two earthquakes of different size, the "larger" event will on average produce larger-amplitude arrivals. The second is that the amplitudes of arrivals behave in a "predictable" fashion. All magnitude scales are logarithmic. Thus, the magnitude of an earthquake is a quantitative measure of the earthquake.
There are four types of magnitude scales in use:

1) Local Magnitude: This was the first magnitude scale, developed by C. Richter in 1935. It is given by:
ML = log(A) + 2.76 log(Δ) − 2.48 ….(2.3)
Where, A = maximum amplitude of ground displacement
Δ = epicentral distance
ML is a very useful scale for engineering.
2) Body Wave Magnitude: It was proposed by B. Gutenberg in 1945. This magnitude is based on the first few cycles of the P-wave arrival and is given by:
mb = log(A/T) + Q(h, Δ) ….(2.4)
Where, A = maximum ground motion amplitude
T = corresponding time period
Q(h, Δ) = correction applied for focal depth and epicentral distance.
3) Surface Wave Magnitude: Beyond about 600 km, the long-period seismograms of shallow earthquakes are dominated by surface waves, usually with a period of approximately 20 s. It is given by:
Ms = log(A) + a log(Δ) + b ….(2.5)
Where, A = maximum amplitude of ground motion for surface waves
Δ = epicentral distance, and a and b are constants.
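The amplitude-based scales can be sketched in a few lines. The local-magnitude form follows the empirical expression quoted above; the constants a = 1.66 and b = 3.3 in the surface-wave function are common textbook values, assumed here purely for illustration.

```python
import math

def local_magnitude(a_mm, delta_km):
    """ML = log10(A) + 2.76*log10(delta) - 2.48, the empirical form quoted
    in the text (A: maximum trace amplitude, delta: epicentral distance)."""
    return math.log10(a_mm) + 2.76 * math.log10(delta_km) - 2.48

def surface_wave_magnitude(a, delta_deg, a_const=1.66, b_const=3.3):
    """Ms = log10(A) + a*log10(delta) + b; a and b are assumed illustrative
    constants (the text leaves them unspecified)."""
    return math.log10(a) + a_const * math.log10(delta_deg) + b_const

# The scales are logarithmic: a tenfold increase in amplitude
# raises the magnitude by exactly one unit.
ml1 = local_magnitude(1.0, 100.0)    # log10(1) + 2.76*2 - 2.48 = 3.04
ml2 = local_magnitude(10.0, 100.0)   # one unit larger
print(ml1, ml2)
```

The tenfold-amplitude property illustrated at the end is what "all the magnitude scales are logarithmic" means in practice.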
Disadvantage of magnitude scales: The magnitude measured by the local, body or surface wave magnitude scale is based on the amplitude and frequency of the waves involved. It is independent of the mechanism of generation of the earthquake. All earthquakes above a certain size will show a nearly constant magnitude on these scales. This is called magnitude saturation. So it is desirable to have a magnitude measure that does not suffer from this deficiency.

Seismic Moment: The total size of an earthquake is best represented by the seismic moment:

M0 = μAD ….(2.6)

Where, μ = modulus of rigidity, A = area of the fault, D = average displacement on the fault.

Thus, 'M0' measures the energy radiated from the entire fault. The magnitude scale based on the seismic moment is known as 'Moment Magnitude' and is represented by 'Mw'. It is given by:
Mw = 0.667 log(M0) − 10.73 ….(2.7)
This scale, derived by Kanamori (1977), is tied to Ms but will not saturate because M0 does not saturate. Generally, determination of M0 is much more complicated than magnitude measurement.
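Equations (2.6) and (2.7) can be combined in a short sketch. Note that the Kanamori form of Eq. (2.7) takes M0 in dyne-cm (1 N·m = 1e7 dyne·cm); the fault dimensions and rigidity below are hypothetical round numbers chosen only to illustrate the calculation.

```python
import math

def seismic_moment(mu, area, slip):
    """M0 = mu * A * D (N*m when mu is in Pa, A in m^2, D in m)."""
    return mu * area * slip

def moment_magnitude(m0_newton_m):
    """Mw = 0.667*log10(M0) - 10.73, with M0 converted to dyne-cm
    (1 N*m = 1e7 dyne-cm), following the form quoted in the text."""
    return 0.667 * math.log10(m0_newton_m * 1e7) - 10.73

# Hypothetical fault: rigidity 3e10 Pa, 50 km x 20 km rupture, 1 m average slip.
m0 = seismic_moment(3e10, 50e3 * 20e3, 1.0)   # 3e19 N*m
print(round(moment_magnitude(m0), 2))          # roughly Mw 7
```

Because M0 grows without bound as fault area and slip grow, Mw does not saturate the way the amplitude-based scales do.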

Chapter 3: Magnetic Prospecting

3.1 Introduction
The properties of lodestone and the magnetism of the Earth have been known for a very long time. An ordinary compass needle gets deflected in the neighbourhood of iron ore deposits rich in magnetite. There are a few other minerals, like pyrrhotite and franklinite, which are strongly magnetic; there are many others, like limonite and chromite, which are not so strongly magnetic; and a few minerals, like rock salt and quartz, which are diamagnetic.
Magnetic method is the oldest method of locating both hidden ores and structures
associated with deposits of oil and gas. A mineral deposit having considerable quantities
of magnetite disturbs in its vicinity the normal value of Earth’s magnetic field intensity at
that place. By measuring the intensities of the Earth’s magnetic field at a number of
points in the area, we can ascertain the spots at which there is an anomaly in the normal
intensity, and thus locate the hidden body which causes the disturbance.
In magnetic prospecting, we apply the simple laws of magnetism governing
polarity, attraction, induction, and distribution of magnetic fields. A magnet has two poles, which are the foci of maximum attraction or repulsion. The magnetic field has both direction and intensity, and extends into the surrounding air. The intensity of the field varies inversely as the square of the distance from a pole. The magnetic field is graphically represented by lines of force: the direction of the field is given by the trend of the lines, and the intensity by the density of the lines per unit area. Although a group of lines of force filling a magnetic field is only a conventional representation, its use is indispensable for the practical application of magnetic, and also electromagnetic, methods.
Magnetic surveying has a wide range of applications from small scale engineering
or archaeological surveys to detect buried metallic objects, to large scale surveys carried
out to investigate regional geological structures. Magnetic surveys can be carried out on
land, at sea and in air.
The magnetic field observed at Earth's surface varies considerably in both strength and direction. Unlike gravitational acceleration, which is directed nearly perpendicular to Earth's surface, magnetic field directions change from nearly horizontal at the equator to nearly vertical at the poles. The variation in strength of the gravity field is only about 0.5% (nearly 978,000 mGal at the equator, 983,000 mGal at the poles), compared to a doubling of the magnetic field (nearly 30,000 nT at the equator, 60,000 nT at the poles).
The magnetic method has many important applications. Anomalies induced by Earth's natural field give clues to the geometry of magnetized bodies in the crust, and the depth to the sources of the anomalies. The depth to the deepest sources of magnetic anomalies (the Curie depth) marks the level below which rocks are too hot to retain strong magnetization (i.e. are above the Curie temperature). Studies of rocks that have been permanently magnetized (paleomagnetism) give clues to the ages of the rocks, the latitudes at which they formed, and the relative positions of the continents in the past.
3.2 Earth Magnetic Field
About 98% of Earth’s magnetic field is of internal origin, thought to be caused by
motions of liquid metal in the core; the remaining 2 % is external , of solar origin. Unlike
the gravitational field , which is essentially fixed, the magnetic field has secular
variations. Measurements in Europe since the 1600’s show that the direction of the
magnetic field has gradually drifted westward at rates up to 0.2 per year. The overall
strength of the field has also degreased by about 8 % in the last 150 years. In addition
several factors result in daily, monthly, seasonal, yearly, and longer period variations in
the magnetic field. There are also sporadic variations (“magnetic storms” ) which
momentarily disrupt the field.
3.3 GEOMAGNETIC FIELD
The magnetic field of the Earth is known as the ‘Geomagnetic field’.
The geomagnetic field is a vector field; it is more complex than the gravitational field of the Earth and exhibits irregular variations in both orientation and magnitude with latitude, longitude and time.
When a magnetic needle is freely suspended at any point on the Earth's surface, it aligns itself in the direction of the geomagnetic field. This will generally be at an angle to both the vertical and geographic north.
3.4 Geomagnetic Elements
The magnetic field of the Earth can be completely described in terms of 7
components, called Geomagnetic Elements.
These are :
1. Total magnetic field, 'F'
2. Vertical component of the total magnetic field, 'Z'
3. Horizontal component of the total magnetic field, 'H'
4. Component of 'H' along the geographic North direction, 'X'
5. Component of 'H' along the geographic East direction, 'Y'
6. Declination (D) of the field: the horizontal angle between geographic North and magnetic North.
7. Inclination (I) of the field: the angle which the total field 'F' makes with the horizontal.
These geomagnetic elements are related to each other. Since 'H' and 'Z' are the components of 'F':
H = F cos(I) and Z = F sin(I) ….(3.1)
Similarly, X = H cos(D), Y = H sin(D), and F² = H² + Z².
By knowing any 3 elements we can determine the remaining 4 elements using the above relations.
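The resolution of F into its components can be sketched as below. The field strength, inclination and declination in the example are hypothetical mid-latitude values, not measurements.

```python
import math

def geomagnetic_elements(f, incl_deg, decl_deg):
    """Derive H, Z, X, Y from total field F (nT), inclination I and
    declination D (degrees): H = F cos(I), Z = F sin(I),
    X = H cos(D), Y = H sin(D)."""
    i, d = math.radians(incl_deg), math.radians(decl_deg)
    h = f * math.cos(i)
    z = f * math.sin(i)
    return h, z, h * math.cos(d), h * math.sin(d)

# Hypothetical mid-latitude values: F = 48000 nT, I = 60 deg, D = 2 deg.
h, z, x, y = geomagnetic_elements(48000.0, 60.0, 2.0)
print(round(h), round(z))   # H = F*cos(60) = 24000 nT; Z = F*sin(60) ≈ 41569 nT
```

A quick consistency check is that √(H² + Z²) returns the total field F, which is just the relation F² = H² + Z² in reverse.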
3.5 Earth’s Magnetic field is composed of three parts:
1. Main Magnetic field.
2. External Magnetic field
3. Crustal magnetic field or anomalous magnetic field: usually, but not always, much smaller than the main field, relatively constant in time and space, and caused by local magnetic anomalies in the near-surface crust of the Earth; these are the targets in magnetic prospecting.
3.5.1 Main magnetic field
About 90% of the Earth’s magnetic field is of internal origin. There are two main
theories regarding the origin of main magnetic field:
(1) Dipolar theory
(2) Dynamo theory

1. Dipolar theory: The Earth’s magnetic field can be explained by placing a small
bar magnet at the center of the Earth which is inclined at an angel 11.5 with the
Earth’s axis of rotation. This fictitious magnet is called ‘ Centered Geomagnetic
Dipole’. The points where an extension of this imaginary dipole intersects with
the Earth’s surface are referred to as the ‘Geomagnetic Poles’.
However, this theory cannot explain all the observations regarding the main
magnetic field. The major limitations of this theory are:
• No bar magnet can exist the center of the Earth, as the core of is very hot.
All the magnetism is lost at such a high temperature.
• Dipolar theory does not account for the polarity reversals in th main
magnetic field.
2. Dynamo theory: It was proposed by W.M. Elsasser and E.C. Bullard. This theory combines the principles of fluid mechanics and the electromechanical dynamo.
According to Michael Faraday, if a metal disk rotates on its axis in an external magnetic field, such that the axis is parallel to the external field, then a positive charge develops near the rim of the disk and a negative charge at its center. Hence, if a conducting wire is connected between the rim and the center, a current flows through it. Now if this conducting wire is wound into a coil surrounding the disk, the current in the coil produces a magnetic field. By adjusting the speed of rotation of the disk, we can make the self-generated magnetism just sufficient to maintain the current in the coil, so that the external field can be eliminated. This explains how the magnetic field might be maintained indefinitely. But it does not explain the spontaneous polarity reversals.
The polarity reversals can be explained by considering a combination of two disk dynamos, such that the wire connected between the rim and the center of each disk is wound into a coil around the other disk. This combination is also a self-sustained dynamo, but the effects of the two spinning disks are very delicately balanced. A slight disturbance in this balance causes a polarity reversal of the magnetic field generated by the system.
Thus, the main magnetic field is generated by currents produced within the Earth due to the rotation of the core.
3.5.2 External magnetic field
The magnetic field of the Earth in space has been measured from satellites and spacecraft. The external field has a quite complicated appearance. It is strongly affected by the solar wind, a stream of electrically charged particles (consisting mainly of electrons, protons and helium nuclei) that is constantly emitted by the Sun. The solar wind is a plasma; this is the physical term for an ionized gas of low particle density made up of nearly equal concentrations of oppositely charged ions. At the distance of the Earth from the Sun (1 AU) the density of the solar wind is about 7 ions per cm3, and it produces a magnetic field of about 6 nT. The solar wind interacts with the magnetic field of the Earth to form a region called the magnetosphere. At distances greater than a few Earth radii the interaction greatly alters the magnetic field from that of a simple dipole.

The velocity of the solar wind relative to the Earth is about 450 km s-1. At a great distance (about 15 Earth radii) from the Earth, on the day side, the supersonic solar wind collides with the thin upper atmosphere. This produces an effect similar to the build-up of a shock wave in front of a supersonic aircraft. The shock front is called the bow shock region; it marks the outer boundary of the magnetosphere. Within the bow shock region the solar wind is slowed down and heated up. After passing through the shock front, the solar wind is diverted around the Earth in a region of turbulent motion called the magnetosheath. The moving charged particles of the solar wind constitute electrical currents; they produce an interplanetary magnetic field, which extends to great distances downwind from the Earth in the magnetotail. The Moon's distance from the Earth is about 60 Earth radii, so its monthly orbit about the Earth brings it in and out of the magnetotail on each circuit. The transition between the deformed magnetic field and the magnetosheath is called the magnetopause.
3.6 Basic Concepts
Magnetic poles
Within the vicinity of a bar magnet a magnetic flux is developed which flows
from one end of the magnet to the other. This flux can be mapped from the directions
given by a small compass needle. The points within the magnet where the flux converges
are known as the 'poles' of the magnet.
Free magnetic poles do not exist. Thus each positive pole must be paired with a
corresponding negative pole.
Coulomb's Law of Magnetic Force
According to it, the force between two magnetic poles of strengths m1 and m2,
separated by a distance r, is given by
F = (m1m2 / μr2) r1 …(3.8)
where μ = magnetic permeability of the medium surrounding the magnets (a
dimensionless quantity whose value is precisely 1 in vacuum), and r1 is the unit vector
directed from m1 to m2. This force F is attractive if the poles are of different
signs and repulsive if they are of the same sign.
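As a quick numerical sketch of this law (assuming cgs-style units in which μ = 1 in vacuum; the pole strengths and separation below are invented for illustration):

```python
# Coulomb's law for magnetic poles (eq. 3.8), with mu = 1 as in vacuum.
# Pole strengths and separation are illustrative values only.

def magnetic_force(m1, m2, r, mu=1.0):
    """Force between two poles; positive means repulsive (like poles)."""
    return m1 * m2 / (mu * r ** 2)

f_like = magnetic_force(10.0, 10.0, 2.0)     # like poles: repulsive
f_unlike = magnetic_force(10.0, -10.0, 2.0)  # unlike poles: attractive
print(f_like, f_unlike)  # 25.0 -25.0
```

Note the inverse-square behavior: doubling the separation reduces the force to one quarter.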
Magnetic field strength
The ‘Magnetic field strength’ due to a pole of strength ‘m’ at a distance ‘r’ from
the pole is defined as the force exerted on a unit positive pole at that point. It is
represented by ‘H’.
H = m/μr2 …(3.9)
Magnetic potential
The ‘magnetic potential’ (V) at a distance ‘r’ from a pole of strength ‘m’ is the
amount of work done to move a unit pole from distance ‘r’ to infinity.
V = m/μr …(3.10)
Intensity of magnetization
It represents the extent to which a specimen is magnetized, when placed in a
magnetic field. The Intensity of magnetization is proportional to the strength of the field
and its direction is in the direction of that field. It is defined as the magnetic moment per
unit volume, i.e.
I = M/v …(3.11)
If a = uniform area of cross section of the magnetized specimen
2l = magnetic length of specimen
m = strength of each pole, then
I = (m × 2l) / (a × 2l)
  = m/a …(3.12)
Hence the intensity of magnetization of a magnetic material is also defined as the pole
strength per unit area of cross-section of the material.
Magnetic susceptibility
The degree to which the body is magnetized is determined by its magnetic susceptibility,
defined as
K = I/H …(3.13)
Susceptibility is the fundamental parameter in magnetic prospecting, since the magnetic
response of rocks and minerals is determined by the amount of magnetic minerals in
them, and the latter have k values much larger than those of the host rocks and minerals
themselves.
Magnetic Induction
When a magnetic material is placed within a magnetic field H, the material produces
its own magnetization; this phenomenon is called induced magnetization. The induced
field is the one produced by the magnetic dipoles located within the magnetic material,
which become oriented parallel to the direction of the inducing field H. The strength of
the magnetization induced in the magnetic material by the inducing field is called the
intensity of magnetization, I.

3.7 Units of magnetic field


In the SI system, magnetic field strength is expressed in weber/m2, which is called
the tesla (T). In the cgs system, the unit of magnetic field strength is the gauss (G).
1 Wb/m2 = 1 T
1 G = 10-4 T
As the magnetic anomalies caused by rocks are very small in magnitude, the tesla is
a very large unit for expressing them. Geophysicists therefore employ a smaller unit
called the nanotesla, such that 1 nT = 10-9 T. Similarly, in the cgs system, the small
unit is the gamma (γ), such that
1 γ = 10-5 G = 10-9 T
Hence 1 γ = 1 nT.
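These unit relations can be verified in a couple of lines (a sketch; the 30 nT figure below is just an example anomaly amplitude, not from the text):

```python
import math

GAUSS_TO_TESLA = 1e-4   # 1 G = 10^-4 T
GAMMA_TO_GAUSS = 1e-5   # 1 gamma = 10^-5 G

# 1 gamma in tesla should come out at 10^-9 T, i.e. exactly 1 nT.
gamma_in_tesla = GAMMA_TO_GAUSS * GAUSS_TO_TESLA
print(math.isclose(gamma_in_tesla, 1e-9))  # True

# A 30 nT anomaly is therefore also 30 gamma.
anomaly_nT = 30.0
print(math.isclose(anomaly_nT * 1e-9, 30 * gamma_in_tesla))  # True
```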

3.8 Types of magnetism


All materials – elements, compounds, etc can be classified in three groups
according to their magnetic properties : diamagnetic, paramagnetic and ferromagnetic .
1) Diamagnetism : A diamagnetic mineral, such as halite (rock salt), has
negative magnetic susceptibility, acquiring an induced magnetization
opposite in direction to an applied external field. The weak magnetization
results from the alteration of electron orbitals as the external field
is applied to the material. Susceptibilities of only about -10-5 mean that
the magnetization is on the order of 1/100,000th of the strength of the
external field.
2) Paramagnetism : By definition, all materials which are not diamagnetic
are paramagnetic, that is, k is positive. They acquire a magnetization
parallel to an external field. The magnetization occurs as the magnetic
moments of atoms are partially aligned in the presence of the external
field. Most minerals exhibit this type of weak magnetic behavior; for
example, chlorite, amphibole, pyroxene, and olivine are paramagnetic.
3) Ferromagnetism : In some metals (e.g. iron, cobalt and nickel) the atoms
occupy lattice positions that are close enough to allow exchange of
electrons. The resulting magnetic moments react in unison to an applied
magnetic field, giving rise to a class of strong magnetic behavior known
as ferromagnetism.
4) Ferrimagnetism : Materials in which the magnetic domains are subdivided
into regions that may be aligned in opposition to one another, but whose
net magnetic moment is not zero when H is zero, are called ferrimagnetic,
e.g. magnetite, ilmenite, other oxides of iron, and pyrrhotite.
5) Antiferromagnetism : If the net magnetic moments of the parallel and
antiparallel sub-domains cancel each other out in a material which would
otherwise be considered ferromagnetic, the resultant magnetization is very
small, of the order of that of paramagnetic substances. Hematite is the
most common example.
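The broad grouping above can be caricatured as a function of the sign and size of the susceptibility k alone; the numeric cutoff in this sketch is a rough illustrative value, not a standard constant:

```python
def classify_by_susceptibility(k):
    """Rough magnetic classification from susceptibility k (dimensionless).
    The 1e-2 cutoff is illustrative; real boundaries are gradational."""
    if k < 0:
        return "diamagnetic"       # e.g. halite, k ~ -1e-5
    if k < 1e-2:
        return "paramagnetic"      # e.g. chlorite, olivine
    return "ferro-/ferrimagnetic"  # e.g. iron, magnetite

print(classify_by_susceptibility(-1e-5))  # diamagnetic
print(classify_by_susceptibility(1e-4))   # paramagnetic
```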
3.9 Anomalous magnetic Field
It is produced by ferromagnetic and ferrimagnetic substances present in the Earth's
crust, and hence is of internal origin. It is relatively constant with time.
3.10 VARIATIONS IN THE EARTH’S MAGNETIC FIELD
The Earth’s magnetic field is not constant. But it changes both in intensity and direction.
The variations in Earth’s field are of two types:
• Secular variations
• Diurnal variations
a) Secular Variations :
At any particular place on the Earth the geomagnetic field is not constant in
time. When the Gauss coefficients of the internal field are compared from one
epoch to another, slow but significant changes in their values are observed. The
slow changes of the field only become appreciable over decades or centuries of
observation and so they are called secular variations (from the Latin word
saeculum for a long age). They are manifested as variations of both the dipole and
non-dipole components of the field.
(b) Diurnal Variation:
The ionized molecules in the ionosphere release swarms of electrons that form
powerful, horizontal, ring-like electrical currents. These act as sources of external
magnetic fields that are detected at the surface of the Earth. The ionization is most intense
on the day side of the Earth, where extra layers develop. The Sun also causes atmospheric
tides in the ionosphere, partly due to gravitational attraction but mainly because the side
facing the Sun is heated up during the day. The motion of the charged particles through
the Earth's magnetic field produces an electric field, according to the Lorentz law, which
drives electrical currents in the ionosphere. In particular, the horizontal component of
particle velocity interacts with the vertical component of the geomagnetic field to
produce horizontal electric current loops in the ionosphere. These currents cause a
magnetic field at the Earth's surface. As the Earth rotates beneath the ionosphere, the
observed intensity of the geomagnetic field fluctuates with an amplitude of about
10-30 nT at the Earth's surface and a period of one day. This time-dependent change of
geomagnetic field intensity is called the diurnal (or daily) variation.
The magnitude of the diurnal variation depends on the latitude at which it is
observed. Because it greatly exceeds the accuracy with which magnetic fields are
measured during surveys, the diurnal variation must be compensated by correcting field
measurements accordingly. The intensity of the effect depends on the degree of
ionization of the ionosphere and is therefore determined by the state of solar activity. The
solar activity is not constant. On days when the activity of the Sun is especially low, the
diurnal variation is said to be of solar quiet (Sq) type. On normal days, or when the solar
activity is high, the Sq variation is overlaid by a solar disturbance (SD) variation. Solar
activity changes periodically with the 11-year cycle of sunspots and solar flares. The
enhanced emission of radiation associated with these solar phenomena produces
anomalously strong magnetic disturbances (called magnetic storms) with amplitudes of
up to 1000 nT at the Earth's surface. The ionospheric disturbance also disrupts short-wave
and long-wave radio transmissions. Magnetic surveying must be suspended temporarily
while a magnetic storm is in progress, which can last for hours or days, depending on the
duration of the solar activity.
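In practice the diurnal correction mentioned above is often made by recording at a fixed base station and subtracting its time-interpolated reading from each field reading. A minimal sketch, with invented readings and times (real surveys use the recorded station clocks):

```python
# Diurnal correction sketch: base-station readings are interpolated
# linearly in time and subtracted from the rover readings.

def diurnal_correction(rover, base):
    """rover, base: lists of (time_min, field_nT); base is the fixed station.
    Returns rover values with the interpolated base variation removed."""
    corrected = []
    for t, value in rover:
        # find the bracketing base readings and interpolate linearly
        for (t0, b0), (t1, b1) in zip(base, base[1:]):
            if t0 <= t <= t1:
                drift = b0 + (b1 - b0) * (t - t0) / (t1 - t0)
                corrected.append(value - drift)
                break
    return corrected

base = [(0, 48000.0), (60, 48020.0)]   # 20 nT rise over an hour at base
rover = [(30, 48110.0)]                # one rover reading mid-way in time
print(diurnal_correction(rover, base))  # [100.0] after removing the drift
```

With a continuously recording base magnetometer the same interpolation is applied at every rover reading time.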

Electrical Method Chapter 4


4.1 Introduction
Electrical methods depend on the differences in the electrical conductivities of
different rocks and minerals. Certain minerals like pyrite, chalcopyrite, etc. are very good
conductors of electricity; many other ore minerals have low conductivity, while the great
majority of the non metallic minerals are bad conductors. When we pass a current of
electricity into the ground, the current distributes itself according to the
conductivities of the rocks and minerals coming under the influence of the current. There
will be appreciable concentration of current in places where there are large masses of
pyrite or other highly conductive bodies. It would, therefore, be possible to locate such
ore bodies concealed in the ground by studying the electrical or electromagnetic field at
the ground surface.
There are three basic quantities involved in electrical measurements: potential,
current and resistance. The elementary law giving the relation between these three
quantities is the well-known Ohm's Law. It states that in an electrical circuit the
current I is proportional to the potential difference (or electromotive force) V and
inversely proportional to the resistance R, that is
I = V/R or R = V/I
Electrical prospecting may be carried out by employing direct current (d.c.),
steady or transient (interrupted), or alternating current (a.c.). In a steady-state d.c. system
a constant current is made to flow into the ground during the measurement. In an
interrupted or 'pulsed' d.c. system, a steady current is established in the ground for a short
time and then it is switched off, thus producing a pulse of current. The direction of the
current may be reversed between pulses. An alternating current keeps changing direction
and magnitude in a periodic manner with time, flowing in one direction during one half
of a period, and in the other during the other half. The number of oscillations per second
is called the frequency.
The conduction of electric current in any medium is a molecular property. There
are three ways in which conduction takes place: (i) Electronic Conduction,
(ii) Electrolytic Conduction, and (iii) Dielectric Conduction. The first of
these, shown by metallic bodies, is due to the movement of free electrons,
which transmit the electrical charges. The second is the conduction taking
place in electrolytic solutions, due to the transport of ions. The
conductivity of salt solutions is an electrochemical property, and
accordingly this kind of conduction is also known as electrochemical
conduction. Dielectric conduction takes place in rocks and minerals of high
resistivity in the presence of an alternating current. In these materials
there are no free electrons or ions to provide conduction as in metals or
solutions, but they permit conduction of alternating currents on the surface
of these materials. The orientation of molecules, when an electric field is
applied, may also cause what is known as 'dielectric polarization'.
In electrical prospecting, the electrical currents which are sometimes naturally
present in the earth may be measured, or one may introduce currents into
the ground artificially by using batteries or generators, and investigate the
electrical field distribution by suitable measurements. Since the large
majority of rocks and minerals are poor conductors, it is more convenient to
refer to the resistivity of Earth materials. Resistivity is an expression of
the opposition an electric current encounters in flowing through a
substance, whereas conductivity is an expression of the ease with which the
current flows through the body. The unit of resistivity is the ohm.cm. It is,
numerically, the resistance offered between two opposite faces of a cube of
one cubic centimeter of the material. The symbol Ω.cm is used to denote
resistivity; the corresponding unit of conductivity is the mho/cm.
4.2 Classification of Electrical Methods
The developments in electrical prospecting have been very vigorous; a number of
methods and instruments have been proposed, and thousands of patents
have been taken out. All these methods may be classified under the
following eight groups:
(1) Self Potential Method
(2) Resistivity Methods
(3) Equi Potential line Method
(4) Potential Drop Ratio Methods
(5) Electromagnetic Methods
(6) Telluric Current method
(7) Magneto Telluric and AFMAG Methods
(8) Induced Polarization Method
Of the foregoing methods, the Self Potential, Telluric Current, Magneto Telluric
and AFMAG methods make use of electric fields naturally present in the Earth. For the
rest of the methods we use artificial sources (batteries or generators) for energizing the
ground.
There are two ways in which current can be introduced in the Earth: (1)
Conductively, by driving metal stakes into the ground, and passing a current through
these stakes; or (2) Inductively , by loops or coils of insulated copper wires placed on or
above the ground at some height , and passing a current through those coils of wire.
Depending on the manner of studying the electrical or electromagnetic fields, the
electrical methods may be classified into two broad categories : Electrical Potential
Methods and The Electromagnetic or Inductive Methods.
4.3 Introduction to resistivity surveys
The purpose of electrical surveys is to determine the subsurface resistivity
distribution by making measurements on the ground surface. From these measurements,
the true resistivity of the subsurface can be estimated. The ground resistivity is related to
various geological parameters such as the mineral and fluid content, porosity and degree
of water saturation in the rock. Electrical resistivity surveys have been used for many
decades in hydrogeological, mining and geotechnical investigations. More recently, they
have been used in environmental surveys. The resistivity measurements are normally made
by injecting current into the ground through two current electrodes (C1 and C2 in Figure
1) and measuring the resulting voltage difference at two potential electrodes (P1 and
P2). From the current (I) and voltage (V) values, an apparent resistivity (ρa) value is
calculated:
ρa = k V/I
where k is the geometric factor, which depends on the arrangement of the four electrodes.
Figure 2 shows the common arrays used in resistivity surveys together with their
geometric factors. In a later section we will examine the advantages and disadvantages of
some of these arrays. Resistivity meters normally give a resistance value, R = V/I, so in
practice the apparent resistivity value is calculated by
ρa = k R
The calculated resistivity value is not the true resistivity of the subsurface, but an
'apparent' value, which is the resistivity of a homogeneous ground that would give the
same resistance value for the same electrode arrangement. The relationship between the
'apparent' resistivity and the 'true' resistivity is complex. To determine
the true subsurface resistivity, an inversion of the measured apparent resistivity values
using a computer program must be carried out.
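As a concrete example of the apparent-resistivity calculation: for a Wenner array with electrode spacing a, the geometric factor is the standard result k = 2πa, so a measured resistance converts as below (the spacing and resistance values are invented for illustration):

```python
import math

def apparent_resistivity_wenner(a_m, resistance_ohm):
    """Apparent resistivity (ohm.m) from a Wenner-array resistance reading.
    Geometric factor for the Wenner array: k = 2*pi*a."""
    k = 2 * math.pi * a_m
    return k * resistance_ohm

# Illustrative reading: a = 10 m spacing, R = V/I = 1.5 ohm.
rho_a = apparent_resistivity_wenner(a_m=10.0, resistance_ohm=1.5)
print(round(rho_a, 2))  # 94.25 ohm.m
```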
4.4 Traditional resistivity surveys
The resistivity method has its origin in the 1920’s due to the work of the
Schlumberger brothers. For approximately the next 60 years , for quantitative
interpretation, conventional sounding surveys (Koefoed 1979) were normally used. In
this method, the centre point of the electrode array remains fixed, but the spacing
between the electrodes is increased to obtain more information about the deeper sections
of the subsurface.
C1        P1        P2        C2
Figure 1. A conventional four-electrode array to measure the subsurface resistivity.
The relationship between geology and resistivity
Before dealing with 2-D and 3-D resistivity surveys, we will briefly look at the
resistivity values of some common rocks, soils, and other materials. Resistivity surveys
give a picture of the subsurface resistivity distribution. To convert the resistivity picture
into a geological picture, some knowledge of typical resistivity values for different types
of subsurface materials, and of the geology of the area surveyed, is important. Table 1
gives the resistivity values of common rocks, soil materials and chemicals (Keller and
Frischknecht 1966, Daniels and Alberty 1966). Igneous and metamorphic rocks typically
have high resistivity values; the resistivity of these rocks is greatly dependent on the
degree of fracturing and the percentage of the fractures filled with ground water.
Sedimentary rocks, which usually are more porous and have a higher water content,
normally have lower resistivity values. Wet soils and fresh ground water have even
lower resistivity values. Clayey soil normally has a lower resistivity value than sandy
soil. The resistivity of ground water varies from 10 to 100 ohm.m, depending on
the concentration of dissolved salts. Note the low resistivity
(about 0.2 ohm.m.) of sea water due to the relatively high salt contents. This makes the
resistivity method an ideal technique for mapping the saline and fresh water interface in
coastal areas. The resistivity values of several industrial contaminants are also given in
Table 1. Metals, such as iron, have extremely low resistivity values.
Chemicals which are strong electrolytes, such as potassium chloride and sodium
chloride, can greatly reduce the resistivity of ground water to less than 1 ohm.m even at
fairly low concentrations. The effect of weak electrolytes, such as acetic acid, is
comparatively smaller. Hydrocarbons, such as xylene, typically have very high resistivity
values. Resistivity values have a much larger range compared to other physical
quantities mapped by other geophysical methods. The resistivity of rocks and soils in a
survey area can vary by several orders of magnitude. In comparison, density values used
by gravity surveys usually change by less than a factor of 10. This makes the resistivity
and other electrical based methods very versatile geophysical techniques.
4.5 Theory
Data from resistivity surveys are customarily presented and interpreted in the
form of values of apparent resistivity pa. Apparent resistivity is defined as the resistivity
of an electrically homogeneous and isotropic half – space that would yield the measured
relationship between the applied current and the potential difference for a particular
arrangement and spacing of electrodes. An equation giving the apparent resistivity in
terms of applied current, distribution of potential, and arrangement of electrodes can be
arrived at through an examination of the potential distribution due to a single current
electrode. The effect of an electrode pair ( or any other combination) can be found by
superposition.
Consider a single point electrode, located on the boundary of a semi-infinite,
electrically homogeneous medium. If the electrode carries a current I, measured in
amperes (A), the potential at any point in the medium or on the boundary is given by
U = ρI / 2πr
where U = potential in volts, ρ = resistivity of the medium, and r = distance from the
electrode.
For an electrode pair with current +I at electrode A and -I at electrode B, the
potential at a point is given by the algebraic sum of the individual contributions:
U = (ρI/2π)(1/rA − 1/rB)
where rA and rB = distances from the point to electrodes
A and B. The equipotentials represent imaginary shells, or bowls, surrounding the current
electrodes, on any one of which the electrical potential is everywhere equal. The
current lines represent a sampling of the infinitely many paths followed by the current,
paths that are defined by the condition that they must be everywhere normal to the
equipotential surfaces.
The potential difference V between a pair of electrodes M and N, which carry no
current, follows from the previous equation as:
V = UM − UN = (ρI/2π)(1/AM − 1/BM − 1/AN + 1/BN)
where UM and UN = potentials at M and N, AM = distance between electrodes A and M,
and so on. These distances are always the actual distances between the respective
electrodes, whether or not they lie on a line.
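The potential-difference expression for a four-electrode spread leads directly to the standard geometric factor k = 2π / (1/AM − 1/BM − 1/AN + 1/BN), valid for any electrode arrangement. A short sketch, checked against the Wenner case where it collapses to 2πa (distance values are illustrative):

```python
import math

def geometric_factor(AM, BM, AN, BN):
    """Geometric factor for an arbitrary four-electrode arrangement.
    Distances are the actual electrode separations, collinear or not."""
    return 2 * math.pi / (1 / AM - 1 / BM - 1 / AN + 1 / BN)

# Check: a Wenner spread A-M-N-B spaced a apart gives k = 2*pi*a.
a = 5.0
k = geometric_factor(AM=a, BM=2 * a, AN=2 * a, BN=a)
print(math.isclose(k, 2 * math.pi * a))  # True
```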
4.6 Apparent resistivity (ρa)
Wherever these measurements are made over a real heterogeneous earth, as
distinguished from the fictitious homogeneous half-space, the symbol ρ is replaced by ρa
for apparent resistivity. The resistivity surveying problem is, reduced to its essence, the
use of apparent resistivity values from field observations at various locations and with
various electrode configurations to estimate the true resistivities of the several earth
materials present at a site and to locate their boundaries spatially below the surface of the
site.
Field survey method – instrumentation and measurement procedure
One of the new developments in recent years is the use of 2-D electrical
imaging/tomography surveys to map areas with moderately complex geology (Griffiths
and Barker 1993). Such surveys are usually carried out using a large number of
electrodes, 25 or more, connected to a multi-core cable. A laptop microcomputer together
with an electronic switching unit is used to automatically select the relevant four
electrodes for each measurement (Figure 5). At present, field techniques and equipment to
carry out 2-D resistivity surveys are fairly well developed. The necessary field equipment
is commercially available from a number of international companies. These systems
typically cost from about US$,000 upwards. Some institutions have even constructed
"home-made" manually operated switching units at a nominal cost by using seismic cable
as the multi-core cable. Figure 5 shows the typical setup for a 2-D survey, in which a
number of electrodes with a constant spacing between adjacent electrodes is used. The
multi-core cable is attached to an electronic switching unit which is connected to a laptop
computer. The sequence of measurements to take, the type of array to use and other
survey parameters (such as the current to use) is normally entered into a text file which
can be read by a computer program in a laptop computer. Different resistivity meters use
different formats for the control file, so you will need to refer to the manual for your
system. After reading the control file, the computer program then automatically selects
the appropriate electrodes for each measurement. In a typical survey, most of the
fieldwork is in laying out the cable and electrodes. After that, the measurements are taken
automatically and stored in the computer. Most of the survey time is spent waiting for the
resistivity meter to complete the set of measurements. To obtain a good 2-D picture of the
subsurface, the coverage of the measurements must be 2-D as well. As an example,
Figure 5 shows a possible sequence of measurements for the Wenner electrode array for
a system with 20 electrodes. In this
example, the spacing between adjacent electrodes is "a". The first step is to make all the
possible measurements with the Wenner array with "1a" spacing. For the first
measurement, electrodes number 1, 2, 3 and 4 are used. Notice that electrode 1 is used as
the first current electrode C1, electrode 2 as the first potential electrode P1, electrode 3 as
the second potential electrode P2 and electrode 4 as the second current electrode C2. For
the second measurement, electrodes number 2, 3, 4 and 5 are used for C1, P1, P2 and C2
respectively. This is repeated down the line of electrodes until electrodes 17, 18, 19 and
20 are used for the last measurement with "1a" spacing. For a system with 20 electrodes,
note that there are 17 (20 - 3) possible measurements with "1a" spacing for the Wenner
array. After completing the sequence of measurements with "1a" spacing, the next
sequence of measurements with "2a" electrode spacing is made. First, electrodes 1, 3, 5
and 7 are used for the first measurement. The electrodes are chosen so that the spacing
between adjacent electrodes is "2a". For the second measurement, electrodes 2, 4, 6 and
8 are used. This process is repeated down the line until electrodes 14, 16, 18 and 20 are
used for the last measurement with spacing "2a". For a system with 20 electrodes, note
that there are 14 (20 - 2x3) possible measurements with "2a" spacing.
The same process is repeated for measurements with "3a", "4a", "5a" and "6a"
spacings. To get the best results, the measurements in a field survey should be carried out
in a systematic manner so that, as far as possible, all the possible measurements are made.
This will affect the quality of the interpretation model obtained from the inversion of the
apparent resistivity measurements (Dahlin and Loke 1998).
Note that as the electrode spacing increases, the number of measurements
decreases. The number of measurements that can be obtained for each electrode spacing,
for a given number of electrodes along the survey line, depends on the type of array used.
The Wenner array gives the smallest number of possible measurements compared to the
other common arrays that are used in 2-D surveys.
The survey procedure with the pole-pole array is similar to that used for the Wenner
array. For a system with 20 electrodes, firstly 19 measurements with a spacing of "1a"
are made, followed by 18 measurements with "2a" spacing, then 17 measurements
with "3a" spacing, and so on.
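The counting rules above (17 measurements at "1a" and 14 at "2a" for 20 electrodes) follow from a simple enumeration. The sketch below builds the Wenner sequence for any number of electrodes, with electrode indices starting at 1 as in the text:

```python
def wenner_sequence(n_elec, s):
    """All Wenner measurements at spacing factor s ("1a" -> s=1, etc.).
    Each tuple is (C1, P1, P2, C2) electrode indices: i, i+s, i+2s, i+3s."""
    return [(i, i + s, i + 2 * s, i + 3 * s)
            for i in range(1, n_elec - 3 * s + 1)]

print(len(wenner_sequence(20, 1)))  # 17 measurements at "1a" spacing
print(len(wenner_sequence(20, 2)))  # 14 measurements at "2a" spacing
print(wenner_sequence(20, 1)[0])    # (1, 2, 3, 4), the first measurement
```

The last tuples come out as (17, 18, 19, 20) for "1a" and (14, 16, 18, 20) for "2a", matching the sequence described in the text.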

For the dipole-dipole, Wenner-Schlumberger and pole-dipole arrays (Figure 2),
the survey procedure is slightly different. As an example, for the dipole-dipole array,
the measurement usually starts with a spacing of "1a" between the C1-C2 (and also the
P1-P2) electrodes. The first sequence of measurements is made with a value of 1 for the
"n" factor (which is the ratio of the distance between the C1-P1 electrodes to the C1-C2
dipole spacing), followed by "n" equal to 2 while keeping the C1-C2 dipole pair spacing
fixed at "1a". When "n" is equal to 2, the distance of the C1 electrode from the P1
electrode is twice the C1-C2 dipole pair spacing. For subsequent measurements, the "n"
spacing factor is usually increased to a maximum value of about 6, after which accurate
measurements of the potential are difficult due to very low potential values. To increase
the depth of investigation, the spacing between the C1-C2 dipole pair is increased to "2a"
and another series of measurements with different values of "n" is made. If necessary,
this can be repeated with larger values of the spacing of the C1-C2 (and P1-P2) dipole
pairs. A similar survey technique can be used for the Wenner-Schlumberger and pole-
dipole arrays, where different combinations of the "a" spacing and "n" factor can be used.
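The same kind of enumeration can sketch the dipole-dipole sequence: for each "n" from 1 up to about 6, a C2-C1 … P1-P2 arrangement is slid along the line. Positions below are electrode indices, and the total count is illustrative of a 20-electrode layout with "1a" dipoles, not a figure taken from the text:

```python
def dipole_dipole_sequence(n_elec, a=1, n_max=6):
    """Enumerate (C1, C2, P1, P2) electrode indices for a dipole-dipole
    survey: dipole length a (in electrode spacings), C1-P1 gap of n*a."""
    seq = []
    for n in range(1, n_max + 1):
        for c2 in range(1, n_elec + 1):
            c1 = c2 + a
            p1 = c1 + n * a
            p2 = p1 + a
            if p2 <= n_elec:
                seq.append((c1, c2, p1, p2))
    return seq

seq = dipole_dipole_sequence(20)
print(seq[0])    # (2, 1, 3, 4): n = 1, dipoles at the start of the line
print(len(seq))  # 87 measurements in total for this layout
```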
One technique used to extend horizontally the area covered by the survey,
particularly for a system with a limited number of electrodes, is the roll-along method.
After completing the sequence of measurements, the cable is moved past one end of the
line by several unit electrode spacings. All the measurements which involve the
electrodes on the part of the cable which does not overlap the original end of the survey
line are repeated (Figure 6).
4.7 Horizontal profiling
Surveys of lateral variations in resistivity can be useful for the investigation of
any geological features that can be expected to offer resistivity contrasts with their
surroundings. Deposits of gravel, particularly if unsaturated, have high resistivity and
have been successfully prospected for by resistivity methods. Steeply dipping faults may
be located by resistivity traverses crossing the suspected fault line, if there is sufficient
resistivity contrast between the rocks on the two sides of the fault. Solution cavities or
joint openings may be detected as a high-resistivity anomaly if they are open, or a low-
resistivity anomaly if they are filled with soil or water. Resistivity surveys for the
investigation of areal geology are made with fixed electrode spacing, by moving the array
between successive measurements. Horizontal profiling, per se, means moving the array
along a line of traverse, although horizontal variations may also be investigated by
individual measurements made at the points of a grid. If a symmetrical array, such as the
Schlumberger or Wenner array, is used, the resistivity value obtained is associated with
the location of the center of the array. Normally, a vertical sounding would be made first
to determine the best electrode spacing. Any available geological information, such as the
depth of the features of interest, should also be considered in making this decision, which
governs the effective depth of investigation. Occasionally, a combination of vertical and
horizontal methods may be used. Where mapping of the depth to bedrock is desired, a
vertical sounding may be done at each of a set of grid points.
4.8 Pseudosection data plotting method
To plot the data from a 2-D imaging survey, the pseudosection contouring
method is normally used. In this case, the horizontal location of the plotting point is
placed at the mid-point of the set of electrodes used to make that measurement. The
vertical location of the plotting point is placed at a distance which is proportional to the
separation between the electrodes. For IP surveys using the dipole-dipole array, one
common convention is to place the plotting point at the intersection of two lines starting
from the mid-points of the C1-C2 and P1-P2 dipole pairs, each at a 45° angle to the
horizontal. It is important to emphasize that this is merely a plotting convention, and it
does not imply that the depth of investigation is given by the point of intersection of the
two 45° lines (it certainly does not imply that the current flow or isopotential lines make
a 45° angle with the surface). Surprisingly, this is still a common misconception,
particularly in North America.
Another method is to place the vertical position of the plotting point at the median depth
of investigation (Edwards 1977), or pseudodepth, of the electrode array used. This
pseudodepth value is based on the sensitivity values, or Frechet derivative, for a
homogeneous half-space. Since it appears to have some mathematical basis, this is the
method used in plotting the pseudosections, particularly for field data sets, in the later
part of these lecture notes. The pseudosection plot obtained by contouring the apparent
resistivity values is a convenient means to display the data.
The pseudosection gives a very approximate picture of the true subsurface resistivity
distribution. However the pseudosection gives a distorted picture of the subsurface
because the shape of the contours depend on the type of array used as well as the true
subsurface resistivity (Figure 7). The pseudosection is useful as a means to present the
measured apparent resistivity values in a pictorial form, and as an initial guide for further
quantitative interpretation. One common mistake made is to try to use the pseudosection
as a final picture of the true subsurface resistivity. As Figure 7 shows, different arrays
used to map the same region can give rise to very different contour shapes in the
pseudosection plot. Figure 7 also gives you an idea of the data coverage that can be
obtained with different arrays. Note that the pole-pole array gives the widest horizontal
coverage, while the coverage obtained by the Wenner array decreases much more rapidly
with increasing electrode spacing. One useful practical application of the pseudosection
plot is for picking out bad apparent resistivity measurements. Such bad measurements
usually stand out as points with unusually high or low values.
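The 45° plotting convention described above can be sketched in a few lines of Python. The electrode positions and spacings below are assumed, illustrative values, not data from the text:

```python
# Sketch of the 45-degree plotting convention for a dipole-dipole array.
# 'a' is the dipole length and 'n' the dipole separation factor; both are
# assumed values chosen only for illustration.

def dipole_dipole_plot_point(c2, c1, p1, p2):
    """Return (x, pseudodepth) for one dipole-dipole reading.

    The point lies where two 45-degree lines drawn down from the
    mid-points of the C1-C2 and P1-P2 pairs intersect.  This is only a
    plotting convention, not a true depth of investigation.
    """
    mid_c = (c1 + c2) / 2.0           # mid-point of current dipole
    mid_p = (p1 + p2) / 2.0           # mid-point of potential dipole
    x = (mid_c + mid_p) / 2.0         # horizontal plotting position
    depth = abs(mid_p - mid_c) / 2.0  # intersection of the 45-degree lines
    return x, depth

a, n = 10.0, 3                        # assumed spacings (metres)
c2, c1 = 0.0, a                       # current electrodes
p1, p2 = c1 + n * a, c1 + n * a + a   # potential electrodes
print(dipole_dipole_plot_point(c2, c1, p1, p2))  # → (25.0, 20.0)
```

Note that the pseudodepth grows with the dipole separation, which is why deeper rows of a pseudosection correspond to larger n values.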

Gravity Method Chapter 5

5.1 Introduction
According to the law enunciated by Isaac Newton, two particles of masses m1 and m2
separated by a distance r attract each other with a force F which is directly proportional
to the product of their masses and inversely proportional to the square of the distance
between them. This law may be expressed in the form of the following equation:
F = -G (m1 m2 / r²) r̂ …(5.1)

Where F is the force on m2,
r̂ is a unit vector directed from m1 towards m2,
r is the distance between m1 and m2, and
G is the Universal Gravitational constant
= 6.67 x 10⁻¹¹ Nm²/kg²
The minus sign arises because the force is always attractive.
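The magnitude of the force in equation (5.1) can be checked numerically. The masses and separation below are illustrative values, not from the text:

```python
# Numerical check of equation (5.1): magnitude of the attraction between
# two masses.  The masses and separation are illustrative values.
G = 6.67e-11  # universal gravitational constant, N m^2 / kg^2

def gravitational_force(m1, m2, r):
    """Magnitude of the attractive force, F = G*m1*m2/r**2."""
    return G * m1 * m2 / r**2

# Two 1000 kg masses 1 m apart attract with only ~6.7e-5 N, which is
# why G is so difficult to measure in the laboratory.
print(gravitational_force(1000.0, 1000.0, 1.0))  # → 6.67e-05
```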
Gravity prospecting is a natural source method in which local variations in the density of
rocks near the surface cause minute changes in the main gravitational field. The method
involves the measurement of these variations in the gravitational field of the earth. The main
objective of the surveys is to look for variations that are generated by the differences in
density between the subsurface rocks.
5.2 Accelerations due to Gravity :
Let the mass of the earth be 'Me' and let 'Re' be the radius of the earth. The earth's attraction
for a small mass 'ms' at its surface, by Newton's Law of Gravitation, gives the acceleration
g = (G Me / Re²) R̂e
where R̂e is the unit vector extending outward from the center of the earth along the
radius. This acceleration is called the acceleration due to gravity.
If the earth were spherical, non-rotating and homogenous, then gravity would be
constant. However, the Earth’s ellipsoidal shape, rotation irregular surface relief and
internal mass distribution cause gravity to vary over the Earth’s surface.
5.3 Units of Gravity
Substituting the values of G, Me and Re in the above equation, we get g = 9.8 m/sec², or 980
cm/sec² (which is the gravity at the surface of the Earth).
In honor of Galileo, the unit of acceleration of gravity, 1 cm/sec², is called the gal.
Gravity variations are very small; hence these variations are measured in
'milligals' (mGal).
1 cm/sec² = 1 gal
1 mGal = 10⁻³ gal
Another unit is the 'Gravity Unit' (gu):
1 gu = 10⁻⁶ m/sec²
1 gu = 10⁻⁴ cm/sec² = 10⁻⁴ gal = 10⁻¹ mGal
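The substitution of G, Me and Re can be reproduced directly. The mass and radius of the Earth below are standard textbook values, assumed here since the section does not quote them explicitly:

```python
# Sketch: substituting G, Me and Re into g = G*Me/Re**2 and converting
# to the units of Section 5.3.  Me and Re are standard textbook values
# (assumptions, not quoted in the text).
G  = 6.67e-11   # N m^2 / kg^2
Me = 5.97e24    # mass of the Earth, kg
Re = 6.37e6     # mean radius of the Earth, m

g_si   = G * Me / Re**2   # m/s^2, ~9.8
g_gal  = g_si * 100.0     # 1 gal = 1 cm/s^2, ~980
g_mgal = g_gal * 1000.0   # ~9.8e5 mGal
print(round(g_si, 2), round(g_gal, 1))
```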
5.4 Gravity Anomalies
Gravity observations can be used to interpret changes in mass below different
regions of the Earth. To see the mass differences, the broad changes in gravity from
equator to pole must be subtracted from the station observations. This is accomplished by
predicting the gravity value for a station's latitude (theoretical gravity), then
subtracting that value from the actual value at the station (observed gravity), yielding a
gravity anomaly.
5.5 Theoretical Gravity
The average value of gravity for a given latitude is approximated by the 1967
Reference Gravity Formula, adopted by the International Association of Geodesy:

gt = ge (1 + 0.005278895 sin² Ф + 0.000023462 sin⁴ Ф)


Where
gt = theoretical gravity for the latitude of the observation point (mGal)
ge =theoretical gravity at the equator (978031.85 mGal)
Ф =latitude of the observation point (degrees).
The equation takes into account the fact that the Earth is an imperfect sphere, bulging out
at the equator and rotating about an axis through the poles.
For such an oblate spheroid, it gives a gravitational acceleration at the equator
(Ф = 0°) of 978031.85 mGal, gradually increasing with latitude to 983217.72 mGal
at the pole (Ф = 90°).
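The 1967 formula is easy to evaluate, and doing so recovers the equatorial and polar values quoted above:

```python
import math

# The 1967 reference gravity formula from Section 5.5; evaluating it
# reproduces the quoted equatorial and polar values.
GE = 978031.85  # theoretical gravity at the equator, mGal

def theoretical_gravity(lat_deg):
    """g_t in mGal for geographic latitude in degrees."""
    s = math.sin(math.radians(lat_deg))
    return GE * (1 + 0.005278895 * s**2 + 0.000023462 * s**4)

print(theoretical_gravity(0.0))   # 978031.85 mGal at the equator
print(theoretical_gravity(90.0))  # ~983217.72 mGal at the pole
```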
5.6 The Geoid
In the International Reference Ellipsoid it is assumed that there are no undulations in the
Earth's surface. In fact we have mean continental elevations of about 500 meters, and
maximum land elevations and oceanic depressions of the order of 9000 meters. Obviously
the true sea level is influenced by these variations. To account for this, geodesists defined
the geoid as the average sea level over the oceans, continued over the surface of sea water
which would lie in canals cut through the land masses.
The figures show the differences between the geoid and the reference spheroid.
Differences between the geoid and the reference spheroid are due to lateral density
variations, and the two surfaces do not coincide at all points. The geoid is warped
upwards under the continents and downwards over the oceans. The displacement between
the geoid and the spheroid is called the 'Geoid undulation'.
5.7 Factors affecting gravity
The shape of the earth is an oblate spheroid, i.e. bulging at the equator and flattened at the
poles, and it is rotating as well. Because of these factors the value of 'g' would not be the same
everywhere even if the Earth were made up of homogeneous material. The effect of these
factors can be attributed to:
1 Distance: The distance from the center of the Earth to the surface is not the
same everywhere because of the ellipsoidal shape. It is more at the equator than at the poles.
Thus the attraction is not the same at each point on the surface,
i.e. gp > ge
2 Mass: According to the gravitational law, the extra mass concentrated beneath the
equatorial bulge acts to increase gravity at the equator, partially offsetting the
distance effect.
3 Rotation: The net force acting on an object resting at the surface of a rotating body is
given by:
Net force = force of gravity – centrifugal force
= mg – mrω²
where r is the distance from the rotation axis. At the poles r is zero, so the centrifugal
force vanishes, the net force is larger, and hence
gp > ge
Therefore we can conclude that gravity varies with latitude and is more at the
poles than at the equator:
'g' at poles = 983217.72 mGal
'g' at equator = 978031.85 mGal
But in actuality the Earth is not made up of homogeneous material. So the various factors
affecting gravity at any point on the Earth's surface are latitude, elevation,
topography of the surrounding terrain, Earth tides and variations in density in the
subsurface. The last factor is the only one of significance in gravity exploration, and
its effect is generally much smaller than that of the other four combined.
5.8 MEASUREMENT OF GRAVITY
Gravitational acceleration on the Earth's surface can be measured in absolute and
relative senses. Absolute gravity reflects the actual acceleration of an object as it
falls towards the Earth's surface, while relative gravity is the difference in gravitational
acceleration at one station compared to another.
5.8.1 Absolute measurement of gravity
There are two basic ways to measure absolute gravity. In the weight drop
method, the velocity and displacement are measured for an object in free fall; in the
pendulum method, gravity is obtained from the swing period of a pendulum of known geometry.
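The weight-drop principle reduces to simple kinematics: a mass in free fall obeys d = ½gt², so timing its fall over a known distance gives g. The drop distance and time below are made-up illustrative numbers, not real instrument data:

```python
# Minimal sketch of the weight-drop principle: a body in free fall obeys
# d = 0.5 * g * t**2, so g = 2*d/t**2.  Numbers are illustrative only.

def g_from_drop(distance_m, time_s):
    """Absolute gravity from one free-fall measurement (vacuum assumed)."""
    return 2.0 * distance_m / time_s**2

# A drop of 0.4905 m taking about 0.3163 s implies g close to 9.81 m/s^2.
print(g_from_drop(0.4905, 0.3163))
```

Real absolute gravimeters repeat such drops many times in a vacuum chamber and time the fall interferometrically to reach microgal precision.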
5.8.2 Relative measurement of gravity:
In relative measurement of gravity, the change in gravity from one place to
another place is measured. In geophysical studies (especially in gravity prospecting) it
is necessary to measure accurately the small changes in gravity caused by
underground structures. Gravity surveying is usually carried out with a portable
instrument called ‘Gravimeter’,which determines the variation of gravity relative to
one or more reference locations. Thus in gravity surveying we measure gravity
anomalies. These require an instrumental sensitivity of the order of 0.01 mGal.
5.9 Gravity surveying
Gravity surveying uses the lateral variations in 'g' (i.e. gravity anomalies) to
investigate the densities and the structure of subsurface rocks. Thus, in a gravity survey
we look for gravity anomalies. The difference between the observed and theoretical
gravity values is called the 'Gravity Anomaly'.
A gravity survey is conducted by making gravimeter readings at many locations
in the area of interest. Also the position (i.e. latitude, longitude and height) of each
observation site is noted. Also, we must note the time when each reading was made.
5.10 Instruments
The normal gravity, as has already been pointed out, varies from 978 to 983 gals,
i.e. a range of about five thousand mGal. The anomalies caused by density changes
of rocks and minerals hardly exceed a few mGal and are sometimes even less than a
tenth of a mGal. For prospecting, one need not measure the absolute values of gravity;
the relative changes of gravity from one station to the other are determined. The
gravity instruments must be highly sensitive, as they should be capable of
measuring 0.01 mGal. Therefore, only precision instruments of the highest quality can
be useful. Instruments capable of giving this accuracy are available.
The instruments for gravity prospecting may be divided into three types:
(a) Pendulum;
(b) Torsion Balance;
(c) Gravimeters (also called Gravitymeters).
5.11 GRAVIMETER OR GRAVITY METER
These are instruments designed to measure differences in gravity between an observation
site and a reference site. Most of these instruments consist of some arrangement of masses
supported by springs.

Principle
The basic principle of a gravimeter is simple. Suppose that a mass ‘m’ is attached to a
spring (as shown above). The length ‘x’ that a spring stretches depends on the force
pulling that spring. Here the force ‘mg’ depends on the gravitational attraction of the
Earth. It is balanced by an upward supporting force ‘kx’ exerted by the spring (where, k =
spring constant).

Now if this system is moved to a different place, then the change in gravity 'Δg'
should produce a proportional change 'Δx' in the stretch of the spring, i.e.
k Δx = m Δg …(5.6)

'Δx' must be measured precisely. An instrument based on this principle must be portable
and easy to operate. It must be sensitive to gravity differences of 0.1 to 0.01 mGal.
Obviously the mass should not be too heavy and the spring cannot be too long, if the
instrument is to be portable.
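Equation (5.6) shows just how small the spring extension is in practice. The proof mass and spring constant below are assumed values chosen only to illustrate the order of magnitude:

```python
# Sketch of the spring-balance principle of equation (5.6):
# at equilibrium k*x = m*g, so a gravity change dg gives dx = m*dg/k.
# The proof mass and spring constant are assumed illustrative values.
m = 0.01     # proof mass, kg
k = 5.0      # spring constant, N/m

dg = 0.1e-5  # a 0.1 mGal change expressed in m/s^2 (1 mGal = 1e-5 m/s^2)
dx = m * dg / k
print(dx)    # ~2e-9 m: nanometre-scale motion
```

An extension of a few nanometres cannot be read directly, which is why astatic designs and the zero-length spring described below are used to amplify the movement.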

The sensitivity of early gravimeters, called 'stable' or 'static' gravimeters, was restricted
because the spring had to serve a dual function:
• To support the mass
• To act as a measuring device
The problem was overcome in modern instruments called 'unstable' or 'astatic'
gravimeters, which employ an additional force that acts in the same sense as the
extension or contraction of the spring and thus amplifies the movement directly. These
are the different types of gravimeters used:
» La Coste – Romberg gravimeter
» Worden gravimeter
» Sodin gravimeter

The sensitivity of the instrument can be significantly increased by using a 'zero-length
spring'. A zero-length spring is defined as one in which the tension is proportional to
the length of the spring, i.e. if all the external forces were removed the spring would
collapse to zero length.
5.12 Gravity reduction
Before the results of a gravity survey can be interpreted it is necessary to
correct for all variations in the Earth’s gravitational field which do not result from the
differences of density in the underlying rocks. This process is known as gravity reduction
or reduction to the geoid, as sea-level is usually the most convenient datum level.
We know that gravity readings obtained in the field are influenced by
instrumental drift as well, whereas we look for variations due only to density changes in the
subsurface. So these field observations must be corrected for the variations in latitude,
elevation, topography, earth tides, etc., so that they are reduced to the values they would
have on some datum surface. The various corrections applied are:
1. Drift correction
The changes in reading that would occur throughout the day with the gravimeter
kept at the same place are called 'drift'. This change arises mainly from slow creep
of the spring. The drift is measured by periodically returning the gravimeter to the
base station to take a reading, drawing a drift curve, and interpolating the changes for the
times when readings were taken at each station.
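The interpolation between base-station readings can be sketched as a linear drift correction. The times and meter readings below are made-up illustrative numbers:

```python
# Sketch of a linear drift correction: base-station readings bracket the
# survey, and each field reading is corrected by the drift accumulated up
# to its observation time.  Times and readings are made-up numbers.

def drift_correction(t, t0, t1, base0, base1):
    """Drift (mGal) accumulated at time t, interpolated linearly."""
    rate = (base1 - base0) / (t1 - t0)   # drift rate, mGal per hour
    return rate * (t - t0)

t0, t1 = 8.0, 12.0               # base readings at 08:00 and 12:00 (hours)
base0, base1 = 2431.20, 2431.60  # gravimeter readings at the base, mGal
reading = 2440.00                # field reading taken at 10:00
corrected = reading - drift_correction(10.0, t0, t1, base0, base1)
print(corrected)                 # drift of 0.20 mGal removed
```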
2. Latitude correction
Gravity varies with latitude due to two factors:
a) the shape of the earth, and
b) the variation of the centrifugal acceleration from equator to poles.
The net effect of these two factors is that the gravity at the poles exceeds the gravity
at the equator by about 5200 mGal. As both factors vary with the latitude 'Ф' of the gravity
station, they can be combined in one formula, called the 'International Gravity Formula',
given by:
gN = 978031.85 (1 + 0.005278895 sin² Ф + 0.000023462 sin⁴ Ф) …(5.7)
where 'gN' is the gravity (in mGal) at latitude 'Ф'. This gives the predicted
value of gravity at sea level at any point on the Earth's surface, and it is
subtracted from the observed gravity to correct for the latitude variation.
3. Free Air Correction
It removes the effect of elevation. If the gravimeter is at an elevation, then the
measured gravity value will be lower, due to the increased distance from the
earth's center. Thus the free-air correction corrects for the decrease in gravity with
height. To reduce an observation taken at a height 'h' to the datum, we apply the
free-air correction (FAC), given by:
FAC = 0.3086h mGal …(5.8)
where 'h' is in meters. Thus, we can say that gravity decreases by 0.3086 mGal
per meter rise in elevation.
4. Bouguer Correction
The free-air correction accounts for the effect of elevation only; it does not
account for the gravitational effect of the rock present between the observation point
and the datum. The Bouguer correction removes this effect by approximating the
rock layer beneath the observation point as an infinite slab or sheet having a
thickness equal to the elevation of the observation point above the datum plane. If 'ρ'
is the density of the rock layer below, then the Bouguer correction (B.C.) is given
by:
B.C. = 2πGρh = 0.04192ρh mGal …(5.9)
where 'h' is in meters and 'ρ' is in g/cm³.
Now, as both B.C. and FAC depend upon 'h', they can be replaced by a
combined correction called the 'Elevation Correction' (E.C.), such that:
E.C. = (0.3086 – 0.04192ρ)h mGal
5. Terrain Correction
The Bouguer correction assumes that the topography around the gravity station is flat.
This is rarely the case, and so a correction must be applied to account for the
topographic relief in the vicinity of the gravity station.

This correction is called the 'Terrain Correction' and it is always positive. The Bouguer
correction overcorrects the data wherever the slab does not actually consist of rock, and
this effect is removed by the positive terrain correction.
In practice, a terrain correction template is placed over a topographic map with its center
at the gravity observation site.
6. Tidal Correction
Gravity measured at a fixed location varies with time because of the periodic
variation in the gravitational attraction of the Sun and the Moon associated with their
orbital motions. Corrections applied for this variation are called 'Tidal
Corrections'. The gravitational attraction of the Moon is larger than that of the Sun
because of its proximity. These effects give rise to the 'Earth tides', which cause
the elevation of an observation point to be altered by a few centimeters. Such periodic
gravity variations caused by the combined effects of the Sun and the Moon are
called 'Tidal Variations'. These effects are predictable and published every year.
7. Eotvos Correction
This correction is needed only if gravity is measured on a moving vehicle such as a
ship or an aircraft. Depending on the direction of motion, a centripetal acceleration is
generated which either reinforces or opposes gravity. The Eotvos correction is given
by:
EC = 7.503V cos Ф sin α + 0.004154V² mGal
where V = velocity of the vehicle in km/hr,
Ф = latitude, and
α = azimuth of the direction of motion (measured from north).
Only the east–west component of the motion matters. The correction is added for
eastward motion and subtracted for westward motion.
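A commonly quoted form of the Eotvos correction (e.g. in Telford et al.'s Applied Geophysics) is EC = 7.503 V cos Ф sin α + 0.004154 V² mGal, with V in km/h and α the course azimuth from north. The sketch below uses those constants, which should be verified against a reference before use; the ship speed and latitude are illustrative:

```python
import math

# Hedged sketch of the standard Eotvos formula (constants as quoted in
# common textbooks; verify before use):
#   EC = 7.503*V*cos(lat)*sin(alpha) + 0.004154*V**2   (mGal)
# V in km/h, alpha = course azimuth measured from north.

def eotvos_correction(v_kmh, lat_deg, azimuth_deg):
    lat = math.radians(lat_deg)
    az = math.radians(azimuth_deg)
    return 7.503 * v_kmh * math.cos(lat) * math.sin(az) + 0.004154 * v_kmh**2

# A ship steaming due east (azimuth 90 deg) at 20 km/h at latitude 30 N
# needs a correction of roughly 130 mGal -- far larger than typical
# anomalies, which is why the correction is essential at sea.
print(eotvos_correction(20.0, 30.0, 90.0))
```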
8. Isostatic Correction
In the 18th and 19th centuries, surveys were carried out to measure the shape of the
earth. The deflection of the plumb bob toward the mountains was smaller than expected.
It was assumed that the observed deflection could be explained if the excess mountain
mass were matched by an equal mass deficiency beneath.
Although not applied in small surveys, this correction is very important in regional studies.
Bouguer anomaly:
The Bouguer anomaly forms the basis for the interpretation of gravity data. It is
calculated as:
B.A. = gobs – gФ + FAC – B.C. = gobs – gФ + E.C.
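The corrections above can be combined numerically. The sketch below computes a simple Bouguer anomaly as observed gravity minus theoretical gravity, plus the free-air correction, minus the Bouguer slab correction (terrain and tidal terms omitted). The station values and the common crustal density of 2.67 g/cm³ are assumptions for illustration:

```python
import math

# Putting the reductions together:
#   B.A. = g_obs - g_theoretical + FAC - B.C.
# Terrain and tidal corrections are omitted.  The station values and the
# density of 2.67 g/cm^3 are illustrative assumptions.

def bouguer_anomaly(g_obs, lat_deg, h_m, rho=2.67):
    """Simple Bouguer anomaly in mGal; rho in g/cm^3, h in metres."""
    s = math.sin(math.radians(lat_deg))
    g_t = 978031.85 * (1 + 0.005278895 * s**2 + 0.000023462 * s**4)
    fac = 0.3086 * h_m            # free-air correction, added back
    bc = 0.04192 * rho * h_m      # Bouguer slab correction, subtracted
    return g_obs - g_t + fac - bc

# A hypothetical station at 45 N, 250 m elevation:
print(bouguer_anomaly(979000.00, 45.0, 250.0))
```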
5.13 REGIONAL AND RESIDUAL ANOMALIES
A gravity anomaly results from the inhomogeneous distribution of density in the earth.
The appearance of a gravity anomaly is affected by the dimensions, density contrast and
depth of the anomalous body. Bouguer anomaly fields are often characterized by a
broad, gently varying regional anomaly on which may be superimposed a number of
local anomalies. Usually in gravity surveying we are interested in the local anomalies;
thus, the first step in interpretation is the removal of the regional anomaly.

Chapter 6
Gas Hydrate (“The Resource of Future Fuel”)
6.1 Introduction:
» The ever increasing demand for fossil fuel resources and the depletion of
global energy reserves have necessitated looking for possible alternative
resources. Among these, 'GAS HYDRATES' are being given
much attention in the present scenario.
» Gas hydrates are the solid substances composed of water and low
molecular weight hydrocarbons (mainly methane) and are also known as
METHANE CLATHRATES.
» They are found worldwide, in the polar and oceanic sediments, where
temperature is low enough and pressure is sufficiently high to crystallize the
methane into gas hydrates.
» The study of gas hydrates has attracted the attention of scientific
community worldwide because of their widespread occurrence, potential as
future energy resources, possible role in climate change and submarine
geohazards.
» Methane stored within and trapped below the hydrated sediments has large
energy potential. It is estimated to be twice the amount of the total fossil
fuel energy reserves in the world.
6.2 What is Gas Hydrate ?
Gas hydrates, also called gas clathrates, are naturally occurring ice-
like crystalline solids composed of water molecules forming a rigid lattice
of cages, with most of the cages each containing a molecule of natural gas,
mainly methane (other gases may be carbon dioxide, hydrogen
sulphide or other low-molecular-weight hydrocarbons). Thus gas molecules are trapped
within the framework of cages of water molecules, which form at low
temperature and high pressure.
6.3 What is Clathrate ?
(or alternatively gas clathrates, gas hydrates, clathrates, hydrates, etc.):
These are a class of solids in which gas molecules occupy "cages" made
up of hydrogen-bonded water molecules. These "cages" are unstable
when empty, collapsing into the conventional ice crystal structure, but they are
stabilized by the inclusion of appropriately sized molecules within them.
Most low-molecular-weight gases (including O2, N2, CO2, CH4, H2S, Ar,
etc.) can form hydrates under suitable conditions.
6.4 Necessary Conditions For the Hydrate Formation :
Hydrates are ice-like solids that form when four necessary conditions for
hydrate formation and stability are met:
i) Adequate supplies of water
Given appropriate temperature and pressure conditions, hydrates will only
form if sufficient amount of water is present.
ii) Excess availability of methane, more than its solubility in water:
At given appropriate temperature and pressure conditions, hydrate will
only form with adequate supplies of water and sufficient methane, such that a
necessary minimal percentage (roughly 70% or more) of the structural
cavities within the hydrate lattice is filled.
iii) Suitable temperature and pressure conditions:
Given adequate supplies of water and methane, a phase curve represents the
stability of gas hydrate in sea water. It shows the combinations of
temperature and pressure that mark the transition from a system of
coexisting free methane gas and water/ice to solid methane hydrate. Temperature and
pressure are two of the major factors controlling whether solid hydrate
or methane gas will be stable.
When conditions move to the left across the boundary, hydrate formation
will occur. Moving to the right across the boundary results in the dissociation
of the hydrate structure and release of free water and methane.
iv) Geo-chemical effects:-
In addition to temperatures and pressure the composition of both the water
and the gas are critically important for the stability of gas hydrates in
specific settings. Experimental data collected so far have included both
freshwater and seawater. However, natural subsurface environments exhibit
significant variations in the chemistry of formation water, and these changes
shift the pressure/temperature phase boundary. Similarly, the presence of
small amounts of other natural gases, such as carbon dioxide (CO2),
hydrogen sulphide (H2S) and hydrocarbons such as ethane (C2H6) will
increase the stability of the hydrate.
6.6 Mechanism of Hydrate formation
Methane is formed in two ways:
» Biogenic
» Thermogenic
1. Biogenic methane is the common byproduct of the bacterial digestion of
organic matter, governed by the equation:
(CH2O)106(NH3)16(H3PO4) → 53CO2 + 53CH4 + 16NH3 + H3PO4
The above reaction shows how methane is produced in shallow
subsurface environments through biological alteration of organic matter.
The equation summarizes successive stages of oxidation and reduction by
nitrates, sulphates and carbonates. The same process produces methane in
swamps, landfills, rice paddies and the digestive tracts of mammals. It occurs
continually within buried sediments in geological environments all around,
and is considered to be the primary source of the methane trapped within and
below hydrated sediments in shallow seafloor sediments.
2. Thermogenic methane is produced by the combined action of
heat, pressure and time on buried organic material. In the geological past,
conditions have periodically recurred in which vast amounts of organic
matter were preserved within the sediments of shallow inland seas. Over
time and with deep burial, these organic-rich source beds are literally
pressure-cooked, the output being the production of large quantities of
oil and natural gas. Along with the oil, the gas slowly migrates upwards due
to its buoyancy. If sufficient quantities reach the zone of hydrate stability,
the gas will combine with local formation water to form hydrate.
6.7 Physical properties of Gas Hydrates:
Summary of published values for acoustic properties in pure hydrates, water-
saturated sediment, gas-hydrated sediments, and gas-bearing sediments
(modified from Anderson, 1992).
6.8 Detection of Gas Hydrates :-
Fig. Simple synthetic seismogram that reproduces the main features of the
BSRs. The seafloor reflection results from the density contrast, and the BSR
mainly from the velocity contrast.
Although gas hydrate has been recognized in drilled cores, its presence over
large areas can be detected much more efficiently by acoustic methods, using
seismic-reflection profiles. Hydrate has a very strong effect on acoustic
reflections because it has a high acoustic velocity (approximately 3.3 km/s,
about twice that of sea-floor sediments), and thus grains cemented with
hydrate produce a high-velocity deposit due to the mixing of hydrate with the
sediment.
6.9 Interpreted Seismic Profile:
6.9.1 The BSR:
The contrast in velocity created by the hydrate-cemented zone
produces a strong reflection called the "bottom simulating reflection" (BSR).
Lower velocities below the hydrates occur because the underlying water-
saturated sediments have lower velocities (water velocity is about 1.5 km/s)
and often contain gas trapped by the overlying, less porous hydrate-
cemented sediments. This contrast produces a strong reflection. Because the
base of the gas-hydrate stability zone occurs at an approximately uniform sub-
bottom depth throughout any small area, the well-defined seismic reflection
from the base of the zone roughly parallels the sea floor (hence "bottom
simulating").
6.9.2 Blanking
A second significant seismic characteristic of hydrate cementation is called
"blanking". Blanking is the reduction of the amplitude (strength) of seismic
reflections, apparently caused by hydrate cementation of the strata
that form the reflectors. The blanking effect occurs throughout the entire
hydrate-cemented zone and can be quantified to estimate the amount of gas
hydrate that is present.
6.10 Classification of Gas Hydrates
Hydrates are mainly of three types :-
I) Structure -I
II) Structure -II
III) Structure –H
Other types are also known and have been proposed, but they are not as common. The
crystal structures of hydrates are three-dimensional and very
complicated.
6.10.1 Structure – I:
Smaller guest molecules form the Structure-I hydrates. Structure-I
hydrate formers include:
Methane (CH4)
Ethane (C2H6)
Carbon dioxide (CO2)
Hydrogen sulphide (H2S)
Structure-I hydrates are made up of 8 polyhedral cages: 6 larger and 2
smaller ones. They contain 46 water molecules and thus have a
theoretical composition of 8X·46H2O, where X is the guest molecule.
Fig. Structure – I hydrate crystals form cages that can only hold small
hydrocarbon molecules inside. These commonly hold a single molecule of
methane (CH4)
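A quick back-of-envelope check on the Structure-I unit cell (8 guest cages, 46 water molecules, as stated above) gives the ideal methane content by mass for a fully occupied lattice:

```python
# Back-of-envelope check on the Structure-I unit cell described above:
# 8 guest cages and 46 water molecules.  With every cage holding one CH4
# molecule, the ideal methane mass fraction is:
M_CH4, M_H2O = 16.04, 18.02   # molar masses, g/mol

mass_gas = 8 * M_CH4          # 8 methane guests per unit cell
mass_water = 46 * M_H2O       # 46 water molecules per unit cell
fraction = mass_gas / (mass_gas + mass_water)
print(round(fraction, 3))     # ~0.134, i.e. ~13% methane by mass
```

This also gives the hydration number 46/8 = 5.75 waters per guest for a fully occupied lattice; real hydrates hold somewhat less methane because occupancy is incomplete.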
6.10.2 Structure – II:
Relatively larger guest molecules form Structure-II hydrates.
Structure-II hydrate formers include:
Propane
Isobutane
However, nitrogen, a relatively smaller molecule, may also form a
Structure-II hydrate. These hydrates are made up of 24 polyhedral cages: 8 larger
and 16 smaller ones. They contain 136 water molecules and thus
have a theoretical composition of 24X·136H2O.
Fig. Crystals with more complex structures can contain larger hydrocarbon
molecules. Structure-II hydrates were predicted to exist on the basis of
laboratory experiments, and were later discovered in nature at the Jolliet Field in the
Gulf of Mexico.
6.10.3 Structure – H :
Structure-H hydrates are formed only in the presence of both large and
small molecules. Structure-H hydrates are made up of 6 polyhedral
cages: 1 large, 3 medium and 2 smaller ones. The large molecule
occupies the large cage, while the small molecules occupy the small and
medium cages. They contain 34 water molecules and have a theoretical
composition of X·5Y·34H2O, where X is the large molecule and Y the small one.
Fig. Structure-H crystal cages can contain iso-pentane, a relatively large,
branched-chain hydrocarbon.
» The worldwide amount of methane in gas hydrates is estimated to contain
at least 1×10⁴ gigatons of carbon (a very conservative estimate). This is
about twice the amount of carbon held in all fossil fuels on earth.
» Locations of known and inferred gas hydrate occurrences in oceanic
sediments of outer continental margins and permafrost regions. Only a
limited number of gas hydrate deposits have been examined in any detail.
6.11 Importance of Gas Hydrates:
(Gas Hydrate: why do we study it?)
Gas hydrate is an important topic of study for the following three reasons:
1) It contains a great volume of methane, which indicates a potential as a future
energy resources:
When hydrate fills the pore space of sediment, it can reduce permeability and
create a gas trap. Such trapping of gas beneath the hydrate may cause the formation of the most
concentrated hydrate deposits, due to the presence of a reservoir of gas below the hydrate
zone. The gas can continually migrate upwards to fill any open pore spaces. This process,
in turn, causes the trap to become more effective, producing highly concentrated
methane and methane-hydrate reservoirs.
Gas from hydrate might become a major energy resource if economically profitable
techniques could be devised to extract its methane.
2) It may function as a source or sink for atmospheric methane, which may
influence global climate:
Methane from the hydrate reservoir might significantly modify the global
greenhouse, because methane is ~20 times as effective a greenhouse gas as carbon
dioxide, and gas hydrate may contain three orders of magnitude more methane than exists
in the present-day atmosphere. Because hydrate breakdown, causing release of methane
to the atmosphere can be related to pressure changes caused by glacial sea-level
fluctuations, gas hydrate may play a role in controlling long –term global climate change.
3) It can affect sediment strength, which can initiate landslides on the slope and
rise :
Gas hydrate apparently cements sediment, and therefore, it can have a significant effect
on sediment strength; its formation and breakdown may influence the occurrence and
location of submarine landslides. Such landslides may release methane into the
atmosphere, which may affect global climate.
Changes in water pressure due to sea-level changes may generate landslides by
converting the hydrate to gas plus water, causing significant weakening of the sediments
and generating a rise of pore pressure. Conversely, sea-floor landslides can cause
breakdown of hydrate by reducing the pressure in the sediments. These interacting processes
may cause cascading slides, which would result in breakdown of hydrate and release of
methane to the atmosphere.
6.12 Gas Storage Capacity :
The structure of methane hydrate packs methane molecules into a very dense
and compact arrangement. When dissociated at normal surface temperatures and
pressures, a unit volume of solid methane hydrate with 100 percent void occupancy by
methane will release roughly 164 volumes of methane gas at STP. It is to be noted that the
methane occupancy typically is about 70%. The maximum amount of methane that can
occur in methane hydrate is defined by the clathrate or lattice-structure geometry.
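The storage-capacity arithmetic can be sketched directly. The yield of roughly 160–170 gas volumes per hydrate volume at full occupancy is a commonly quoted literature figure (164 is assumed below), and the ~70% occupancy comes from the discussion above:

```python
# Sketch of the storage-capacity arithmetic: a fully occupied unit volume
# of methane hydrate yields roughly 160-170 volumes of gas at STP (the
# value 164 below is an assumed literature figure); real occupancies of
# about 70% reduce this proportionally.
FULL_YIELD = 164.0   # STP gas volumes per hydrate volume at 100% occupancy

def gas_yield(occupancy=0.70):
    """Approximate STP gas volumes released per unit hydrate volume."""
    return FULL_YIELD * occupancy

print(round(gas_yield(), 1))   # ~115 volumes at 70% occupancy
```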
6.13 Fire in the Ice:
It's fun to light gas hydrate with a match and watch the hydrocarbons burn like a
candle, leaving behind slightly salty water. Watching gas hydrate burn and produce heat
shows its value as an energy source. Right now there is worldwide interest in
exploiting energy from hydrates.

Magnetotelluric Method Chapter 7


7.1 Introduction
The magnetotelluric (MT) method is a natural source electromagnetic
method. Within and around the earth there exist large-scale, low-frequency
natural magnetic fields known as the 'magnetotelluric fields'. These fields
induce natural alternating electric currents within the earth, known as
'telluric currents'. These natural fields can be used in geophysical
prospecting. The magnetotelluric field yields conductivity information from
much greater depths than artificial-source induction methods. It has been
applied in the search for petroleum and for deep zones of mineralization in the
upper crust. It is an important method for the investigation of the structure of
the crust and upper mantle.
7.2 Generation of Magneto Telluric fields
Ultraviolet radiation from the sun ionizes the molecules of air in the
thin upper atmosphere of the earth. The ions accumulate in several layers,
forming the ionosphere. Electric currents in the ionosphere arise from the
motion of the ions, which is affected by various factors, like seasonal
variations in insolation, the 11-year sunspot cycle, and tides. The currents produce
varying magnetic fields with the same frequencies, called the 'primary magnetic
field'; thus, fluctuating electromagnetic fields originate in the ionosphere.
• These are partly reflected at the earth's surface, and the returning
fields are again reflected off the conducting ionosphere. This happens
repeatedly, so that the fields eventually have a strong vertical
component and may be regarded as vertically propagating plane waves
with a wide spectrum of frequencies. These fields penetrate into the
ground and induce currents known as the 'telluric currents'. These
currents flow in horizontal layers in the crust and the mantle, and in
turn generate magnetotelluric fields.
• High frequency signals originate in lightning activity
• Intermediate frequency signals come from ionospheric resonances
• Low frequency signals are generated by sun-spots
7.3 Field Equipment and Procedure
This method involves a comparison of the amplitudes and phases of the
electric and magnetic fields associated with the flow of telluric currents.
The telluric currents are detected with two pairs of electrodes, usually
oriented in the north–south and east–west directions. In addition, three
components of the magnetic field are measured: two horizontal components
and a vertical component. The equipment for magnetotelluric work is quite complicated.
The magnetic field is usually measured with a coil, consisting of many turns on a large-
diameter frame, or by using a sensitive fluxgate magnetometer. Since the
magnetic variations are in the milligamma range, the sensitivity
requirements on the magnetic sensors are very high.
The depth z to which a magnetotelluric field penetrates depends upon its
frequency f and the resistivity ρ of the subsurface. The depth is given by the
relation

z = k (ρ/f)^(1/2) …(7.1)

where k is a constant; thus the depth of penetration increases as the
frequency decreases. The amplitudes of the electric and magnetic fields,
E and H, are related through

ρa = (0.2/f)(E/H)^2 Ωm …(7.2)

where ρa is the apparent resistivity, E is the electric field intensity and H is
the magnetic field intensity. Thus the apparent resistivity varies inversely
with frequency. The calculation of ρa for a number of decreasing frequencies
therefore provides resistivity information at progressively increasing depths
and amounts to a kind of vertical sounding. Hence this method yields
conductivity information from much greater depths.
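Equations (7.1) and (7.2) are easy to evaluate numerically. The sketch below assumes the skin-depth value of the constant, k ≈ 503 for depth in metres with ρ in Ωm and f in Hz (the report leaves k unspecified), and the usual practical units for (7.2), E in mV/km and H in nT:

```python
import math

MU0 = 4e-7 * math.pi  # permeability of free space (H/m)

def penetration_depth_m(rho_ohm_m, f_hz):
    """Eq. (7.1), z = k*(rho/f)**0.5, taking k as the skin-depth
    constant sqrt(1/(pi*mu0)) ~ 503.3 (an assumption here)."""
    return math.sqrt(rho_ohm_m / (math.pi * f_hz * MU0))

def apparent_resistivity(f_hz, e_mv_per_km, h_nt):
    """Eq. (7.2), rho_a = (0.2/f)*(E/H)**2 ohm-m, for E in mV/km
    and H in nT (the usual practical units)."""
    return 0.2 / f_hz * (e_mv_per_km / h_nt) ** 2

# Example: 100 ohm-m ground probed at 1 Hz is sampled to ~5 km depth.
depth = penetration_depth_m(100.0, 1.0)
rho_a = apparent_resistivity(1.0, 10.0, 1.0)
```

Halving the frequency increases the penetration depth by a factor of √2, which is the basis of the sounding behaviour described above.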

7.4 Interpretation
The end result of an MT survey is a paper and/or magnetic-tape
record of electric and magnetic field variations. The recorded magnetic
fields consist of an external part originating in the ionosphere and an
internal part related to the induced current distribution. These components
must be separated analytically. The electric and magnetic records contain
numerous frequencies, of which some are simply noise and some are of
geophysical interest. As a result, sophisticated data processing is required,
involving power-spectrum analysis and filtering.
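As a toy illustration of this spectral processing step, the amplitude ratio |E/H| at a target frequency can be estimated from the two records with an FFT and converted to an apparent resistivity via equation (7.2); all signal parameters below are invented for the example:

```python
import numpy as np

# Invented synthetic records: a 1 Hz telluric signal sampled at 64 Hz.
fs, f0, n = 64.0, 1.0, 2048
t = np.arange(n) / fs
h_nt = np.cos(2 * np.pi * f0 * t)            # magnetic record (nT)
e_mv_km = 10.0 * np.cos(2 * np.pi * f0 * t)  # electric record (mV/km)

# Amplitude ratio |E/H| at f0 from the spectra, then Eq. (7.2).
spec_e = np.fft.rfft(e_mv_km)
spec_h = np.fft.rfft(h_nt)
k = int(round(f0 * n / fs))                  # FFT bin containing f0
ratio = abs(spec_e[k]) / abs(spec_h[k])
rho_a = 0.2 / f0 * ratio ** 2                # apparent resistivity (ohm-m)
```

Real processing also involves windowing, stacking of many segments and robust rejection of noisy ones, which this sketch omits.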

The interpretation of the magnetotelluric data can be done
either by inversion or by modeling. Modeling is a direct approach to solving
for the conductivity distribution. It assumes a conductivity model for
which a theoretical response is calculated and compared with the real
response. The parameters of the model are adjusted repeatedly to obtain the
most favorable fit to the observations. Interpretation via modeling
proceeds in a similar manner to the curve-matching techniques of the
resistivity method. Cagniard (1953) has given an expression for the apparent
resistivity in the case of two horizontal layers, in terms of z, the thickness
of the upper layer, m = (ωµ/ρ1)^(1/2) and n = (ρ2 + ρ1)e^(2zm)(ρ2 - ρ1).
With ρa/ρ1 as ordinate and T as abscissa, both on log scales, master curves
have been drawn for various values of ρ2/ρ1. By plotting the field results in
the form of ρa versus T on the same log paper as the master curves, it is
possible to solve for ρ2, ρ1 and z by superposition of the two curves.
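The two-layer response behind these master curves can be computed directly. The sketch below uses the standard 1-D impedance recursion, which is equivalent in spirit to Cagniard's closed-form expression (and works for any number of layers); it is an illustration added here, not the report's own formula:

```python
import cmath
import math

MU0 = 4e-7 * math.pi  # H/m

def mt_apparent_resistivity(rhos, thicknesses, period_s):
    """Surface apparent resistivity of a 1-D layered earth via the
    standard impedance recursion.
    rhos: layer resistivities (ohm-m), top layer first, half-space last.
    thicknesses: finite-layer thicknesses (m), one fewer than rhos."""
    omega = 2.0 * math.pi / period_s
    z = cmath.sqrt(1j * omega * MU0 * rhos[-1])  # half-space impedance
    for rho, h in zip(rhos[-2::-1], thicknesses[::-1]):
        z0 = cmath.sqrt(1j * omega * MU0 * rho)     # intrinsic impedance
        gamma = cmath.sqrt(1j * omega * MU0 / rho)  # propagation constant
        t = cmath.tanh(gamma * h)
        z = z0 * (z + z0 * t) / (z0 + z * t)
    return abs(z) ** 2 / (omega * MU0)
```

Sweeping the period for a fixed resistivity contrast reproduces a master curve; a uniform half-space returns its own resistivity at every period, a useful sanity check.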

Hydrology Chapter 8

8.1 Water cycle

The movement of water around, over, and through the Earth is called
the water cycle.

The Earth’s water is always in movement, and the water cycle, also known
as the hydrologic cycle, describes the continuous movement of water on,
above, and below the surface of the Earth. Since the water cycle is truly a
“cycle,” there is no beginning or end. Water can change state among liquid,
vapor, and ice at various places in the water cycle, with these processes
happening in the blink of an eye and over millions of years. Although the
balance of water on Earth remains fairly constant over time, individual
water molecules can come and go in a hurry.

Description

The water cycle has no starting or ending point. The sun, which drives
the water cycle, heats water in the oceans. Some of it evaporates as vapor
into the air; ice and snow can sublimate directly into water vapor. Rising air
currents take the vapor up into the atmosphere, along with water from
evapotranspiration, which is water transpired from plants and evaporated
from the soil. The vapor rises into the air, where cooler temperatures cause it
to condense into clouds. Air currents move clouds around the globe; cloud
particles collide, grow, and fall out of the sky as precipitation. Some
precipitation falls as snow and can accumulate as ice caps and glaciers,
which can store frozen water for thousands of years. Snowpacks in warmer
climates often thaw and melt when spring arrives, and the melted water
flows overland as snowmelt. Most precipitation falls back into the oceans or
onto land, where, due to gravity, it flows over the ground as
surface runoff. A portion of the runoff enters rivers in valleys in the landscape,
with streamflow moving water towards the oceans. Runoff and ground-
water seepage accumulate and are stored as freshwater in lakes. Not all
runoff flows into rivers; much of it soaks into the ground as infiltration.
Some water infiltrates deep into the ground and replenishes aquifers
(saturated subsurface rock), which store huge amounts of freshwater for long
periods of time. Some infiltration stays close to the land surface and can
seep back into surface-water bodies (and the ocean) as ground-water
discharge, and some ground water finds openings in the land surface and
emerges as freshwater springs. Over time, the water continues flowing, some
to reenter the ocean, where the water cycle renews itself.

8.2 The different processes are as follows

» Precipitation is condensed water vapor that falls to the Earth’s surface.
Most precipitation occurs as rain, but it also includes snow, hail, fog drip,
graupel, and sleet. Approximately 505,000 km3 of water falls as
precipitation each year, much of it over the oceans.

» Canopy interception is the precipitation that is intercepted by plant
foliage and eventually evaporates back to the atmosphere rather than falling
to the ground.
» Snowmelt refers to the runoff produced by melting snow.

» Runoff includes the variety of ways by which water moves across the
land. This includes both surface runoff and channel runoff. As it flows, the
water may infiltrate into the ground, evaporate into the air, become stored in
lakes or reservoirs, or be extracted for agricultural or other human uses.

» Infiltration is the flow of water from the ground surface into the ground.
Once infiltrated, the water becomes soil moisture or groundwater.

» Subsurface flow is the flow of water underground, in the vadose zone and
aquifers. Subsurface water may return to the surface (e.g. as a spring, or by
being pumped) or eventually seep into the oceans. Water returns to the land
surface at a lower elevation than where it infiltrated, under the force of gravity
or gravity-induced pressures. Groundwater tends to move slowly, and is
replenished slowly, so it can remain in aquifers for thousands of years.

» Evaporation is the transformation of water from the liquid to the gas phase
as it moves from the ground or bodies of water into the overlying atmosphere.
The source of energy for evaporation is primarily solar radiation. Evaporation
often implicitly includes transpiration from plants, though together they are
specifically referred to as evapotranspiration. Approximately 90% of
atmospheric water comes from evaporation, while the remaining 10% is
from transpiration. Total annual evapotranspiration amounts to
approximately 505,000 km3 of water, 434,000 km3 of which evaporates from
the oceans.

» Sublimation is the state change directly from solid water (snow or ice) to
water vapor.

» Advection is the movement of water, in solid, liquid or vapour states,
through the atmosphere. Without advection, water that evaporated over the
oceans could not precipitate over land.

» Condensation is the transformation of water vapour to liquid water


droplets in the air, producing clouds and fog.

» Reservoirs

In the context of the water cycle, a reservoir represents the water
contained in a different step of the cycle. The largest reservoir is the
collection of oceans, accounting for 97% of the Earth’s water. The next
largest quantity (2%) is stored in solid form in the ice caps and glaciers.
The water contained within all living organisms represents the smallest
reservoir.

8.3 GROUND WATER AND SUBSURFACE WATER

Most rock or soil near the earth’s surface is composed of solids and
voids. The voids are spaces between grains of sand, or cracks in dense rock.
All water beneath the land surface occurs within such void spaces and is
referred to as underground or subsurface water. Subsurface water occurs in
two different zones. One zone, located immediately beneath the land
surface in most areas, contains both water and air in the voids. This zone is
referred to as the unsaturated zone. Other names for the unsaturated zone
are the zone of aeration and the vadose zone.

The unsaturated zone is almost always underlain by a second zone in


which all voids are full of water. This zone is defined as the saturated zone.
Water in the saturated zone is referred to as ground water and is the only
subsurface water available to supply wells and springs.

The term water table is often misused as a synonym for ground water.
However, the water table is actually the boundary between the unsaturated
and saturated zones. It represents the upper surface of the ground water.
Technically speaking, it is the level at which the hydraulic pressure is equal
to atmospheric pressure. The water level found in unused wells is often at
the same level as the water table, as shown in Figure 2.2.

8.4 AQUIFERS AND CONFINING BEDS

All geologic material beneath the earth’s surface is either a potential aquifer
or a confining bed. An aquifer is a saturated geologic formation that will
yield a usable quantity of water to a well or spring. A confining bed is a
geologic unit which is relatively impermeable and does not yield usable
quantities of water. Confining beds, also referred to as aquitards, restrict the
movement of ground water into and out of adjacent aquifers.

Ground water occurs in aquifers under two conditions: confined and


unconfined. A confined aquifer is overlain by a confining bed, such as an
impermeable layer of clay or rock. An unconfined aquifer has no confining
bed above it and is usually open to infiltration from the surface.

Unconfined aquifers are often shallow and frequently overlie one or more
confined aquifers. They are recharged through permeable soils and
subsurface materials above the aquifer. Because they are usually the
uppermost aquifer, unconfined aquifers are also called water table aquifers.

Confined aquifers usually occur at considerable depth and may overlie other
confined aquifers. They are often recharged through cracks or openings in
impermeable layers above or below them. Confined aquifers in complex
geological formations may be exposed at the land surface and can be directly
recharged from infiltrating precipitation. Confined aquifers can also receive
recharge from an adjacent highland area such as a mountain range. Water
infiltrating fractured rock in the mountains may flow downward and then
move laterally into confined aquifers.

Windows are important for transmitting water between aquifers, particularly


in glaciated areas such as the Puget Sound region. A window is an area
where the confining bed is missing.

The water level in a confined aquifer does not rise and fall freely because it
is bounded by the confining bed—like a lid. Being bounded causes the water
to become pressurized. In some cases, the pressure in a confined aquifer is
sufficient for a well to spout water several feet above the ground. Such wells
are called flowing artesian wells. Confined aquifers are also sometimes
called artesian aquifers.

When a well is drilled into an unconfined aquifer, its water level is generally
at the same level as the upper surface of the aquifer. This is, in most cases,
the water table. By contrast, when a well is drilled into a confined aquifer, its
water level will be at some height above the top of the aquifer and perhaps
above the surface of the land, depending on how much the water is
pressurized. If a number of wells are drilled into a confined aquifer, the
water level will rise in each well to a certain level. These well levels form an
imaginary surface called the potentiometric surface. The potentiometric
surface is to a confined aquifer what the water table is to an unconfined
aquifer. It describes at what level the upper surface of a confined aquifer
would occur if the confining bed were removed.

The most productive aquifers, whether confined or unconfined, are


generally in sand and gravel deposits. These tend to have large void spaces
for holding water. Rocks with large openings such as solution cavities or
fractures can also be highly productive aquifers. Generally, the smaller the
grain size or the less fracturing, the less water an aquifer will produce. This
is because there are fewer void spaces for holding water.

8.5 GROUND WATER RECHARGE AND DISCHARGE

Recharge is the process by which ground water is replenished. A recharge


area is where water from precipitation is transmitted downward to an
aquifer.

Most areas, unless composed of solid rock or covered by development,
allow a certain percentage of total precipitation to reach the water table.
However, in some areas more precipitation will infiltrate than in others.
Areas which transmit the most precipitation are often referred to as “high”
or “critical” recharge areas.

As described earlier, how much water infiltrates depends on vegetation


cover, slope, soil composition, depth to the water table, the presence or
absence of confining beds and other factors. Recharge is promoted by
natural vegetation cover, flat topography, permeable soils, a deep water table
and the absence of confining beds.

Discharge areas are the opposite of recharge areas. They are the
locations at which ground water leaves the aquifer and flows to the surface.
Ground water discharge occurs where the water table or potentiometric
surface intersects the land surface. Where this happens, springs or seeps are
found. Springs and seeps may flow into fresh water bodies, such as lakes or
streams, or they may flow into saltwater bodies.

Under the force of gravity, ground water generally flows from high
areas to low areas. Consequently, high areas, such as hills or plateaus, are
typically where aquifers are recharged, and low areas, such as river valleys,
are where they discharge. However, in many instances aquifers occur
beneath river valleys, so river valleys can also be important recharge areas.
Typical recharge and discharge areas are depicted in Figure 2.4.

8.6 GROUND WATER MOVEMENT

Gravity is the force that moves ground water, which generally means it
moves downward. However, ground water can also move upwards if the
pressure in a deeper aquifer is higher than that of the aquifer above it. This
often occurs where confined aquifers are pressurized beneath unconfined
aquifers.
A ground water divide, like a surface water divide, indicates distinct ground
water flow regions within an aquifer. A divide is defined by a line on
either side of which ground water moves in opposite directions. Ground
water divides often occur in highland areas, and in some geologic
environments coincide with surface water divides. This is common where
aquifers are shallow and strongly influenced by surface water flow. Where
there are deep aquifers, surface and ground water flows may have little or no
relationship.

As ground water flows downward in an aquifer, its upper surface
slopes in the direction of flow. This slope is known as the hydraulic gradient,
and it is determined by measuring the water elevation in wells tapping the
aquifer. For confined aquifers, the hydraulic gradient is the slope of the
potentiometric surface. For unconfined aquifers, it is the slope of the water
table.

The velocity at which ground water moves is a function of three main
variables: hydraulic conductivity (commonly called permeability), porosity,
and the hydraulic gradient. The hydraulic conductivity is a measure of the
water-transmitting capability of an aquifer. High hydraulic conductivity
values indicate an aquifer can readily transmit water; low values indicate
poor transmitting ability. Because geologic materials vary in their ability to
transmit water, hydraulic conductivity values range through 12 orders of
magnitude. Some clays, for example, have hydraulic conductivities of
0.00000001 centimeters per second (cm/sec), whereas gravel hydraulic
conductivities can range up to 10,000 cm/sec. Hydraulic conductivity values
should not be confused with velocity even though they appear to have
similar units. Cm/sec, for example, is not a velocity but is actually a
contraction of cubic centimeters per square centimeter per second
(cm3/cm2-sec).

In general, coarse-grained sands and gravels readily transmit water and have
high hydraulic conductivities (in the range of 50-1000 m/day). Fine-grained
silts and clays transmit water poorly and have low hydraulic conductivities
(in the range of 0.001-0.1 m/day).

The porosity of an aquifer also has a bearing on its ability to transmit water.
Porosity is a measure of the amount of open space in an aquifer. Both clays
and gravels typically have high porosities, while silts, sands and mixtures of
different grain sizes tend to have low porosities.
The velocity at which water travels through an aquifer is proportional to the
hydraulic conductivity and hydraulic gradient, and inversely proportional to
the porosity. Of these three factors, hydraulic conductivity generally has the
most effect on velocity. Thus, aquifers with high hydraulic conductivities,
such as sand and gravel deposits, will generally transmit water faster than
aquifers with lower hydraulic conductivities, such as silt or clay beds.
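The dependence described above is commonly written as the average linear velocity v = Ki/n. A minimal sketch, with illustrative (not measured) values:

```python
def seepage_velocity(k_conductivity, gradient, porosity):
    """Average linear ground water velocity v = K*i/n: proportional to
    hydraulic conductivity K and hydraulic gradient i, and inversely
    proportional to the (effective) porosity n."""
    return k_conductivity * gradient / porosity

# Illustrative sand aquifer: K = 10 m/day, i = 0.01, n = 0.25.
v = seepage_velocity(10.0, 0.01, 0.25)  # m/day
```

With these assumed values the water advances about 0.4 m per day, comfortably within the typical range quoted below.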

Ground water velocities are typically very slow, ranging from around a
centimeter per day to almost a meter per day. However, some very rapid
flow can occur in rock with solution cavities or in fractured rock. Very high
flow rates (more than 15 m/day) are associated, for example, with some
parts of the Columbia River basalt in eastern Washington.

The volume of ground water flow is controlled by the hydraulic conductivity
and gradient, and in addition by the volume of the aquifer. A
large aquifer will have a greater volume of ground water flow than a smaller
aquifer with similar hydraulic properties. But if the cross-sectional area, that
is, the height and width, is the same for both aquifers, the aquifer with the
greater hydraulic conductivity and hydraulic gradient will produce the
greater volume of water.

8.7 WATER SUPPLY WELLS

How aquifers respond when water is withdrawn from a well is an important


topic in ground water hydrology. It explains how a well gets its water, how it
can deplete adjacent wells, or how it can induce contamination.

When water is withdrawn from a well, its water level drops. When the water
level falls below the water level of the surrounding aquifer, ground water
flows into the well. The rate of inflow increases until it equals the rate of
withdrawal.

The movement of water from an aquifer into a well alters the surface of the
aquifer around the well. It forms what is called a cone of depression. A
cone of depression is a funnel-shaped drop in the aquifer’s surface. The well
itself penetrates the bottom of the cone. Within a cone of depression, all
ground water flows to the well. The outer limits of the cone define the well’s
area of influence.
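For a steady pumping well in a confined aquifer, the depth of this cone at a given distance is often estimated with the Thiem equation, s = (Q/2πT)·ln(R/r); the equation and all symbols (pumping rate Q, transmissivity T, radius of influence R, radial distance r) are an addition here, not from the report:

```python
import math

def thiem_drawdown(q, transmissivity, r_influence, r):
    """Steady-state Thiem drawdown s = (Q/(2*pi*T)) * ln(R/r) at
    radial distance r from a pumping well in a confined aquifer."""
    return q / (2.0 * math.pi * transmissivity) * math.log(r_influence / r)

# Illustrative values: Q = 100 m3/day, T = 50 m2/day, R = 300 m, r = 30 m.
s = thiem_drawdown(100.0, 50.0, 300.0, 30.0)  # drawdown in metres
```

The logarithm captures the funnel shape: drawdown grows slowly far from the well and steeply near the casing.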

The rate of groundwater flow is controlled by two properties of the rock:
porosity and permeability.
Porosity (Ф) is the capacity of rock or sediment to store fluids within
pores. Rock and sediment contain spaces between grains (pore spaces), in
fractures, or in dissolution cavities (as in limestone) that may become filled
with water. By definition, porosity is the void volume of the rock divided by
the total rock volume and is given as

Ф = Vp / Vm …(8.1)

where Vp is the non-solid volume (pores and liquid) and Vm is the total
volume of the material, including the solid and non-solid parts.

Porosity is a fraction between 0 and 1, typically ranging from less than 0.01
for solid granite to more than 0.5 for peat and clay, although it may also be
expressed as a percentage by multiplying the fraction by 100.
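Equation (8.1) is trivial to compute; the density-based variant included below is a common laboratory alternative that is an addition here, not from the report:

```python
def porosity_from_volumes(v_pore, v_total):
    """Eq. (8.1): phi = Vp / Vm."""
    return v_pore / v_total

def porosity_from_densities(bulk_density, grain_density):
    """Laboratory alternative (assumed here): for a dry sample,
    phi = 1 - rho_bulk / rho_grain."""
    return 1.0 - bulk_density / grain_density
```

A dry bulk density of 1.855 g/cm3 against a quartz grain density of 2.65 g/cm3, for example, gives the same 0.30 porosity as 30 cm3 of pores in a 100 cm3 sample.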

The porosity of a rock, or sedimentary layer, is an important consideration


when attempting to evaluate the potential volume of fluid it may contain.
Sedimentary porosities are a complex function of many factors, including
but not limited to: rate of burial, depth of burial and the nature of the connate
fluids. The porosity of a rock is also affected by sorting, packing,
cementation and angularity or roundness of the grains.

There are mainly three types of porosity which are described below –

Primary porosity – the porosity present in a sediment at the time of
deposition or formed during sedimentation.

Secondary porosity – the porosity that develops after the rock-forming
process is complete. This can be the result of chemical leaching of minerals
or the generation of a fracture system. It can replace the primary porosity or
coexist with it.

Effective porosity – the interconnected pore volume occupied by free
fluids. This is very important in solute transport.

Although the porosity of well-sorted, unconsolidated sand may be quite
high, the porosity of most sandstone is considerably less. During the
process of conversion of sand into sandstone (lithification), compaction by
the weight of overlying material reduces not only the volume of pore space,
as the sand grains become rearranged and more tightly packed, but also the
interconnection between pores (permeability). The deposition of cementing
materials such as calcite or silica between the sand grains further decreases
porosity and permeability. Sandstones retain primary porosity unless
cementation has filled all the pores. Secondary porosity in these
consolidated rocks develops along joints, fractures, and bedding planes. In
igneous and metamorphic rocks porosity is usually low because the minerals
tend to be intergrown, leaving little free space. Highly fractured igneous and
metamorphic rocks, however, can have high porosity.

8.8 PERMEABILITY
In geology, permeability is a measure of the ability of a material to transmit
fluids through it. It is of great importance in determining the flow
characteristics of hydrocarbons in oil and gas reservoirs, and of groundwater
in aquifers. The usual unit for permeability is the darcy, or more commonly
the millidarcy or mD (1 darcy ≈ 10^-12 m2).
Permeability is part of the proportionality constant in Darcy’s law,
which relates discharge (flow rate) and fluid physical properties (e.g.
viscosity) to the pressure gradient applied to the porous medium. The
proportionality constant specifically for the flow of water through a porous
medium is the hydraulic conductivity; permeability is a part of this, and is a
property of the porous medium only, not of the fluid. In naturally occurring
materials, it ranges over many orders of magnitude.
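The link between permeability (a rock property) and hydraulic conductivity (rock plus fluid) is K = kρg/µ. A minimal sketch, with assumed water properties:

```python
DARCY_M2 = 9.87e-13  # 1 darcy expressed in m^2

def hydraulic_conductivity(k_m2, rho=1000.0, g=9.81, mu=1.0e-3):
    """K = k*rho*g/mu converts intrinsic permeability k (m^2, a rock
    property) to hydraulic conductivity K (m/s, rock plus fluid).
    Defaults are assumed values for water near 20 C."""
    return k_m2 * rho * g / mu

# A 1-darcy sand conducts water at roughly 1e-5 m/s.
k_water = hydraulic_conductivity(DARCY_M2)
```

Running the same k with a more viscous fluid (larger µ) lowers K, which is exactly the sense in which conductivity depends on the fluid while permeability does not.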
High permeability often goes hand in hand with high porosity and
large grain size. Connections between pore spaces are wider in coarse-
grained sediment (sand, gravel ) and rock (sandstone, conglomerate ) and
are narrower in fine-grained materials (silt, clay, shale, and mudstone).
However, not all pore spaces are connected, and some may contain
clay minerals that expand in the presence of water to block passageways
and reduce permeability. Water films adhering to grains in fine-grained
materials may fill the narrow connections between pore spaces, blocking
the passage of fluid.
There are different types of permeability, which are mentioned below:
Absolute permeability – the permeability when only one type of fluid is
present in the pores.
Effective permeability – the permeability when more than one fluid is
present in the pore space; it is less than the absolute permeability.
Relative permeability – the ratio of the effective permeability of a specific
fluid to the absolute permeability.

Permeability in sandstone and in limestone is sufficient to make them
reservoirs, but in the case of shale the porosity is high while the permeability
is almost zero, which makes shale a poor reservoir rock. So high porosity
does not mean that the permeability is also high. Reservoir permeability is
improved by hydro-fracturing in areas where it is marginal to moderate.
A good example of a rock with high porosity and low
permeability is a vesicular volcanic rock, where the bubbles that once
contained gas give the rock a high porosity, but since these holes are not
connected to one another the rock has low permeability.

A thin layer of water will always be attracted to mineral grains due to the
unsatisfied ionic charge on the surface. This is called the force of molecular
attraction. If the size of interconnections is not as large as the zone of
molecular attraction, the water can’t move. Thus, coarse-grained rocks are
usually more permeable than fine-grained rocks, and sands are more
permeable than clays.
Water Quality and Groundwater Contamination
Water quality refers to such things as the temperature of the water, the
amount of dissolved solids, and the absence of toxic and biological pollutants.
Water that contains a high amount of material dissolved through the action
of chemical weathering can have a bitter taste, and is commonly referred to
as hard water. Hot water can occur if water comes from a deep source or
encounters a cooling magma body on its traverse through the ground. Such
water can be tapped for geothermal energy, but is not usually desirable for
human consumption or agricultural purposes. Most pollution of groundwater
is the result of biological activity, much of it human. Among the sources of
contamination are:
• Sewers and septic tanks.
• Waste dumps (both industrial and residential).
• Gasoline tanks (such as those beneath service stations).
• Biological waste products – biological contaminants can be removed
from the groundwater by natural processes if the aquifer has
interconnections between pores that are smaller than the microbes.
For example, a sandy aquifer may act as a filter for biological
contaminants.
• Agricultural pollutants such as fertilizers and pesticides.
• Saltwater contamination – results from excessive withdrawal of fresh
groundwater in coastal areas.
Paleomagnetism Chapter 9

9.1 Introduction
The science of palaeomagnetism is concerned with studies of the
magnetism that is retained in rocks. In the late 19th century geologists
discovered that rocks can carry a stable record of the geomagnetic field
direction at the time of their formation. From the magnetization direction it
is possible to calculate the position of the magnetic pole at that time.
Measurement of its direction can be used to determine the latitude at which
the rock was created. If this latitude differs from the present latitude at
which the rock is found, strong evidence of its movement can be inferred. In
this way palaeomagnetic study provides quantitative estimates of the
relative continental movements.
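Under the geocentric axial dipole assumption, the palaeolatitude λ follows from the measured mean inclination I through tan I = 2 tan λ; this standard relation is an addition here, not spelled out in the report:

```python
import math

def paleolatitude_deg(inclination_deg):
    """Geocentric axial dipole relation tan(I) = 2*tan(lat): returns
    the magnetic palaeolatitude (degrees) for a mean inclination I."""
    tan_lat = math.tan(math.radians(inclination_deg)) / 2.0
    return math.degrees(math.atan(tan_lat))

# A mean inclination of 45 degrees implies formation near latitude 26.6 deg.
lat = paleolatitude_deg(45.0)
```

Comparing this palaeolatitude with the site's present latitude is the quantitative step behind the continental-movement argument above.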
9.2 Remanent Magnetism in rocks
The palaeomagnetic technique is based on the phenomenon that certain
minerals are capable of retaining a record of past directions of the Earth’s
magnetic field. In practice remanence often contributes to the total
magnetization of rocks, both in magnitude and direction. The effect is very
complicated because of its dependence upon the magnetic history of the rocks.
The untreated remanence of a rock is called its natural remanent
magnetization (NRM). It may be made up of several components acquired in
different ways and at different times.
A remanence acquired at or close to the time of
formation of the rock (e.g. TRM) is called a primary magnetization; a
remanence acquired at a later time is called a secondary magnetization.
Various types of remanent magnetisms are :
1) Thermoremanent magnetization
At high temperature a ferromagnetic material exhibits paramagnetic
behavior. As a rock cools below the Curie temperature, some minerals
(particularly magnetite) change from paramagnetic to the much stronger
ferromagnetic behavior. The rock acquires a large thermoremanent
magnetization as magnetic domains orient themselves to the earth’s
ambient field.
2) Detrital remanent magnetization

When sediments settle in water, ferromagnetic mineral grains tend to
orient themselves along the ambient magnetic field of the earth. The rock
thus acquires a detrital remanent magnetization.
3) Chemical remanent magnetization
Chemical remanent magnetization (CRM) is usually a secondary form
of remanence in a rock. It is acquired during the growth or recrystallization
of mineral grains. An example is the precipitation of hematite from a
goethite precursor, or from iron-saturated fluids that pass through the rock.

4) Isothermal remanent magnetization

It results from exposure to a strong magnetic field for a short
period of time at relatively constant temperature, such as a lightning strike.
5) Viscous remanent magnetization
On exposure to a magnetic field for a long time, thermal fluctuations
gradually favor the direction of the applied field. VRM is probably more
characteristic of fine-grained rocks than coarse-grained ones.
Palaeomagnetism is not a direct concern of the applied geophysicist.
However, the information obtained through this discipline concerning
residual magnetism in rocks has been of considerable value in the
interpretation of magnetic anomalies.
One problem in interpreting palaeomagnetic data is in deciding how
much the magnetization has been altered by later changes.
9.3 Method of Palaeomagnetism
The requirement that the mean palaeomagnetic pole position derived
for a collection of rocks should represent the axial geocentric dipole is taken
into account in the methodology of palaeomagnetic analysis. This begins with
the sampling of the rock formation on a hierarchical scheme designed to
eliminate or minimize non-systematic errors and to average out the effects of
secular variation of the palaeomagnetic field.
Ideally, a palaeomagnetic collection should contain a large
number of samples per site. In practice, about 6-10 samples are enough to
define the mean direction for the site.
A further assumption of palaeomagnetism is that the natural remanent
magnetization of a rock is acquired at the time of formation of the rock and
has since remained unaltered. Actually the natural remanent magnetization
(NRM) is usually made up of several components acquired at different times,
including during the procedure of sampling and preparation. Laboratory
techniques must be applied that eliminate the undesirable components and
isolate the primary magnetization. This process is called ‘magnetic cleaning’.
9.4 Measurement of remanent magnetization
Measurements of the natural remanent magnetism of rocks with an
astatic magnetometer were laborious and time consuming, and the
instrument has now fallen into disuse. In modern palaeomagnetic
laboratories the more effective spinner magnetometers and cryogenic
magnetometers are in common use.
9.5 Spinner Magnetometer
The spinner magnetometer originally consisted of a large sensor coil
containing many turns of wire, in which an alternating signal was induced by
rotating the sample at high frequency (around 100 Hz) within the coil. Rapid
rotation was needed because the voltage induced was proportional to the rate
of change of flux in the coil. After phase-lock detection and electronic
amplification of the signal, the calibrated output yielded two components of
remanence in the plane normal to the rotation axis.
The fluxgate spinner magnetometer is a subsequent refinement in
which the sensor coil is replaced with fluxgate sensors. These detect directly
the external magnetic field of the sample. The signal strength is not
dependent on the rotation speed, which could be reduced to about 5-10 Hz.

9.6 Cryogenic Magnetometer


It is the most sensitive and rapid instrument in current use. Its sensor
consists of a coil immersed in liquid helium. At a temperature of about 4
Kelvin the coil becomes superconducting. A small change in the magnetic
field induces a comparatively large current which, because of the
superconducting condition, persists until the sample is removed. By counting
the number of flux jumps electronically, the external magnetic field of the
rock specimen can be inferred, and from this its magnetization is computed.

9.7 STEPWISE PROGRESSIVE DEMAGNETIZATION


The NRM of a rock may contain several components, some related to
the geological history of the rock and others related to the sampling and
handling procedures. It is necessary to ‘magnetically clean’ the natural
magnetization so that the structure of the NRM can be analyzed and stable
components isolated. This is done in a stepwise procedure in which
progressively more and more of the original magnetization is removed.
There are two methods of doing this:
The first method is progressive alternating field (AF) demagnetization.
An alternating magnetic field can be produced in a coil by passing an
alternating electric current through it. The field fluctuates between equal and
opposite peak values. When a rock sample is placed in the alternating
magnetic field, the grain magnetic moments with coercivities less than the
peak value of the field are realigned by it. The intensity of the alternating
field is then reduced slowly and uniformly to zero, leaving these moments in
effectively random orientations. The AF demagnetizing coil must be
surrounded by magnetic shields or special additional coils to cancel out the
Earth’s magnetic field.
The part of the remanence that remains after a demagnetizing treatment
has been ‘magnetically cleaned’. The demagnetization process is repeated
using successively higher values of the peak alternating field, and the
remaining magnetization is remeasured after each step, until the
magnetization is reduced to zero.
An alternative method of ‘magnetic cleaning’ is progressive thermal
demagnetization. This method is more effective than AF demagnetization,
because it is only necessary to heat a sample above the highest Curie
temperature of its constituent minerals to destroy all of the NRM.
Palaeomagnetism has made important contributions in documenting local
and regional tectonics as well as the motions of lithospheric plates.
By recording the magnetic field strength over the ocean floor, it
was found that there was a distinct banding of weak fields alternating with
strong fields. The field strength was then related to the original magnetic
orientation of the ocean crust (weak where the crust has ‘locked in’ reversed
polarity and strong where ‘normal’ polarity is ‘locked in’). The banding was
found to be remarkably symmetrical on either side of spreading ridges,
which was strong evidence of spreading in two directions, while the
solidifying rock at the ridge registered the magnetic polarity at the time
it cooled below ~350 °C. As the history of magnetic reversals through
geological time had also been established (and radiometrically dated), the
banding patterns could be related to time, and hence spreading rates could be
established. Plate movements on average are roughly as fast as the growth of
your fingernails. New crust is formed at the spreading ridges at rates of tens
of mm to tens of cm per year, with an average rate of 7 cm per year.
By determining past polar positions for rocks of different ages,
geomagnetists found that the apparent polar positions have changed
considerably through geological time. This polar wandering was initially
seen as movement of the Earth’s dipole field relative to static continents.
However, it was later realized that it was more a case of the poles being
fixed and the continents wandering around, giving rise to a change in
terminology to apparent polar wandering. Geologists then started to compare
apparent polar wandering paths for different continents, and found that the
paths for, for example, Europe and North America were virtually identical
between 400 (especially 280) and 180 Ma. At those times the drift of these
two continents behaved exactly alike relative to the poles: they were part of
one landmass! After 180 Ma, the paths of Europe and North America started
to differ; it was later found that this was due to the opening up of the North
Atlantic, splitting the supercontinent.
These discoveries settled the fierce arguments that had been going on
ever since the moving-plates concept was proposed in 1915 by Alfred
Wegener as “continental drift”. In 1968 this concept became formally
accepted under the name “plate tectonics”.

Tsunami Chapter 10
10.1 WHAT IS TSUNAMI ?
Tsunami is a Japanese word represented by two characters: ‘tsu’
(meaning ‘harbour’) and ‘nami’ (meaning ‘wave’). When a major earthquake
occurs under the ocean, it can trigger long-period waves. These waves have
small amplitudes in the deep ocean, but the amplitude grows as they head
towards coastal areas and may become about 30 feet high at the coast. Over
the open ocean the velocity of these waves is around 800 km/hr and the
wavelength about 200 km.
10.2 Some facts about tsunami:
» A tsunami is a series of gravity waves formed in the sea as a result of a
large-scale disturbance of the sea level over a short duration of time. In the
process of the sea level returning to equilibrium through a series of
oscillations, waves are generated which propagate in all directions.
» Tsunamis travel outward in all directions from the generating area,
with the direction of the main energy propagation being orthogonal to the
direction of the earthquake fracture zone.
» A tsunami is not one wave, but a series of waves. The time interval between
the passages of successive wave crests at a given point usually varies from a
few minutes to over an hour.
» Speed of tsunami = √(g·h)
where g is the acceleration due to gravity and
h is the water depth.
» As the rate at which a wave loses its energy is inversely proportional to
its wavelength, tsunamis can not only propagate at high speeds but can also
travel great distances with very little energy loss.

Shoaling Effect : In deep waters, a tsunami will travel at high speeds with
little loss of energy. As a tsunami leaves the deep water of the open sea and
arrives at shallow waters near the coast, it undergoes a transformation. Since
the speed of a tsunami is related to water depth, its speed decreases with
decreasing depth of water and also due to friction. At the coast the speed
of the waves decreases to about 50 to 60 km/hr. When a tsunami has reached
the shore, successive waves pile up onto each other, so that the tsunami
waves are compressed near the coast. This results in a shortening of their
wavelength, and their wave energy is directed upwards. As the total energy
of the tsunami remains constant, the height of these waves grows
tremendously. This is known as the ‘Shoaling Effect’.
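The growth of wave height with decreasing depth described above is often approximated by Green's law, under which amplitude scales as the inverse fourth root of the depth; Green's law is a standard shoaling approximation not named in the text, and the depths below are illustrative:

```python
def green_amplification(h_deep, h_shallow):
    """Green's law (standard shoaling approximation):
    amplitude ratio = (h_deep / h_shallow) ** (1/4)."""
    return (h_deep / h_shallow) ** 0.25

# A wave shoaling from 4000 m of water into 10 m of water:
print(round(green_amplification(4000.0, 10.0), 2))  # 4.47: a 1 m deep-ocean wave grows to ~4.5 m
```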

Run up : The maximum height a tsunami reaches on shore is called the
‘Run Up’. Thus, run up is the vertical distance between the maximum
height reached by the killer wave on the shore and the mean sea level
surface. After run up, part of the tsunami’s energy is dissipated back to the
open ocean.
10.3 Classification of tsunamis
For the Tsunami Warning System, tsunamis can be characterized into
three types, based on the extent of the potential destruction relative to the
source area.
1) Local Tsunamis – These are associated with tsunamis generated by
submarine landslides or volcanic explosions. The destruction caused is
confined to a limited area, e.g. the tsunami of July 9, 1958 at Lituya Bay,
Alaska.
2) Regional Tsunamis – These are most common. The destruction caused
may be limited in areal extent either because the energy released was not
sufficient to generate a destructive Pacific-wide tsunami or because the
geomorphology of the source area limited the destructive potential of the
tsunami.
3) Pacific-wide Tsunamis – These are less frequent but of far greater
destructive potential, in that the waves are larger initially and can move
across the Pacific Ocean, e.g. the tsunami of May 22, 1960 spread death and
destruction across the Pacific from Chile to Hawaii, Japan and the
Philippines.
10.4 Generation of tsunamis
1. Undersea earthquakes
2. Submarine landslides
3. Volcanic eruptions
4. Asteroid and Meteorite impacts

1. Undersea Earthquakes : According to the theory of plate tectonics, the
Earth’s outer shell, known as the ‘Lithosphere’, is broken into pieces
called ‘Tectonic Plates’.
Tsunamis can be generated by massive undersea earthquakes when
the sea floor shifts abruptly and vertically displaces the overlying
water from its equilibrium position. It is this vertical movement of the water
column that generates the tsunami. By the law of conservation of energy,
energy can neither be created nor destroyed but only converted from one
form to another; the potential energy that results from pushing water above
mean sea level is converted into the kinetic energy of horizontal propagation
of the wave.

Not all earthquakes generate tsunamis. To generate a tsunami, an earthquake
must occur underneath or near the ocean, must be large (usually above
magnitude 7.5) and must create movement of the sea floor.
2. Submarine landslides
Landslides moving into oceans, bays or lakes, and falls of rock
and ice, can also generate tsunamis. Also, massive earthquakes may be
accompanied by underwater landslides that may also contribute to tsunami
generation. Usually, the energy of tsunamis generated by landslides or rock
falls is rapidly dissipated as they travel away from the source, e.g.
• The largest tsunami wave ever observed was caused by a rockfall in Lituya
Bay, Alaska on July 9, 1958. An approximately 40 million cubic metre
rockfall at the head of the bay, triggered by an earthquake along
the Fairweather fault, generated a wave which reached the
incredible height of 520 m (1720 feet) on the opposite side of the
inlet.
• In the 1980s, earth moving and construction work for an airport runway
along the coast of southern France triggered an underwater landslide, which
generated tsunami waves in the Thebes harbour.
• It is believed that the July 17, 1998 tsunami that killed thousands of
people and destroyed coastal villages along the northern coast of
Papua New Guinea was generated by a large underwater slump of
sediments, triggered by an earthquake.
3. Volcanic Eruptions
Volcanoes that occur along coastal zones, as in Japan and in island
arcs throughout the world, can cause several effects that might generate
tsunamis. Violent submarine volcanic eruptions which cause sudden
displacement of a large volume of water can give rise to destructive
tsunami waves. Also, when the roof of a volcano with an empty magma
chamber collapses, a crater is formed. The sea gushes into this
crater, and the water column of the sea is so disturbed that it can give rise to
tsunami waves.
4. Asteroid and Meteorite Impacts
Meteorites and asteroids falling into the oceans have the potential
of generating dreadful tsunamis. Fortunately, no asteroid has fallen on the
Earth within recorded history. Most meteorites burn up as they pass through
the Earth’s atmosphere. However, large meteorites have hit the Earth’s
surface in the distant past. This is indicated by large craters formed in
different parts of the Earth.
10.5 Destruction caused by Tsunamis
The main damage from a tsunami comes from the destructive nature of
the waves themselves. They cause inundation and may even lead to erosion of
foundations and collapse of bridges and seawalls. Secondary effects include
debris (including boats and cars) acting as projectiles that may crash into
buildings, break power lines and start fires. Tertiary effects include loss
of crops and water, which may lead to famine and disease.
The areas at greatest risk are coastal regions less than 8 m above sea
level. The destruction caused by a tsunami along any coast, whether near the
source area or thousands of kilometres away from it, depends on the
following factors:
• Size of the earthquake
• Configuration of the coastline
• Shape of ocean floor
• Character of advancing waves

10.6 Detection of Tsunami


In case of a major undersea earthquake, a tsunami could reach the
beach in a few minutes. For people living near the coast, the shaking
of the ground is sufficient warning of an impending tsunami. A
noticeable rapid rise and fall in coastal waters is also a sign of an
approaching tsunami.
The first visible indication of an approaching tsunami is usually a
recession of water caused by the trough preceding an advancing wave.
About 5 to 30 minutes later, the retreat of water is followed by huge
waves capable of extending hundreds of metres inland. Sometimes a rise
in water level may be the first event.

Why aren’t tsunamis seen at sea or from the air?


Tsunamis cannot be detected from the air or from ships at sea. This is
because in the deep ocean the tsunami wave amplitude is usually less than 1 m
(3.3 feet). The crests of tsunami waves may be a hundred
kilometres or more apart. Hence, passengers on boats or
ships at sea, far from shore, will neither feel nor see the tsunami waves as
they pass underneath at high speed. Similarly, tsunami waves cannot be
distinguished from ordinary ocean waves from the sky.
Due to advances in technology, it is now possible to predict the
occurrence of tsunamis. Also, with computer simulation it can be known in
advance how high the tsunami waves will be along the coast for
different kinds of earthquakes. These predictions can guide people in
evacuating the areas about to be hit by the tsunami.

Refraction Method Chapter 11


11.1 Introduction
The seismic refraction method is one of the most powerful methods of
delineating subsurface structure. The seismic refraction method utilizes the
seismic energy that returns to the surface after travelling through the ground
along refracted ray paths. The method is normally used to locate refractors
separating layers of different seismic velocity, but it is also
applicable in cases where velocity varies smoothly as a function of depth or
laterally.
Refraction seismology is applied to a very wide range of scientific and
technical problems, from site investigation surveys to large-scale
experiments designed to study the structure of the entire crust or lithosphere.
11.2 Principle
The refraction method is based on the times of arrival of the initial ground
movement generated by a source and recorded at a variety of distances. Later-
arriving complications in the recorded ground motion are discarded. Thus,
the data set derived from the refraction method consists of a series of times
versus distances. These are then interpreted in terms of the depths to
subsurface interfaces and the speeds at which motion travels through the
subsurface within each layer. These speeds are controlled by a set of
physical constants, called elastic parameters, that describe the material.
The seismic refraction method is based on Snell’s law. In the refraction
method we make use of the waves which are refracted at the critical angle.
These critically refracted waves travel immediately below the interface with
the higher velocity of the lower medium and are called ‘Head Waves’ or
‘Mintrop Waves’. The head wave acts as a moving source of secondary
waves in the upper layer, which interfere constructively and return to the
surface at the critical angle.
Huygens’ Principle :
Every point on a wavefront can be considered a secondary source of
spherical waves, and the position of the wavefront after a given time is the
envelope of these secondary waves.
Snell’s Law:
Consider a single horizontal interface between two homogeneous layers
having seismic velocities V1 and V2 respectively. Consider some energy
incident on this interface at an angle ‘i’; it is refracted into the lower layer
such that the angle of refraction is ‘r’. Then, according to Snell’s Law, the
ratio of the sine of the angle of incidence to the sine of the angle of
refraction is equal to the ratio of the velocities in the two media, i.e.

sin(i)/sin(r) = V1/V2 …(11.1)

From equation (11.1) two cases can arise:
Case (i): if V1 > V2, then from (11.1) we have
sin(i) > sin(r)

Angle of incidence > Angle of refraction


That is, the refracted ray bends towards the normal when it moves from a
region of higher velocity to a region of lower velocity.
Case (ii): if V2 > V1, then from (11.1) we have
sin(i) < sin(r)
Angle of incidence < Angle of refraction
That is, the refracted ray bends away from the normal when it moves
from a region of lower velocity to a region of higher velocity.

Critical Refraction
Critical refraction requires an increase in velocity with depth; if the
velocity decreases, there is no critical refraction. When rays are incident on
an interface (V2 > V1) at an angle ‘ic’ such that the refracted rays travel
along the interface, the phenomenon is called ‘Critical Refraction’. The
angle ‘ic’ is called the ‘Critical Angle’. Thus the ‘Critical Angle’ is that
angle of incidence at which the angle of refraction is 90°. For critical
refraction we have

i = ic and r = 90°

Hence, equation (11.1) becomes

sin(ic)/sin(90°) = V1/V2

sin(ic) = V1/V2 …(11.2)

This phenomenon is the basis of the refraction surveying method.
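Equation (11.2) can be evaluated directly; a minimal sketch (the layer velocities are illustrative, not from the text):

```python
import math

def critical_angle_deg(v1, v2):
    """Critical angle from eq. (11.2): ic = asin(V1 / V2); requires V2 > V1."""
    if v2 <= v1:
        raise ValueError("no critical refraction: V2 must exceed V1")
    return math.degrees(math.asin(v1 / v2))

# V1 = 1500 m/s over V2 = 3000 m/s:
print(round(critical_angle_deg(1500.0, 3000.0), 1))  # 30.0 degrees
```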

The energy from the source travels directly through the upper layer and
is also critically refracted in the lower layer (as shown above). Let ‘x’ be
the distance between the source ‘S’ and the receiver ‘G’. The direct ray
travels horizontally through the top of the upper layer from the source to the
receiver with velocity ‘V1’. Hence, the travel time of the direct wave is

t_direct = x/V1 …(11.3)

Equation (11.3) represents a straight line having slope m1 = 1/V1 that
passes through the origin.
The critically refracted ray travels just below the interface in the
lower layer with velocity ‘V2’ and returns to the surface with velocity ‘V1’ at
the geophone. Thus the total travel time of the refracted ray is

t_refracted = SA/V1 + AB/V2 + BG/V1 …(11.4)
Applying the Snell’s Law, we finally get the travel time of the refracted ray
as,

t_refracted = x/V2 + 2h·cos(ic)/V1 …(11.5)

or

t_refracted = x/V2 + 2h(V2² − V1²)^(1/2)/(V1V2) …(11.6)

Equations (11.5) and (11.6) are called the ‘Travel Time Equations’ for a
single horizontal interface. Each represents a straight line having slope
m2 = 1/V2.
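Equations (11.3) and (11.5) can be sketched together to compare the two arrivals (the model values V1 = 1500 m/s, V2 = 3000 m/s, h = 20 m are illustrative):

```python
import math

def direct_time(x, v1):
    """Direct wave, eq. (11.3): t = x / V1."""
    return x / v1

def refracted_time(x, v1, v2, h):
    """Head wave, eq. (11.5): t = x / V2 + 2 * h * cos(ic) / V1."""
    ic = math.asin(v1 / v2)
    return x / v2 + 2.0 * h * math.cos(ic) / v1

# Beyond some offset the refracted wave overtakes the direct wave:
for x in (30.0, 60.0, 120.0):
    print(x, round(direct_time(x, 1500.0), 4),
          round(refracted_time(x, 1500.0, 3000.0, 20.0), 4))
```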

11.3 Travel Time Curves

Now the travel time versus distance curve is obtained by plotting the
different travel times corresponding to different distances between ‘S’ and
‘G’. The time-distance curves for refraction are as shown below.

Intercept Time
By backward extrapolation, the refraction T–x curve is found to
intersect the time axis at the ‘Intercept Time’ (ti). It has no direct physical
significance, since it represents the travel time for zero offset distance (i.e.,
for x = 0), where no refracted arrival is actually recorded. Putting x = 0 in
the travel time equation, we get the expression for the intercept time as

ti = 2h(V2² − V1²)^(1/2)/(V1V2) or ti = 2h·cos(ic)/V1 …(11.7)

As V1, V2 and ti are known, we can calculate the thickness (h) of the
upper layer from equation (11.7).
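Inverting equation (11.7) for the layer thickness can be sketched as follows (the velocities and intercept time are illustrative values):

```python
import math

def thickness_from_intercept(ti, v1, v2):
    """Invert eq. (11.7): h = ti * V1 * V2 / (2 * sqrt(V2^2 - V1^2))."""
    return ti * v1 * v2 / (2.0 * math.sqrt(v2**2 - v1**2))

# V1 = 1500 m/s, V2 = 3000 m/s, picked intercept ti = 0.0231 s:
print(round(thickness_from_intercept(0.0231, 1500.0, 3000.0), 1))  # ~20.0 m
```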

Critical Distance:

It is defined as the minimum distance from the energy source at which the
first critically refracted energy is received. It is denoted by ‘x_crit’
and is given by

x_crit = 2h·tan(ic) …(11.8)

Cross-over Distance :

The distance from the source at which the direct and the refracted
waves arrive at the same time is called the ‘Cross-over Distance’. It is
denoted by ‘x_co’ and is given by

x_co = 2h[(V2 + V1)/(V2 − V1)]^(1/2) …(11.9)
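Equation (11.9) also gives a second, independent route to the layer thickness; a sketch with the same illustrative velocities used earlier:

```python
import math

def crossover_distance(v1, v2, h):
    """Eq. (11.9): x_co = 2 * h * sqrt((V2 + V1) / (V2 - V1))."""
    return 2.0 * h * math.sqrt((v2 + v1) / (v2 - v1))

def thickness_from_crossover(x_co, v1, v2):
    """Inverted form: h = (x_co / 2) * sqrt((V2 - V1) / (V2 + V1))."""
    return 0.5 * x_co * math.sqrt((v2 - v1) / (v2 + v1))

# V1 = 1500 m/s, V2 = 3000 m/s, h = 20 m:
x_co = crossover_distance(1500.0, 3000.0, 20.0)
print(round(x_co, 1))                                            # 69.3 m
print(round(thickness_from_crossover(x_co, 1500.0, 3000.0), 1))  # recovers 20.0 m
```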

Seismic Field Record :


Dynamite shot recorded using a 120- channel recording spread
Fig. (11.6)
11.4 Interpretation of Refraction Travel time Data
After completion of a refraction survey, first arrival times are picked
from the seismograms and plotted as travel-time curves. The interpretation
objective is to infer interface depths and layer velocities. Data interpretation
requires making assumptions about the layering in the subsurface.

Assumptions
• Subsurface is composed of a stack of layers, usually separated by plane
interfaces
• Seismic velocity is uniform in each layer
• Layer velocities increase with depth
• All ray paths are located in a vertical plane, i.e. there are no 3-D
effects with layers dipping out of the plane of the profile

11.5 Interpretation of Two layer Case


By plotting the travel times of the direct arrival and the critical refraction
(travel-time curves as shown in fig. (11.5)), we can find the velocities of the
two layers and the depth to the interface:
1. Velocity of layer 1 is given by the slope of the direct arrival
2. Velocity of layer 2 is given by the slope of the critical refraction
3. Estimate ti from the plot and solve for h using equation (11.7)

11.6 Interpretation of Three Layer Case


The geometry of the ray path in the three-layer case is shown below.

The travel time equation in the three-layer case is

t = x/V3 + Σ(k=1 to 2) 2hk(V3² − Vk²)^(1/2)/(V3Vk) …(11.10)

The thickness of the second layer is given by

h2 = [T2/2 − (h1/V1)cos θ1](V2/cos θ2) …(11.11)

where T2 is the intercept time for arrivals from the second refractor.


In the three-layer case, the arrivals are:
1. Direct arrival in the first layer
2. Critical refraction at the top of the second layer
3. Critical refraction at the top of the third layer
Because the intercept time of the travel-time curve from the third layer is a
function of the two overlying layer thicknesses, we must solve for these first.

Figure (11.8)
Use a layer-stripping approach:
1. Solve the two-layer case using the direct arrival and the critical refraction
from the second layer to get the thickness of the first layer.
2. Solve for the thickness of the second layer using all three velocities and
the thickness of the first layer, using equation (11.11).
Layers may not be detected by first-arrival analysis when:
(A) A velocity inversion produces no critical refraction from the
second layer
(B) An insufficient velocity contrast makes the refraction difficult to identify
(C) The refraction from a thin layer does not become a first arrival
(D) The geophone spacing is too large to identify the second refraction
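The layer-stripping steps above can be sketched numerically; the velocities and intercept times below are illustrative picks, not field data:

```python
import math

def layer1_thickness(t1, v1, v2):
    """Two-layer step (from eq. 11.7): h1 = t1 * V1 * V2 / (2 * sqrt(V2^2 - V1^2))."""
    return t1 * v1 * v2 / (2.0 * math.sqrt(v2**2 - v1**2))

def layer2_thickness(t2, h1, v1, v2, v3):
    """Eq. (11.11) with theta_k = asin(Vk / V3): strip layer 1, then solve for h2."""
    th1 = math.asin(v1 / v3)
    th2 = math.asin(v2 / v3)
    return (t2 / 2.0 - (h1 / v1) * math.cos(th1)) * v2 / math.cos(th2)

v1, v2, v3 = 1500.0, 3000.0, 5000.0
h1 = layer1_thickness(0.0231, v1, v2)          # intercept of second-layer refraction
h2 = layer2_thickness(0.0400, h1, v1, v2, v3)  # intercept of third-layer refraction
print(round(h1, 1), round(h2, 1))
```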

11.7 APPLICATIONS OF SEISMIC REFRACTION SURVEY


a) To locate salt domes and other shallow structures associated with oil
reserves.
b) To determine the depth of the weathered layer and obtain the velocity
information required for the interpretation of reflection data.
c) To delineate the depth of the water table.
d) The most important applications of refraction surveying are for
geotechnical purposes.
e) Large-scale refraction surveys have been very important in exploring
the deep structure of the Earth’s crust and upper mantle.

Tomography Chapter 12
12.1 Introduction
‘Tomography’ means ‘representation in a cross-section’. Any procedure
that allows constructing a 3-dimensional image of the object being
modeled is called ‘Tomography’.
The use of computer-aided tomography (CAT) in medical diagnosis
is a process of examining the internal organs for abnormal regions within
the human body. X-rays or ultrasonic waves are used for this purpose,
as they are absorbed unequally by different materials. Thus, a CAT scan
consists of studying the attenuation of X-rays or ultrasonic waves that
pass through the body in planar sections. ‘Seismic Tomography’ is based
on the same principle. The difference is that the travel times of the
signals are observed rather than their attenuation. Also, the
illumination is produced by earthquake rays (i.e., seismic waves) instead
of X-rays or ultrasonic waves.
Hence, seismic tomography is described as the 3- dimensional
modeling of velocity distribution of seismic waves in the Earth. This
technique requires powerful computational facilities. This method
constitutes a powerful approach in studying the internal structure of Earth
and various tectonic processes.
The first step in seismic tomography consists of back-projecting the ray
from the recording station to its source so as to construct the path along
which an observed anomaly is to be distributed. The quantities investigated
are the travel times and amplitudes of particular wave types. The
travel time of a seismic wave from an earthquake focus to a seismograph
is determined by the velocity distribution along its path. Seismic
body waves from distant earthquakes reach the recording station at times
which differ significantly from those predicted for a radially
symmetrical standard Earth model. The difference between the observed
and the calculated travel times is called the ‘Teleseismic Residual’.
These residuals are found to be a function of the geographic location of the
receiver, its distance from the epicentre and the azimuth from which the
seismic waves arrive. Teleseismic records from a dense array of
seismometers located over the region of interest provide the data for
tomography. The observed arrival times of the first seismic phases (i.e.,
P, PKP) are read from the seismograms, while the theoretical travel times
are estimated from the knowledge of the hypocentre coordinates using a
standard seismological table.
Thus,
Travel Time Residual: Rij = Tij(obs) − Tij(th) …(12.1)
where
Tij(obs) = observed travel time at station ‘i’ for event ‘j’
Tij(th) = theoretical or computed travel time for the same event.

12.2 Average relative Residual or Relative Residual


The average relative residual at a station is obtained by subtracting the
event-average residual from the absolute residual at the station, i.e.

RRij = Rij − (1/n) Σ(i=1 to n) Rij …(12.2)

Relative residuals have the advantage that they are free from all those
factors and errors that are common to the various ray paths. Hence, we
generally use relative residuals rather than absolute residuals.
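Equation (12.2) amounts to removing the per-event mean; a minimal sketch with hypothetical residual values:

```python
def relative_residuals(residuals):
    """Eq. (12.2): subtract the event-average residual (mean over the n
    stations) from each station's absolute residual for one event."""
    n = len(residuals)
    event_mean = sum(residuals) / n
    return [r - event_mean for r in residuals]

# Hypothetical absolute residuals (seconds) at four stations for one event;
# the relative residuals sum to zero by construction:
print(relative_residuals([0.8, 0.2, -0.4, 0.6]))
```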
Negative residuals indicate the presence of a higher-velocity feature, while
positive residuals are caused by low-velocity features. From a single
observation of such a time residual it is not possible to resolve the various
slow and fast regions that lie along the ray path. But by looking at an
ensemble of ray paths from different azimuths, which sample the region
under investigation, one can delineate the seismically fast and slow
regions in the Earth and thus obtain a 3-dimensional image of its
structure.
It is now possible to analyze a large number of such residuals through
a single inverse formalism, which constitutes the basic approach to
seismic tomography for imaging earth’s interior.
In all seismic tomography methods the medium is subdivided into
blocks.

References

» Kearey, P. and Brooks, M. (1991). An Introduction to Geophysical
Exploration, Blackwell Scientific Publications, Oxford, England.
» Dobrin, M.B. and Savit, C.H. (1988). Introduction to Geophysical
Prospecting, 4th edition, McGraw-Hill, New York.
» Robinson, E.S. and Coruh, C. (1988). Basic Exploration Geophysics,
John Wiley and Sons, New York.
» Lowrie, W. (1997). Fundamentals of Geophysics, Cambridge
University Press.
» Telford, W.M., Geldart, L.P., Sheriff, R.E. and Keys, D.A. (1990).
Applied Geophysics, Cambridge University Press, Cambridge.
» Lay, T. and Wallace, T.C. (1995). Modern Global Seismology,
Academic Press, San Diego.
» Lillie, R.J. Whole Earth Geophysics.
