
MODULE III

33.3.0 ELECTROMAGNETIC FIELD THEORY (44 Hours)


33.3.0.1 Introduction
33.3.0 Module Summary and Time Allocation
33.3.1 Introduction to Electromagnetic Waves (2 Hours)
33.3.2 Electrodynamics (8 Hours)
33.3.3 Maxwell’s Equations (12 Hours)
33.3.4 Properties of Electromagnetic Waves (10 Hours)
33.3.5 Energy and Momentum in the Electromagnetic Fields (12 Hours)

Electromagnetic field
An electromagnetic field (also EMF or EM field) is a physical field produced by electrically
charged objects. It affects the behavior of charged objects in the vicinity of the field. The
electromagnetic field extends indefinitely throughout space and describes the electromagnetic
interaction, which is one of the four fundamental interactions of nature (the others being
gravitation, the weak interaction and the strong interaction).

The field can be viewed as the combination of an electric field and a magnetic field. The electric
field is produced by stationary charges, and the magnetic field by moving charges (currents);
these two are often described as the sources of the field. The way in which charges and currents
interact with the electromagnetic field is described by Maxwell's equations and the Lorentz force
law.

From a classical perspective in the history of electromagnetism, the electromagnetic field can be
regarded as a smooth, continuous field, propagated in a wavelike manner; whereas from the
perspective of quantum field theory, the field is seen as quantized, being composed of individual
particles.

Contents
• 1 Structure of the electromagnetic field
  o 1.1 Continuous structure
  o 1.2 Discrete structure
• 2 Dynamics of the electromagnetic field
• 3 Electromagnetic field as a feedback loop
• 4 Mathematical description
• 5 Properties of the field
  o 5.1 Reciprocal behavior of electric and magnetic fields
  o 5.2 Light as an electromagnetic disturbance
• 6 Relation to and comparison with other physical fields
  o 6.1 Electromagnetic and gravitational fields
• 7 Applications
  o 7.1 Static E and M fields and static EM fields
  o 7.2 Time-varying EM fields in Maxwell’s equations
• 8 Health and safety
• 9 See also
• 10 References
• 11 Further reading
• 12 External links

Structure of the electromagnetic field


The electromagnetic field may be viewed in two distinct ways: a continuous structure or a
discrete structure.

Continuous structure

Classically, electric and magnetic fields are thought of as being produced by smooth motions of
charged objects. For example, oscillating charges produce electric and magnetic fields that may
be viewed in a 'smooth', continuous, wavelike fashion. In this case, energy is viewed as being
transferred continuously through the electromagnetic field between any two locations. For
instance, the metal atoms in a radio transmitter appear to transfer energy continuously. This view
is useful to a certain extent (radiation of low frequency), but problems are found at high
frequencies (see ultraviolet catastrophe).

Discrete structure

The electromagnetic field may be thought of in a more 'coarse' way. Experiments reveal that in
some circumstances electromagnetic energy transfer is better described as being carried in the
form of packets called quanta (in this case, photons) with a fixed frequency. Planck's relation
links the energy E of a photon to its frequency ν through the equation:[1]

E = hν

where h is Planck's constant, named in honor of Max Planck, and ν is the frequency of the
photon. Although modern quantum optics tells us that there also is a semi-classical explanation
of the photoelectric effect—the emission of electrons from metallic surfaces subjected to
electromagnetic radiation—the photon was historically (although not strictly necessarily) used to
explain certain observations. It is found that increasing the intensity of the incident radiation (so
long as one remains in the linear regime) increases only the number of electrons ejected, and has
almost no effect on the energy distribution of their ejection. Only the frequency of the radiation
is relevant to the energy of the ejected electrons.
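
As a quick numerical illustration of this relation, the following Python sketch evaluates E = hν for a visible-light frequency. It is a minimal sketch: the constants are rounded CODATA values and the chosen frequency is an arbitrary example.

# Minimal sketch: photon energy from the Planck relation E = h * nu.
h = 6.626e-34          # Planck's constant, J*s (rounded)
eV = 1.602e-19         # joules per electronvolt (rounded)

def photon_energy(frequency_hz):
    """Energy of a single photon of the given frequency, in joules."""
    return h * frequency_hz

nu = 5.5e14            # roughly green light (illustrative choice)
E = photon_energy(nu)
print(E, "J =", E / eV, "eV")   # about 3.6e-19 J, i.e. roughly 2.3 eV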

This quantum picture of the electromagnetic field (which treats it as analogous to harmonic
oscillators) has proved very successful, giving rise to quantum electrodynamics, a quantum field
theory describing the interaction of electromagnetic radiation with charged matter. It also gives
rise to quantum optics, which is different from quantum electrodynamics in that the matter itself
is modelled using quantum mechanics rather than quantum field theory.

Dynamics of the electromagnetic field


In the past, electrically charged objects were thought to produce two different, unrelated types of
field associated with their charge property. An electric field is produced when the charge is
stationary with respect to an observer measuring the properties of the charge, and a magnetic
field (as well as an electric field) is produced when the charge moves (creating an electric
current) with respect to this observer. Over time, it was realized that the electric and magnetic
fields are better thought of as two parts of a greater whole — the electromagnetic field. Recall
that "until 1831 electricity and magnetism had been viewed as unrelated phenomena. In 1831,
Michael Faraday, one of the great thinkers of his time, made the seminal observation that time-
varying magnetic fields could induce electric currents and then, in 1864, James Clerk Maxwell
published his famous paper on a dynamical theory of the electromagnetic field. See Maxwell
1864, page 499; also David J. Griffiths (1999), Introduction to Electrodynamics, third edition,
Prentice Hall, pp. 559–562" (as quoted in Gabriela, 2009).

Once this electromagnetic field has been produced from a given charge distribution, other
charged objects in this field will experience a force (in a similar way that planets experience a
force in the gravitational field of the Sun). If these other charges and currents are comparable in
size to the sources producing the above electromagnetic field, then a new net electromagnetic
field will be produced. Thus, the electromagnetic field may be viewed as a dynamic entity that
causes other charges and currents to move, and which is also affected by them. These
interactions are described by Maxwell's equations and the Lorentz force law. (This discussion
ignores the radiation reaction force.)

Electromagnetic field as a feedback loop


The behavior of the electromagnetic field can be resolved into four different parts of a loop:

• the electric and magnetic fields are generated by electric charges,
• the electric and magnetic fields interact with each other,
• the electric and magnetic fields produce forces on electric charges,
• the electric charges move in space.

A common misunderstanding is that (a) the quanta of the fields act in the same manner as (b) the
charged particles that generate the fields. In our everyday world, charged particles, such as
electrons, move slowly through matter, typically drifting on the order of a few inches (or
centimetres) per second, whereas fields propagate at the speed of light, approximately 300
thousand kilometres (186 thousand miles) per second. The speed difference between drifting
charged particles and field quanta is therefore many orders of magnitude. Maxwell's equations
relate (a) the presence and movement of charged particles to (b) the generation of fields. Those
fields can then exert forces on, and move, other slowly moving charged particles. Charged
particles can move at relativistic speeds approaching the field propagation speed but, as Einstein
showed, this requires enormous field energies, which are not present in our everyday experience
of electricity, magnetism, matter, and time and space.

The feedback loop can be summarized in a list, including phenomena belonging to each part of
the loop:

• charged particles generate electric and magnetic fields
• the fields interact with each other
  o a changing electric field acts like a current, generating a 'vortex' of magnetic field
  o Faraday induction: a changing magnetic field induces a (negative) vortex of electric field
  o Lenz's law: negative feedback loop between electric and magnetic fields
• fields act upon particles
  o Lorentz force: force due to the electromagnetic field
    ▪ electric force: same direction as the electric field
    ▪ magnetic force: perpendicular both to the magnetic field and to the velocity of the charge
• particles move
  o current is movement of particles
• particles generate more electric and magnetic fields; the cycle repeats

Mathematical description
Mathematical descriptions of the electromagnetic field

There are different mathematical ways of representing the electromagnetic field. The first one
views the electric and magnetic fields as three-dimensional vector fields. These vector fields
each have a value defined at every point of space and time and are thus often regarded as
functions of the space and time coordinates. As such, they are often written as E(x, y, z, t)
(electric field) and B(x, y, z, t) (magnetic field).
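
As an illustration of this representation, the short Python sketch below encodes E(x, y, z, t) and B(x, y, z, t) as functions for an assumed plane wave travelling along x; the amplitude, frequency and polarization are illustrative choices, not anything prescribed by the text.

import numpy as np

# Minimal sketch: E and B as vector-valued functions of position and time.
# The plane-wave form, 1 GHz frequency and y-polarization are assumptions made
# only for illustration.
c = 3.0e8                      # speed of light, m/s (rounded)
omega = 2 * np.pi * 1.0e9      # angular frequency for a 1 GHz wave
k = omega / c                  # wavenumber for propagation along +x

def E_field(x, y, z, t):
    """Electric field vector (V/m) at the point (x, y, z) and time t."""
    return np.array([0.0, np.cos(k * x - omega * t), 0.0])

def B_field(x, y, z, t):
    """Magnetic field vector (T) at the point (x, y, z) and time t."""
    return np.array([0.0, 0.0, np.cos(k * x - omega * t) / c])

print(E_field(0.0, 0.0, 0.0, 0.0), B_field(0.0, 0.0, 0.0, 0.0))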

If only the electric field (E) is non-zero, and is constant in time, the field is said to be an
electrostatic field. Similarly, if only the magnetic field (B) is non-zero and is constant in time, the
field is said to be a magnetostatic field. However, if either the electric or magnetic field has a
time-dependence, then both fields must be considered together as a coupled electromagnetic field
using Maxwell's equations.[2]

With the advent of special relativity, physical laws became susceptible to the formalism of
tensors. Maxwell's equations can be written in tensor form, generally viewed by physicists as a
more elegant means of expressing physical laws.

The behaviour of electric and magnetic fields, whether in cases of electrostatics, magnetostatics,
or electrodynamics (electromagnetic fields), is governed by Maxwell's equations. In the vector
field formalism, these are:

∇ · E = ρ/ε0        (Gauss's law)

∇ · B = 0        (Gauss's law for magnetism)

∇ × E = −∂B/∂t        (Faraday's law)

∇ × B = μ0 J + μ0 ε0 ∂E/∂t        (Ampère-Maxwell law)

where ρ is the charge density, which can (and often does) depend on time and position, ε0 is the
permittivity of free space, μ0 is the permeability of free space, and J is the current density vector,
also a function of time and position. The units used above are the standard SI units. Inside a
linear material, Maxwell's equations change by switching the permeability and permittivity of
free space with the permeability and permittivity of the linear material in question. Inside other
materials which possess more complex responses to electromagnetic fields, these terms are often
represented by complex numbers, or tensors.

The Lorentz force law, F = q(E + v × B), governs the interaction of the electromagnetic field with charged matter.
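
A minimal Python sketch of the Lorentz force law; the particular charge, field and velocity values are arbitrary illustrative inputs.

import numpy as np

# Minimal sketch of the Lorentz force F = q * (E + v x B).
# The numerical values of q, E, B and v are arbitrary illustrative inputs.
def lorentz_force(q, E, B, v):
    """Force (N) on a charge q (C) moving with velocity v (m/s) in fields E (V/m) and B (T)."""
    return q * (E + np.cross(v, B))

q = 1.602e-19                     # one elementary charge (rounded)
E = np.array([0.0, 1.0e3, 0.0])   # 1 kV/m along y
B = np.array([0.0, 0.0, 0.02])    # 20 mT along z
v = np.array([1.0e5, 0.0, 0.0])   # 100 km/s along x

# The electric part points along +y; the magnetic part (q v x B) points along -y here.
print(lorentz_force(q, E, B, v))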

When a field crosses the boundary between two different media, the properties of the field change
according to the boundary conditions there. These conditions are derived from Maxwell's
equations. The tangential components of the electric and magnetic fields at the boundary of two
media satisfy:[3]

E1t = E2t
H1t = H2t        (current-free)

and the normal components of the flux densities satisfy

D1n = D2n        (charge-free)
B1n = B2n

The angle of refraction of an electric field between media is related to the permittivity ε of each
medium:

tan θ1 / tan θ2 = ε1 / ε2

The angle of refraction of a magnetic field between media is related to the permeability μ of
each medium:

tan θ1 / tan θ2 = μ1 / μ2
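
A small numerical sketch of the electric field-line refraction relation above, tan θ1 / tan θ2 = ε1 / ε2; the permittivities and the incidence angle are illustrative assumptions.

import math

# Minimal sketch: refraction of electric field lines at a charge-free boundary,
# using tan(theta1) / tan(theta2) = eps1 / eps2. The permittivities and angle
# below are illustrative assumptions.
eps1, eps2 = 1.0, 4.0           # e.g. vacuum into a dielectric with relative permittivity 4
theta1 = math.radians(30.0)     # field-line angle to the surface normal in medium 1

theta2 = math.atan(math.tan(theta1) * eps2 / eps1)
print(math.degrees(theta2))     # about 66.6 degrees: the field lines bend away from the normal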

Properties of the field


Reciprocal behavior of electric and magnetic fields

The two Maxwell equations, Faraday's Law and the Ampère-Maxwell Law, illustrate a very
practical feature of the electromagnetic field. Faraday's Law may be stated roughly as 'a
changing magnetic field creates an electric field'. This is the principle behind the electric
generator.

The Ampère-Maxwell law roughly states that 'a changing electric field creates a magnetic field'.
Thus, this law can be applied to generate a magnetic field and run an electric motor.

Light as an electromagnetic disturbance

Maxwell's equations take the form of an electromagnetic wave in a volume of space not
containing charges or currents (free space) – that is, where ρ and J are zero. Under these
conditions, the electric and magnetic fields satisfy the electromagnetic wave equation:[4]

∇²E − μ0 ε0 ∂²E/∂t² = 0,
∇²B − μ0 ε0 ∂²B/∂t² = 0.

James Clerk Maxwell was the first to obtain this relationship by his completion of Maxwell's
equations with the addition of a displacement current term to Ampère's circuital law.

Relation to and comparison with other physical fields


Fundamental forces
The electromagnetic interaction is one of the four fundamental interactions of nature, so it is
useful to compare the electromagnetic field with the gravitational, strong and weak fields. The
word 'force' is sometimes replaced by 'interaction' because modern particle physics models
electromagnetism as an exchange of particles known as gauge bosons.

Electromagnetic and gravitational fields

Sources of electromagnetic fields consist of two types of charge – positive and negative. This
contrasts with the sources of the gravitational field, which are masses. Masses are sometimes
described as gravitational charges, the important feature of them being that there are only
positive masses and no negative masses. Further, gravity differs from electromagnetism in that
positive masses attract other positive masses whereas same charges in electromagnetism repel
each other.

Applications
Static E and M fields and static EM fields

When an EM field (see electromagnetic tensor) is not varying in time, it may be seen as a purely
electric field, a purely magnetic field, or a mixture of both. However, the general case of a
static EM field with both electric and magnetic components present is the case that appears to
most observers. Observers who see only an electric or magnetic field component of a static EM
field have the other (electric or magnetic) component suppressed, due to the special case of the
immobile state of the charges that produce the EM field in that case. In such cases the other
component becomes manifest in other observer frames.

A consequence of this is that any case that seems to consist of a "pure" static electric or
magnetic field can be converted to an EM field, with both E and M components present, simply
by moving the observer into a frame of reference which is moving with regard to the frame
in which only the "pure" electric or magnetic field appears. That is, a pure static electric field
will show the familiar magnetic field associated with a current in any frame of reference where
the charge moves. Likewise, any new motion of a charge in a region that seemed previously to
contain only a magnetic field will show that the space now contains an electric field as well,
which will be found to produce an additional Lorentz force upon the moving charge.

Thus, electrostatics, as well as magnetism and magnetostatics, are now seen as studies of the
static EM field when a particular frame has been selected to suppress the other type of field; and
since an EM field with both electric and magnetic components will appear in any other frame,
these "simpler" effects are merely a consequence of the observer's frame of reference. The
"applications" of all such non-time-varying (static) fields are discussed in the main articles
linked in this section.

Time-varying EM fields in Maxwell’s equations


An EM field that varies in time has two “causes” in Maxwell’s equations. One is charges and
currents (so-called “sources”), and the other cause for an E or M field is a change in the other
type of field (this last cause also appears in “free space” very far from currents and charges).

An electromagnetic field very far from currents and charges (sources) is called electromagnetic
radiation (EMR) since it radiates from the charges and currents in the source, and has no
"feedback" effect on them, and is also not affected directly by them in the present time (rather, it
is indirectly produced by a sequence of changes in fields radiating out from them in the past).
EMR consists of the radiations in the electromagnetic spectrum, including radio waves,
microwave, infrared, visible light, ultraviolet light, X-rays, and gamma rays. The many
commercial applications of these radiations are discussed in the named and linked articles.

A notable application of visible light is that this type of energy from the Sun powers all life on
Earth that either makes or uses oxygen.

A changing electromagnetic field which is physically close to currents and charges (see near and
far field for a definition of “close”) will have a dipole characteristic that is dominated by either a
changing electric dipole, or a changing magnetic dipole. This type of dipole field near sources is
called an electromagnetic near-field.

Changing electric dipole fields, as such, are used commercially as near-fields mainly as a source
of dielectric heating. Otherwise, they appear parasitically around conductors which absorb EMR,
and around antennas which have the purpose of generating EMR at greater distances.

Changing magnetic dipole fields (i.e., magnetic near-fields) are used commercially for many
types of magnetic induction devices. These include motors and electrical transformers at low
frequencies, and devices such as metal detectors and MRI scanner coils at higher frequencies.
Sometimes these high-frequency magnetic fields change at radio frequencies without being far-
field waves, and thus are not radio waves; see RFID tags. See also near-field communication.
Further commercial uses of near-field EM effects may be found in the article on virtual photons,
since at the quantum level these fields are represented by these particles. Far-field effects (EMR)
in the quantum picture of radiation are represented by ordinary photons.

Health and safety


The potential health effects of the very low frequency EMFs surrounding power lines and
electrical devices are the subject of on-going research and a significant amount of public debate.
The US National Institute for Occupational Safety and Health (NIOSH) has issued some
cautionary advisories but stresses that the data is currently too limited to draw good conclusions.[5]

The potential effects of electromagnetic fields on human health vary widely depending on the
frequency and intensity of the fields. For more information on the health effects due to specific
parts of the electromagnetic spectrum, see the following articles:

• Static electric fields: see Electric shock
• Static magnetic fields: see MRI#Safety
• Extremely low frequency (ELF): see Power lines#Health concerns
• Radio frequency (RF): see Electromagnetic radiation and health
• Light: see Laser safety
• Ultraviolet (UV): see Sunburn
• Gamma rays: see Gamma ray
• Mobile telephony: see Mobile phone radiation and health

Maxwell's equations
Maxwell's equations are a set of partial differential equations that, together with the Lorentz
force law, form the foundation of classical electrodynamics, classical optics, and electric circuits.
These fields in turn underlie modern electrical and communications technologies. Maxwell's
equations describe how electric and magnetic fields are generated and altered by each other and
by charges and currents. They are named after the Scottish physicist and mathematician James
Clerk Maxwell, who published an early form of those equations between 1861 and 1862.

The equations have two major variants. The "microscopic" set of Maxwell's equations uses total
charge and total current, including the complicated charges and currents in materials at the
atomic scale; it has universal applicability but may be unfeasible to calculate. The "macroscopic"
set of Maxwell's equations defines two new auxiliary fields that describe large-scale behavior
without having to consider these atomic scale details, but it requires the use of parameters
characterizing the electromagnetic properties of the relevant materials.

The term "Maxwell's equations" is often used for other forms of Maxwell's equations. For
example, space-time formulations are commonly used in high energy and gravitational physics.
These formulations, defined on space-time rather than space and time separately, are
manifestly[note 1] compatible with special and general relativity. In quantum mechanics and
analytical mechanics, versions of Maxwell's equations based on the electric and magnetic
potentials are preferred.

Since the mid-20th century, it has been understood that Maxwell's equations are not exact laws of
the universe, but are a classical approximation to the more accurate and fundamental theory of
quantum electrodynamics. In most cases, though, quantum deviations from Maxwell's equations
are immeasurably small. Exceptions occur when the particle nature of light is important or for
very strong electric fields.

Contents
• 1 Formulation in terms of electric and magnetic fields
• 2 Conventional formulation in SI units
• 3 Relationship between differential and integral formulations
  o 3.1 Flux and divergence
  o 3.2 Circulation and curl
  o 3.3 Time evolution
• 4 Conceptual descriptions
  o 4.1 Gauss's law
  o 4.2 Gauss's law for magnetism
  o 4.3 Faraday's law
  o 4.4 Ampère's law with Maxwell's addition
• 5 Vacuum equations, electromagnetic waves and speed of light
• 6 "Microscopic" versus "macroscopic"
  o 6.1 Bound charge and current
  o 6.2 Auxiliary fields, polarization and magnetization
  o 6.3 Constitutive relations
• 7 Equations in Gaussian units
• 8 Alternative formulations
• 9 Solutions
• 10 Limitations for a theory of electromagnetism
• 11 Variations
  o 11.1 Magnetic monopoles
• 12 See also
• 13 Notes
• 14 References
• 15 Historical publications
• 16 External links
  o 16.1 Modern treatments
  o 16.2 Other
Formulation in terms of electric and magnetic fields


To describe electromagnetism in this formulation, the powerful language of vector calculus is
used throughout this article. Symbols in bold represent vector quantities, and symbols in italics
represent scalar quantities, unless otherwise indicated.

The equations introduce the electric field E, a vector field, and the magnetic field B, a
pseudovector field, where each generally have time-dependence. The sources of these fields are
electric charges and electric currents, which can be expressed as local densities namely charge
density ρ and current density J. A separate law of nature, the Lorentz force law, describes how
the electric and magnetic field act on charged particles and currents. A version of this law was
included in the original equations by Maxwell but, by convention, is no longer.

In the electric-magnetic field formulation there are four equations. Two of them describe how the
fields vary in space due to sources, if any; electric fields emanating from electric charges in
Gauss's law, and magnetic fields as closed field lines not due to magnetic monopoles in Gauss's
law for magnetism. The other two describe how the fields "circulate" around their respective
sources; the magnetic field "circulates" around electric currents and time varying electric fields
in Ampère's law with Maxwell's addition, while the electric field "circulates" around time
varying magnetic fields in Faraday's law.

The precise formulation of Maxwell's equations depends on the precise definition of the
quantities involved. Conventions differ between unit systems, because various definitions and
dimensions change when dimensionful factors like the speed of light c are absorbed into them,
and this changes how the constants appear in the equations.

Conventional formulation in SI units


The equations in this section are given in the convention used with SI units. Other units
commonly used are Gaussian units based on the cgs system,[1] Lorentz–Heaviside units (used
mainly in particle physics), and Planck units (used in theoretical physics). See below for the
formulation with Gaussian units.

Gauss's law
  Differential form:  ∇ · E = ρ/ε0
  Integral form:  ∮∂Ω E · dS = (1/ε0) ∭Ω ρ dV

Gauss's law for magnetism
  Differential form:  ∇ · B = 0
  Integral form:  ∮∂Ω B · dS = 0

Maxwell–Faraday equation (Faraday's law of induction)
  Differential form:  ∇ × E = −∂B/∂t
  Integral form:  ∮∂Σ E · dℓ = −(d/dt) ∬Σ B · dS

Ampère's circuital law (with Maxwell's addition)
  Differential form:  ∇ × B = μ0 J + μ0 ε0 ∂E/∂t
  Integral form:  ∮∂Σ B · dℓ = μ0 ∬Σ J · dS + μ0 ε0 (d/dt) ∬Σ E · dS
Where:
• ∇ is the del operator,
• E is the electric field,
• B is the magnetic field,
• J is the current density,
• ρ is the total charge density,
• ε0 is the permittivity of free space,
• μ0 is the permeability of free space,

and in the integral equations,

• ∫ denotes an integral (volume, surface or line, as indicated),
• Ω denotes a volume, and ∂Ω is the closed surface enclosing it, with normal directed outwards,
• dV denotes a differential volume element of Ω,
• Σ denotes a non-closed surface (assumed to be time independent),
• dS denotes a differential vector area element of ∂Ω or Σ, parallel to the surface normal, and
• ∂Σ is the closed loop bounding Σ, traversed counterclockwise (in accordance with the direction of dS).

The universal constants appearing in the equations are the permittivity of free space ε0 and the
permeability of free space μ0, a general characteristic of fundamental field equations.

In the differential equations, a local description of the fields, the nabla symbol ∇ denotes the
three-dimensional gradient operator, and from it ∇· is the divergence operator and ∇× the curl
operator. The sources are taken to be local densities of charge and current.

In the integral equations, a description of the fields within a region of space, Ω is any fixed
volume with boundary surface ∂Ω, and Σ is any fixed open surface with boundary curve ∂Σ.
Here "fixed" means the volume or surface does not change in time. Although it is possible to
formulate Maxwell's equations with time-dependent surfaces and volumes, this is not actually
necessary: the equations are correct and complete with time-independent surfaces. The sources
are correspondingly the total amounts of charge and current within these volumes and surfaces,
found by integration. The volume integral of the total charge density ρ over any fixed volume Ω
is the total electric charge contained in Ω:

Q = ∭Ω ρ dV,

and the net electric current I is the surface integral of the electric current density J passing
through any open fixed surface Σ:

I = ∬Σ J · dS,

where dS denotes the differential vector element of surface area, normal to the surface Σ. (Vector
area is also denoted by A rather than S, but this conflicts with the magnetic potential, a separate
vector field.)

The "total charge or current" refers to including free and bound charges, or free and bound
currents. These are used in the macroscopic formulation below.
Relationship between differential and integral formulations
The differential and integral formulations of the equations are mathematically equivalent, by the
divergence theorem in the case of Gauss's law and Gauss's law for magnetism, and by the
Kelvin–Stokes theorem in the case of Faraday's law and Ampère's law. Both the differential and
integral formulations are useful. The integral formulation can often be used to simply and
directly calculate fields from symmetric distributions of charges and currents. On the other hand,
the differential formulation is a more natural starting point for calculating the fields in more
complicated (less symmetric) situations, for example using finite element analysis.[2]

Flux and divergence

Closed volume Ω and its boundary ∂Ω, enclosing a source (+) and sink (−) of a vector field F.
Here, F could be the E field with source electric charges, but not the B field which has no
magnetic charges as shown. The outward unit normal is n.

The "fields emanating from the sources" can be inferred from the surface integrals of the fields

through the closed surface ∂Ω, defined as the electric flux and magnetic flux
, as well as their respective divergences ∇ · E and ∇ · B. These surface integrals and
divergences are connected by the divergence theorem.
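
As a numerical illustration of this connection, the Python sketch below integrates the electric flux of a point charge over a Gaussian sphere and compares it with q/ε0 (Gauss's law); the charge, radius and grid resolution are illustrative choices.

import numpy as np

# Minimal sketch: electric flux of a point charge through a sphere, compared with q / eps0.
# The charge value, sphere radius and grid resolution are illustrative choices.
eps0 = 8.854e-12
q = 1.0e-9            # 1 nC at the origin
R = 0.5               # radius of the Gaussian sphere, m
n = 400               # angular grid resolution

theta = np.linspace(0.0, np.pi, n)
phi = np.linspace(0.0, 2.0 * np.pi, n)
TH, PH = np.meshgrid(theta, phi, indexing="ij")

# Radial field magnitude on the sphere, dotted with the outward normal, times the
# area element R^2 sin(theta) dtheta dphi, summed over the grid.
E_r = q / (4 * np.pi * eps0 * R**2)
dA = R**2 * np.sin(TH) * (theta[1] - theta[0]) * (phi[1] - phi[0])
flux = np.sum(E_r * dA)

print(flux, q / eps0)   # the two values agree closely (about 113 V*m)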

Circulation and curl


Open surface Σ and boundary ∂Σ. F could be the E or B fields. Again, n is the unit normal.
(The curl of a vector field doesn't literally look like the "circulations", this is a heuristic
depiction).

The "circulation of the fields" can be interpreted from the line integrals of the fields around the
closed curve ∂Σ:

where dℓ is the differential vector element of path length tangential to the path/curve, as well as
their curls:

These line integrals and curls are connected by Stokes' theorem, and are analogous to quantities
in classical fluid dynamics: the circulation of a fluid is the line integral of the fluid's flow
velocity field around a closed loop, and the vorticity of the fluid is the curl of the velocity field.

Time evolution

The "dynamics" or "time evolution of the fields" is due to the partial derivatives of the fields
with respect to time:

These derivatives are crucial for the prediction of field propagation in the form of
electromagnetic waves. Since the surface is taken to be time-independent, we can make the
following transition in Faraday's law:
see differentiation under the integral sign for more on this result.

Conceptual descriptions
Gauss's law

Gauss's law describes the relationship between a static electric field and the electric charges that
cause it: The static electric field points away from positive charges and towards negative charges.
In the field line description, electric field lines begin only at positive electric charges and end
only at negative electric charges. 'Counting' the number of field lines passing through a closed
surface, therefore, yields the total charge (including bound charge due to polarization of material)
enclosed by that surface, divided by the permittivity of free space (the vacuum permittivity ε0). More
technically, it relates the electric flux through any hypothetical closed "Gaussian surface" to the
enclosed electric charge.

Gauss's law for magnetism: magnetic field lines never begin nor end but form loops or extend to
infinity as shown here with the magnetic field due to a ring of current.

Gauss's law for magnetism

Gauss's law for magnetism states that there are no "magnetic charges" (also called magnetic
monopoles), analogous to electric charges.[3] Instead, the magnetic field due to materials is
generated by a configuration called a dipole. Magnetic dipoles are best represented as loops of
current but resemble positive and negative 'magnetic charges', inseparably bound together,
having no net 'magnetic charge'. In terms of field lines, this equation states that magnetic field
lines neither begin nor end but make loops or extend to infinity and back. In other words, any
magnetic field line that enters a given volume must somewhere exit that volume. Equivalent
technical statements are that the sum total magnetic flux through any Gaussian surface is zero, or
that the magnetic field is a solenoidal vector field.
Faraday's law

In a geomagnetic storm, a surge in the flux of charged particles temporarily alters Earth's
magnetic field, which induces electric fields in Earth's atmosphere, thus causing surges in
electrical power grids. Artist's rendition; sizes are not to scale.

The Maxwell–Faraday equation (the modern version of Faraday's law) describes how a time-varying
magnetic field creates ("induces") an electric field.[3] This dynamically induced electric field has
closed field lines, just as the magnetic field does, unless superposed with a static (charge-induced) electric
field. This aspect of electromagnetic induction is the operating principle behind many electric
generators: for example, a rotating bar magnet creates a changing magnetic field, which in turn
generates an electric field in a nearby wire.
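
A minimal numerical sketch of this induction principle: the flux of the rotating magnet through a single-turn loop is modelled, as an assumption, by a sinusoid, and the induced EMF is obtained as −dΦ/dt.

import numpy as np

# Minimal sketch of Faraday induction: the EMF around a loop equals -dPhi/dt.
# The peak flux density, loop area and rotation frequency are assumed values.
B0 = 0.1                      # peak flux density through the loop, T
A = 0.01                      # loop area, m^2
omega = 2 * np.pi * 50.0      # 50 Hz rotation

t = np.linspace(0.0, 0.04, 2001)      # two rotation periods
phi = B0 * A * np.cos(omega * t)      # magnetic flux through the loop, Wb
emf = -np.gradient(phi, t)            # numerical -dPhi/dt, volts

print(emf.max())   # close to B0 * A * omega, about 0.31 V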

Ampère's law with Maxwell's addition

Magnetic core memory (1954) is an application of Ampère's law. Each core stores one bit of
data.

Ampère's law with Maxwell's addition states that magnetic fields can be generated in two
ways: by electrical current (this was the original "Ampère's law") and by changing electric fields
(this was "Maxwell's addition").

Maxwell's addition to Ampère's law is particularly important: it shows that not only does a
changing magnetic field induce an electric field, but also a changing electric field induces a
magnetic field.[3][4] Therefore, these equations allow self-sustaining "electromagnetic waves" to
travel through empty space (see electromagnetic wave equation).

The speed calculated for electromagnetic waves, which could be predicted from experiments on
charges and currents,[note 2] exactly matches the speed of light; indeed, light is one form of
electromagnetic radiation (as are X-rays, radio waves, and others). Maxwell understood the
connection between electromagnetic waves and light in 1861, thereby unifying the theories of
electromagnetism and optics.

Vacuum equations, electromagnetic waves and speed of light

This 3D diagram shows a plane linearly polarized wave propagating from left to right with the
same wave equations where E = E0 sin(−ωt + k ⋅ r) and B = B0 sin(−ωt + k ⋅ r)

In a region with no charges (ρ = 0) and no currents (J = 0), such as in a vacuum, Maxwell's
equations reduce to:

∇ · E = 0,    ∇ × E = −∂B/∂t,
∇ · B = 0,    ∇ × B = μ0 ε0 ∂E/∂t.

Taking the curl (∇×) of the curl equations, and using the curl of the curl identity ∇×(∇×X) =
∇(∇·X) − ∇²X, we obtain the wave equations

(1/c²) ∂²E/∂t² − ∇²E = 0,
(1/c²) ∂²B/∂t² − ∇²B = 0,

which identify

c = 1/√(μ0 ε0)

with the speed of light in free space. In materials with relative permittivity εr and relative
permeability μr, the phase velocity of light becomes

v = 1/√(μ0 μr ε0 εr) = c/√(μr εr),

which is usually less than c.

In addition, E and B are mutually perpendicular to each other and the direction of wave
propagation, and are in phase with each other. A sinusoidal plane wave is one special solution of
these equations. Maxwell's equations explain how these waves can physically propagate through
space. The changing magnetic field creates a changing electric field through Faraday's law. In
turn, that electric field creates a changing magnetic field through Maxwell's addition to Ampère's
law. This perpetual cycle allows these waves, now known as electromagnetic radiation, to move
through space at velocity c.
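
The following short Python sketch evaluates these relations numerically; the relative permittivity chosen for the medium is an illustrative assumption.

import math

# Minimal sketch: the wave speed implied by the vacuum equations, c = 1 / sqrt(mu0 * eps0),
# and the phase velocity c / sqrt(eps_r * mu_r) in a linear medium.
eps0 = 8.8541878128e-12    # F/m (rounded)
mu0 = 4 * math.pi * 1e-7   # H/m (classical defined value)

c = 1.0 / math.sqrt(mu0 * eps0)
print(c)                   # about 2.998e8 m/s

eps_r, mu_r = 1.77, 1.0    # roughly water at optical frequencies (assumed)
v = c / math.sqrt(eps_r * mu_r)
print(v, c / v)            # phase velocity, and the corresponding refractive index (about 1.33)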

"Microscopic" versus "macroscopic"


The microscopic variant of Maxwell's equations expresses the electric field E and the magnetic B
field in terms of the total charge and total current present, including the charges and currents at
the atomic level. It is sometimes called the general form of Maxwell's equations or "Maxwell's
equations in a vacuum". The macroscopic variant of Maxwell's equations is equally general,
however, with the difference being one of bookkeeping.

"Maxwell's macroscopic equations", also known as Maxwell's equations in matter, are more
similar to those that Maxwell introduced himself.

Gauss's law
  Differential form:  ∇ · D = ρf
  Integral form:  ∮∂Ω D · dS = ∭Ω ρf dV

Gauss's law for magnetism
  Differential form:  ∇ · B = 0
  Integral form:  ∮∂Ω B · dS = 0

Maxwell–Faraday equation (Faraday's law of induction)
  Differential form:  ∇ × E = −∂B/∂t
  Integral form:  ∮∂Σ E · dℓ = −(d/dt) ∬Σ B · dS

Ampère's circuital law (with Maxwell's addition)
  Differential form:  ∇ × H = Jf + ∂D/∂t
  Integral form:  ∮∂Σ H · dℓ = ∬Σ Jf · dS + (d/dt) ∬Σ D · dS
Unlike the "microscopic" equations, the "macroscopic" equations separate out the bound charge
Qb and current Ib to obtain equations that depend only on the free charges Qf and currents If.
This factorization can be made by splitting the total electric charge and current as follows:

The cost of this factorization is that additional fields, the displacement field D and the
magnetizing field-H, are defined and need to be determined. Phenomenological constituent
equations relate the additional fields to the electric field E and the magnetic B-field, often
through a simple linear relation.

For a detailed description of the differences between the microscopic (total charge and current
including material contributes or in air/vacuum)[note 3] and macroscopic (free charge and current;
practical to use on materials) variants of Maxwell's equations, see below.

Bound charge and current

Main articles: Current density, Bound charge and Bound current

Left: A schematic view of how an assembly of microscopic dipoles produces opposite surface
charges as shown at top and bottom. Right: How an assembly of microscopic current loops add
together to produce a macroscopically circulating current loop. Inside the boundaries, the
individual contributions tend to cancel, but at the boundaries no cancelation occurs.

When an electric field is applied to a dielectric material its molecules respond by forming
microscopic electric dipoles – their atomic nuclei move a tiny distance in the direction of the
field, while their electrons move a tiny distance in the opposite direction. This produces a
macroscopic bound charge in the material even though all of the charges involved are bound to
individual molecules. For example, if every molecule responds the same, similar to that shown in
the figure, these tiny movements of charge combine to produce a layer of positive bound charge
on one side of the material and a layer of negative charge on the other side. The bound charge is
most conveniently described in terms of the polarization P of the material, its dipole moment per
unit volume. If P is uniform, a macroscopic separation of charge is produced only at the surfaces
where P enters and leaves the material. For non-uniform P, a charge is also produced in the bulk.[5]

Somewhat similarly, in all materials the constituent atoms exhibit magnetic moments that are
intrinsically linked to the angular momentum of the components of the atoms, most notably their
electrons. The connection to angular momentum suggests the picture of an assembly of
microscopic current loops. Outside the material, an assembly of such microscopic current loops
is not different from a macroscopic current circulating around the material's surface, despite the
fact that no individual charge is traveling a large distance. These bound currents can be described
using the magnetization M.[6]

The very complicated and granular bound charges and bound currents, therefore, can be
represented on the macroscopic scale in terms of P and M which average these charges and
currents on a sufficiently large scale so as not to see the granularity of individual atoms, but also
sufficiently small that they vary with location in the material. As such, Maxwell's
macroscopic equations ignore many fine-scale details that can be unimportant to
understanding matters on a gross scale, by calculating fields that are averaged over some suitable
volume.

Auxiliary fields, polarization and magnetization

The definitions (not constitutive relations) of the auxiliary fields are:

D = ε0 E + P,
H = (1/μ0) B − M,

where P is the polarization field and M is the magnetization field, which are defined in terms of
microscopic bound charges and bound currents respectively. The macroscopic bound charge
density ρb and bound current density Jb in terms of polarization P and magnetization M are then
defined as

ρb = −∇ · P,
Jb = ∇ × M + ∂P/∂t.

If we define the free, bound, and total charge and current density by

ρ = ρf + ρb,    J = Jf + Jb,

and use the defining relations above to eliminate D and H, the "macroscopic" Maxwell's
equations reproduce the "microscopic" equations.
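
As a small numerical illustration of the relation ρb = −∇ · P, the Python sketch below differentiates an assumed one-dimensional polarization profile representing a uniformly polarized slab.

import numpy as np

# Minimal sketch: bound charge density from polarization, rho_b = -dP/dx in one dimension.
# The smoothed slab profile below (P ~ 1 for |x| < 1, ~ 0 outside) is an assumption
# chosen only so the derivative is well defined.
x = np.linspace(-2.0, 2.0, 2001)
P = 1.0 / (1.0 + np.exp(-20.0 * (1.0 - np.abs(x))))

rho_b = -np.gradient(P, x)

# The bound charge appears as two sheets of opposite sign at the slab surfaces and
# integrates to zero overall, as expected for bound charge.
print(np.sum(rho_b) * (x[1] - x[0]))                 # approximately 0
print(x[np.argmax(rho_b)], x[np.argmin(rho_b)])      # near x = +1 and x = -1
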
Constitutive relations

Main article: Constitutive equation § Electromagnetism

In order to apply 'Maxwell's macroscopic equations', it is necessary to specify the relations


between displacement field D and the electric field E, as well as the magnetizing field H and the
magnetic field B. Equivalently, we have to specify the dependence of the polarisation P (hence
the bound charge) and the magnetisation M (hence the bound current) on the applied electric and
magnetic field. The equations specifying this response are called constitutive relations. For real-
world materials, the constitutive relations are rarely simple, except approximately, and usually
determined by experiment. See the main article on constitutive relations for a fuller description.

For materials without polarisation and magnetisation ("vacuum"), the constitutive relations are

D = ε0 E,    H = B/μ0,

for scalar constants ε0 and μ0. Since there is no bound charge, the total and the free charge and
current are equal.

More generally, for linear materials the constitutive relations are

D = ε E,    H = B/μ,

where ε is the permittivity and μ the permeability of the material. Even the linear case can have
various complications, however.
various complications, however.

 For homogeneous materials, ε and μ are constant throughout the material, while for
inhomogeneous materials they depend on location within the material (and perhaps time).
 For isotropic materials, ε and μ are scalars, while for anisotropic materials (e.g. due to
crystal structure) they are tensors.

 Materials are generally dispersive, so ε and μ depend on the frequency of any incident
EM waves.

Even more generally, in the case of non-linear materials (see for example nonlinear optics), D
and P are not necessarily proportional to E, similarly B is not necessarily proportional to H or
M. In general D and H depend on both E and B, on location and time, and possibly other
physical quantities.
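
A minimal Python sketch of the linear, isotropic constitutive relations D = εE and H = B/μ; the material parameters are illustrative assumptions.

import numpy as np

# Minimal sketch of the linear constitutive relations D = eps * E and H = B / mu,
# with eps = eps_r * eps0 and mu = mu_r * mu0. The material parameters are assumed.
eps0 = 8.854e-12
mu0 = 4 * np.pi * 1e-7
eps_r, mu_r = 2.1, 1.0              # a PTFE-like dielectric (illustrative)

E = np.array([1.0e3, 0.0, 0.0])     # V/m
B = np.array([0.0, 1.0e-3, 0.0])    # T

D = eps_r * eps0 * E                # C/m^2
H = B / (mu_r * mu0)                # A/m
print(D, H)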

In applications one also has to describe how the free currents and charge density behave in terms
of E and B possibly coupled to other physical quantities like pressure, and the mass, number
density, and velocity of charge-carrying particles. E.g., the original equations given by Maxwell
(see History of Maxwell's equations) included Ohm's law, relating the current density J to the electric field through the conductivity σ (in its simplest form, J = σE).

Equations in Gaussian units


Main article: Gaussian units

Gaussian units are a popular system of units that is part of the centimetre–gram–second system
of units (cgs). When using cgs units it is conventional to use a slightly different definition of the
electric field, Ecgs = c−1 ESI. This implies that the modified electric and magnetic fields have the
same units (in the SI convention this is not the case: e.g. for EM waves in vacuum, |E|SI = c|B|SI,
making dimensional analysis of the equations different). The Gaussian system then uses a unit of
charge defined in such a way that the permittivity of the vacuum is ε0 = 1/(4πc), hence μ0 = 4π/c.
Using these conventions, the Maxwell equations become:[7]

Gauss's law
  Microscopic:  ∇ · E = 4πρ
  Macroscopic:  ∇ · D = 4πρf

Gauss's law for magnetism
  Microscopic:  ∇ · B = 0
  Macroscopic:  same as microscopic

Maxwell–Faraday equation (Faraday's law of induction)
  Microscopic:  ∇ × E = −(1/c) ∂B/∂t
  Macroscopic:  same as microscopic

Ampère's law (with Maxwell's extension)
  Microscopic:  ∇ × B = (1/c) (4πJ + ∂E/∂t)
  Macroscopic:  ∇ × H = (1/c) (4πJf + ∂D/∂t)

Alternative formulations
For an overview, see Mathematical descriptions of the electromagnetic field.
For the equations in special relativity, see classical electromagnetism and special relativity and
covariant formulation of classical electromagnetism.
For the equations in general relativity, see Maxwell's equations in curved spacetime.
For the equations in quantum field theory, see quantum electrodynamics.

Following is a summary of some of the numerous other ways to write the microscopic Maxwell's
equations, showing they can be formulated using different points of view and mathematical
formalisms that describe the same physics. Often, they are also called the Maxwell equations.
The direct space-time formulations make manifest that the Maxwell equations are relativistically
invariant (in fact studying the hidden symmetry of the vector calculus formulation was a major
source of inspiration for relativity theory). In addition, the formulation using potentials was
originally introduced as a convenient way to solve the equations but with all the observable
physics contained in the fields. The potentials play a central role in quantum mechanics,
however, and act quantum mechanically with observable consequences even when the fields
vanish (Aharonov–Bohm effect). See the main articles for the details of each formulation. SI units are
used throughout.

The main formulations, grouped by formalism, are summarized below (homogeneous equations first, then non-homogeneous equations).

Vector calculus – Fields (3D Euclidean space + time):
  Homogeneous:  ∇ · B = 0,  ∇ × E + ∂B/∂t = 0
  Non-homogeneous:  ∇ · E = ρ/ε0,  ∇ × B − μ0 ε0 ∂E/∂t = μ0 J

Vector calculus – Potentials, any gauge (3D Euclidean space + time):
  Homogeneous:  B = ∇ × A,  E = −∇φ − ∂A/∂t (satisfied identically)
  Non-homogeneous:  −∇²φ − ∂(∇ · A)/∂t = ρ/ε0,  ((1/c²) ∂²/∂t² − ∇²) A + ∇(∇ · A + (1/c²) ∂φ/∂t) = μ0 J

Vector calculus – Potentials, Lorenz gauge (3D Euclidean space + time):
  Homogeneous:  B = ∇ × A,  E = −∇φ − ∂A/∂t
  Non-homogeneous:  □φ = ρ/ε0,  □A = μ0 J

Tensor calculus – Fields (Minkowski space):
  Homogeneous:  ∂_[α F_βγ] = 0
  Non-homogeneous:  ∂_α F^αβ = μ0 J^β

Tensor calculus – Potentials, any gauge (Minkowski space):
  Homogeneous:  F_αβ = ∂_α A_β − ∂_β A_α (satisfied identically)
  Non-homogeneous:  ∂_α (∂^α A^β − ∂^β A^α) = μ0 J^β

Tensor calculus – Potentials, Lorenz gauge (Minkowski space, ∂_α A^α = 0):
  Non-homogeneous:  □A^β = μ0 J^β

On a general space-time the partial derivatives above are replaced by covariant derivatives, and a Ricci-tensor term appears in the potential forms.

Differential forms – Fields (any space-time):
  Homogeneous:  dF = 0
  Non-homogeneous:  d⋆F = μ0 J

Differential forms – Potentials, any gauge (any space-time):
  Homogeneous:  F = dA, so dF = 0 identically
  Non-homogeneous:  d⋆dA = μ0 J

Differential forms – Potentials, Lorenz gauge (any space-time):
  Non-homogeneous:  □A = μ0 J
where

• In the vector formulation on Euclidean space + time, φ is the electric potential, A is the
vector potential and □ is the d'Alembert operator.

• In the tensor calculus formulation, the electromagnetic tensor F_αβ is an antisymmetric
covariant rank-2 tensor, the four-potential A_α is a covariant vector, the current J^α is a
vector density, the square bracket [ ] denotes antisymmetrization of indices, and ∂_α is the
derivative with respect to the coordinate x^α. On Minkowski space, coordinates are chosen
with respect to an inertial frame, so that the metric tensor used to raise and lower indices is
the Minkowski metric η_αβ. The d'Alembert operator on Minkowski space is □ = ∂_α ∂^α,
as in the vector formulation. On general space-times, the coordinate system x^α is arbitrary;
the covariant derivative ∇_α, the Ricci tensor R_αβ and the raising and lowering of indices
are defined by the Lorentzian metric g_αβ, and the d'Alembert operator is defined as
□ = ∇_α ∇^α.

• In the differential form formulation on arbitrary space-times, F = (1/2) F_αβ dx^α ∧ dx^β is
the electromagnetic tensor considered as a 2-form, A = A_α dx^α is the potential 1-form,
J is the current (pseudo) 3-form, d is the exterior derivative, and ⋆ is the Hodge star
on forms defined by the Lorentzian metric of space-time (the Hodge star on 2-forms
only depends on the metric up to a local scale, i.e. it is conformally invariant). The operator
□ is the d'Alembert–Laplace–Beltrami operator on 1-forms on an arbitrary Lorentzian
space-time.
Other formulations include the geometric algebra formulation and a matrix representation of
Maxwell's equations. Historically, a quaternionic formulation[8][9] was used.

Solutions
Maxwell's equations are partial differential equations that relate the electric and magnetic fields
to each other and to the electric charges and currents. Often, the charges and currents are
themselves dependent on the electric and magnetic fields via the Lorentz force equation and the
constitutive relations. These all form a set of coupled partial differential equations, which are
often very difficult to solve. In fact, the solutions of these equations encompass all the diverse
phenomena in the entire field of classical electromagnetism. A thorough discussion is far beyond
the scope of the article, but some general notes follow.

As with any differential equation, boundary conditions[10][11][12] and initial conditions[13] are necessary
for a unique solution. For example, even with no charges and no currents anywhere in spacetime,
many solutions to Maxwell's equations are possible, not just the obvious solution E = B = 0.
Another solution is E = constant, B = constant, while yet other solutions have
electromagnetic waves filling spacetime. In some cases, Maxwell's equations are solved through
infinite space, and boundary conditions are given as asymptotic limits at infinity.[14] In other
cases, Maxwell's equations are solved in just a finite region of space, with appropriate boundary
conditions on that region: For example, the boundary could be an artificial absorbing boundary
representing the rest of the universe,[15][16] or periodic boundary conditions, or (as with a
waveguide or cavity resonator) the boundary conditions may describe the walls that isolate a
small region from the outside world.[17]

Jefimenko's equations (or the closely related Liénard–Wiechert potentials) are the explicit
solution to Maxwell's equations for the electric and magnetic fields created by any given
distribution of charges and currents. They assume specific initial conditions to obtain the so-called
"retarded solution", where the only fields present are the ones created by the charges.
Jefimenko's equations are not so helpful in situations when the charges and currents are
themselves affected by the fields they create.

Numerical methods for differential equations can be used to approximately solve Maxwell's
equations when an exact solution is impossible. These methods usually require a computer, and
include the finite element method and finite-difference time-domain method.[10][12][18][19][20] For
more details, see Computational electromagnetics.
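
As an indication of how such numerical methods look in practice, the following is a minimal one-dimensional finite-difference time-domain (FDTD) sketch in vacuum with normalized units; the grid size, Courant number and Gaussian source are illustrative assumptions, not a production solver.

import numpy as np

# Minimal 1D FDTD sketch (Yee-style leapfrog) for Ey and Bz in vacuum, normalized so c = 1.
# Grid size, number of steps and the soft Gaussian source are illustrative assumptions.
nx, nt = 400, 350
courant = 0.5                 # time step relative to grid spacing (must be <= 1 for stability)

Ey = np.zeros(nx)
Bz = np.zeros(nx - 1)         # Bz lives on the half-grid between Ey points

for n in range(nt):
    # Update Bz from the spatial difference of Ey (discrete Faraday's law).
    Bz += courant * (Ey[1:] - Ey[:-1])
    # Update Ey from the spatial difference of Bz (discrete Ampere-Maxwell law in vacuum).
    Ey[1:-1] += courant * (Bz[1:] - Bz[:-1])
    # Soft Gaussian source injected at the middle of the grid.
    Ey[nx // 2] += np.exp(-0.5 * ((n - 40) / 12.0) ** 2)

# Two pulses have propagated outward, roughly symmetrically about the source.
print(np.argmax(np.abs(Ey[: nx // 2])), np.argmax(np.abs(Ey[nx // 2:])) + nx // 2)

The two update lines mirror Faraday's law and the Ampère-Maxwell law respectively, which is the essential structure of this class of solver.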

Maxwell's equations seem overdetermined, in that they involve six unknowns (the three
components of E and B) but eight equations (one for each of the two Gauss's laws, three vector
components each for Faraday's and Ampere's laws). (The currents and charges are not unknowns,
being freely specifiable subject to charge conservation.) This is related to a certain limited kind
of redundancy in Maxwell's equations: It can be proven that any system satisfying Faraday's law
and Ampere's law automatically also satisfies the two Gauss's laws, as long as the system's initial
condition does.[21][22] Although it is possible to simply ignore the two Gauss's laws in a numerical
algorithm (apart from the initial conditions), the imperfect precision of the calculations can lead
to ever-increasing violations of those laws. By introducing dummy variables characterizing these
violations, the four equations become not overdetermined after all. The resulting formulation can
lead to more accurate algorithms that take all four laws into account.[23]

Limitations for a theory of electromagnetism


While Maxwell's equations (along with the rest of classical electromagnetism) are extraordinarily
successful at explaining and predicting a variety of phenomena, they are not exact, but
approximations. In some special situations, they can be noticeably inaccurate. Examples include
extremely strong fields (see Euler–Heisenberg Lagrangian) and extremely short distances (see
vacuum polarization). Moreover, various phenomena occur in the world even though Maxwell's
equations predict them to be impossible, such as "nonclassical light" and quantum entanglement
of electromagnetic fields (see quantum optics). Finally, any phenomenon involving individual
photons, such as the photoelectric effect, Planck's law, the Duane–Hunt law, single-photon light
detectors, etc., would be difficult or impossible to explain if Maxwell's equations were exactly
true, as Maxwell's equations do not involve photons. For the most accurate predictions in all
situations, Maxwell's equations have been superseded by quantum electrodynamics.

Variations
Popular variations on the Maxwell equations as a classical theory of electromagnetic fields are
relatively scarce because the standard equations have stood the test of time remarkably well.

Magnetic monopoles

Main article: Magnetic monopole

Maxwell's equations posit that there is electric charge, but no magnetic charge (also called
magnetic monopoles), in the universe. Indeed, magnetic charge has never been observed (despite
extensive searches)[note 4] and may not exist. If it did exist, both Gauss's law for magnetism and
Faraday's law would need to be modified, and the resulting four equations would be fully
symmetric under the interchange of electric and magnetic fields.

Electromagnetic radiation

The electromagnetic waves that compose electromagnetic radiation can be imagined as a self-
propagating transverse oscillating wave of electric and magnetic fields. This diagram shows a
plane linearly polarized EMR wave propagating from left to right. The electric field is in a
vertical plane and the magnetic field in a horizontal plane. The electric and magnetic fields in
EMR waves are always in phase and at 90 degrees to each other.

Electromagnetic radiation (EM radiation or EMR) is a fundamental phenomenon of
electromagnetism, behaving as waves and also as particles called photons which travel through
space carrying radiant energy. In a vacuum, it propagates at the speed of light, normally in
straight lines. EMR is emitted and absorbed by charged particles. As an electromagnetic wave, it
has both electric and magnetic field components, which synchronously oscillate perpendicular to
each other and perpendicular to the direction of energy and wave propagation.

In classical physics, EMR is produced when charged particles are accelerated by forces acting on
them. Electrons are responsible for emission of most EMR because they have low mass, and
therefore are easily accelerated by a variety of mechanisms. Quantum processes can also produce
EMR, such as when atomic nuclei undergo gamma decay, and processes such as neutral pion
decay.

EMR carries energy—sometimes called radiant energy—through space continuously away from
the source (this is not true of the near-field part of the EM field). EMR also carries both
momentum and angular momentum. These properties may all be imparted to matter with which it
interacts. When created, EMR is produced from other types of energy and it is converted to other
types of energy when it is destroyed.

The electromagnetic spectrum, in order of increasing frequency and decreasing wavelength, can
be divided, for practical engineering purposes, into radio waves, microwaves, infrared radiation,
visible light, ultraviolet radiation, X-rays and gamma rays. The eyes of various organisms sense a
relatively small range of frequencies of EMR near and including the visible spectrum or light.
Visible light is that part of the spectrum to which human eyes respond. Higher frequencies
(shorter wavelengths) have more energy in the photons, according to the well-known law E=hν,
where E is the energy per photon, ν is the frequency carried by the photon, and h is Planck's
constant. A single gamma ray photon carries far more energy than a single photon of visible
light.

The photon is the quantum of the electromagnetic interaction, and is the basic constituent of all
forms of EMR. The quantum nature of light becomes more apparent at high frequencies (thus
high photon energy). Such photons behave more like particles than lower-frequency photons do.

Electromagnetic waves in free space must be solutions of Maxwell's electromagnetic wave
equation. Two main classes of solutions are known, namely plane waves and spherical waves.
The plane waves may be viewed as the limiting case of spherical waves at a very large (ideally
infinite) distance from the source. Both types of waves can have a waveform which is an
arbitrary time function (so long as it is sufficiently differentiable to conform to the wave
equation). As with any time function, this can be decomposed by means of Fourier analysis into
its frequency spectrum, or individual sinusoidal components, each of which contains a single
frequency, amplitude, and phase. Such a component wave is said to be monochromatic. A
monochromatic electromagnetic wave can be characterized by its frequency or wavelength, its
peak amplitude, its phase relative to some reference phase, its direction of propagation, and its
polarization.

Electromagnetic radiation is associated with EM fields that are free to propagate themselves
without the continuing influence of the moving charges that produced them, because they have
achieved sufficient distance from those charges. Thus, EMR is sometimes referred to as the far
field. In this language, the near field refers to EM fields near the charges and current that directly
produced them, as for example with simple magnets and static electricity phenomena. In EMR,
the magnetic and electric fields are each induced by changes in the other type of field, thus
propagating itself as a wave. This close relationship assures that both types of fields in EMR
stand in phase and in a fixed ratio of intensity to each other, with maxima and nodes in each
found at the same places in space.

The effects of EMR upon biological systems (and also to many other chemical systems, under
standard conditions) depend both upon the radiation's power and frequency. For lower
frequencies of EMR up to those of visible light (i.e., radio, microwave, infrared), the damage
done to cells and also to many ordinary materials under such conditions is determined mainly by
heating effects, and thus by the radiation power. By contrast, for higher frequency radiations at
ultraviolet frequencies and above (i.e., X-rays and gamma rays) the damage to chemical
materials and living cells by EMR is far larger than that done by simple heating, due to the
ability of single photons in such high frequency EMR to damage individual molecules
chemically.

Contents
• 1 Physics
  o 1.1 Theory
    ▪ 1.1.1 Maxwell’s equations for EM fields far from sources
    ▪ 1.1.2 Near and far fields
  o 1.2 Properties
  o 1.3 Wave model
  o 1.4 Particle model and quantum theory
  o 1.5 Wave–particle duality
  o 1.6 Wave and particle effects of electromagnetic radiation
  o 1.7 Speed of propagation
  o 1.8 Special theory of relativity
• 2 History of discovery
• 3 Electromagnetic spectrum
  o 3.1 Radio and microwave heating and currents, and infrared heating
  o 3.2 Reversible and nonreversible molecular changes from visible light
  o 3.3 Molecular damage from ultraviolet
  o 3.4 Ionization and extreme types of molecular damage from X-rays and gamma rays
• 4 Propagation and absorption in the Earth's atmosphere and magnetosphere
• 5 Types and sources, classed by spectral band (frequency)
  o 5.1 Radio waves
  o 5.2 Microwaves
  o 5.3 Infrared
  o 5.4 Visible light
  o 5.5 Ultraviolet
  o 5.6 X-rays
  o 5.7 Gamma rays
  o 5.8 Thermal radiation and electromagnetic radiation as a form of heat
• 6 Biological effects
• 7 Derivation from electromagnetic theory

Physics
Theory
[Figure: the relative wavelengths of the electromagnetic waves of three different colors of light (blue, green, and red), with a distance scale in micrometers along the x-axis.]
Main articles: Maxwell's equations and Near and far field

Maxwell’s equations for EM fields far from sources

James Clerk Maxwell first formally postulated electromagnetic waves. These were subsequently
confirmed by Heinrich Hertz. Maxwell derived a wave form of the electric and magnetic
equations, thus uncovering the wave-like nature of electric and magnetic fields, and their
symmetry. Because the speed of EM waves predicted by the wave equation coincided with the
measured speed of light, Maxwell concluded that light itself is an EM wave.

According to Maxwell's equations, a spatially varying electric field is always associated with a
magnetic field that changes over time. Likewise, a spatially varying magnetic field is associated
with specific changes over time in the electric field. In an electromagnetic wave, the changes in
the electric field are always accompanied by a wave in the magnetic field in one direction, and
vice versa. This relationship between the two occurs without either type of field causing the other;
rather, they occur together in the same way that time and space changes occur together and are
interlinked in special relativity. In fact, magnetic fields may be viewed as relativistic distortions
of electric fields, so the close relationship between space and time changes here is more than an
analogy. Together, these fields form a propagating electromagnetic wave, which moves out into
space and need never again affect the source. The distant EM field formed in this way by the
acceleration of a charge carries energy with it that "radiates" away through space, hence the term
for it.

Near and far fields

Main article: Liénard–Wiechert potential

In electromagnetic radiation (such as microwaves from an antenna), the term applies only to the parts of the electromagnetic field that radiate into infinite space and decrease in intensity by an inverse-square law of power, so that the total radiation energy crossing an imaginary spherical surface is the same no matter how far from the antenna the surface is drawn. Electromagnetic radiation thus includes the far-field part of the electromagnetic field around a transmitter. The part of the near-field close to the transmitter forms part of the changing electromagnetic field but does not count as electromagnetic radiation.
Maxwell's equations established that some charges and currents ("sources") produce a local type
of electromagnetic field near them that does not have the behavior of EMR. In particular,
according to Maxwell, currents directly produce a magnetic field, but it is of a magnetic dipole
type which dies out rapidly with distance from the current. In a similar manner, moving charges
being separated from each other in a conductor by a changing electrical potential (such as in an
antenna) produce an electric dipole type electrical field, but this also dies away very quickly with
distance. Both of these fields make up the near-field near the EMR source. Neither of these behaviors is responsible for EM radiation. Instead, they cause electromagnetic field behavior
that only efficiently transfers power to a receiver very close to the source, such as the magnetic
induction inside a transformer, or the feedback behavior that happens close to the coil of a metal
detector. Typically, near-fields have a powerful effect on their own sources, causing an increased
“load” (decreased electrical reactance) in the source or transmitter, whenever energy is
withdrawn from the EM field by a receiver. Otherwise, these fields do not “propagate” freely out
into space, carrying their energy away without distance-limit, but rather oscillate back and forth,
returning their energy to the transmitter if it is not received by a receiver.[citation needed]

By contrast, the EM far-field is composed of radiation that is free of the transmitter in the sense
that (unlike the case in an electrical transformer) the transmitter requires the same power to send
these changes in the fields out, whether the signal is immediately picked up, or not. This distant
part of the electromagnetic field is "electromagnetic radiation" (also called the far-field). The far-
fields propagate without ability for the transmitter to affect them, and this causes them to be
independent in the sense that their existence and their energy, after they have left the transmitter,
is completely independent of both transmitter and receiver. Because such waves conserve the
amount of energy they transmit through any spherical boundary surface drawn around their
source, and because such surfaces have an area that is defined by the square of the distance from
the source, the power of EM radiation always varies according to an inverse-square law. This is
in contrast to the dipole parts of the EM field close to the source (the near-field), which vary in power according to an inverse-cube law and thus do not transport a conserved amount of energy over distance, but instead die away rapidly, with their energy (as noted)
either rapidly returning to the transmitter, or else absorbed by a nearby receiver (such as a
transformer secondary coil).[citation needed]

The far-field (EMR) depends on a different mechanism for its production than the near-field, and
upon different terms in Maxwell’s equations. Whereas the magnetic part of the near-field is due
to currents in the source, the magnetic field in EMR is due only to the local change in the electric
field. In a similar way, while the electric field in the near-field is due directly to the charges and
charge-separation in the source, the electric field in EMR is due to a change in the local magnetic
field. Both of these processes for producing electric and magnetic EMR fields have a different
dependence on distance than do near-field dipole electric and magnetic fields, and that is why the
EMR type of EM field becomes dominant in power “far” from sources. The term “far from
sources” refers to how far from the source (moving at the speed of light) any portion of the
outward-moving EM field is located, by the time that source currents are changed by the varying
source potential, and the source has therefore begun to generate an outwardly moving EM field
of a different phase.[citation needed]
A more compact view of EMR is that the far-field that composes EMR is generally that part of
the EM field that has traveled sufficient distance from the source, that it has become completely
disconnected from any feedback to the charges and currents that were originally responsible for
it. Now independent of the source charges, the EM field, as it moves farther away, is dependent
only upon the accelerations of the charges that produced it. It no longer has a strong connection
to the direct fields of the charges, or to the velocity of the charges (currents).[citation needed]

In the Liénard–Wiechert potential formulation of the electric and magnetic fields due to motion
of a single particle (according to Maxwell's equations), the terms associated with acceleration of
the particle are those that are responsible for the part of the field that is regarded as
electromagnetic radiation. By contrast, the term associated with the changing static electric field
of the particle and the magnetic term that results from the particle's uniform velocity, are both
seen to be associated with the electromagnetic near-field, and do not comprise EM radiation.
[citation needed]

Properties

Electromagnetic waves can be imagined as a self-propagating transverse oscillating wave of electric and magnetic fields.

[Figure: a 3D diagram showing a plane linearly polarized wave propagating from left to right. Note that the electric and magnetic fields in such a wave are in phase with each other, reaching minima and maxima together.]

The physics of electromagnetic radiation is electrodynamics. Electromagnetism is the physical phenomenon associated with the theory of electrodynamics. Electric and magnetic fields obey the properties of superposition. Thus, a field due to any particular particle or time-varying
the properties of superposition. Thus, a field due to any particular particle or time-varying
electric or magnetic field contributes to the fields present in the same space due to other causes.
Further, as they are vector fields, all magnetic and electric field vectors add together according to
vector addition. For example, in optics two or more coherent lightwaves may interact and by
constructive or destructive interference yield a resultant irradiance deviating from the sum of the
component irradiances of the individual lightwaves.[citation needed]

Since light is an oscillation it is not affected by travelling through static electric or magnetic
fields in a linear medium such as a vacuum. However, in nonlinear media, such as some crystals,
interactions can occur between light and static electric and magnetic fields — these interactions
include the Faraday effect and the Kerr effect.[citation needed]

In refraction, a wave crossing from one medium to another of different density alters its speed
and direction upon entering the new medium. The ratio of the refractive indices of the media
determines the degree of refraction, and is summarized by Snell's law. Light of composite
wavelengths (natural sunlight) disperses into a visible spectrum passing through a prism, because
of the wavelength dependent refractive index of the prism material (dispersion); that is, each
component wave within the composite light is bent a different amount.[citation needed]

EM radiation exhibits both wave properties and particle properties at the same time (see wave-
particle duality). Both wave and particle characteristics have been confirmed in a large number
of experiments. Wave characteristics are more apparent when EM radiation is measured over
relatively large timescales and over large distances while particle characteristics are more evident
when measuring small timescales and distances. For example, when electromagnetic radiation is
absorbed by matter, particle-like properties will be more obvious when the average number of photons in a volume of one cubic wavelength is much smaller than one. It is not too difficult to observe non-uniform deposition of energy experimentally when light is absorbed; however, this alone is not evidence of "particulate" behavior of light. Rather, it reflects the quantum nature of
matter.[1] Demonstrating that the light itself is quantized, not merely its interaction with matter, is
a more subtle problem.

There are experiments in which the wave and particle natures of electromagnetic waves appear in
the same experiment, such as the self-interference of a single photon. True single-photon
experiments (in a quantum optical sense) can be done today in undergraduate-level labs.[2] When
a single photon is sent through an interferometer, it passes through both paths, interfering with
itself, as waves do, yet is detected by a photomultiplier or other sensitive detector only once.

A quantum theory of the interaction between electromagnetic radiation and matter such as
electrons is described by the theory of quantum electrodynamics.[citation needed]
Wave model

Electromagnetic radiation is a transverse wave, meaning that the oscillations of the waves are
perpendicular to the direction of energy transfer and travel. The electric and magnetic parts of the
field stand in a fixed ratio of strengths in order to satisfy the two Maxwell equations that specify
how one is produced from the other. These E and B fields are also in phase, with both reaching
maxima and minima at the same points in space (see illustrations). A common misconception is
that the E and B fields in electromagnetic radiation are out of phase because a change in one
produces the other, and this would produce a phase difference between them as sinusoidal
functions (as indeed happens in electromagnetic induction, and in the near-field close to
antennas). However, in the far-field EM radiation which is described by the two source-free
Maxwell curl operator equations, a more correct description is that a time-change in one type of
field is proportional to a space-change in the other. These derivatives require that the E and B
fields in EMR are in-phase (see math section below).[citation needed]

An important aspect of the nature of light is frequency. The frequency of a wave is its rate of
oscillation and is measured in hertz, the SI unit of frequency, where one hertz is equal to one
oscillation per second. Light usually has a spectrum of frequencies that sum to form the resultant
wave. Different frequencies undergo different angles of refraction, a phenomenon known as
dispersion.

A wave consists of successive troughs and crests, and the distance between two adjacent crests or
troughs is called the wavelength. Waves of the electromagnetic spectrum vary in size, from very
long radio waves the size of buildings to very short gamma rays smaller than atom nuclei.
Frequency is inversely proportional to wavelength, according to the equation:[citation needed]

v = f λ

where v is the speed of the wave (c in a vacuum, or less in other media), f is the frequency and λ
is the wavelength. As waves cross boundaries between different media, their speeds change but
their frequencies remain constant.

Interference is the superposition of two or more waves resulting in a new wave pattern. If the
fields have components in the same direction, they constructively interfere, while opposite
directions cause destructive interference. An example of interference caused by EMR is
electromagnetic interference (EMI) or as it is more commonly known as, radio-frequency
interference (RFI).[citation needed]

The energy in electromagnetic waves is sometimes called radiant energy.[citation needed]

Particle model and quantum theory

See also: Quantization (physics) and Quantum optics

An anomaly arose in the late 19th century involving a contradiction between the wave theory of
light on the one hand, and on the other, observers' actual measurements of the electromagnetic
spectra that were being emitted by thermal radiators known as black bodies. Physicists struggled
with this problem, which later became known as the ultraviolet catastrophe, unsuccessfully for
many years. In 1900, Max Planck developed a new theory of black-body radiation that explained
the observed spectrum. Planck's theory was based on the idea that black bodies emit light (and
other electromagnetic radiation) only as discrete bundles or packets of energy. These packets
were called quanta. Later, Albert Einstein proposed that the quanta of light might be regarded as
real particles, and (still later) the particle of light was given the name photon, to correspond with
other particles being described around this time, such as the electron and proton. A photon has an
energy, E, proportional to its frequency, f, by

E = h f = h c / λ

where h is Planck's constant, λ is the wavelength and c is the speed of light. This is sometimes
known as the Planck–Einstein equation.[3] In quantum theory (see first quantization) the energy
of the photons is thus directly proportional to the frequency of the EMR wave.[4]

Likewise, the momentum p of a photon is also proportional to its frequency and inversely proportional to its wavelength:

p = E/c = h f/c = h/λ
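
A short numerical sketch of these two relations (the 532 nm wavelength is an arbitrary assumed example; the constants are rounded):

h = 6.626e-34      # Planck's constant, J*s (value quoted later in this section)
c = 2.998e8        # speed of light in vacuum, m/s

wavelength = 532e-9            # assumed example: green laser light, 532 nm
f = c / wavelength             # frequency
E = h * f                      # Planck-Einstein relation, E = h f
p = h / wavelength             # photon momentum, p = h / lambda = E / c

print(f"f = {f:.3e} Hz, E = {E:.3e} J ({E / 1.602e-19:.2f} eV), p = {p:.3e} kg m/s")
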
The source of Einstein's proposal that light was composed of particles (or could act as particles in
some circumstances) was an experimental anomaly not explained by the wave theory: the
photoelectric effect, in which light striking a metal surface ejected electrons from the surface,
causing an electric current to flow across an applied voltage. Experimental measurements
demonstrated that the energy of individual ejected electrons was proportional to the frequency,
rather than the intensity, of the light. Furthermore, below a certain minimum frequency, which
depended on the particular metal, no current would flow regardless of the intensity. These
observations appeared to contradict the wave theory, and for years physicists tried in vain to find
an explanation. In 1905, Einstein explained this puzzle by resurrecting the particle theory of light
to explain the observed effect. Because of the preponderance of evidence in favor of the wave
theory, however, Einstein's ideas were met initially with great skepticism among established
physicists. Eventually Einstein's explanation was accepted as new particle-like behavior of light
was observed, such as the Compton effect.[citation needed]

As a photon is absorbed by an atom, it excites the atom, elevating an electron to a higher energy
level (on average, one that is farther from the nucleus). When an electron in an excited molecule
or atom descends to a lower energy level, it emits a photon with energy equal to the energy
difference. Since the energy levels of electrons in atoms are discrete, each element and each
molecule emits and absorbs its own characteristic frequencies. When the emission of the photon
is immediate, this phenomenon is called fluorescence, a type of photoluminescence. An example
is visible light emitted from fluorescent paints, in response to ultraviolet (blacklight). Many other
fluorescent emissions are known in spectral bands other than visible light. When the emission of
the photon is delayed, the phenomenon is called phosphorescence.[citation needed]
Wave–particle duality

The modern theory that explains the nature of light includes the notion of wave–particle duality.
More generally, the theory states that everything has both a particle nature and a wave nature,
and various experiments can be done to bring out one or the other. The particle nature is more
easily discerned if an object has a large mass, and it was not until a bold proposition by Louis de
Broglie in 1924 that the scientific community realised that electrons also exhibited wave–particle
duality.[citation needed]

Wave and particle effects of electromagnetic radiation

Together, wave and particle effects explain the emission and absorption spectra of EM radiation,
wherever it is seen. The matter-composition of the medium through which the light travels
determines the nature of the absorption and emission spectrum. These bands correspond to the
allowed energy levels in the atoms. Dark bands in the absorption spectrum are due to the atoms
in an intervening medium between source and observer, absorbing certain frequencies of the
light between emitter and detector/eye, then emitting them in all directions, so that a dark band
appears to the detector, due to the radiation scattered out of the beam. For instance, dark bands in
the light emitted by a distant star are due to the atoms in the star's atmosphere. A similar
phenomenon occurs for emission, which is seen when the emitting gas is glowing due to
excitation of the atoms from any mechanism, including heat. As electrons descend to lower
energy levels, a spectrum is emitted that represents the jumps between the energy levels of the
electrons, but lines are seen because again emission happens only at particular energies after
excitation. An example is the emission spectrum of nebulae.[citation needed] Rapidly moving electrons
are most sharply accelerated when they encounter a region of force, so they are responsible for
producing much of the highest frequency electromagnetic radiation observed in nature.

Today, scientists use these phenomena to perform various chemical determinations for the
composition of gases lit from behind (absorption spectra) and for glowing gases (emission
spectra). Spectroscopy (for example) determines what chemical elements a star is composed of.
Spectroscopy is also used in the determination of the distance of a star, using the red shift.[citation
needed]

Speed of propagation

Main article: Speed of light

Any electric charge that accelerates, or any changing magnetic field, produces electromagnetic
radiation. Electromagnetic information about the charge travels at the speed of light. Accurate
treatment thus incorporates a concept known as retarded time (as opposed to advanced time,
which is not physically possible in light of causality), which adds to the expressions for the
electrodynamic electric field and magnetic field. These extra terms are responsible for
electromagnetic radiation.[citation needed]

When any wire (or other conducting object such as an antenna) conducts alternating current,
electromagnetic radiation is propagated at the same frequency as the electric current. In many
such situations it is possible to identify an electrical dipole moment that arises from separation of
charges due to the exciting electrical potential, and this dipole moment oscillates in time, as the
charges move back and forth. This oscillation at a given frequency gives rise to changing electric
and magnetic fields, which then set the electromagnetic radiation in motion.[citation needed]

At the quantum level, electromagnetic radiation is produced when the wavepacket of a charged
particle oscillates or otherwise accelerates. Charged particles in a stationary state do not move,
but a superposition of such states may result in a transition state that has an electric dipole
moment that oscillates in time. This oscillating dipole moment is responsible for the
phenomenon of radiative transition between quantum states of a charged particle. Such states
occur (for example) in atoms when photons are radiated as the atom shifts from one stationary
state to another.[citation needed]

Depending on the circumstances, electromagnetic radiation may behave as a wave or as particles. As a wave, it is characterized by a velocity (the speed of light), a wavelength, and a frequency. When considered as particles, the quanta are known as photons, and each has an energy related to the frequency of the wave given by Planck's relation E = hν, where E is the energy of the photon, h = 6.626 × 10−34 J·s is Planck's constant, and ν is the frequency of the wave.[citation needed]

One rule is always obeyed regardless of the circumstances: EM radiation in a vacuum always
travels at the speed of light, relative to the observer, regardless of the observer's velocity. (This
observation led to Albert Einstein's development of the theory of special relativity.)[citation needed]

In a medium (other than vacuum), a velocity factor or refractive index is considered, depending on frequency and application. The velocity factor is the ratio of the speed in the medium to the speed in a vacuum, while the refractive index is its reciprocal, the ratio of the speed in a vacuum to the speed in the medium.[citation needed]

Special theory of relativity

Main article: Special theory of relativity

By the late nineteenth century, however, a handful of experimental anomalies remained that
could not be explained by the simple wave theory. One of these anomalies involved a
controversy over the speed of light. The constant speed of light and other EMR predicted by Maxwell's equations did not appear to hold for all observers unless the equations were modified in a way first suggested by FitzGerald and Lorentz (see history of special relativity); otherwise the speed would have to depend on the speed of the observer relative to the "medium" (called the luminiferous aether) which supposedly "carried" the electromagnetic wave (in a manner analogous to the way air carries sound waves).
Experiments failed to find any observer effect, however. In 1905, Albert Einstein proposed that
space and time appeared to be velocity-changeable entities, not only for light propagation, but all
other processes and laws as well. These changes then automatically accounted for the constancy
of the speed of light and all electromagnetic radiation, from the viewpoints of all observers—
even those in relative motion.
Electromagnetic spectrum

In general, EM radiation (the designation 'radiation' excludes static electric and magnetic and
near fields) is classified by wavelength into radio, microwave, infrared, the visible spectrum we
perceive as visible light, ultraviolet, X-rays, and gamma rays. Arbitrary electromagnetic waves
can always be expressed by Fourier analysis in terms of sinusoidal monochromatic waves, which
in turn can each be classified into these regions of the EMR spectrum.

For certain classes of EM waves, the waveform is most usefully treated as random, and then
spectral analysis must be done by slightly different mathematical techniques appropriate to
random or stochastic processes. In such cases, the individual frequency components are
represented in terms of their power content, and the phase information is not preserved. Such a
representation is called the power spectral density of the random process. Random
electromagnetic radiation requiring this kind of analysis is, for example, encountered in the
interior of stars, and in certain other very wideband forms of radiation such as the Zero-Point
wave field of the electromagnetic vacuum.

The behavior of EM radiation depends on its frequency. Lower frequencies have longer
wavelengths, and higher frequencies have shorter wavelengths, and are associated with photons
of higher energy. There is no fundamental limit known to these wavelengths or energies, at either
end of the spectrum, although photons with energies near the Planck energy or exceeding it (far
too high to have ever been observed) will require new physical theories to describe.

Sound waves are not electromagnetic radiation. At the lower end of the electromagnetic
spectrum, about 20 Hz to about 20 kHz, are frequencies that might be considered in the audio
range. However, electromagnetic waves cannot be directly perceived by human ears. Sound
waves are the oscillating compression of molecules. To be heard, electromagnetic radiation must
be converted to pressure waves of the fluid in which the ear is located (whether the fluid is air,
water or something else).

Radio and microwave heating and currents, and infrared heating

When EM radiation interacts with matter, its behavior changes qualitatively as its frequency
changes. At radio and microwave frequencies, EMR interacts with matter largely as a bulk
collection of charges which are spread out over large numbers of affected atoms. In electrical
conductors, such induced bulk movement of charges (electric currents) results in absorption of
the EMR, or else separations of charges that cause generation of new EMR (effective reflection
of the EMR). An example is absorption or emission of radio waves by antennas, or absorption of
microwaves by water or other molecules with an electric dipole moment, as for example inside a
microwave oven. These interactions produce either electric currents or heat, or both. Infrared
EMR interacts with dipoles present in single molecules, which change as atoms vibrate at the
ends of a single chemical bond. For this reason, infrared is reflected by metals (as is most EMR
into the ultraviolet) but is absorbed by a wide range of substances, causing them to increase in
temperature as the vibrations dissipate as heat. In the same process, bulk substances radiate in the
infrared spontaneously (see thermal radiation section below).

Reversible and nonreversible molecular changes from visible light

As frequency increases into the visible range, photons of EMR have enough energy to change the
bond structure of some individual molecules. It is not a coincidence that this happens in the
"visible range," as the mechanism of vision involves the change in bonding of a single molecule
(retinal) which absorbs light in the rhodopsin the retina of the human eye. Photosynthesis
becomes possible in this range as well, for similar reasons, as a single molecule of chlorophyll is
excited by a single photon. Animals which detect infrared do not use such single molecule
processes, but are forced to make use of small packets of water which change temperature, in an
essentially thermal process that involves many photons (see infrared sensing in snakes). For this
reason, infrared, microwaves, and radio waves are thought to damage molecules and biological
tissue only by bulk heating, not excitation from single photons of the radiation (however, there
does remain controversy about possible non-thermal biological damage from low frequency EM
radiation, see below).

Visible light is able to affect a few molecules with single photons, but usually not in a permanent
or damaging way, in the absence of power high enough to increase temperature to damaging
levels. However, in plant tissues that carry on photosynthesis, carotenoids act to quench
electronically excited chlorophyll produced by visible light in a process called non-
photochemical quenching, in order to prevent reactions which would otherwise interfere with
photosynthesis at high light levels. There is also some limited evidence that some reactive
oxygen species are created by visible light in skin, and that these may have some role in
photoaging, in the same manner as ultraviolet A does.[6]
Molecular damage from ultraviolet

As a photon interacts with single atoms and molecules, the effect depends on the amount of
energy the photon carries. As frequency increases beyond visible into the ultraviolet, photons
now carry enough energy (about three electron volts or more) to excite certain doubly bonded
molecules into permanent chemical rearrangement. If these molecules are biological molecules
in DNA, this causes lasting damage. DNA is also indirectly damaged by reactive oxygen species
produced by ultraviolet A (UVA), which has energy too low to damage DNA directly. This is
why ultraviolet at all wavelengths can damage DNA, and is capable of causing cancer, and (for
UVB) skin burns (sunburn) which are far worse than would be produced by simple heating
(temperature increase) effects. This property of causing molecular damage that is far out of
proportion to all temperature-changing (i.e., heating) effects, is characteristic of all EMR with
frequencies at the visible light range and above. These properties of high-frequency EMR are due
to quantum effects which cause permanent damage to materials and tissues at the single
molecular level.[citation needed]

Ionization and extreme types of molecular damage from X-rays and gamma rays

At the higher end of the ultraviolet range, the energy of photons becomes large enough to impart
enough energy to electrons to cause them to be liberated from the atom, in a process called
photoionisation. The energy required for this is always larger than about 10 electron volts (eV)
corresponding with wavelengths smaller than 124 nm (some sources suggest a more realistic
cutoff of 33 eV, which is the energy required to ionize water). This high end of the ultraviolet
spectrum with energies in the approximate ionization range, is sometimes called "extreme UV."
(Most of this is filtered by the Earth's atmosphere).[citation needed]
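
A small sketch of the energy-to-wavelength conversion λ = hc/E behind these thresholds (the listed energies are the approximate values quoted in the surrounding text; the constants are rounded):

h = 6.626e-34          # Planck's constant, J*s
c = 2.998e8            # speed of light, m/s
eV = 1.602e-19         # joules per electron volt

for E_eV in (3.0, 10.0, 33.0):   # visible/UV onset, nominal ionization cutoff, water ionization
    wavelength_nm = h * c / (E_eV * eV) * 1e9
    print(f"{E_eV:5.1f} eV  ->  {wavelength_nm:6.1f} nm")
# 10 eV corresponds to roughly 124 nm, matching the cutoff quoted above.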

Electromagnetic radiation composed of photons that carry minimum-ionization energy, or more, (which includes the entire spectrum with shorter wavelengths), is therefore termed ionizing radiation. (There are also many other kinds of ionizing radiation made of non-EM particles).
Electromagnetic-type ionizing radiation extends from the extreme ultraviolet to all higher
frequencies and shorter wavelengths, which means that all X-rays and gamma rays are ionizing
radiation. These are capable of the most severe types of molecular damage, which can happen in
biology to any type of biomolecule, including mutation and cancer, and often at great depths
below the skin, since the higher end of the X-ray spectrum, and all of the gamma-ray spectrum, penetrate matter. It is this type of damage which causes these types of radiation to be especially carefully monitored, due to their hazard, even at comparatively low energies, to all living organisms.[citation needed]

Derivation from electromagnetic theory


Electromagnetic wave equation
Electromagnetic waves as a general phenomenon were predicted by the classical laws of
electricity and magnetism, known as Maxwell's equations. Inspection of Maxwell's equations without sources (charges or currents) shows that, alongside the trivial possibility of nothing happening, there are nontrivial solutions of changing electric and magnetic fields. Beginning with Maxwell's
equations in free space:

where ∇ is a vector differential operator (see Del).

One solution,

is trivial.

For a more useful solution, we utilize vector identities, which work for any vector, as follows:

To see how we can use this, take the curl of equation (2):

Evaluating the left hand side:

where we simplified the above by using equation (1).

Evaluate the right hand side:

Equations (6) and (7) are equal, so this results in a vector-valued differential equation for the
electric field, namely
Applying a similar pattern results in a similar differential equation for the magnetic field:

These differential equations are equivalent to the wave equation:

where
c0 is the speed of the wave in free space and
f describes a displacement

Or more simply:

where □ is the d'Alembertian operator:

Notice that, in the case of the electric and magnetic fields, the speed is

c0 = 1/√(μ0 ε0)

This is the speed of light in vacuum. Maxwell's equations unified the vacuum permittivity ε0, the vacuum permeability μ0, and the speed of light itself, c0. This relationship had been discovered
by Wilhelm Eduard Weber and Rudolf Kohlrausch prior to the development of Maxwell's
electrodynamics, however Maxwell was the first to produce a field theory consistent with waves
traveling at the speed of light.
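
For reference, the derivation outlined above can be written compactly in LaTeX notation (a sketch assuming the standard source-free, SI-unit form of the equations):

\nabla \cdot \mathbf{E} = 0, \qquad \nabla \cdot \mathbf{B} = 0, \qquad
\nabla \times \mathbf{E} = -\frac{\partial \mathbf{B}}{\partial t}, \qquad
\nabla \times \mathbf{B} = \mu_0 \varepsilon_0 \frac{\partial \mathbf{E}}{\partial t}.

Taking the curl of the two curl equations and using the vector identity
\nabla \times (\nabla \times \mathbf{A}) = \nabla(\nabla \cdot \mathbf{A}) - \nabla^2 \mathbf{A},
together with the two divergence equations, gives

\nabla^2 \mathbf{E} = \mu_0 \varepsilon_0 \frac{\partial^2 \mathbf{E}}{\partial t^2},
\qquad
\nabla^2 \mathbf{B} = \mu_0 \varepsilon_0 \frac{\partial^2 \mathbf{B}}{\partial t^2},

which are wave equations with propagation speed c_0 = 1/\sqrt{\mu_0 \varepsilon_0}.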

But these are only two equations and we started with four, so there is still more information
pertaining to these waves hidden within Maxwell's equations. Let's consider a generic vector
wave for the electric field.
Here we write the wave as E = E0 f(k̂ · x − c0 t), where E0 is the constant amplitude, f is any twice-differentiable function, k̂ is a unit vector in the direction of propagation, and x is a position vector. We observe that this is a generic solution to the wave equation; in other words it satisfies the wave equation for a generic wave traveling in the k̂ direction.

This form will satisfy the wave equation, but will it satisfy all of Maxwell's equations, and with
what corresponding magnetic field?

The first of Maxwell's equations implies that the electric field is orthogonal to the direction the wave propagates.

The second of Maxwell's equations yields the magnetic field, B = (1/c0) k̂ × E. The remaining equations will be satisfied by this choice of E and B.

Not only are the electric and magnetic field waves in the far-field traveling at the speed of light, but they always have a special restricted orientation and proportional magnitudes, E0 = c0 B0, which can be seen immediately from the Poynting vector. The electric field, magnetic field, and direction of wave propagation are all orthogonal, and the wave propagates in the same direction as E × B. Also, E and B far-fields in free space, which as wave solutions depend primarily on these two Maxwell equations, are always in phase with each other. This is guaranteed since the generic wave solution is first order in both space and time, and the curl operator on one side of these equations results in first-order spatial derivatives of the wave solution, while the time derivative on the other side of the equations, which gives the other field, is first order in time, resulting in the same phase shift for both fields in each mathematical operation.
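
A short numerical check of the relations c0 = 1/√(μ0 ε0) and E0 = c0 B0 (a sketch; the peak field E0 = 100 V/m is an arbitrary assumed value):

import math

mu_0 = 4e-7 * math.pi          # vacuum permeability, H/m
eps_0 = 8.8541878128e-12       # vacuum permittivity, F/m

c0 = 1.0 / math.sqrt(mu_0 * eps_0)
print(f"c0 = {c0:.6e} m/s")    # ~2.9979e8 m/s

E0 = 100.0                     # assumed peak electric field, V/m
B0 = E0 / c0                   # in-phase magnetic amplitude required by the curl equations
print(f"B0 = {B0:.3e} T, E0/B0 = {E0 / B0:.3e} m/s")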

From the viewpoint of an electromagnetic wave traveling forward, the electric field might be
oscillating up and down, while the magnetic field oscillates right and left; but this picture can be
rotated with the electric field oscillating right and left and the magnetic field oscillating down
and up. This is a different solution that is traveling in the same direction. This arbitrariness in the
orientation with respect to propagation direction is known as polarization. On a quantum level, it
is described as photon polarization. The direction of the polarization is defined as the direction of
the electric field.

More general forms of the second-order wave equations given above are available, allowing for
both non-vacuum propagation media and sources. A great many competing derivations exist, all
with varying levels of approximation and intended applications. One very general example is a
form of the electric field equation,[18] which was factorized into a pair of explicitly directional
wave equations, and then efficiently reduced into a single uni-directional wave equation by
means of a simple slow-evolution approximation.

-----------------------------------------------------------------------------------------------------------------

Properties of Electromagnetic Waves

Electromagnetic waves are composed of oscillating electric and magnetic fields at right angles to
each other and both are perpendicular to the direction of propagation of the wave.
Electromagnetic waves differ in wavelength (or frequency).

In an electromagnetic wave the electric field vector E and the magnetic field vector B oscillate perpendicular to each other, and both are perpendicular to the direction of propagation of the wave.

The sources that produce them and the methods of their detection differ, but they have the following common properties:

1. Electromagnetic waves are propagated by electric and magnetic fields oscillating at right angles to each other.
2. Electromagnetic waves travel with a constant velocity of 3 × 10⁸ m s⁻¹ in vacuum.
3. Electromagnetic waves are not deflected by electric or magnetic field.
4. Electromagnetic waves can show interference or diffraction.
5. Electromagnetic waves are transverse waves.
6. Electromagnetic waves may be polarized.
7. Electromagnetic waves need no medium of propagation. The energy from the sun is
received by the earth through electromagnetic waves.
8. The wavelength (λ) and the frequency (ν) of an electromagnetic wave are related as c = ν λ = ω/k.
The S.I. unit of frequency is the hertz.

1 hertz = 1 cycle per second

The S.I. unit of wavelength is the metre.

We, however, often express wavelength in the angstrom unit [Å]:

1 Å = 10⁻¹⁰ m

Also, 1 nanometer = 1 nm = 10⁻⁹ m
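
A brief sketch of property 8 for one assumed wavelength (600 nm, i.e. 6000 Å), using the rounded value c = 3 × 10⁸ m/s from property 2:

import math

c = 3.0e8                      # m/s, as quoted in property 2 above
wavelength = 600e-9            # assumed example: 600 nm light (6000 angstrom)

nu = c / wavelength            # frequency from c = nu * lambda
omega = 2 * math.pi * nu       # angular frequency
k = 2 * math.pi / wavelength   # wavenumber
print(f"nu = {nu:.3e} Hz, omega/k = {omega / k:.3e} m/s (equals c)")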

Electromagnetic stress–energy tensor


Contents
 1 Definition
o 1.1 SI units

o 1.2 CGS units

 2 Algebraic properties
 3 Conservation laws
 4 See also
 5 References

Definition
SI units

In free space and flat space-time, the electromagnetic stress–energy tensor in SI units is[2]

T^{μν} = (1/μ0) [ F^{μα} F^ν_α − (1/4) η^{μν} F_{αβ} F^{αβ} ]

where F^{μν} is the electromagnetic tensor. This expression is written using a metric of signature (−,+,+,+). If the metric with signature (+,−,−,−) is used, the expression for T^{μν} will have the opposite sign.

Explicitly in matrix form, the components of T^{μν} are built from the field energy density, the Poynting vector S, and the Maxwell stress tensor σ_ij (the explicit form is given below), where η^{μν} is the Minkowski metric tensor of metric signature (−+++) and c is the speed of light. Thus, T^{μν} is expressed and measured in SI pressure units (pascals).
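
For reference, the explicit SI-unit matrix referred to above can be written, in LaTeX notation and with the (−,+,+,+) signature used here, as

T^{\mu\nu} =
\begin{pmatrix}
\tfrac{1}{2}\left(\varepsilon_0 E^2 + \tfrac{1}{\mu_0}B^2\right) & S_x/c & S_y/c & S_z/c \\
S_x/c & -\sigma_{xx} & -\sigma_{xy} & -\sigma_{xz} \\
S_y/c & -\sigma_{yx} & -\sigma_{yy} & -\sigma_{yz} \\
S_z/c & -\sigma_{zx} & -\sigma_{zy} & -\sigma_{zz}
\end{pmatrix}

with \mathbf{S} = \tfrac{1}{\mu_0}\,\mathbf{E}\times\mathbf{B} the Poynting vector and \sigma_{ij} the Maxwell stress tensor.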

CGS units

The permittivity of free space and permeability of free space in cgs-Gaussian units are ε0 = 1/(4π) and μ0 = 4π; then:

T^{μν} = (1/4π) [ F^{μα} F^ν_α − (1/4) η^{μν} F_{αβ} F^{αβ} ]

and the explicit matrix form is analogous to the SI case, where the Poynting vector becomes:

S = (c/4π) E × B

The stress–energy tensor for an electromagnetic field in a dielectric medium is less well
understood and is the subject of the unresolved Abraham–Minkowski controversy.[3]

The element T^{μν} of the stress–energy tensor represents the flux of the μth component of the four-momentum of the electromagnetic field, P^μ, going through a hyperplane on which x^ν is constant. It represents the contribution of electromagnetism to the source of the gravitational field (curvature of space-time) in general relativity.

Algebraic properties
This tensor has several noteworthy algebraic properties. First, it is a symmetric tensor:

T^{μν} = T^{νμ}

Second, the tensor is traceless:

T^μ_μ = 0

Third, the energy density is positive-definite:

T^{00} ≥ 0

These three algebraic properties have varying importance in the context of modern physics, and
they remove or reduce ambiguity of the definition of the electromagnetic stress-energy tensor.
The symmetry of the tensor is important in General Relativity, because the Einstein tensor is
symmetric. The tracelessness is regarded as important for the masslessness of the photon.[4]
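
These properties can be checked numerically. The sketch below builds T^{μν} in SI units from an arbitrary assumed pair of E and B fields (the component expressions used for the energy density, Poynting vector, and Maxwell stress tensor are the standard SI forms) and verifies the symmetry, tracelessness, and positive energy density stated above:

import numpy as np

eps0 = 8.8541878128e-12
mu0 = 4e-7 * np.pi
c = 1.0 / np.sqrt(eps0 * mu0)

E = np.array([3.0, -1.0, 2.0])          # arbitrary assumed electric field, V/m
B = np.array([1e-8, 2e-8, -0.5e-8])     # arbitrary assumed magnetic field, T

u = 0.5 * (eps0 * E @ E + (B @ B) / mu0)              # energy density
S = np.cross(E, B) / mu0                              # Poynting vector
sigma = (eps0 * np.outer(E, E) + np.outer(B, B) / mu0
         - 0.5 * np.eye(3) * (eps0 * E @ E + (B @ B) / mu0))   # Maxwell stress tensor

T = np.zeros((4, 4))
T[0, 0] = u
T[0, 1:] = S / c
T[1:, 0] = S / c
T[1:, 1:] = -sigma

eta = np.diag([-1.0, 1.0, 1.0, 1.0])    # the (-,+,+,+) metric used above
print("symmetric:", np.allclose(T, T.T))
print("trace eta_{mu nu} T^{mu nu} (should be ~0):", np.einsum("mn,mn->", eta, T))
print("energy density T^00 >= 0:", T[0, 0] >= 0)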

Conservation laws
Main article: Conservation laws

The electromagnetic stress–energy tensor allows a compact way of writing the conservation laws of linear momentum and energy in electromagnetism. The divergence of the stress–energy tensor is:

∂_ν T^{μν} + f^μ = 0

where f^μ = (J · E / c, f) and f = ρE + J × B is the (3D) Lorentz force per unit volume on matter.

This equation is equivalent to the following 3D conservation laws

∂u_em/∂t + ∇ · S + J · E = 0

∂p_em/∂t − ∇ · σ + ρE + J × B = 0

respectively describing the flux of electromagnetic energy density

u_em = (ε0/2) E² + (1/2μ0) B²

and electromagnetic momentum density

p_em = S / c²

where J is the electric current density and ρ the electric charge density.

Momentum

[Figure: a break-off shot in pool. Excepting very small losses due to friction and heat transfer, momentum is conserved in cue sports. When one ball hits another and is stopped, all its momentum has, in effect, been transferred to the other ball. If, however, it is deflected rather than stopped, its momentum is shared between the two balls.]

Common symbols: p, p
SI unit: kg m/s or N s

In classical mechanics, linear momentum or translational momentum (pl. momenta; SI unit
kg m/s, or equivalently, N s) is the product of the mass and velocity of an object. For example, a
heavy truck moving quickly has a large momentum—it takes a large or prolonged force to get
the truck up to this speed, and it takes a large or prolonged force to bring it to a stop afterwards.
If the truck were lighter, or moving more slowly, then it would have less momentum.

Like velocity, linear momentum is a vector quantity, possessing a direction as well as a magnitude.

Linear momentum is also a conserved quantity, meaning that if a closed system is not affected by
external forces, its total linear momentum cannot change. In classical mechanics, conservation of
linear momentum is implied by Newton's laws; but it also holds in special relativity (with a
modified formula) and, with appropriate definitions, a (generalized) linear momentum
conservation law holds in electrodynamics, quantum mechanics, quantum field theory, and
general relativity.

Contents
 1 Newtonian mechanics
o 1.1 Single particle

o 1.2 Many particles

o 1.3 Relation to force

o 1.4 Conservation

o 1.5 Dependence on reference frame

o 1.6 Application to collisions

 1.6.1 Elastic collisions


 1.6.2 Inelastic collisions
o 1.7 Multiple dimensions

o 1.8 Objects of variable mass

 2 Relativistic mechanics
o 2.1 Lorentz invariance

o 2.2 Four-vector formulation

 3 Quantum mechanics
 4 Generalized coordinates
o 4.1 Lagrangian mechanics
o 4.2 Hamiltonian mechanics

o 4.3 Symmetry and conservation

 5 Electromagnetism
o 5.1 Vacuum

o 5.2 Media

o 5.3 Particle in field

 5.3.1 Lagrangian and Hamiltonian formulation


 5.3.2 Canonical commutation relations
 6 Deformable bodies and fluids
o 6.1 Conservation in a continuum

o 6.2 Acoustic waves

Newtonian mechanics
Momentum has a direction as well as magnitude. Quantities that have both a magnitude and a
direction are known as vector quantities. Because momentum has a direction, it can be used to
predict the resulting direction of objects after they collide, as well as their speeds. Below, the
basic properties of momentum are described in one dimension. The vector equations are almost
identical to the scalar equations (see multiple dimensions).

Single particle

The momentum of a particle is traditionally represented by the letter p. It is the product of two
quantities, the mass (represented by the letter m) and velocity (v):[1]

p = m v

The units of momentum are the product of the units of mass and velocity. In SI units, if the mass
is in kilograms and the velocity in meters per second, then the momentum is in kilogram
meters/second (kg m/s). Being a vector, momentum has magnitude and direction. For example, a
model airplane of 1 kg, traveling due north at 1 m/s in straight and level flight, has a momentum
of 1 kg m/s due north measured from the ground.

Many particles

The momentum of a system of particles is the sum of their momenta. If two particles have
masses m1 and m2, and velocities v1 and v2, the total momentum is

p = p1 + p2 = m1 v1 + m2 v2

The momenta of more than two particles can be added in the same way.

A system of particles has a center of mass, a point determined by the weighted sum of their positions:

r_cm = (m1 r1 + m2 r2 + ···) / (m1 + m2 + ···)

If all the particles are moving, the center of mass will generally be moving as well. If the center
of mass is moving at velocity vcm, the momentum is:

p = m vcm

where m is the total mass of the system.
This is known as Euler's first law.[2][3]
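
A small numerical sketch of these relations, with assumed masses and two-dimensional velocities (numpy is used for the vector arithmetic); the total momentum equals the total mass times the center-of-mass velocity:

import numpy as np

m = np.array([2.0, 3.0, 5.0])                            # masses, kg (assumed)
v = np.array([[1.0, 0.0], [0.0, 2.0], [-1.0, 1.0]])      # velocities, m/s (assumed)

p_total = (m[:, None] * v).sum(axis=0)                   # sum of m_i v_i
v_cm = p_total / m.sum()                                 # center-of-mass velocity

print("total momentum p:", p_total)                      # kg m/s
print("M * v_cm        :", m.sum() * v_cm)               # identical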

Relation to force

If a force F is applied to a particle for a time interval Δt, the momentum of the particle changes
by an amount

Δp = F Δt

In differential form, this gives Newton's second law: the rate of change of the momentum of a particle is equal to the force F acting on it:[1]

F = dp/dt

If the force depends on time, the change in momentum (or impulse) between times t1 and t2 is

Δp = ∫ F dt (integrated from t1 to t2)

The second law only applies to a particle that does not exchange matter with its surroundings,[4] and so it is equivalent to write

F = m dv/dt = m a

so the force is equal to mass times acceleration.[1]


Example: a model airplane of 1 kg accelerates from rest to a velocity of 6 m/s due north in 2 s.
The thrust required to produce this acceleration is 3 newton. The change in momentum is
6 kg m/s. The rate of change of momentum is 3 (kg m/s)/s = 3 N.

Conservation

[Figure: a Newton's cradle demonstrates conservation of momentum.]

In a closed system (one that does not exchange any matter with the outside and is not acted on by
outside forces) the total momentum is constant. This fact, known as the law of conservation of
momentum, is implied by Newton's laws of motion.[5] Suppose, for example, that two particles
interact. Because of the third law, the forces between them are equal and opposite. If the particles
are numbered 1 and 2, the second law states that F1 = dp1/dt and F2 = dp2/dt. Therefore

dp1/dt = −dp2/dt

or

d(p1 + p2)/dt = 0

If the velocities of the particles are u1 and u2 before the interaction, and afterwards they are v1 and v2, then

m1 u1 + m2 u2 = m1 v1 + m2 v2

This law holds no matter how complicated the force is between particles. Similarly, if there are
several particles, the momentum exchanged between each pair of particles adds up to zero, so the
total change in momentum is zero. This conservation law applies to all interactions, including
collisions and separations caused by explosive forces.[5] It can also be generalized to situations
where Newton's laws do not hold, for example in the theory of relativity and in electrodynamics.
[6]

Dependence on reference frame


[Figure: Newton's apple in Einstein's elevator. In person A's frame of reference, the apple has non-zero velocity and momentum; in the elevator's and person B's frames of reference, it has zero velocity and momentum.]

Momentum is a measurable quantity, and the measurement depends on the motion of the
observer. For example, if an apple is sitting in a glass elevator that is descending, an outside
observer looking into the elevator sees the apple moving, so to that observer the apple has a
nonzero momentum. To someone inside the elevator, the apple does not move, so it has zero
momentum. The two observers each have a frame of reference in which they observe motions,
and if the elevator is descending steadily they will see behavior that is consistent with the same
physical laws.

Suppose a particle has position x in a stationary frame of reference. From the point of view of
another frame of reference moving at a uniform speed u, the position (represented by a primed
coordinate) changes with time as

x′ = x − u t

This is called a Galilean transformation. If the particle is moving at speed dx/dt = v in the first frame of reference, in the second it is moving at speed

v′ = dx′/dt = v − u

Since u does not change, the accelerations are the same:

a′ = dv′/dt = dv/dt = a

Thus, momentum is conserved in both reference frames. Moreover, as long as the force has the
same form in both frames, Newton's second law is unchanged. Forces such as Newtonian gravity,
which depend only on the scalar distance between objects, satisfy this criterion. This
independence of reference frame is called Newtonian relativity or Galilean invariance.[7]
A change of reference frame can often simplify calculations of motion. For example, in a
collision of two particles a reference frame can be chosen where one particle begins at rest.
Another commonly used reference frame is the center of mass frame, one that is moving with the
center of mass. In this frame, the total momentum is zero.

Application to collisions

By itself, the law of conservation of momentum is not enough to determine the motion of
particles after a collision. Another property of the motion, kinetic energy, must be known. This is
not necessarily conserved. If it is conserved, the collision is called an elastic collision; if not, it is
an inelastic collision.

Elastic collisions

Main article: Elastic collision

[Figures: an elastic collision of equal masses; an elastic collision of unequal masses.]

An elastic collision is one in which no kinetic energy is lost. Perfectly elastic "collisions" can
occur when the objects do not touch each other, as for example in atomic or nuclear scattering
where electric repulsion keeps them apart. A slingshot maneuver of a satellite around a planet can
also be viewed as a perfectly elastic collision from a distance. A collision between two pool balls
is a good example of an almost totally elastic collision, due to their high rigidity; but when
bodies come in contact there is always some dissipation.[8]

A head-on elastic collision between two bodies can be represented by velocities in one
dimension, along a line passing through the bodies. If the velocities are u1 and u2 before the
collision and v1 and v2 after, the equations expressing conservation of momentum and kinetic
energy are:

m1 u1 + m2 u2 = m1 v1 + m2 v2
(1/2) m1 u1² + (1/2) m2 u2² = (1/2) m1 v1² + (1/2) m2 v2²

A change of reference frame can often simplify the analysis of a collision. For example, suppose
there are two bodies of equal mass m, one stationary and one approaching the other at a speed v
(as in the figure). The center of mass is moving at speed v/2 and both bodies are moving towards
it at speed v/2. Because of the symmetry, after the collision both must be moving away from the
center of mass at the same speed. Adding the speed of the center of mass to both, we find that the
body that was moving is now stopped and the other is moving away at speed v. The bodies have
exchanged their velocities. Regardless of the velocities of the bodies, a switch to the center of
mass frame leads us to the same conclusion. Therefore, the final velocities are given by[5]

v1 = u2,  v2 = u1

In general, when the initial velocities are known, the final velocities are given by[9]

v1 = ((m1 − m2) u1 + 2 m2 u2) / (m1 + m2)
v2 = ((m2 − m1) u2 + 2 m1 u1) / (m1 + m2)

If one body has much greater mass than the other, its velocity will be little affected by a collision
while the other body will experience a large change.
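
The general result can be written as a small function; a sketch with assumed masses and speeds (the equal-mass case reproduces the velocity exchange described above):

def elastic_collision_1d(m1, m2, u1, u2):
    """Return final velocities (v1, v2) for a head-on elastic collision."""
    v1 = ((m1 - m2) * u1 + 2 * m2 * u2) / (m1 + m2)
    v2 = ((m2 - m1) * u2 + 2 * m1 * u1) / (m1 + m2)
    return v1, v2

# equal masses: the moving body stops and the other moves off at the same speed
print(elastic_collision_1d(1.0, 1.0, 5.0, 0.0))    # (0.0, 5.0)
# very unequal masses: the heavy body is barely affected
print(elastic_collision_1d(10.0, 0.1, 2.0, 0.0))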

Inelastic collisions

Main article: Inelastic collision

[Figure: a perfectly inelastic collision between equal masses.]

In an inelastic collision, some of the kinetic energy of the colliding bodies is converted into other
forms of energy such as heat or sound. Examples include traffic collisions,[10] in which the effect
of lost kinetic energy can be seen in the damage to the vehicles; electrons losing some of their
energy to atoms (as in the Franck–Hertz experiment);[11] and particle accelerators in which the
kinetic energy is converted into mass in the form of new particles.

In a perfectly inelastic collision (such as a bug hitting a windshield), both bodies have the same
motion afterwards. If one body is motionless to begin with, the equation for conservation of
momentum is

m1 u1 = (m1 + m2) v

so

v = m1 u1 / (m1 + m2)

In a frame of reference moving at the speed v, the objects are brought to rest by the collision
and 100% of the kinetic energy is converted.
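
A brief sketch of a perfectly inelastic collision with one body initially at rest, using assumed masses and an assumed initial speed, showing both the common final velocity and the fraction of kinetic energy converted:

def perfectly_inelastic(m1, u1, m2):
    v = m1 * u1 / (m1 + m2)                     # common final velocity
    ke_before = 0.5 * m1 * u1 ** 2
    ke_after = 0.5 * (m1 + m2) * v ** 2
    return v, 1.0 - ke_after / ke_before        # fraction of KE converted

v, lost = perfectly_inelastic(1000.0, 20.0, 1500.0)   # assumed masses and speed
print(f"final speed {v:.2f} m/s, {lost:.0%} of the kinetic energy converted")
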
One measure of the inelasticity of the collision is the coefficient of restitution CR, defined as the
ratio of relative velocity of separation to relative velocity of approach. In applying this measure
to ball sports, this can be easily measured using the following formula:[12]

CR = √(bounce height / drop height)

The momentum and energy equations also apply to the motions of objects that begin together and
then move apart. For example, an explosion is the result of a chain reaction that transforms
potential energy stored in chemical, mechanical, or nuclear form into kinetic energy, acoustic
energy, and electromagnetic radiation. Rockets also make use of conservation of momentum:
propellant is thrust outward, gaining momentum, and an equal and opposite momentum is
imparted to the rocket.[13]

Multiple dimensions

[Figure: a two-dimensional elastic collision. There is no motion perpendicular to the image, so only two components are needed to represent the velocities and momenta. The two blue vectors represent velocities after the collision and add vectorially to give the initial (red) velocity.]

Real motion has both direction and magnitude and must be represented by a vector. In a
coordinate system with x, y, z axes, velocity has components vx in the x direction, vy in the y
direction, vz in the z direction. The vector is represented by a boldface symbol:[14]

v = (vx, vy, vz)

Similarly, the momentum is a vector quantity and is represented by a boldface symbol:

p = (px, py, pz)

The equations in the previous sections work in vector form if the scalars p and v are replaced by vectors p and v. Each vector equation represents three scalar equations. For example,

p = m v

represents three equations:[14]

px = m vx,  py = m vy,  pz = m vz

The kinetic energy equations are exceptions to the above replacement rule. The equations are still one-dimensional, but each scalar represents the magnitude of the vector, for example,

v² = vx² + vy² + vz²

Often coordinates can be chosen so that
only two components are needed, as in the figure. Each component can be obtained separately
and the results combined to produce a vector result.[14]

A simple construction involving the center of mass frame can be used to show that if a stationary
elastic sphere is struck by a moving sphere, the two will head off at right angles after the
collision (as in the figure).[15]

Objects of variable mass

The concept of momentum plays a fundamental role in explaining the behavior of variable-mass
objects such as a rocket ejecting fuel or a star accreting gas. In analyzing such an object, one
treats the object's mass as a function that varies with time: m(t). The momentum of the object at
time t is therefore p(t) = m(t)v(t). One might then try to invoke Newton's second law of motion
by saying that the external force F on the object is related to its momentum p(t) by F = dp/dt,
but this is incorrect, as is the related expression found by applying the product rule to d(mv)/dt:
[16]

F = m(t) dv/dt + v(t) dm/dt

This equation does not correctly describe the motion of variable-mass objects. The correct
equation is

F + u dm/dt = m dv/dt

where u is the velocity of the ejected/accreted mass as seen in the object's rest frame.[16] This is
distinct from v, which is the velocity of the object itself as seen in an inertial frame.

This equation is derived by keeping track of both the momentum of the object as well as the
momentum of the ejected/accreted mass. When considered together, the object and the mass
constitute a closed system in which total momentum is conserved.
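
A numerical sketch of this bookkeeping for a rocket coasting in free space (all parameter values below are assumed for illustration); the integrated velocity change agrees with the closed-form Tsiolkovsky result Δv = u_e ln(m0/m_final):

import math

m0, m_final = 1000.0, 400.0     # kg, initial and final mass (assumed)
u_e = 2500.0                    # m/s, exhaust speed relative to the rocket (assumed)
mdot = -5.0                     # kg/s, mass flow (negative: mass is ejected)
dt = 0.01

m, v = m0, 0.0
while m > m_final:
    # ejected mass moves backwards relative to the rocket, so u = -u_e in
    # F + u dm/dt = m dv/dt, with F = 0 in free space
    a = (-u_e) * mdot / m
    v += a * dt
    m += mdot * dt

print(f"numerical delta-v = {v:.1f} m/s")
print(f"Tsiolkovsky       = {u_e * math.log(m0 / m_final):.1f} m/s")
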
Relativistic mechanics
See also: Mass in special relativity and Tests of relativistic energy and momentum

Lorentz invariance

Newtonian physics assumes that absolute time and space exist outside of any observer; this gives
rise to the Galilean invariance described earlier. It also results in a prediction that the speed of
light can vary from one reference frame to another. This is contrary to observation. In the special
theory of relativity, Einstein keeps the postulate that the equations of motion do not depend on
the reference frame, but assumes that the speed of light c is invariant. As a result, position and
time in two reference frames are related by the Lorentz transformation instead of the Galilean
transformation.[17]

Consider, for example, a reference frame moving relative to another at velocity v in the x
direction. The Galilean transformation gives the coordinates of the moving frame as

t′ = t,  x′ = x − v t

while the Lorentz transformation gives[18]

t′ = γ(t − v x/c²),  x′ = γ(x − v t)

where γ is the Lorentz factor:

γ = 1/√(1 − v²/c²)

Newton's second law, with mass fixed, is not invariant under a Lorentz transformation. However,
it can be made invariant by making the inertial mass m of an object a function of velocity:

m = γ m0

where m0 is the object's invariant mass.[19]

The modified momentum,

p = γ m0 v

obeys Newton's second law:

F = dp/dt

Within the domain of classical mechanics, relativistic momentum closely approximates
Newtonian momentum: at low velocity, γm0v is approximately equal to m0v, the Newtonian
expression for momentum.
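
A short sketch comparing the two expressions over a range of speeds (the rest mass of 1 kg is an arbitrary assumption); at low velocity the ratio approaches one, as stated above:

import math

c = 2.998e8
m0 = 1.0                                        # kg (assumed)

for beta in (0.01, 0.1, 0.5, 0.9, 0.99):
    v = beta * c
    gamma = 1.0 / math.sqrt(1.0 - beta ** 2)
    p_rel = gamma * m0 * v                      # relativistic momentum
    p_newton = m0 * v                           # Newtonian momentum
    print(f"v = {beta:4.2f} c  gamma = {gamma:6.3f}  p_rel/p_newton = {p_rel / p_newton:6.3f}")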

Four-vector formulation

Main article: Four-momentum

In the theory of relativity, physical quantities are expressed in terms of four-vectors that include
time as a fourth coordinate along with the three space coordinates. These vectors are generally
represented by capital letters, for example R for position. The expression for the four-momentum
depends on how the coordinates are expressed. Time may be given in its normal units or
multiplied by the speed of light so that all the components of the four-vector have dimensions of
length. If the latter scaling is used, an interval of proper time, τ, defined by[20]

c²dτ² = c²dt² − dx² − dy² − dz²,

is invariant under Lorentz transformations (in this expression and in what follows the (+ − − −) metric signature has been used; different authors use different conventions). Mathematically this
invariance can be ensured in one of two ways: by treating the four-vectors as Euclidean vectors
and multiplying time by the square root of -1; or by keeping time a real quantity and embedding
the vectors in a Minkowski space.[21] In a Minkowski space, the scalar product of two four-
vectors U = (U0, U1, U2, U3) and V = (V0, V1, V2, V3) is defined as

U ⋅ V = U0V0 − U1V1 − U2V2 − U3V3.

In all the coordinate systems, the (contravariant) relativistic four-velocity is defined by

U = dR/dτ = γ(c, v),

and the (covariant) four-momentum is

P = m0U,

where m0 is the invariant mass. If R = (ct, x, y, z) (in Minkowski space), then[note 1]

P = γm0(c, vx, vy, vz) = (mc, p).

Using Einstein's mass–energy equivalence, E = mc², this can be rewritten as

P = (E/c, p).

Thus, conservation of four-momentum is Lorentz-invariant and implies conservation of both
mass and energy.

The magnitude of the momentum four-vector is equal to m0c:

|P|² = P ⋅ P = E²/c² − |p|² = m0²c²,
and is invariant across all reference frames.

The relativistic energy–momentum relationship holds even for massless particles such as
photons; by setting m0 = 0 it follows that

E = pc.

In a game of relativistic "billiards", if a stationary particle is hit by a moving particle in an elastic collision, the paths formed by the two afterwards will form an acute angle. This is unlike the non-relativistic case, where they travel off at right angles.[22]
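The invariance of the four-momentum magnitude, and the massless limit E = pc, can be illustrated numerically. The Python sketch below uses arbitrary example values for the momentum and photon energy; it is only a check of the relations above.

import numpy as np

c = 299_792_458.0                        # speed of light (m/s)

# Massive particle: E^2 = (pc)^2 + (m0 c^2)^2, so m0 can be recovered from E and p.
m0 = 9.109e-31                           # electron rest mass (kg), example value
p = 1.0e-22                              # assumed momentum in some frame (kg m/s)
E = np.sqrt((p*c)**2 + (m0*c**2)**2)     # total energy in that frame
print(np.sqrt(E**2 - (p*c)**2) / c**2)   # gives back m0, regardless of the frame

# Massless particle: setting m0 = 0 gives E = pc.
E_photon = 3.0e-19                       # roughly a visible-light photon energy (J), assumed
print(E_photon / c)                      # the photon's momentum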

Quantum mechanics
Further information: Momentum operator

In quantum mechanics, momentum is defined as an operator on the wave function. The Heisenberg uncertainty principle defines limits on how accurately the momentum and position of a single observable system can be known at once. In quantum mechanics, position and momentum are conjugate variables.

For a single particle described in the position basis, the momentum operator can be written as

p̂ = −iħ∇,
where ∇ is the gradient operator, ħ is the reduced Planck constant, and i is the imaginary unit.
This is a commonly encountered form of the momentum operator, though the momentum
operator in other bases can take other forms. For example, in momentum space the momentum
operator is represented as

p̂ψ(p) = pψ(p),

where the operator p̂ acting on a wave function ψ(p) yields that wave function multiplied by the
value p, in an analogous fashion to the way that the position operator acting on a wave function
ψ(x) yields that wave function multiplied by the value x.

For both massive and massless objects, relativistic momentum is related to the de Broglie
wavelength λ by

λ = h/p.

Electromagnetic radiation (including visible light, ultraviolet light, and radio waves) is carried by
photons. Even though photons (the particle aspect of light) have no mass, they still carry
momentum. This leads to applications such as the solar sail. The calculation of the momentum of
light within dielectric media is somewhat controversial (see Abraham–Minkowski controversy).
[23]
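The scale of photon momentum is easy to estimate. A minimal Python sketch (the wavelength and the intercepted power are assumed illustrative values) computes p = h/λ for a single photon and the force on a solar sail, obtained by dividing the intercepted power by c (doubled for a perfectly reflecting sail):

h = 6.626e-34         # Planck constant (J s)
c = 2.998e8           # speed of light (m/s)
lam = 500e-9          # wavelength of green light (m), illustrative

print(h / lam)        # ~1.3e-27 kg m/s: momentum of one photon, p = h/lambda

power = 1000.0        # light power intercepted by a sail (W), assumed
print(power / c)      # ~3.3e-6 N if the light is absorbed
print(2 * power / c)  # twice that if it is perfectly reflected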

Generalized coordinates
See also: Analytical mechanics

Newton's laws can be difficult to apply to many kinds of motion because the motion is limited by
constraints. For example, a bead on an abacus is constrained to move along its wire and a
pendulum bob is constrained to swing at a fixed distance from the pivot. Many such constraints
can be incorporated by changing the normal Cartesian coordinates to a set of generalized
coordinates that may be fewer in number.[24] Refined mathematical methods have been developed
for solving mechanics problems in generalized coordinates. They introduce a generalized
momentum, also known as the canonical or conjugate momentum, that extends the concepts of
both linear momentum and angular momentum. To distinguish it from generalized momentum,
the product of mass and velocity is also referred to as mechanical, kinetic or kinematic
momentum.[6][25][26] The two main methods are described below.

Lagrangian mechanics

In Lagrangian mechanics, a Lagrangian is defined as the difference between the kinetic energy T
and the potential energy V:

L(q, q̇, t) = T − V.

If the generalized coordinates are represented as a vector q = (q1, q2, ... , qN) and time
differentiation is represented by a dot over the variable, then the equations of motion (known as
the Lagrange or Euler–Lagrange equations) are a set of N equations:[27]

d/dt(∂L/∂q̇j) − ∂L/∂qj = 0,   j = 1, …, N.

If a coordinate qi is not a Cartesian coordinate, the associated generalized momentum component
pi does not necessarily have the dimensions of linear momentum. Even if qi is a Cartesian
coordinate, pi will not be the same as the mechanical momentum if the potential depends on
velocity.[6] Some sources represent the kinematic momentum by the symbol Π.[28]

In this mathematical framework, a generalized momentum is associated with the generalized coordinates. Its components are defined as

pj = ∂L/∂q̇j.

Each component pj is said to be the conjugate momentum for the coordinate qj.

Now if a given coordinate qi does not appear in the Lagrangian (although its time derivative
might appear), then

dpi/dt = ∂L/∂qi = 0.

This is the generalization of the conservation of momentum.[6]

Even if the generalized coordinates are just the ordinary spatial coordinates, the conjugate
momenta are not necessarily the ordinary momentum coordinates. An example is found in the
section on electromagnetism.
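A concrete example (chosen here for illustration, not taken from the text) is a particle in a central potential described in plane polar coordinates. The sympy sketch below builds the Lagrangian, computes the conjugate momenta, and confirms that θ is cyclic, so its conjugate momentum, the angular momentum, is conserved.

import sympy as sp

t, m = sp.symbols('t m', positive=True)
r = sp.Function('r')(t)
theta = sp.Function('theta')(t)
V = sp.Function('V')           # central potential, a function of r only (hypothetical)

# Lagrangian in plane polar coordinates: L = T - V
L = sp.Rational(1, 2)*m*(r.diff(t)**2 + r**2*theta.diff(t)**2) - V(r)

# Conjugate momenta p_q = dL/d(q_dot)
p_r = sp.diff(L, r.diff(t))
p_theta = sp.diff(L, theta.diff(t))
print(p_r)                     # m*Derivative(r(t), t)
print(p_theta)                 # m*r(t)**2*Derivative(theta(t), t): the angular momentum

# theta does not appear in L, so dL/dtheta = 0 and p_theta is a constant of the motion.
print(sp.diff(L, theta))       # 0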

Hamiltonian mechanics

In Hamiltonian mechanics, the Lagrangian (a function of generalized coordinates and their derivatives) is replaced by a Hamiltonian that is a function of generalized coordinates and momenta. The Hamiltonian is defined as

H(q, p, t) = q̇ ⋅ p − L(q, q̇, t),

where the momentum is obtained by differentiating the Lagrangian as above. The Hamiltonian equations of motion are[29]

dqj/dt = ∂H/∂pj,  dpj/dt = −∂H/∂qj.

As in Lagrangian mechanics, if a generalized coordinate does not appear in the Hamiltonian, its
conjugate momentum component is conserved.[30]

Symmetry and conservation

Conservation of momentum is a mathematical consequence of the homogeneity (shift symmetry) of space (position in space is the canonical conjugate quantity to momentum). That is, conservation of momentum is a consequence of the fact that the laws of physics do not depend on position; this is a special case of Noether's theorem.[31]

Electromagnetism
In Newtonian mechanics, the law of conservation of momentum can be derived from the law of
action and reaction, which states that every force is accompanied by an equal and opposite force.
Under some circumstances one moving charged particle can exert a force on another without any
return force.[disputed – discuss][32] Moreover, Maxwell's equations, the foundation of classical
electrodynamics, are Lorentz-invariant. Nevertheless, the combined momentum of the particles
and the electromagnetic field is conserved.

Vacuum

In Maxwell's equations, the forces between particles are mediated by electric and magnetic
fields. The electromagnetic force (Lorentz force) on a particle with charge q due to a
combination of electric field E and magnetic field (as given by the "B-field" B) is

F = q(E + v × B).
This force imparts a momentum to the particle, so by Newton's second law the particle must
impart a momentum to the electromagnetic fields.[33]

In a vacuum, the momentum per unit volume is

g = (1/μ0c²) E × B,

where μ0 is the vacuum permeability and c is the speed of light. The momentum density is proportional to the Poynting vector S, which gives the directional rate of energy transfer per unit area:[33][34]

g = S/c²,  S = (1/μ0) E × B.
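For a sense of scale, the short Python sketch below estimates the momentum density g = S/c² and the momentum flux S/c carried by sunlight, taking the solar constant as an illustrative value of |S|:

c = 299_792_458.0     # speed of light (m/s)
S = 1361.0            # |S| for sunlight at Earth (W/m^2), approximate solar constant, used as an example

print(S / c**2)       # ~1.5e-14 kg m^-2 s^-1: electromagnetic momentum per unit volume
print(S / c)          # ~4.5e-6 N/m^2: momentum flux, i.e. the pressure on a perfect absorber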

If momentum is to be conserved in a volume V, changes in the momentum of matter through the Lorentz force must be balanced by changes in the momentum of the electromagnetic field and the outflow of momentum. If Pmech is the momentum of all the particles in a volume V, and the particles are treated as a continuum, then Newton's second law gives

dPmech/dt = ∫V (ρE + J × B) dV.

The electromagnetic momentum is

Pfield = (1/μ0c²) ∫V E × B dV,

and the equation for conservation of each component i of the momentum is

d/dt (Pmech + Pfield)i = ∮S (Σj Tij nj) dA.

The term on the right is an integral over the surface S representing momentum flow into and out of the volume, and nj is a component of the surface normal of S. The quantity Tij is called the Maxwell stress tensor, defined as

Tij = ε0(EiEj − ½δijE²) + (1/μ0)(BiBj − ½δijB²).[33]

Media

The above results are for the microscopic Maxwell equations, applicable to electromagnetic
forces in a vacuum (or on a very small scale in media). It is more difficult to define momentum
density in media because the division into electromagnetic and mechanical is arbitrary. The
definition of electromagnetic momentum density is modified to

g = D × B,

where the H-field H is related to the B-field and the magnetization M by

H = B/μ0 − M.

The electromagnetic stress tensor depends on the properties of the media.[33]

Particle in field
If a charged particle q moves in an electromagnetic field, its kinetic momentum mv is not
conserved. However, it has a canonical momentum that is conserved.

Lagrangian and Hamiltonian formulation

The kinetic momentum p is different from the canonical momentum P (synonymous with the
generalized momentum) conjugate to the ordinary position coordinates r, because P includes a
contribution from the electric potential φ(r, t) and vector potential A(r, t):[28]

                      Classical mechanics                      Relativistic mechanics

Lagrangian            L = ½ m ṙ·ṙ + e ṙ·A − eφ                 L = −m0c²/γ + e ṙ·A − eφ

Canonical momentum    P = m ṙ + eA                             P = γ m0 ṙ + eA

Kinetic momentum      p = m ṙ = P − eA                         p = γ m0 ṙ = P − eA

Hamiltonian           H = (P − eA)²/(2m) + eφ                  H = √(c²(P − eA)² + m0²c⁴) + eφ

where ṙ = v is the velocity (see time derivative), e is the electric charge of the particle and γ = (1 − ṙ·ṙ/c²)^(−1/2) is the Lorentz factor. See also Electromagnetism (momentum). If neither φ nor
A depends on position, P is conserved.[6]

The classical Hamiltonian for a particle in any field equals the total energy of the system – the
kinetic energy T = p2/2m (where p2 = p · p, see dot product) plus the potential energy V. For a
particle in an electromagnetic field, the potential energy is V = eφ, and since the kinetic energy
T always corresponds to the kinetic momentum p, replacing the kinetic momentum by the above
equation (p = P − eA) leads to the Hamiltonian in the table.

The Lorentz force can be derived from these Lagrangian and Hamiltonian expressions.
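The fact that the kinetic momentum of a charged particle is not conserved in a field, while a purely magnetic field leaves its magnitude unchanged, can be checked numerically. The Python sketch below (illustrative proton parameters; a standard fourth-order Runge–Kutta step, not anything prescribed by the text) integrates m dv/dt = q v × B:

import numpy as np

q, m = 1.602e-19, 1.673e-27        # proton charge (C) and mass (kg)
B = np.array([0.0, 0.0, 1.0])      # uniform 1 T field along z, assumed

def accel(v):
    return (q / m) * np.cross(v, B)   # Lorentz acceleration with E = 0

v = np.array([1.0e5, 0.0, 0.0])    # initial velocity (m/s), assumed
dt = 1.0e-10
p0 = m * np.linalg.norm(v)

for _ in range(10_000):            # integrate over several cyclotron periods
    k1 = accel(v)                  # classical RK4 step for dv/dt = a(v)
    k2 = accel(v + 0.5*dt*k1)
    k3 = accel(v + 0.5*dt*k2)
    k4 = accel(v + dt*k3)
    v = v + (dt/6.0)*(k1 + 2*k2 + 2*k3 + k4)

print(m * v)                           # the momentum vector has rotated (not conserved)
print(m * np.linalg.norm(v) / p0)      # ~1.0: its magnitude is unchanged by the magnetic force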

Canonical commutation relations

The kinetic momentum (p above) satisfies the commutation relation:[28]


where: j, k, ℓ are indices labelling vector components, Bℓ is a component of the magnetic field,
and εkjℓ is the Levi-Civita symbol, here in 3-dimensions.

Deformable bodies and fluids


Conservation in a continuum

Main article: Cauchy momentum equation

Motion of a material body

In fields such as fluid dynamics and solid mechanics, it is not feasible to follow the motion of
individual atoms or molecules. Instead, the materials must be approximated by a continuum in
which there is a particle or fluid parcel at each point that is assigned the average of the properties
of atoms in a small region nearby. In particular, it has a density ρ and velocity v that depend on
time t and position r. The momentum per unit volume is ρv.[35]

Consider a column of water in hydrostatic equilibrium. All the forces on the water are in balance
and the water is motionless. On any given drop of water, two forces are balanced. The first is
gravity, which acts directly on each atom and molecule inside. The gravitational force per unit
volume is ρg, where g is the gravitational acceleration. The second force is the sum of all the
forces exerted on its surface by the surrounding water. The force from below is greater than the
force from above by just the amount needed to balance gravity. The normal force per unit area is
the pressure p. The net surface force per unit volume inside the droplet is the negative of the pressure gradient, so the force balance equation is[36]

−∇p + ρg = 0.

If the forces are not balanced, the droplet accelerates. This acceleration is not simply the partial
derivative ∂v/∂t because the fluid in a given volume changes with time. Instead, the material
derivative is needed:[37]

D/Dt = ∂/∂t + v ⋅ ∇.
Applied to any physical quantity, the material derivative includes the rate of change at a point
and the changes due to advection as fluid is carried past the point. Per unit volume, the rate of
change in momentum is equal to ρDv/Dt. This is equal to the net force on the droplet.

Forces that can change the momentum of a droplet include the gradient of the pressure and
gravity, as above. In addition, surface forces can deform the droplet. In the simplest case, a shear
stress τ, exerted by a force parallel to the surface of the droplet, is proportional to the rate of
deformation or strain rate. Such a shear stress occurs if the fluid has a velocity gradient because
the fluid is moving faster on one side than another. If the speed in the x direction varies with z,
the tangential force in direction x per unit area normal to the z direction is

σzx = μ ∂vx/∂z,

where μ is the viscosity. This is also a flux, or flow per unit area, of x-momentum through the
surface.[38]

Including the effect of viscosity, the momentum balance equations for the incompressible flow of
a Newtonian fluid are

ρ Dv/Dt = −∇p + μ∇²v + ρg.

These are known as the Navier–Stokes equations.[39]
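As a minimal illustration of viscous momentum transport (the geometry and fluid properties are assumed, and only the 1-D, no-pressure-gradient limit of the equations above is used), the sketch below advances ∂vx/∂t = ν ∂²vx/∂z² by explicit finite differences: startup Couette flow relaxing towards a linear velocity profile.

import numpy as np

nu = 1.0e-6              # kinematic viscosity mu/rho (m^2/s), roughly water, assumed
H = 1.0e-3               # gap between the plates (m), assumed
N = 51
dz = H / (N - 1)
dt = 0.2 * dz**2 / nu    # explicit-scheme stability requires dt <= 0.5 dz^2 / nu

v = np.zeros(N)          # x-velocity sampled across the gap
v[-1] = 0.01             # the top plate suddenly moves at 1 cm/s; the bottom plate is fixed

for _ in range(20_000):  # diffuse x-momentum across the gap
    v[1:-1] += nu * dt / dz**2 * (v[2:] - 2*v[1:-1] + v[:-2])

print(v[::10])           # approaches the steady linear profile v_x(z) = U z / H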

The momentum balance equations can be extended to more general materials, including solids.
For each surface with normal in direction i and force in direction j, there is a stress component
σij. The nine components make up the Cauchy stress tensor σ, which includes both pressure and
shear. The local conservation of momentum is expressed by the Cauchy momentum equation:

ρ Dv/Dt = ∇ ⋅ σ + f,

where f is the body force.[40]


The Cauchy momentum equation is broadly applicable to deformations of solids and liquids. The
relationship between the stresses and the strain rate depends on the properties of the material (see
Types of viscosity).

Acoustic waves

A disturbance in a medium gives rise to oscillations, or waves, that propagate away from their
source. In a fluid, small changes in pressure p can often be described by the acoustic wave
equation:

∂²p/∂t² = c²∇²p,

where c is the speed of sound. In a solid, similar equations can be obtained for propagation of
pressure (P-waves) and shear (S-waves).[41]

The flux, or transport per unit area, of a momentum component ρvj by a velocity vi is equal to ρvivj. In the linear approximation that leads to the above acoustic equation, the time average of
this flux is zero. However, nonlinear effects can give rise to a nonzero average.[42] It is possible
for momentum flux to occur even though the wave itself does not have a mean momentum.[43]

History of the concept


See also: Theory of impetus

In about 530 A.D., working in Alexandria, Byzantine philosopher John Philoponus developed a
concept of momentum in his commentary to Aristotle's Physics. Aristotle claimed that everything
that is moving must be kept moving by something. For example, a thrown ball must be kept
moving by motions of the air. Most writers continued to accept Aristotle's theory until the time of
Galileo, but a few were skeptical. Philoponus pointed out the absurdity in Aristotle's claim that
motion of an object is promoted by the same air that is resisting its passage. He proposed instead
that an impetus was imparted to the object in the act of throwing it.[44] Ibn Sīnā (also known by
his Latinized name Avicenna) read Philoponus and published his own theory of motion in The
Book of Healing in 1020. He agreed that an impetus is imparted to a projectile by the thrower;
but unlike Philoponus, who believed that it was a temporary virtue that would decline even in a
vacuum, he viewed it as persistent, requiring external forces such as air resistance to dissipate
it.[45][46][47] The work of Philoponus, and possibly that of Ibn Sīnā,[47] was read and refined by the
European philosophers Peter Olivi and Jean Buridan. Buridan, who in about 1350 was made
rector of the University of Paris, referred to impetus being proportional to the weight times the
speed. Moreover, Buridan's theory was different from his predecessor's in that he did not
consider impetus to be self-dissipating, asserting that a body would be arrested by the forces of
air resistance and gravity which might be opposing its impetus.[48][49]

René Descartes believed that the total "quantity of motion" in the universe is conserved, where
the quantity of motion is understood as the product of size and speed. This should not be read as
a statement of the modern law of momentum, since he had no concept of mass as distinct from
weight and size, and more importantly he believed that it is speed rather than velocity that is
conserved. So for Descartes if a moving object were to bounce off a surface, changing its
direction but not its speed, there would be no change in its quantity of motion.[50][51] Galileo, later,
in his Two New Sciences, used the Italian word impeto.

Leibniz, in his "Discourse on Metaphysics", gave an argument against Descartes' construction of the conservation of the "quantity of motion" using an example of dropping blocks of different sizes through different distances. He points out that force is conserved but quantity of motion, construed
as the product of size and speed of an object, is not conserved.[52]

The first correct statement of the law of conservation of momentum was by English
mathematician John Wallis in his 1670 work, Mechanica sive De Motu, Tractatus Geometricus:
"the initial state of the body, either of rest or of motion, will persist" and "If the force is greater
than the resistance, motion will result".[53] Wallis uses momentum for quantity of motion, and vis for force. Newton's
Philosophiæ Naturalis Principia Mathematica, when it was first published in 1687, showed a
similar casting around for words to use for the mathematical momentum. His Definition II
defines quantitas motus, "quantity of motion", as "arising from the velocity and quantity of
matter conjointly", which identifies it as momentum.[54] Thus when in Law II he refers to mutatio
motus, "change of motion", being proportional to the force impressed, he is generally taken to
mean momentum and not motion.[55] It remained only to assign a standard term to the quantity of
motion. The first use of "momentum" in its proper mathematical sense is not clear but by the
time of Jennings's Miscellanea in 1721, five years before the final edition of Newton's Principia
Mathematica, momentum M or "quantity of motion" was being defined for students as "a
rectangle", the product of Q and V, where Q is "quantity of material" and V is "velocity", s/t.[56]

Poynting vector
Dipole radiation of a dipole vertically in the page showing electric field strength (colour) and
Poynting vector (arrows) in the plane of the page.

In physics, the Poynting vector represents the directional energy flux density (the rate of energy
transfer per unit area, in units of watts per square metre (W·m−2)) of an electromagnetic field. It
is named after its discoverer John Henry Poynting. Oliver Heaviside and Nikolay Umov arrived at the same result independently.

Contents
 1 Definition
o 1.1 Interpretation

o 1.2 Invariance to adding a curl of a field

 2 Formulation in terms of microscopic fields


 3 Time-averaged Poynting vector
 4 Examples and applications
o 4.1 In a coaxial cable

o 4.2 Resistive dissipation

o 4.3 In plane waves

 4.3.1 Derivation
o 4.4 Radiation pressure

o 4.5 In static fields


Definition
In Poynting's original paper and in many textbooks, it is usually denoted by S or N, and defined
as:[1][2]

S = E × H,

which is often called the Abraham form, where E is the electric field and H the magnetic field.[3][4] (All bold letters represent vectors.)
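A one-line numerical check of this definition (the field values are illustrative, roughly those of a 100 V/m plane wave in free space):

import numpy as np

E = np.array([100.0, 0.0, 0.0])          # electric field along x (V/m), assumed
H = np.array([0.0, 100.0/377.0, 0.0])    # magnetic field along y (A/m), E divided by ~377 ohm

S = np.cross(E, H)                       # S = E x H
print(S)                                 # ~[0, 0, 26.5] W/m^2, along the propagation direction z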

Occasionally an alternative definition in terms of electric field E and the magnetic flux density B
is used. It is even possible to combine the displacement field D with the magnetic flux density B
to get the Minkowski form of the Poynting vector, or use D and H to construct another.[5] The
choice has been controversial: Pfeifer et al.[6] summarize the century-long dispute between
proponents of the Abraham and Minkowski forms.

The Poynting vector represents the particular case of an energy flux vector for electromagnetic
energy. However, any type of energy has its direction of movement in space, as well as its
density, so energy flux vectors can be defined for other types of energy as well, e.g., for
mechanical energy. The Umov–Poynting vector[7] discovered by Nikolay Umov in 1874
describes energy flux in liquid and elastic media in a completely generalized view.

Interpretation

The Poynting vector appears in Poynting's theorem, an energy-conservation law,[4]

∂u/∂t = −∇ ⋅ S − Jf ⋅ E,

where Jf is the current density of free charges and u is the electromagnetic energy density,

u = ½(E ⋅ D + B ⋅ H),

where E is the electric field, D the electric displacement field, B the magnetic flux density, and H the magnetic field vector.

The first term on the right-hand side represents the net electromagnetic energy flow into a small
volume, while the second term represents the subtracted portion of the work done by free
electrical currents that are not necessarily converted into electromagnetic energy (dissipation,
heat). In this definition, bound electrical currents are not included in this term, and instead
contribute to S and u.
Note that u can only be given if linear, nondispersive and uniform materials are involved, i.e., if
the constitutive relations can be written as

D = εE,  B = μH,
where ε and μ are constants (which depend on the material through which the energy flows),
called the permittivity and permeability, respectively, of the material.[4]

This practically limits Poynting's theorem in this form to fields in vacuum. A generalization to
dispersive materials is possible under certain circumstances at the cost of additional terms and
the loss of their clear physical interpretation.[4]

The Poynting vector is usually interpreted as an energy flux, but this is only strictly correct for
electromagnetic radiation. The more general case is described by Poynting's theorem above,
where it occurs as a divergence, which means that it can only describe the change of energy
density in space, rather than the flow.

Invariance to adding a curl of a field

Since the Poynting vector only occurs in Poynting's theorem as a divergence ∇ ⋅ S, the Poynting vector S is arbitrary to the extent that one can add a curl of a field F to S,[4]

S′ = S + ∇ × F,
since the divergence of the curl term is zero: ∇ ⋅ (∇ × F) = 0 for an arbitrary field F (see Vector
calculus identities).
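The identity is easy to verify symbolically. The sympy sketch below uses an arbitrary (hypothetical) field F and confirms that ∇ ⋅ (∇ × F) = 0, so adding ∇ × F to S leaves the divergence, and hence Poynting's theorem, unchanged.

import sympy as sp

x, y, z = sp.symbols('x y z')
Fx, Fy, Fz = x**2*sp.sin(y), y*sp.exp(z), x*y*z   # an arbitrary field F, chosen for illustration

curl = (sp.diff(Fz, y) - sp.diff(Fy, z),
        sp.diff(Fx, z) - sp.diff(Fz, x),
        sp.diff(Fy, x) - sp.diff(Fx, y))
div_curl = sp.simplify(sp.diff(curl[0], x) + sp.diff(curl[1], y) + sp.diff(curl[2], z))
print(div_curl)   # 0: the added term contributes nothing to the divergence of S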

This property is used in quasi-electrostatic regimes to describe, for instance, energy propagating through waves in piezoelectric materials. In such cases magnetic fields are negligible and a local flux of energy can be defined based on electrical quantities only. In the general case we can express the divergence of the Poynting vector as

∇ ⋅ S = −H ⋅ ∂B/∂t − E ⋅ ∂D/∂t − E ⋅ Jf.

The fourth of Maxwell's equations (Ampère's law with Maxwell's correction) reads

∇ × H = Jf + ∂D/∂t,

where Jf is the current density due to free charges. In dielectric materials it reduces to

∇ × H = ∂D/∂t.

Combining the two previous results (and using E = −∇V in the quasi-electrostatic regime) leads to the quasi-electrostatic divergence

∇ ⋅ S = ∇V ⋅ ∂D/∂t = ∇ ⋅ (V ∂D/∂t).

A new "magnetic free" Poynting vector leading to the same divergence can therefore be defined as

S′ = V ∂D/∂t,

where V is the electrostatic potential.

Bondar & Bastien[8] demonstrate, in the case of the parallel-plate capacitor, that both S and S′, although mutually orthogonal, lead to the same overall energy balance.

It is often thought that using a vector other than the classical Poynting vector will lead to inconsistencies in a relativistic description of electromagnetic fields, where energy and momentum should be defined locally in terms of the stress–energy tensor.[citation needed] However, such a transformation is consistent with quantum electrodynamics, where photons have no defined trajectories but only a probability of being emitted or absorbed.

Formulation in terms of microscopic fields


In some cases, it may be more appropriate to define the Poynting vector S as

S = (1/μ0) E × B,

where μ0 is the magnetic constant. It can be derived directly from Maxwell's equations in terms of total charge and current and the Lorentz force law only.

The corresponding form of Poynting's theorem is

∂u/∂t = −∇ ⋅ S − J ⋅ E,

where J is the total current density and the energy density u is

u = ½(ε0E² + B²/μ0),

where ε0 is the electric constant.

The two alternative definitions of the Poynting vector are equivalent in vacuum or in non-magnetic materials, where B = μ0H. In all other cases, they differ in that S = (1/μ0) E × B and the corresponding u are purely radiative, since the dissipation term −J ⋅ E covers the total current, while the definition in terms of H has contributions from bound currents which are then absent from the dissipation term.[9]

Since only the microscopic fields E and B are needed in the derivation of S = (1/μ0) E × B, assumptions about any material possibly present can be completely avoided, and Poynting's
vector as well as the theorem in this definition are universally valid, in vacuum as in all kinds of
material. This is especially true for the electromagnetic energy density, in contrast to the case
above.[9]

Time-averaged Poynting vector


For time-periodic sinusoidal electromagnetic fields, the average power flow per unit time is often more useful, and can be found by treating the electric and magnetic fields as complex vectors as follows (the star * denotes the complex conjugate):

S = E × H = Re(Em e^(jωt)) × Re(Hm e^(jωt)) = ½ Re(Em × Hm*) + ½ Re(Em × Hm e^(2jωt)).

The second term on the right is a sinusoid whose time average is zero, giving the time-averaged Poynting vector

⟨S⟩ = ½ Re(Em × Hm*).

Examples and applications


In a coaxial cable

Poynting vector in a coaxial cable, shown in red

For example, the Poynting vector within the dielectric insulator of a coaxial cable is nearly
parallel to the wire axis (assuming no fields outside the cable and a wavelength longer than the
diameter of the cable, including DC). Electrical energy delivered to the load is flowing entirely
through the dielectric between the conductors. Very little energy flows in the conductors
themselves, since the electric field strength is nearly zero. The energy flowing in the conductors
flows radially into the conductors and accounts for energy lost to resistive heating of the
conductor. No energy flows outside the cable, either, since there the magnetic fields of inner and
outer conductors cancel to zero.
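This can be made quantitative by integrating the Poynting vector over the annular cross-section of the dielectric. In the Python sketch below (the cable geometry and operating point are assumed values) the integral reproduces the circuit power V·I:

import numpy as np

a, b = 1.0e-3, 3.5e-3        # inner and outer conductor radii (m), assumed
V, I = 50.0, 2.0             # voltage between the conductors (V) and current (A), assumed

r = np.linspace(a, b, 2001)
E = V / (r * np.log(b / a))  # radial electric field in the dielectric
H = I / (2 * np.pi * r)      # azimuthal magnetic field
S = E * H                    # axial Poynting vector magnitude

integrand = 2 * np.pi * r * S                                    # flux through a thin ring of radius r
P = np.sum(0.5 * (integrand[1:] + integrand[:-1]) * np.diff(r))  # trapezoidal integration
print(P, V * I)              # both ~100 W: the circuit power flows through the dielectric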

Resistive dissipation

If a conductor has significant resistance, then, near the surface of that conductor, the Poynting
vector would be tilted toward and impinge upon the conductor. Once the Poynting vector enters
the conductor, it is bent to a direction that is almost perpendicular to the surface.[10] This is a
consequence of Snell's law and the very slow speed of light inside a conductor. See Hayt page
402[11] for the definition and computation of the speed of light in a conductor. Inside the
conductor, the Poynting vector represents energy flow from the electromagnetic field into the
wire, producing resistive Joule heating in the wire. For a derivation that starts with Snell's law
see Reitz page 454.[12]

In plane waves

In a propagating sinusoidal linearly polarized electromagnetic plane wave of a fixed frequency, the Poynting vector always points in the direction of propagation while oscillating in magnitude. The time-averaged magnitude of the Poynting vector is

⟨S⟩ = E0²/(2μ0c) = ε0cE0²/2,

where E0 is the peak value of the electric field and c is the speed of light in free space. This time-
averaged value is also called the irradiance or intensity I.

Derivation

In an electromagnetic plane wave, E and B are always perpendicular to each other and to the direction of propagation. Moreover, their amplitudes are related according to

B0 = E0/c,

and their time and position dependences are

E(r, t) = E0 cos(ωt − k ⋅ r),  B(r, t) = B0 cos(ωt − k ⋅ r),

where ω is the frequency of the wave and k is the wave vector. The time- and position-dependent magnitude of the Poynting vector is then

|S(r, t)| = (1/μ0) E0B0 cos²(ωt − k ⋅ r) = (1/μ0c) E0² cos²(ωt − k ⋅ r) = ε0cE0² cos²(ωt − k ⋅ r).

In the last step, we used the equality ε0μ0 = 1/c². Since the time- or space-average of cos²(ωt − k ⋅ r) is ½, it follows that

⟨S⟩ = ε0cE0²/2.

Note that, quantitatively, the Poynting vector can only be evaluated from a prior knowledge of the distribution of the electric and magnetic fields, which is found by applying boundary conditions to a particular set of physical circumstances, for example a dipole antenna. The E and H field distributions are therefore the primary object of any analysis, while the Poynting vector emerges as a by-product.
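As a numerical illustration (the peak field is an assumed value), the time-averaged formula above gives:

eps0 = 8.854e-12      # permittivity of free space (F/m)
c = 2.998e8           # speed of light (m/s)
E0 = 1000.0           # peak electric field (V/m), assumed

print(0.5 * eps0 * c * E0**2)   # ~1.3e3 W/m^2, about the solar constant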
Radiation pressure

The density of the linear momentum of the electromagnetic field is S/c², where c is the speed of light in free space. The radiation pressure exerted by an electromagnetic wave on the surface of a target is given by

Prad = ⟨S⟩/c,

where ⟨S⟩ is the time-averaged intensity above.

In static fields

Poynting vector in a static field, where E is electric field, H is magnetic field, and S is Poynting
vector

The consideration of the Poynting vector in static fields shows the relativistic nature of the
Maxwell equations and allows a better understanding of the magnetic component of the Lorentz
force, q(v × B). To illustrate, consider the accompanying picture, which describes the Poynting vector in a cylindrical capacitor located in an H field (pointing into the page)
generated by a permanent magnet. Although there are only static electric and magnetic fields, the
calculation of the Poynting vector produces a clockwise circular flow of electromagnetic energy,
with no beginning or end.

While the circulating energy flow may seem nonsensical or paradoxical, it proves to be
absolutely necessary to maintain conservation of momentum. Momentum density is proportional
to energy flow density, so the circulating flow of energy contains an angular momentum. This is
the cause of the magnetic component of the Lorentz force which occurs when the capacitor is
discharged. During discharge, the angular momentum contained in the energy flow is depleted as
it is transferred to the charges of the discharge current crossing the magnetic field.[13]
Maxwell's Equations
Maxwell's equations represent one of the most elegant and concise ways to state the
fundamentals of electricity and magnetism. From them one can develop most of the working
relationships in the field. Because of their concise statement, they embody a high level of
mathematical sophistication and are therefore not generally introduced in an introductory
treatment of the subject, except perhaps as summary relationships.
These basic equations of electricity and magnetism can be used as a starting point for advanced courses, but are usually first encountered as unifying equations after the study of electrical and magnetic phenomena.
Symbols Used
E = Electric field ρ = charge density i = electric current
B = Magnetic field ε0 = permittivity J = current density
D = Electric displacement μ0 = permeability c = speed of light
H = Magnetic field strength M = Magnetization P = Polarization

Maxwell's Equations
Integral form in the absence of magnetic or polarizable media:

I. Gauss' law for electricity:       ∮ E ⋅ dA = q/ε0

II. Gauss' law for magnetism:        ∮ B ⋅ dA = 0

III. Faraday's law of induction:     ∮ E ⋅ ds = −dΦB/dt

IV. Ampere's law:                    ∮ B ⋅ ds = μ0 i + μ0 ε0 dΦE/dt

Maxwell's Equations
Differential form in the absence of magnetic or polarizable media:

I. Gauss' law for electricity:       ∇ ⋅ E = ρ/ε0

II. Gauss' law for magnetism:        ∇ ⋅ B = 0

III. Faraday's law of induction:     ∇ × E = −∂B/∂t

IV. Ampere's law:                    ∇ × B = μ0 J + μ0 ε0 ∂E/∂t

Note: ∇ ⋅ and ∇ × here represent the vector operations divergence and curl, respectively.

Maxwell's Equations
Differential form with magnetic and/or polarizable media:

I. Gauss' law for electricity:       ∇ ⋅ D = ρ

II. Gauss' law for magnetism:        ∇ ⋅ B = 0

III. Faraday's law of induction:     ∇ × E = −∂B/∂t

IV. Ampere's law:                    ∇ × H = J + ∂D/∂t

Note: ∇ ⋅ and ∇ × here represent the vector operations divergence and curl, respectively.
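The differential equations in the absence of media can be verified symbolically for a linearly polarized plane wave. The sympy sketch below (an added illustration, not part of the original material) takes E along x, B along y and propagation along z with ω = ck, and checks the non-trivial components of Faraday's law and Ampere's law; both expressions reduce to zero.

import sympy as sp

x, y, z, t, E0, c, k = sp.symbols('x y z t E0 c k', positive=True)
w = c * k                                  # dispersion relation omega = c k

Ex = E0 * sp.cos(k*z - w*t)                # E along x
By = (E0 / c) * sp.cos(k*z - w*t)          # B along y; all other components vanish

# Faraday: (curl E)_y = dEx/dz must equal -dBy/dt
faraday_y = sp.simplify(sp.diff(Ex, z) + sp.diff(By, t))
# Ampere (no media): (curl B)_x = -dBy/dz must equal mu0*eps0*dEx/dt = (1/c^2) dEx/dt
ampere_x = sp.simplify(-sp.diff(By, z) - sp.diff(Ex, t) / c**2)

print(faraday_y, ampere_x)                 # both 0: the plane wave satisfies Maxwell's equations
# Both Gauss's laws are satisfied trivially, since Ex and By do not depend on x or y.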

Gauss's law
In physics, Gauss's law, also known as Gauss's flux theorem, is a law relating the distribution of electric charge to the resulting electric field.

The law was formulated by Carl Friedrich Gauss in 1835, but was not published until 1867.[1] It
is one of the four Maxwell's equations which form the basis of classical electrodynamics, the
other three being Gauss's law for magnetism, Faraday's law of induction, and Ampère's law with
Maxwell's correction. Gauss's law can be used to derive Coulomb's law,[2] and vice versa.

Contents
 1 Qualitative description of the law
 2 Equation involving E field

o 2.1 Integral form

 2.1.1 Applying the integral form

o 2.2 Differential form

o 2.3 Equivalence of integral and differential forms


 3 Equation involving D field

o 3.1 Free, bound, and total charge

o 3.2 Integral form

o 3.3 Differential form

 4 Equivalence of total and free charge statements

 5 Equation for linear materials

 6 Relation to Coulomb's law

o 6.1 Deriving Gauss's law from Coulomb's law

o 6.2 Deriving Coulomb's law from Gauss's law

Qualitative description of the law


In words, Gauss's law states that:

The net electric flux through any closed surface is equal to 1/ε0 times the net electric charge enclosed within that closed surface.[3]

Gauss's law has a close mathematical similarity with a number of laws in other areas of physics,
such as Gauss's law for magnetism and Gauss's law for gravity. In fact, any "inverse-square law"
can be formulated in a way similar to Gauss's law: For example, Gauss's law itself is essentially
equivalent to the inverse-square Coulomb's law, and Gauss's law for gravity is essentially
equivalent to the inverse-square Newton's law of gravity.

Gauss's law is something of an electrical analogue of Ampère's law, which deals with
magnetism.

The law can be expressed mathematically using vector calculus in integral form and differential form; both are equivalent since they are related by the divergence theorem, also called Gauss's theorem. Each of these forms in turn can also be expressed two ways: in terms of a relation
between the electric field E and the total electric charge, or in terms of the electric displacement
field D and the free electric charge.[4]

Equation involving E field


Gauss's law can be stated using either the electric field E or the electric displacement field D.
This section shows some of the forms with E; the form with D is below, as are other forms with
E.

Integral form
Gauss's law may be expressed as:[5]

ΦE = Q/ε0,
where ΦE is the electric flux through a closed surface S enclosing any volume V, Q is the total
charge enclosed within S, and ε0 is the electric constant. The electric flux ΦE is defined as a
surface integral of the electric field:

ΦE = ∮S E ⋅ dA,
where E is the electric field, dA is a vector representing an infinitesimal element of area,[note 1] and
· represents the dot product of two vectors.

Since the flux is defined as an integral of the electric field, this expression of Gauss's law is
called the integral form.

Applying the integral form

If the electric field is known everywhere, Gauss's law makes it quite easy, in principle, to find the
distribution of electric charge: The charge in any given region can be deduced by integrating the
electric field to find the flux.

However, much more often, it is the reverse problem that needs to be solved: The electric charge
distribution is known, and the electric field needs to be computed. This is much more difficult,
since if you know the total flux through a given surface, that gives almost no information about
the electric field, which (for all you know) could go in and out of the surface in arbitrarily
complicated patterns.

An exception is if there is some symmetry in the situation, which mandates that the electric field
passes through the surface in a uniform way. Then, if the total flux is known, the field itself can
be deduced at every point. Common examples of symmetries which lend themselves to Gauss's
law include cylindrical symmetry, planar symmetry, and spherical symmetry. See the article
Gaussian surface for examples where these symmetries are exploited to compute electric fields.
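A standard worked case is a uniformly charged sphere: spherical symmetry forces E to be radial and uniform over any concentric Gaussian sphere, so E(r)·4πr² = Qenc(r)/ε0. The Python sketch below uses assumed values for the total charge and the radius.

import numpy as np

eps0 = 8.854e-12          # permittivity of free space (F/m)
Q = 1.0e-9                # total charge (C), assumed
R = 0.05                  # radius of the charged sphere (m), assumed

def E(r):
    Q_enc = Q * min(1.0, (r / R)**3)       # enclosed charge grows as r^3 inside the sphere
    return Q_enc / (4 * np.pi * eps0 * r**2)

for r in (0.01, 0.05, 0.1, 0.2):
    print(r, E(r))
# For r >= R the result coincides with Coulomb's law for a point charge Q at the centre.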

Differential form

By the divergence theorem, Gauss's law can alternatively be written in the differential form:

∇ ⋅ E = ρ/ε0,
where ∇ · E is the divergence of the electric field, ε0 is the electric constant, and ρ is the total
electric charge density.
Equivalence of integral and differential forms

Main article: Divergence theorem

The integral and differential forms are mathematically equivalent, by the divergence theorem.

Equation involving D field


See also: Maxwell's equations

Free, bound, and total charge

Main article: Electric polarization

The electric charge that arises in the simplest textbook situations would be classified as "free
charge"—for example, the charge which is transferred in static electricity, or the charge on a
capacitor plate. In contrast, "bound charge" arises only in the context of dielectric (polarizable)
materials. (All materials are polarizable to some extent.) When such materials are placed in an
external electric field, the electrons remain bound to their respective atoms, but shift a
microscopic distance in response to the field, so that they're more on one side of the atom than
the other. All these microscopic displacements add up to give a macroscopic net charge
distribution, and this constitutes the "bound charge".

Although microscopically, all charge is fundamentally the same, there are often practical reasons
for wanting to treat bound charge differently from free charge. The result is that the more
"fundamental" Gauss's law, in terms of E (above), is sometimes put into the equivalent form
below, which is in terms of D and the free charge only.

Integral form

This formulation of Gauss's law, stated in terms of the free charge, is:

ΦD = Qfree,
where ΦD is the D-field flux through a surface S which encloses a volume V, and Qfree is the free
charge contained in V. The flux ΦD is defined analogously to the flux ΦE of the electric field E
through S:

ΦD = ∮S D ⋅ dA.
Differential form

The differential form of Gauss's law, involving free charge only, states:

∇ ⋅ D = ρfree,
where ∇ · D is the divergence of the electric displacement field, and ρfree is the free electric
charge density.

Equivalence of total and free charge statements


The formulations of Gauss's law in terms of the free charge are equivalent to the formulations involving the total charge.

Equation for linear materials


In homogeneous, isotropic, nondispersive, linear materials, there is a simple relationship between E and D:

D = εE,

where ε is the permittivity of the material. For the case of vacuum (aka free space), ε = ε0. Under these circumstances, Gauss's law modifies to

ΦE = Qfree/ε

for the integral form, and

∇ ⋅ E = ρfree/ε

for the differential form.

Relation to Coulomb's law


Deriving Gauss's law from Coulomb's law

Gauss's law can be derived from Coulomb's law.

Note that since Coulomb's law only applies to stationary charges, there is no reason to expect
Gauss's law to hold for moving charges based on this derivation alone. In fact, Gauss's law does
hold for moving charges, and in this respect Gauss's law is more general than Coulomb's law.

Deriving Coulomb's law from Gauss's law

Strictly speaking, Coulomb's law cannot be derived from Gauss's law alone, since Gauss's law
does not give any information regarding the curl of E (see Helmholtz decomposition and
Faraday's law). However, Coulomb's law can be proven from Gauss's law if it is assumed, in
addition, that the electric field from a point charge is spherically-symmetric (this assumption,
like Coulomb's law itself, is exactly true if the charge is stationary, and approximately true if the
charge is in motion).
