
Electromagnetism

Electromagnetism is one of the four fundamental interactions in nature.

The other three are the strong interaction, the weak interaction and gravitation. Electromagnetism is the force that causes the interaction between electrically charged particles; the regions in which this interaction takes place are called electromagnetic fields. Electromagnetism is responsible for practically all the phenomena encountered in daily life, with the exception of gravity. Ordinary matter takes its form as a result of intermolecular forces between individual molecules in matter. Electromagnetism is also the force which holds electrons and protons together inside atoms, which are the building blocks of molecules. This governs the processes involved in chemistry, which arise from interactions between the electrons inside and between atoms.

Electromagnetism manifests as both electric fields and magnetic fields. Both fields are simply different aspects of electromagnetism, and hence are intrinsically related. Thus, a changing electric field generates a magnetic field; conversely, a changing magnetic field generates an electric field. This effect is called electromagnetic induction, and is the basis of operation for electrical generators, induction motors, and transformers. Mathematically speaking, electric and magnetic fields transform into one another under relative motion, as components of a single electromagnetic field tensor. Electric fields are the cause of several common phenomena, such as electric potential (for example, the voltage of a battery) and electric current (for example, the flow of electricity through a flashlight). Magnetic fields are the cause of the force associated with magnets.

In quantum electrodynamics, electromagnetic interactions between charged particles can be calculated using the method of Feynman diagrams, in which we picture messenger particles called virtual photons being exchanged between charged particles. This method can be derived from the field picture through perturbation theory. The theoretical implications of electromagnetism led to the development of special relativity by Albert Einstein in 1905.

History of the theory

See also: History of electromagnetic theory

Originally, electricity and magnetism were thought of as two separate forces. This view changed with the publication of James Clerk Maxwell's 1873 Treatise on Electricity and Magnetism, in which the interactions of positive and negative charges were shown to be regulated by one force. There are four main effects resulting from these interactions, all of which have been clearly demonstrated by experiments:

1. Electric charges attract or repel one another with a force inversely proportional to the square of the distance between them: unlike charges attract, like ones repel (see the sketch after this list).
2. Magnetic poles (or states of polarization at individual points) attract or repel one another in a similar way and always come in pairs: every north pole is yoked to a south pole.
3. An electric current in a wire creates a circular magnetic field around the wire, its direction depending on that of the current.
4. A current is induced in a loop of wire when it is moved towards or away from a magnetic field, or a magnet is moved towards or away from it, the direction of the current depending on that of the movement.
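To make the first effect concrete, here is a minimal Python sketch (added for illustration, not part of the original article) that evaluates Coulomb's inverse-square law, F = k·q1·q2/r², for two point charges; the charge values and separation are arbitrary illustrative numbers.

```python
# Coulomb's inverse-square law: F = k * q1 * q2 / r**2
# Illustrative values only; a negative result means the charges attract.
from scipy.constants import e, epsilon_0
from math import pi

k = 1 / (4 * pi * epsilon_0)          # Coulomb constant, ~8.988e9 N*m^2/C^2

def coulomb_force(q1, q2, r):
    """Force in newtons between point charges q1, q2 (coulombs) separated by r metres."""
    return k * q1 * q2 / r**2

# Example: a proton and an electron 0.1 nm apart (roughly an atomic length scale)
print(coulomb_force(+e, -e, 1e-10))    # ~ -2.3e-8 N (attractive)
```

Doubling r quarters the force, which is exactly the inverse-square behaviour described in item 1.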

While preparing for an evening lecture on 21 April 1820, Hans Christian Ørsted made a surprising observation. As he was setting up his materials, he noticed a compass needle deflected from magnetic north when the electric current from the battery he was using was switched on and off. This deflection convinced him that magnetic fields radiate from all sides of a wire carrying an electric current, just as light and heat do, and that it confirmed a direct relationship between electricity and magnetism. At the time of discovery, Ørsted did not suggest any satisfactory explanation of the phenomenon, nor did he try to represent the phenomenon in a mathematical framework. However, three months later he began more intensive investigations. Soon thereafter he published his findings, proving that an electric current produces a magnetic field as it flows through a wire. The CGS unit of magnetic field strength (the oersted) is named in honor of his contributions to the field of electromagnetism.

His findings resulted in intensive research throughout the scientific community in electrodynamics. They influenced French physicist André-Marie Ampère's development of a single mathematical form to represent the magnetic forces between current-carrying conductors. Ørsted's discovery also represented a major step toward a unified concept of energy. This unification, which was observed by Michael Faraday, extended by James Clerk Maxwell, and partially reformulated by Oliver Heaviside and Heinrich Hertz, is one of the key accomplishments of 19th-century mathematical physics. It had far-reaching consequences, one of which was the understanding of the nature of light. Light and other electromagnetic waves take the form of quantized, self-propagating oscillatory electromagnetic field disturbances called photons. Different frequencies of oscillation give rise to the different forms of electromagnetic radiation, from radio waves at the lowest frequencies, to visible light at intermediate frequencies, to gamma rays at the highest frequencies.

Ørsted was not the only person to examine the relation between electricity and magnetism. In 1802 Gian Domenico Romagnosi, an Italian legal scholar, deflected a magnetic needle by electrostatic charges. Actually, no galvanic current existed in the setup and hence no electromagnetism was present. An account of the discovery was published in 1802 in an Italian newspaper, but it was largely overlooked by the contemporary scientific community.[1]

Overview

The electromagnetic force is one of the four fundamental forces. The other fundamental forces are: the strong nuclear force (which holds quarks together to form nucleons, and whose residual effect binds atomic nuclei together), the weak nuclear force (which causes certain forms of radioactive decay), and the gravitational force. All other forces (e.g. friction) are ultimately derived from these fundamental forces. The electromagnetic force is the one responsible for practically all the phenomena one encounters in daily life, with the exception of gravity. Roughly speaking, all the forces involved in interactions between atoms can be traced to the electromagnetic force acting on the electrically charged protons and electrons inside the atoms. This includes the forces we experience in "pushing" or "pulling" ordinary material objects, which come from the intermolecular forces between the individual molecules in our bodies and those in the objects. It also includes all forms of chemical phenomena, which arise from interactions between electron orbitals.

Classical electrodynamics

Main article: Classical electrodynamics

The scientist William Gilbert proposed, in his De Magnete (1600), that electricity and magnetism, while both capable of causing attraction and repulsion of objects, were distinct effects. Mariners had noticed that lightning strikes had the ability to disturb a compass needle, but the link between lightning and electricity was not confirmed until Benjamin Franklin's proposed experiments in 1752. One of the first to discover and publish a link between man-made electric current and magnetism was Romagnosi, who in 1802 noticed that connecting a wire across a voltaic pile deflected a nearby compass needle. However, the effect did not become widely known until 1820, when Ørsted performed a similar experiment.[2] Ørsted's work influenced Ampère to produce a theory of electromagnetism that set the subject on a mathematical foundation.

An accurate theory of electromagnetism, known as classical electromagnetism, was developed by various physicists over the course of the 19th century, culminating in the work of James Clerk Maxwell, who unified the preceding developments into a single theory and discovered the electromagnetic nature of light. In classical electromagnetism, the electromagnetic field obeys a set of equations known as Maxwell's equations, and the electromagnetic force is given by the Lorentz force law. One of the peculiarities of classical electromagnetism is that it is difficult to reconcile with classical mechanics, but it is compatible with special relativity. According to Maxwell's equations, the speed of light in a vacuum is a universal constant, dependent only on the electrical permittivity and magnetic permeability of free space. This violates Galilean invariance, a long-standing cornerstone of classical mechanics. One way to reconcile the two theories is to assume the existence of a luminiferous aether through which the light propagates. However, subsequent experimental efforts failed to detect the presence of the aether. After important contributions by Hendrik Lorentz and Henri Poincaré, Albert Einstein solved the problem in 1905 with the introduction of special relativity, which replaces classical kinematics with a new theory of kinematics that is compatible with classical electromagnetism. (For more information, see History of special relativity.)
In addition, relativity theory shows that in moving frames of reference a magnetic field transforms to a field with a nonzero electric component and vice versa, thus firmly showing that they are two sides of the same coin; hence the term "electromagnetism". (For more information, see Classical electromagnetism and special relativity.)

The photoelectric effect

Main article: Photoelectric effect

In another paper published in that same year, Albert Einstein undermined the very foundations of classical electromagnetism. His theory of the photoelectric effect (for which he won the Nobel Prize in Physics) posited that light could exist in discrete particle-like quantities, which later came to be known as photons. Einstein's theory of the photoelectric effect extended the insights that appeared in the solution of the ultraviolet catastrophe presented by Max Planck in 1900. In his work, Planck showed that hot objects emit electromagnetic radiation in discrete packets, which leads to a finite total energy emitted as black-body radiation. Both of these results were in direct contradiction with the classical view of light as a continuous wave. Planck's and Einstein's theories were progenitors of quantum mechanics, which, when formulated in 1925, necessitated the invention of a quantum theory of electromagnetism. This theory, completed in the 1940s, is known as quantum electrodynamics (or "QED"), and, in situations where perturbation theory is applicable, is one of the most accurate theories known to physics.
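The quantization described above is usually summarized by the Planck-Einstein relation E = hf, which is not written out in the text above but is standard; the short Python sketch below, added purely for illustration, evaluates the energy of a single photon of green light.

```python
# Photon energy from the Planck-Einstein relation E = h * f (illustrative sketch,
# not from the original article). Constants taken from scipy's CODATA values.
from scipy.constants import h, c, e

wavelength = 550e-9                 # green light, metres
frequency = c / wavelength          # ~5.45e14 Hz
energy_joules = h * frequency       # ~3.6e-19 J
energy_ev = energy_joules / e       # ~2.25 eV

print(f"f = {frequency:.3e} Hz, E = {energy_joules:.3e} J = {energy_ev:.2f} eV")
```

A few electronvolts per photon is exactly the energy scale of the electrons ejected in the photoelectric effect.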

Units

Electromagnetic units are part of a system of electrical units based primarily upon the magnetic properties of electric currents, the fundamental SI unit being the ampere. The units are: the ampere (current), coulomb (charge), farad (capacitance), henry (inductance), ohm (resistance), volt (electric potential), watt (power), tesla (magnetic flux density), and weber (magnetic flux).
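The table further below expresses each of these units in SI base units. As a small illustration, added here and not part of the original article, of how such decompositions are obtained, the following Python sketch composes base-unit exponents, deriving for example the volt as watt per ampere and the tesla as weber per square metre.

```python
# Derive base-unit decompositions by adding/subtracting exponents of (kg, m, s, A).
# Illustrative sketch only; compare the results with the unit table below.
from collections import Counter

def combine(*parts):
    """Sum dimension exponents; each part is (sign, dims) with dims like {'kg': 1, 'm': 2}."""
    total = Counter()
    for sign, dims in parts:
        for base, exp in dims.items():
            total[base] += sign * exp
    return {b: e for b, e in total.items() if e != 0}

watt   = {'kg': 1, 'm': 2, 's': -3}            # W = kg*m^2*s^-3
ampere = {'A': 1}
weber  = {'kg': 1, 'm': 2, 's': -2, 'A': -1}   # Wb = V*s = kg*m^2*s^-2*A^-1
metre2 = {'m': 2}

volt  = combine((+1, watt),  (-1, ampere))     # V = W / A
tesla = combine((+1, weber), (-1, metre2))     # T = Wb / m^2

print(volt)   # {'kg': 1, 'm': 2, 's': -3, 'A': -1}
print(tesla)  # {'kg': 1, 's': -2, 'A': -1}
```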

In the electromagnetic cgs system, electric current is a fundamental quantity defined via Ampère's law and takes the permeability as a dimensionless quantity (relative permeability) whose value in a vacuum is unity. As a consequence, the square of the speed of light appears explicitly in some of the equations interrelating quantities in this system.

SI electromagnetism units

Symbol[3] | Name of quantity | Derived unit | Unit symbol | Base units
I | Electric current | ampere (SI base unit) | A | A (= W/V = C/s)
Q | Electric charge | coulomb | C | A·s
U, ΔV, Δφ; E | Potential difference; electromotive force | volt | V | J/C = kg·m²·s⁻³·A⁻¹
R; Z; X | Electric resistance; impedance; reactance | ohm | Ω | V/A = kg·m²·s⁻³·A⁻²
ρ | Resistivity | ohm metre | Ω·m | kg·m³·s⁻³·A⁻²
P | Electric power | watt | W | V·A = kg·m²·s⁻³
C | Capacitance | farad | F | C/V = kg⁻¹·m⁻²·A²·s⁴
E | Electric field strength | volt per metre | V/m | N/C = kg·m·A⁻¹·s⁻³
D | Electric displacement field | coulomb per square metre | C/m² | A·s·m⁻²
ε | Permittivity | farad per metre | F/m | kg⁻¹·m⁻³·A²·s⁴
χe | Electric susceptibility | (dimensionless) | – | 1
G; Y; B | Conductance; admittance; susceptance | siemens | S | Ω⁻¹ = kg⁻¹·m⁻²·s³·A²
κ, γ, σ | Conductivity | siemens per metre | S/m | kg⁻¹·m⁻³·s³·A²
B | Magnetic flux density, magnetic induction | tesla | T | Wb/m² = kg·s⁻²·A⁻¹ = N·A⁻¹·m⁻¹
Φ | Magnetic flux | weber | Wb | V·s = kg·m²·s⁻²·A⁻¹
H | Magnetic field strength | ampere per metre | A/m | A·m⁻¹
L, M | Inductance | henry | H | Wb/A = V·s/A = kg·m²·s⁻²·A⁻²
μ | Permeability | henry per metre | H/m | kg·m·s⁻²·A⁻²
χ | Magnetic susceptibility | (dimensionless) | – | –

Electromagnetic phenomena

With the exception of gravitation, electromagnetic phenomena as described by quantum electrodynamics (which includes classical electrodynamics as a limiting case) account for almost all physical phenomena observable to the unaided human senses, including light and other electromagnetic radiation, all of chemistry, most of mechanics (excepting gravitation), and of course magnetism and electricity. Magnetic monopoles (and "Gilbert" dipoles) are not strictly electromagnetic phenomena, since in standard electromagnetism, magnetic fields are generated not by true "magnetic charge" but by currents. There are, however, condensed-matter analogs of magnetic monopoles in exotic materials (spin ice) created in the laboratory.[4]

Faraday's law of induction

Faraday's law of induction is a basic law of electromagnetism relating to the operating principles of transformers, inductors, and many types of electrical motors and generators.[1] Faraday's law is applicable to a closed circuit made of thin wire and states that:

The induced electromotive force (EMF) in any closed circuit is equal to the time rate of change of the magnetic flux through the circuit.[1]

Or alternatively:

The EMF generated is proportional to the rate of change of the magnetic flux.

The law strictly holds only when the closed circuit is an infinitely thin wire;[2] for example, a spinning homopolar generator has a constant magnetically induced EMF, but its magnetic flux does not rise perpetually higher and higher, as would be implied by a naive interpretation of the statements above.[2] EMF is defined as the energy available per unit charge that travels once around the wire loop (the unit of EMF is the volt).[2][3][4][5] Equivalently, it is the voltage that would be measured by cutting the wire to create an open circuit, and attaching a voltmeter to the leads. According to the Lorentz force law, the EMF on a wire loop is

$$\mathcal{E} = \oint_{\partial \Sigma} \left( \mathbf{E} + \mathbf{v} \times \mathbf{B} \right) \cdot \mathrm{d}\boldsymbol{\ell},$$

where E is the electric field, B is the magnetic field, v is the velocity of the wire element, and dℓ is an infinitesimal element of the loop ∂Σ.

Faraday's law of induction is closely related to the Maxwell-Faraday equation:[3][2]

$$\nabla \times \mathbf{E} = -\frac{\partial \mathbf{B}}{\partial t},$$

where ∇× denotes curl, E is the electric field, and B is the magnetic flux density.

The Maxwell-Faraday equation is one of the four Maxwell's equations, and therefore plays a fundamental role in the theory of classical electromagnetism.

History

Electromagnetic induction was discovered independently by Michael Faraday and Joseph Henry in 1831; however, Faraday was the first to publish the results of his experiments.[6][7]

Faraday's disk (see homopolar generator). In Faraday's first experimental demonstration of electromagnetic induction (August 29, 1831[8]), he wrapped two wires around opposite sides of an iron torus (an arrangement similar to a modern transformer). Based on his assessment of recently-discovered properties of electromagnets, he expected that when current started to flow in one wire, a sort of wave would travel through the ring and cause some electrical effect on the opposite side. He plugged one wire into a galvanometer, and watched it as he connected the other wire to a battery. Indeed, he saw a transient current (which he called a "wave of electricity") when he connected the wire to the battery, and another when he disconnected it.[9] Within two months, Faraday had found several other manifestations of electromagnetic induction. For example, he saw transient currents when he quickly slid a bar magnet in and out of a coil of wires, and he generated a steady (DC) current by rotating a copper disk near a bar magnet with a sliding electrical lead ("Faraday's disk").[10] Faraday explained electromagnetic induction using a concept he called lines of force. However, scientists at the time widely rejected his theoretical ideas, mainly because they were not formulated mathematically.[11] An exception was Maxwell, who used Faraday's ideas as the basis of his quantitative electromagnetic theory.[11][12][13] In Maxwell's papers, the time varying aspect of electromagnetic induction is expressed as a differential equation which Oliver Heaviside referred to as Faraday's law even though it is slightly different in form from the original version of Faraday's law, and does not describe motional EMF. Heaviside's version (see Maxwell-Faraday equation below) is the form recognized today in the group of equations known as Maxwell's equations. Lenz's law, formulated by Heinrich Lenz in 1834, describes "flux through the circuit", and gives the direction of the induced electromotive force and current resulting from electromagnetic induction (elaborated upon in the examples below).

Faraday's experiment showing induction between coils of wire: The liquid battery (right) provides a current which flows through the small coil (A), creating a magnetic field. When the coils are stationary, no current is induced. But when the small coil is moved in or out of the large coil (B), the magnetic flux through the large coil changes, inducing a current which is detected by the galvanometer (G).[14]

Faraday's law as two different phenomena

Some physicists have remarked that Faraday's law is a single equation describing two different phenomena: the motional EMF generated by a magnetic force on a moving wire (see Lorentz force), and the transformer EMF generated by an electric force due to a changing magnetic field (due to the Maxwell-Faraday equation). James Clerk Maxwell drew attention to this fact in his 1861 paper On Physical Lines of Force. In the latter half of part II of that paper, Maxwell gives a separate physical explanation for each of the two phenomena.[citation needed] A reference to these two aspects of electromagnetic induction is made in some modern textbooks.[15] As Richard Feynman states:[2]

So the "flux rule" that the emf in a circuit is equal to the rate of change of the magnetic flux through the circuit applies whether the flux changes because the field changes or because the circuit moves (or both).... Yet in our explanation of the rule we have used two completely distinct laws for the two cases, for "circuit moves" and for "field changes". We know of no other place in physics where such a simple and accurate general principle requires for its real understanding an analysis in terms of two different phenomena.

Richard P. Feynman, The Feynman Lectures on Physics

Reflection on this apparent dichotomy was one of the principal paths that led Einstein to develop special relativity:

It is known that Maxwell's electrodynamics, as usually understood at the present time, when applied to moving bodies, leads to asymmetries which do not appear to be inherent in the phenomena. Take, for example, the reciprocal electrodynamic action of a magnet and a conductor. The observable phenomenon here depends only on the relative motion of the conductor and the magnet, whereas the customary view draws a sharp distinction between the two cases in which either the one or the other of these bodies is in motion. For if the magnet is in motion and the conductor at rest, there arises in the neighbourhood of the magnet an electric field with a certain definite energy, producing a current at the places where parts of the conductor are situated. But if the magnet is stationary and the conductor in motion, no electric field arises in the neighbourhood of the magnet. In the conductor, however, we find an electromotive force, to which in itself there is no corresponding energy, but which gives rise, assuming equality of relative motion in the two cases discussed, to electric currents of the same path and intensity as those produced by the electric forces in the former case. Examples of this sort, together with unsuccessful attempts to discover any motion of the earth relative to the "light medium," suggest that the phenomena of electrodynamics as well as of mechanics possess no properties corresponding to the idea of absolute rest.

Albert Einstein, On the Electrodynamics of Moving Bodies[16]

Flux through a surface and EMF around a loop

The wire loop (red) forms the boundary of a surface (blue). The black arrows denote any vector field F(r, t) defined throughout space; in the case of Faraday's law, the relevant vector field is the magnetic flux density B, and it is integrated over the blue surface. The red arrow represents the fact that the wire loop may be moving and/or deforming.

The definition of the surface integral relies on splitting the surface into small surface elements. Each element is associated with a vector dA of magnitude equal to the area of the element and with direction normal to the element and pointing outward. Faraday's law of induction makes use of the magnetic flux Φ_B through a hypothetical surface Σ whose boundary is a wire loop. Since the wire loop may be moving, we write Σ(t) for the surface. The magnetic flux is defined by a surface integral:

$$\Phi_B = \iint_{\Sigma(t)} \mathbf{B}(\mathbf{r}, t) \cdot \mathrm{d}\mathbf{A},$$

where dA is an element of surface area of the moving surface Σ(t), B is the magnetic field, and B·dA is a vector dot product. In more visual terms, the magnetic flux through the wire loop is proportional to the number of magnetic flux lines that pass through the loop. When the flux changes (because B changes, or because the wire loop is moved or deformed, or both), Faraday's law of induction says that the wire loop acquires an EMF, denoted ℰ, defined as the energy available per unit charge that travels once around the wire loop (the unit of EMF is the volt). The EMF is given by the rate of change of the magnetic flux:

$$\mathcal{E} = -\frac{\mathrm{d}\Phi_B}{\mathrm{d}t},$$

where ℰ is the magnitude of the electromotive force (EMF) in volts and Φ_B is the magnetic flux in webers. The direction of the electromotive force is given by Lenz's law. For a tightly wound coil of wire, composed of N identical loops, each with the same Φ_B, Faraday's law of induction states that

$$\mathcal{E} = -N \frac{\mathrm{d}\Phi_B}{\mathrm{d}t},$$ [17]

where N is the number of turns of wire and Φ_B is the magnetic flux in webers through a single loop.
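As a concrete illustration of the N-turn form of the law, the following Python sketch (added here, with made-up numbers) estimates the EMF of a 200-turn coil when a uniform field through it collapses linearly; the finite-difference quotient stands in for dΦ_B/dt.

```python
# Estimate the EMF of an N-turn coil from Faraday's law, emf = -N * dPhi/dt.
# Illustrative numbers only: a 200-turn coil of 2 cm radius in a uniform field
# that drops linearly from 0.5 T to 0 T over 10 ms.
import math

N = 200                      # number of turns
radius = 0.02                # loop radius in metres
area = math.pi * radius**2   # area of one loop, ~1.26e-3 m^2

B_initial, B_final = 0.5, 0.0   # tesla
dt = 0.010                      # seconds

dPhi = (B_final - B_initial) * area   # change in flux through ONE loop, in webers
emf = -N * dPhi / dt                  # volts

print(f"Induced EMF ~ {emf:.2f} V")   # ~ +12.57 V while the field is collapsing
```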

The Maxwell-Faraday equation

An illustration of the Kelvin-Stokes theorem with surface Σ, its boundary ∂Σ, and orientation n set by the right-hand rule.

A changing magnetic field creates an electric field; this phenomenon is described by the Maxwell-Faraday equation:[18]

$$\nabla \times \mathbf{E} = -\frac{\partial \mathbf{B}}{\partial t},$$

where ∇× denotes curl, E is the electric field, and B is the magnetic flux density. This equation appears in modern sets of Maxwell's equations and is often referred to as Faraday's law. It can also be written in an integral form by the Kelvin-Stokes theorem:[19]

$$\oint_{\partial \Sigma} \mathbf{E} \cdot \mathrm{d}\boldsymbol{\ell} = -\iint_{\Sigma} \frac{\partial \mathbf{B}}{\partial t} \cdot \mathrm{d}\mathbf{A},$$

where, as indicated in the figure: Σ is a surface bounded by the closed contour ∂Σ, E is the electric field, dℓ is an infinitesimal vector element of the contour ∂Σ, B is the magnetic field, and dA is an infinitesimal vector element of the surface Σ. Its direction is orthogonal to that surface patch, and its magnitude is the area of an infinitesimal patch of surface. Both dℓ and dA have a sign ambiguity; to get the correct sign, the right-hand rule is used, as explained in the article Kelvin-Stokes theorem. For a planar surface Σ, a positive path element dℓ of the curve ∂Σ is defined by the right-hand rule as one that points with the fingers of the right hand when the thumb points in the direction of the normal n to the surface Σ. The integral around ∂Σ is called a path integral or line integral. The surface integral on the right-hand side of the Maxwell-Faraday equation is the explicit expression for the magnetic flux Φ_B through Σ. Notice that a nonzero path integral for E is different from the behavior of the electric field generated by charges. A charge-generated E-field can be expressed as the gradient of a scalar field that is a solution to Poisson's equation, and has a zero path integral. See gradient theorem. The integral equation is true for any path ∂Σ through space, and any surface Σ for which that path is a boundary. If the path is not changing in time, the equation can be rewritten:

$$\oint_{\partial \Sigma} \mathbf{E} \cdot \mathrm{d}\boldsymbol{\ell} = -\frac{\mathrm{d}\Phi_B}{\mathrm{d}t}.$$
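To see the integral form in action, here is a small symbolic check (added for illustration; SymPy is assumed to be available). For a spatially uniform field B(t) = B0·sin(ωt) threading a fixed circular loop of radius R, symmetry gives an azimuthal electric field Eφ = −(R/2)·dB/dt on the loop, and the line integral of E around the loop then equals −dΦ_B/dt, as the sketch confirms.

```python
# Symbolic check of the integral form of the Maxwell-Faraday equation for a
# fixed circular loop of radius R in a uniform, time-varying field B(t).
import sympy as sp

t, R, B0, w = sp.symbols('t R B_0 omega', positive=True)

B = B0 * sp.sin(w * t)                 # uniform B(t) normal to the loop
Phi = sp.pi * R**2 * B                 # flux through the loop
E_phi = -(R / 2) * sp.diff(B, t)       # azimuthal E on the loop (standard symmetry result)
line_integral = 2 * sp.pi * R * E_phi  # circumference times tangential E

# Should print 0: the line integral of E equals -dPhi/dt
print(sp.simplify(line_integral + sp.diff(Phi, t)))
```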

Proof of Faraday's law

The four Maxwell's equations (including the Maxwell-Faraday equation), along with the Lorentz force law, are a sufficient foundation to derive everything in classical electromagnetism.[3][2] Therefore, it is possible to "prove" Faraday's law starting with these equations.[20][21] (In an alternative approach, not shown here but equally valid, Faraday's law could be taken as the starting point and used to "prove" the Maxwell-Faraday equation and/or other laws.)

"Counterexamples" to Faraday's law

Faraday's disc electric generator. The disc rotates with angular rate ω, sweeping the conducting radius circularly in the static magnetic field B. The magnetic Lorentz force v × B drives the current along the conducting radius to the conducting rim, and from there the circuit completes through the lower brush and the axle supporting the disc. Thus, current is generated from mechanical motion.

A counterexample to Faraday's Law when over-broadly interpreted. A wire (solid red lines) connects to two touching metal plates (silver) to form a circuit. The whole system sits in a uniform magnetic field, normal to the page. If the word "circuit" is interpreted as "primary path of current flow" (marked in red), then the magnetic flux through the "circuit" changes dramatically as the plates are rotated, yet the EMF is almost zero, which contradicts Faraday's Law. After Feynman Lectures on Physics Vol. II page 17-3

Although Faraday's law is always true for loops of thin wire, it can give the wrong result if naively extrapolated to other contexts.[2] One example is the homopolar generator (above left): A spinning circular metal disc in a homogeneous magnetic field generates a DC (constant in time) EMF. In Faraday's law, EMF is the time-derivative of flux, so a DC EMF is only possible if the magnetic flux is getting uniformly larger and larger perpetually. But in the generator, the magnetic field is constant and the disc stays in the same position, so no magnetic fluxes are growing larger and larger. So this example cannot be analyzed directly with Faraday's law. Another example, due to Feynman,[2] has a dramatic change in flux through a circuit, even though the EMF is arbitrarily small. See figure and caption above right. In both these examples, the changes in the current path are different from the motion of the material making up the circuit. The electrons in a material tend to follow the motion of the atoms that make up the material, due to scattering in the bulk and work function confinement at the edges. Therefore, motional EMF is generated when a material's atoms are moving through a magnetic field, dragging the electrons with them, thus subjecting the electrons to the Lorentz force. In the homopolar generator, the material's atoms are moving, even though the overall geometry of the circuit is staying the same. In the second example, the material's atoms are almost stationary, even though the overall geometry of the circuit is changing dramatically. On the other hand, Faraday's law always holds for thin wires, because there the geometry of the circuit always changes in a direct relationship to the motion of the material's atoms. Although Faraday's law does not apply to all situations, the Maxwell-Faraday equation and Lorentz force law are always correct and can always be used directly.[2] Electrical generator

Rectangular wire loop rotating at angular velocity ω in a radially outward pointing magnetic field B of fixed magnitude. Current is collected by brushes attached to top and bottom discs, which have conducting rims. This is a simplified version of the drum generator.

Main article: electrical generator

The EMF generated by Faraday's law of induction due to relative movement of a circuit and a magnetic field is the phenomenon underlying electrical generators. When a permanent magnet is moved relative to a conductor, or vice versa, an electromotive force is created. If the wire is connected through an electrical load, current will flow, and thus electrical energy is generated, converting the mechanical energy of motion to electrical energy. For example, the drum generator is based upon the figure to the right. A different implementation of this idea is the Faraday's disc, shown in simplified form on the right. In the Faraday's disc example, the disc is rotated in a uniform magnetic field perpendicular to the disc, causing a current to flow in the radial arm due to the Lorentz force. It is interesting to understand how it arises that mechanical work is necessary to drive this current. When the generated current flows through the conducting rim, a magnetic field is generated by this current through Ampère's circuital law (labeled "induced B" in the figure). The rim thus becomes an electromagnet that resists rotation of the disc (an example of Lenz's law). On the far side of the figure, the return current flows from the rotating arm through the far side of the rim to the bottom brush. The B-field induced by this return current opposes the applied B-field, tending to decrease the flux through that side of the circuit, opposing the increase in flux due to rotation. On the near side of the figure, the return current flows from the rotating arm through the near side of the rim to the bottom brush. The induced B-field increases the flux on this side of the circuit, opposing the decrease in flux due to rotation. Thus, both sides of the circuit generate an EMF opposing the rotation. The energy required to keep the disc moving, despite this reactive force, is exactly equal to the electrical energy generated (plus energy wasted due to friction, Joule heating, and other inefficiencies). This behavior is common to all generators converting mechanical energy to electrical energy.

Electrical motor

Main article: electrical motor

An electrical generator can be run "backwards" to become a motor. For example, with the Faraday disc, suppose a DC current is driven through the conducting radial arm by a voltage. Then by the Lorentz force law, this traveling charge experiences a force in the magnetic field B that will turn the disc in a direction given by Fleming's left-hand rule. In the absence of irreversible effects, like friction or Joule heating, the disc turns at the rate necessary to make dΦ_B/dt equal to the voltage driving the current.
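For the Faraday disc just described, the standard result (not derived in the text above) is that the EMF between axle and rim is the integral of ω·r·B along the radius, which evaluates to ε = ½·B·ω·R². The Python sketch below, with illustrative numbers, evaluates it both in closed form and by numerical integration as a cross-check.

```python
# EMF of a Faraday disc (homopolar generator): emf = integral of omega*r*B dr
# from the axle (r = 0) to the rim (r = R), which evaluates to 0.5*B*omega*R**2.
# Illustrative parameters only.
import numpy as np

B = 0.8                      # tesla, uniform field normal to the disc
omega = 2 * np.pi * 50       # rad/s (50 revolutions per second)
R = 0.10                     # disc radius in metres

closed_form = 0.5 * B * omega * R**2

dr = R / 10_000
r = np.arange(10_000) * dr + dr / 2       # midpoints of 10,000 radial slices
numerical = np.sum(omega * r * B) * dr    # midpoint-rule integral of the EMF density

print(f"closed form: {closed_form:.4f} V, numerical: {numerical:.4f} V")  # both ~1.257 V
```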

Electrical transformer

Main article: transformer

The EMF predicted by Faraday's law is also responsible for electrical transformers. When the electric current in a loop of wire changes, the changing current creates a changing magnetic field. A second wire within reach of this magnetic field will experience this change in magnetic field as a change in its coupled magnetic flux, a dΦ_B/dt. Therefore, an electromotive force is set up in the second loop, called the induced EMF or transformer EMF. If the two ends of this loop are connected through an electrical load, current will flow.

Magnetic flow meter

Main article: magnetic flow meter

Faraday's law is used for measuring the flow of electrically conductive liquids and slurries. Such instruments are called magnetic flow meters. The induced voltage ℰ generated in the magnetic field B due to a conductive liquid moving at velocity v is thus given by:

$$\mathcal{E} = B \ell v,$$

where ℓ is the distance between electrodes in the magnetic flow meter.
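A quick worked example of the flow-meter relation above (illustrative numbers, added here): with a 0.1 T field, electrodes 50 mm apart, and liquid moving at 2 m/s, the induced voltage is about 10 mV.

```python
# Induced voltage in a magnetic flow meter: emf = B * l * v
# (B in tesla, electrode spacing l in metres, flow velocity v in m/s).
B = 0.1          # tesla
l = 0.05         # metres between electrodes
v = 2.0          # metres per second

emf = B * l * v
print(f"Induced voltage: {emf * 1e3:.1f} mV")   # 10.0 mV
```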

Parasitic induction and waste heating

All metal objects moving in relation to a static magnetic field will experience inductive power flow, as do all stationary metal objects in relation to a moving magnetic field. These power flows are occasionally undesirable, resulting in flowing electric current at very low voltage and heating of the metal. There are a number of methods employed to control these undesirable inductive effects.

Electromagnets in electric motors, generators, and transformers do not use solid metal, but instead use thin sheets of metal plate, called laminations. These thin plates reduce the parasitic eddy currents, as described below. Inductive coils in electronics typically use magnetic cores to minimize parasitic current flow; such cores are a mixture of metal powder plus a resin binder that can hold any shape. The binder prevents parasitic current flow through the powdered metal.

Electromagnet laminations

Eddy currents occur when a solid metallic mass is rotated in a magnetic field, because the outer portion of the metal cuts more lines of force than the inner portion; the induced electromotive force, not being uniform, therefore tends to set up currents between the points of greatest and least potential. Eddy currents consume a considerable amount of energy and often cause a harmful rise in temperature.[23] Only five laminations or plates are shown in this example, so as to show the subdivision of the eddy currents. In practical use, the number of

laminations or punchings ranges from 40 to 66 per inch, and brings the eddy current loss down to about one percent. While the plates can be separated by insulation, the voltage is so low that the natural rust/oxide coating of the plates is enough to prevent current flow across the laminations.[23]

This is a rotor approximately 20 mm in diameter from a DC motor used in a CD player. Note the laminations of the electromagnet pole pieces, used to limit parasitic inductive losses.

Parasitic induction within inductors

In this illustration, a solid copper bar inductor on a rotating armature is just passing under the tip of the pole piece N of the field magnet. Note the uneven distribution of the lines of force across the bar inductor. The magnetic field is more concentrated and thus stronger on the left edge of the copper bar (a,b) while the field is weaker on the right edge (c,d). Since the two edges of the bar move with the same velocity, this difference in field strength across the bar creates whorls or current eddies within the copper bar.[24] This is one reason high voltage devices tend to be more efficient than low voltage devices. High voltage devices use many turns of small-gauge wire in motors, generators, and transformers. These many small turns of inductor wire in the electromagnet break up the eddy flows that can form within the large, thick inductors of low voltage, high current devices.

Quantum electrodynamics (QED) is the relativistic quantum field theory of electrodynamics. In essence, it describes how light and matter interact and is the first theory in which full agreement between quantum mechanics and special relativity is achieved. QED mathematically describes all phenomena involving electrically charged particles interacting by means of exchange of photons, and represents the quantum counterpart of classical electrodynamics, giving a complete account of the interaction of matter and light. One of the founding fathers of QED, Richard Feynman, called it "the jewel of physics" for its extremely accurate predictions of quantities like the anomalous magnetic moment of the electron and the Lamb shift of the energy levels of hydrogen.[1] In technical terms, QED can be described as a perturbation theory of the electromagnetic quantum vacuum.

Feynman diagram

Feynman diagrams are a pictorial representation scheme for the mathematical expressions governing the behavior of subatomic particles. The calculation of probability amplitudes in theoretical particle physics requires the use of some rather large and complicated integrals over a large number of variables. These integrals do, however, have a regular structure, and may be represented graphically as Feynman diagrams. A Feynman diagram allows for a simple visualization of what would otherwise be a rather arcane and abstract formula. More precisely, a Feynman diagram is a graphical representation of a perturbative contribution to the transition amplitude or correlation function of a quantum mechanical or statistical field theory. Within the canonical formulation of quantum field theory, a Feynman diagram represents a term in the Wick expansion of the perturbative S-matrix. Alternatively, the path integral formulation of quantum field theory represents the transition amplitude as a weighted sum of all possible histories of the system from the initial to the final state, in terms of either particles or fields. A Feynman diagram is then a contribution of a particular class of particle paths, which join and split as described by the diagram. The transition amplitude is then given as the matrix element of the S-matrix between the initial and the final states of the quantum system. Feynman diagrams were developed by Richard Feynman, and are named after him. There are many applications, primarily in quantum field theory, but also in other fields, e.g., in solid-state theory.

Special relativity

Special relativity (SR, also known as the special theory of relativity or STR) is the physical theory of measurement in inertial frames of reference proposed in 1905 by Albert Einstein (after the considerable and independent contributions of Hendrik Lorentz, Henri Poincaré[1] and others) in the paper "On the Electrodynamics of Moving Bodies".[2] It generalizes Galileo's principle of relativity (that all uniform motion is relative, and that there is no absolute and well-defined state of rest, i.e. no privileged reference frames) from mechanics to all the laws of physics, including both the laws of mechanics and of electrodynamics, whatever they may be.[3] Special relativity incorporates the principle that the speed of light is the same for all inertial observers regardless of the state of motion of the source.[4]

This theory has a wide range of consequences which have been experimentally verified,[5] including counter-intuitive ones such as length contraction, time dilation and relativity of simultaneity, contradicting the classical notion that the duration of the time interval between two events is equal for all observers. (On the other hand, it introduces the space-time interval, which is invariant.) Combined with other laws of physics, the two postulates of special relativity predict the equivalence of matter and energy, as expressed in the mass-energy equivalence formula E = mc², where c is the speed of light in a vacuum.[6][7] The predictions of special relativity agree well with Newtonian mechanics in their common realm of applicability, specifically in experiments in which all velocities are small compared with the speed of light. Special relativity reveals that c is not just the velocity of a certain phenomenon, namely the propagation of electromagnetic radiation (light), but rather a fundamental feature of the way space and time are unified as spacetime.
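To put numbers on two of the consequences mentioned above, the sketch below (illustrative, not from the article) computes the Lorentz factor γ = 1/√(1 − v²/c²) for a speed of 0.9c, the corresponding time dilation, and the rest energy of one kilogram via E = mc².

```python
# Time dilation and mass-energy equivalence, with illustrative numbers.
from math import sqrt
from scipy.constants import c   # speed of light in vacuum, m/s

v = 0.9 * c
gamma = 1 / sqrt(1 - (v / c) ** 2)     # Lorentz factor, ~2.294

proper_time = 1.0                       # one second elapsed on the moving clock
dilated_time = gamma * proper_time      # ~2.29 s measured in the "stationary" frame

mass = 1.0                              # kilogram
rest_energy = mass * c ** 2             # ~8.99e16 joules

print(f"gamma = {gamma:.3f}, dilated time = {dilated_time:.2f} s, E = {rest_energy:.3e} J")
```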
One of the consequences of the theory is that it is impossible for any particle that has rest mass to be accelerated to the speed of light. The theory was originally termed "special" because it applied the principle of relativity only to the special case of inertial reference frames, i.e. frames of reference in uniform relative motion with respect to each other.[8] Einstein developed general relativity to apply the principle in the more general case, that is, to any frame so as to handle general coordinate transformations, and that theory includes the effects of gravity. The term is currently used more generally to refer to any case in which gravitation is not significant. General relativity is the generalization of special relativity to include gravitation. In general relativity, gravity is described using non-Euclidean geometry, so that gravitational effects are represented by curvature of spacetime; special relativity is restricted to flat spacetime. Just as the curvature of the earth's surface is not noticeable in everyday life, the curvature of spacetime can be neglected on small scales, so that locally, special relativity is a valid approximation to general relativity.[9] The presence of gravity becomes undetectable in a sufficiently small, free-falling laboratory.

Radioactive decay

For particle decay in a more general context, see Particle decay. For more information on hazards of various kinds of radiation from decay, see Ionizing radiation.


Alpha decay is one example type of radioactive decay, in which an atomic nucleus emits an alpha particle, and thereby transforms (or 'decays') into an atom with a mass number 4 less and an atomic number 2 less. Many other types of decay are possible.

Radioactive decay is the process by which an atomic nucleus of an unstable atom loses energy by emitting ionizing particles (ionizing radiation). The emission is spontaneous, in that the atom decays without any interaction with another particle from outside the atom (i.e., without a nuclear reaction). Usually, radioactive decay happens due to a process confined to the nucleus of the unstable atom, but, on occasion (as with the different processes of electron capture and internal conversion), an inner electron of the radioactive atom is also necessary to the process. Radioactive decay is a stochastic (i.e., random) process at the level of single atoms, in that, according to quantum theory, it is impossible to predict when a given atom will decay.[1] However, the chance that a given atom will decay is constant over time, so that given a large number of identical atoms (nuclides), the decay rate for the collection is predictable to the extent allowed by the law of large numbers.

The decay, or loss of energy, results when an atom with one type of nucleus, called the parent radionuclide, transforms to an atom with a nucleus in a different state, or a different nucleus, either of which is named the daughter nuclide. Often the parent and daughter are different chemical elements, and in such cases the decay process results in nuclear transmutation. In an example of this, a carbon-14 atom (the "parent") emits radiation (a beta particle and an antineutrino) and transforms to a nitrogen-14 atom (the "daughter"). By contrast, there exist two types of radioactive decay processes (gamma decay and internal conversion decay) that do not result in transmutation, but only decrease the energy of an excited nucleus. This results in an atom of the same element as before but with a nucleus in a lower energy state. An example is the nuclear isomer technetium-99m decaying, by the emission of a gamma ray, to an atom of technetium-99.

Nuclides produced by radioactive decay are called radiogenic nuclides, whether they themselves are stable or not. A number of naturally occurring radionuclides are short-lived radiogenic nuclides that are the daughters of radioactive primordial nuclides (types of radioactive atoms that have been present since the beginning of the Earth and solar system). Other naturally occurring radioactive nuclides are cosmogenic nuclides, formed by cosmic ray bombardment of material in the Earth's atmosphere or crust. For a summary table showing the number of stable nuclides and of radioactive nuclides in each category, see Radionuclide.

The SI unit of activity is the becquerel (Bq). One Bq is defined as one transformation (or decay) per second. Since any reasonably sized sample of radioactive material contains many atoms, a Bq is a tiny measure of activity; amounts on the order of GBq (gigabecquerel, 1 × 10⁹ decays per second) or TBq (terabecquerel, 1 × 10¹² decays per second) are commonly used. Another unit of radioactivity is the curie, Ci, which was originally defined as the amount of radium emanation (radon-222) in equilibrium with one gram of pure radium, isotope Ra-226. At present it is equal, by definition, to the activity of any radionuclide decaying with a disintegration rate of 3.7 × 10¹⁰ Bq.
The use of Ci is currently discouraged by the SI.
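As a small illustration of the activity units just defined (added here, not part of the article), the following Python lines convert between curies and becquerels and show why the becquerel is such a tiny unit.

```python
# Activity unit conversions: 1 Ci = 3.7e10 Bq by definition; 1 Bq = 1 decay/s.
CI_IN_BQ = 3.7e10

activity_ci = 0.5                        # an illustrative half-curie source
activity_bq = activity_ci * CI_IN_BQ     # 1.85e10 Bq
activity_gbq = activity_bq / 1e9         # 18.5 GBq

print(f"{activity_ci} Ci = {activity_bq:.3e} Bq = {activity_gbq:.1f} GBq")
# Decays in one minute from this source:
print(f"{activity_bq * 60:.2e} decays per minute")
```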


Explanation

The trefoil symbol is used to indicate radioactive material.

The neutrons and protons that constitute nuclei, as well as other particles that approach close enough to them, are governed by several interactions. The strong nuclear force, not observed at the familiar macroscopic scale, is the most powerful force over subatomic distances. The electrostatic force is almost always significant, and, in the case of beta decay, the weak nuclear force is also involved. The interplay of these forces produces a number of different phenomena in which energy may be released by rearrangement of particles in the nucleus or the change of one particle into others. The rearrangement is hindered energetically, so that it does not occur immediately. Random quantum vacuum fluctuations are theorized to promote relaxation to a lower energy state (the "decay") in a phenomenon known as quantum tunneling.

One might draw an analogy with a snowfield on a mountain: While friction between the ice crystals may be supporting the snow's weight, the system is inherently unstable with regard to a state of lower potential energy. A disturbance would thus facilitate the path to a state of greater entropy: The system will move towards the ground state, producing heat, and the total energy will be distributable over a larger number of quantum states. Thus, an avalanche results. The total energy does not change in this process, but, because of the law of entropy, avalanches happen only in one direction, and that is toward the "ground state", the state with the largest number of ways in which the available energy could be distributed.

Such a collapse (a decay event) requires a specific activation energy. For a snow avalanche, this energy comes as a disturbance from outside the system, although such disturbances can be arbitrarily small. In the case of an excited atomic nucleus, the arbitrarily small disturbance comes from quantum vacuum fluctuations. A radioactive nucleus (or any excited system in quantum mechanics) is unstable, and can, thus, spontaneously stabilize to a less-excited system. The resulting transformation alters the structure of the nucleus and results in the emission of either a photon or a high-velocity particle that has mass (such as an electron, alpha particle, or other type).

Discovery

Radioactivity was first discovered in 1896 by the French scientist Henri Becquerel, while working on phosphorescent materials. These materials glow in the dark after exposure to light, and he suspected that the glow produced in cathode ray tubes by X-rays might be associated with phosphorescence. He wrapped a photographic plate in black paper and placed various phosphorescent salts on it. All results were negative until he used uranium salts. The result with these compounds was a blackening of the plate. These radiations were called Becquerel rays.

It soon became clear that the blackening of the plate did not have anything to do with phosphorescence, because the plate blackened when the mineral was kept in the dark. Non-phosphorescent salts of uranium and metallic uranium also blackened the plate. It was clear that some form of radiation that could pass through paper was causing the plate to blacken. At first it seemed that the new radiation was similar to the then recently discovered X-rays. Further research by Becquerel, Ernest Rutherford, Paul Villard, Pierre Curie, Marie Curie, and others discovered that this form of radioactivity was significantly more complicated.
Different types of decay can occur, producing very different types of radiation. Rutherford was the first to realize that they all occur according to the same mathematical exponential formula (see below), and to realize that many decay processes resulted in the transmutation of one element to another. The early researchers also discovered that many other chemical elements besides uranium have radioactive isotopes. A systematic search for the total radioactivity in uranium ores also guided Marie Curie to isolate a new element, polonium, and to separate another new element, radium, from barium; the two elements' chemical similarity would otherwise have made them difficult to distinguish.

Danger of radioactive substances

Main article: Ionizing radiation

The danger classification sign of radioactive materials

Alpha particles may be completely stopped by a sheet of paper, beta particles by aluminum shielding. Gamma rays can only be reduced by much more substantial barriers, such as a very thick layer of lead.

Different types of decay of a radionuclide. Vertical: atomic number Z; horizontal: neutron number N.

The dangers of radioactivity and radiation were not immediately recognized. Acute effects of radiation were first observed in the use of X-rays when electrical engineer and physicist Nikola Tesla intentionally subjected his fingers to X-rays in 1896.[2] He published his observations concerning the burns that developed, though he attributed them to ozone rather than to X-rays. His injuries healed later. The genetic effects of radiation, including the effect on cancer risk, were recognized much later. In 1927, Hermann Joseph Muller published research showing genetic effects, and in 1946 was awarded the Nobel Prize for his findings.

Before the biological effects of radiation were known, many physicians and corporations had begun marketing radioactive substances as patent medicine, glow-in-the-dark pigments, and radioactive quackery. Examples were radium enema treatments and radium-containing waters to be drunk as tonics. Marie Curie protested this sort of treatment, warning that the effects of radiation on the human body were not well understood (Curie later died from aplastic anemia, which was likely caused by exposure to ionizing radiation). By the 1930s, after a number of cases of bone necrosis and death in enthusiasts, radium-containing medical products had been largely removed from the market.

Types of decay

As for types of radioactive radiation, it was found that an electric or magnetic field could split such emissions into three types of beams. For lack of better terms, the rays were given the alphabetic names alpha, beta, and gamma, still in use today. While alpha decay was seen only in heavier elements (atomic number 52, tellurium, and greater), the other two types of decay were seen in all of the elements. In analyzing the nature of the decay products, it was obvious from the direction of the electromagnetic forces produced upon the radiations by external magnetic and electric fields that alpha rays carried a positive charge, beta rays carried a negative charge, and gamma rays were neutral. From the magnitude of deflection, it was clear that alpha particles were much more massive than beta particles. Passing alpha particles through a very thin glass window and trapping them in a discharge tube allowed researchers to study the emission spectrum of the resulting gas, and ultimately prove that alpha particles are helium nuclei. Other experiments showed the similarity between classical beta radiation and cathode rays: They are both streams of electrons. Likewise, gamma radiation and X-rays were found to be similar high-energy electromagnetic radiation.

The relationship between types of decay also began to be examined: For example, gamma decay was almost always found associated with other types of decay, occurring at about the same time, or afterward. Gamma decay as a separate phenomenon (with its own half-life, now termed isomeric transition) was found in natural radioactivity to be a result of the gamma decay of excited metastable nuclear isomers, in turn created from other types of decay. Although alpha, beta, and gamma were found most commonly, other types of decay were eventually discovered. Shortly after the discovery of the positron in cosmic ray products, it was realized that the same process that operates in classical beta decay can also produce positrons (positron emission). In an analogous process, instead of emitting positrons and neutrinos, some proton-rich nuclides were found to capture their own atomic electrons (electron capture), and emit only a neutrino (and usually also a gamma ray). Each of these types of decay involves the capture or emission of nuclear electrons or positrons, and acts to move a nucleus toward the ratio of neutrons to protons that has the least energy for a given total number of nucleons (neutrons plus protons).

Shortly after the discovery of the neutron in 1932, it was discovered by Enrico Fermi that certain rare decay reactions yield neutrons as a decay particle (neutron emission). Isolated proton emission was eventually observed in some elements. It was also found that some heavy elements may undergo spontaneous fission into products that vary in composition. In a phenomenon called cluster decay, specific combinations of neutrons and protons (atomic nuclei) other than alpha particles (helium nuclei) were found to be spontaneously emitted from atoms, on occasion. Other types of radioactive decay that emit previously seen particles were found, but by different mechanisms. An example is internal conversion, which results in electron and sometimes high-energy photon emission, even though it involves neither beta nor gamma decay. This type of decay (like isomeric transition gamma decay) does not transmute one element to another. Rare events that involve a combination of two beta-decay type events happening simultaneously (see below) are known. Any decay process that does not violate conservation of energy or momentum laws (and perhaps other particle conservation laws) is permitted to happen, although not all have been detected. An interesting example (discussed in a final section) is bound state beta decay of rhenium-187. In this process, an inverse of electron capture, beta electron-decay of the parent nuclide is not accompanied by beta electron emission, because the beta particle has been captured into the K-shell of the emitting atom. An antineutrino, however, is emitted.

Decay modes in table form

Radionuclides can undergo a number of different reactions. These are summarized in the following table. A nucleus with mass number A and atomic number Z is represented as (A, Z). The column "Daughter nucleus" indicates the difference between the new nucleus and the original nucleus. Thus, (A − 1, Z) means that the mass number is one less than before, but the atomic number is the same as before.

Mode of decay | Participating particles | Daughter nucleus

Decays with emission of nucleons:
Alpha decay | An alpha particle (A = 4, Z = 2) emitted from the nucleus | (A − 4, Z − 2)
Proton emission | A proton ejected from the nucleus | (A − 1, Z − 1)
Neutron emission | A neutron ejected from the nucleus | (A − 1, Z)
Double proton emission | Two protons ejected from the nucleus simultaneously | (A − 2, Z − 2)
Spontaneous fission | Nucleus disintegrates into two or more smaller nuclei and other particles | (varies)
Cluster decay | Nucleus emits a specific type of smaller nucleus (A1, Z1) smaller than, or larger than, an alpha particle | (A − A1, Z − Z1) + (A1, Z1)

Different modes of beta decay:
β− decay | A nucleus emits an electron and an electron antineutrino | (A, Z + 1)
Positron emission (β+ decay) | A nucleus emits a positron and an electron neutrino | (A, Z − 1)
Electron capture | A nucleus captures an orbiting electron and emits a neutrino; the daughter nucleus is left in an excited, unstable state | (A, Z − 1)
Bound state beta decay | A nucleus beta decays to an electron and antineutrino, but the electron is not emitted, as it is captured into an empty K-shell; the daughter nucleus is left in an excited and unstable state. This process is suppressed except in ionized atoms that have K-shell vacancies. | (A, Z + 1)
Double beta decay | A nucleus emits two electrons and two antineutrinos | (A, Z + 2)
Double electron capture | A nucleus absorbs two orbital electrons and emits two neutrinos; the daughter nucleus is left in an excited and unstable state | (A, Z − 2)
Electron capture with positron emission | A nucleus absorbs one orbital electron, emits one positron and two neutrinos | (A, Z − 2)
Double positron emission | A nucleus emits two positrons and two neutrinos | (A, Z − 2)

Transitions between states of the same nucleus:
Isomeric transition | Excited nucleus releases a high-energy photon (gamma ray) | (A, Z)
Internal conversion | Excited nucleus transfers energy to an orbital electron, which is ejected from the atom | (A, Z)

Radioactive decay results in a reduction of summed rest mass, once the released energy (the disintegration energy) has escaped in some way (for example, the products might be captured and cooled, and the heat allowed to escape). Although decay energy is sometimes defined as associated with the difference between the mass of the parent nuclide and the mass of the decay products, this is true only of rest mass measurements, where some energy has been removed from the product system. This is true because the decay energy must always carry mass with it, wherever it appears (see mass in special relativity), according to the formula E = mc². The decay energy is initially released as the energy of emitted photons plus the kinetic energy of massive emitted particles (that is, particles that have rest mass). If these particles come to thermal equilibrium with their surroundings and photons are absorbed, then the decay energy is transformed to thermal energy, which retains its mass.
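The carbon-14 example mentioned earlier can be used to put a number on the decay energy: the sketch below (illustrative; the atomic masses are approximate literature values) converts the parent-daughter mass difference to energy with E = mc², giving roughly 0.16 MeV.

```python
# Decay energy (Q-value) of carbon-14 beta decay from the parent/daughter mass
# difference, using E = m*c^2. Atomic masses are approximate literature values.
from scipy.constants import atomic_mass, c, e

m_c14 = 14.003241989   # atomic mass of carbon-14 in unified atomic mass units (approx.)
m_n14 = 14.003074005   # atomic mass of nitrogen-14 in unified atomic mass units (approx.)

delta_m_kg = (m_c14 - m_n14) * atomic_mass     # mass difference in kilograms
q_joules = delta_m_kg * c**2                   # energy released
q_mev = q_joules / e / 1e6                     # convert J -> eV -> MeV

print(f"Q ~ {q_mev:.3f} MeV")   # ~0.156 MeV, shared by the beta particle and antineutrino
```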

Decay energy therefore remains associated with a certain measure of mass of the decay system, called invariant mass. The energy of photons, the kinetic energy of emitted particles, and, later, the thermal energy of the surrounding matter all contribute to calculations of the invariant mass of systems. Thus, while the sum of the rest masses of particles is not conserved in radioactive decay, the system mass and system invariant mass (and also the system total energy) are conserved throughout any decay process. [edit] Decay chains and multiple modes The daughter nuclide of a decay event may also be unstable (radioactive). In this case, it will also decay, producing radiation. The resulting second daughter nuclide may also be radioactive. This can lead to a sequence of several decay events. Eventually, a stable nuclide is produced. This is called a decay chain (see this article for specific details of important natural decay chains).
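The (A, Z) bookkeeping in the decay-mode table above can be expressed in a few lines of code. The sketch below is only illustrative: the mode names and the decay_daughter helper are ad hoc choices, not standard nomenclature. Applying a short sequence of modes to uranium-238 reproduces the first members of the natural chain described next.

# Change in (mass number, atomic number) produced by some common decay modes,
# taken from the table above.
DECAY_MODES = {
    "alpha":             (-4, -2),
    "beta-minus":        ( 0, +1),
    "positron emission": ( 0, -1),
    "electron capture":  ( 0, -1),
    "proton emission":   (-1, -1),
    "neutron emission":  (-1,  0),
}

def decay_daughter(A, Z, mode):
    """Return the (A, Z) of the daughter nucleus for a single decay."""
    dA, dZ = DECAY_MODES[mode]
    return A + dA, Z + dZ

# Start of the natural 238U chain: alpha, beta-minus, beta-minus, alpha, ...
nucleus = (238, 92)                      # uranium-238
for mode in ("alpha", "beta-minus", "beta-minus", "alpha"):
    A, Z = nucleus
    nucleus = decay_daughter(A, Z, mode)
    print(mode, "->", nucleus)
# Prints (234, 90) Th-234, (234, 91) Pa-234, (234, 92) U-234, (230, 90) Th-230.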

Gamma-ray energy spectrum of 238U (inset). Gamma-rays are emitted by decaying nuclides, and the gamma-ray energy can be used to characterize the decay (which nuclide is decaying to which). Here, using the gamma-ray spectrum, several nuclides that are typical of the decay chain have been identified: 226Ra, 214Pb, 214Bi. An example is the natural decay chain of 238U, which is as follows:

uranium-238, which
decays, through alpha emission, with a half-life of 4.5 billion years, to thorium-234, which
decays, through beta emission, with a half-life of 24 days, to protactinium-234, which
decays, through beta emission, with a half-life of 1.2 minutes, to uranium-234, which
decays, through alpha emission, with a half-life of 240 thousand years, to thorium-230, which
decays, through alpha emission, with a half-life of 77 thousand years, to radium-226, which
decays, through alpha emission, with a half-life of 1.6 thousand years, to radon-222, which
decays, through alpha emission, with a half-life of 3.8 days, to polonium-218, which
decays, through alpha emission, with a half-life of 3.1 minutes, to lead-214, which
decays, through beta emission, with a half-life of 27 minutes, to bismuth-214, which
decays, through beta emission, with a half-life of 20 minutes, to polonium-214, which
decays, through alpha emission, with a half-life of 160 microseconds, to lead-210, which
decays, through beta emission, with a half-life of 22 years, to bismuth-210, which
decays, through beta emission, with a half-life of 5 days, to polonium-210, which
decays, through alpha emission, with a half-life of 140 days, to lead-206, which is a stable nuclide.
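The first link of this chain can be checked numerically with the standard two-member Bateman solution for a parent-daughter pair; the sketch below assumes an arbitrary starting amount of 238U and no daughter atoms at t = 0. Because the daughter (234Th) is enormously shorter-lived than the parent, its activity climbs toward the parent's activity (secular equilibrium) within a few daughter half-lives.

import math

# Bateman solution for a two-member chain: parent (1) -> daughter (2) -> ...
# N2(t) = N1(0) * l1/(l2 - l1) * (exp(-l1 t) - exp(-l2 t)), with N2(0) = 0.
def daughter_atoms(N1_0, l1, l2, t):
    return N1_0 * l1 / (l2 - l1) * (math.exp(-l1 * t) - math.exp(-l2 * t))

year = 365.25 * 24 * 3600.0                  # seconds per year
l_U238  = math.log(2) / (4.5e9 * year)       # half-life 4.5 billion years
l_Th234 = math.log(2) / (24.0 * 24 * 3600)   # half-life 24 days

N_U = 1.0e20                                 # arbitrary number of 238U atoms
for days in (24, 120, 240):                  # 1, 5 and 10 daughter half-lives
    t = days * 24 * 3600.0
    N_Th = daughter_atoms(N_U, l_U238, l_Th234, t)
    # In secular equilibrium the activities (lambda * N) become equal.
    print(days, "days: activity ratio Th-234/U-238 =",
          round(l_Th234 * N_Th / (l_U238 * N_U), 4))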

Some radionuclides may have several different paths of decay. For example, approximately 36% of bismuth-212 decays, through alpha emission, to thallium-208, while approximately 64% of bismuth-212 decays, through beta emission, to polonium-212. Both the thallium-208 and the polonium-212 are radioactive daughter products of bismuth-212, and both decay directly to stable lead-208. [edit] Occurrence and applications According to the Big Bang theory, stable isotopes of the lightest five elements (H, He, and traces of Li, Be, and B) were produced very shortly after the emergence of the universe, in a process called Big Bang nucleosynthesis. These lightest stable nuclides (including deuterium) survive to today, but any radioactive isotopes of the light elements produced in the Big Bang (such as tritium) have long since decayed. Isotopes of elements heavier than boron were not produced at all in the Big Bang, and these first five elements do not have any long-lived radioisotopes. All radioactive nuclei are therefore relatively young with respect to the birth of the universe, having formed later in various other types of nucleosynthesis in stars (in particular, supernovae), and also during ongoing interactions between stable isotopes and energetic particles. For example, carbon-14, a radioactive nuclide with a half-life of only 5730 years, is constantly produced in Earth's upper atmosphere due to interactions between cosmic rays and nitrogen. Radioactive decay has been put to use in the technique of radioisotopic labeling, which is used to track the passage of a chemical substance through a complex system (such as a living organism). A sample of the substance is synthesized with a high concentration of unstable atoms. The presence of the substance in one or another part of the system is determined by detecting the locations of decay events. On the premise that radioactive decay is truly random (rather than merely chaotic), it has been used in hardware random-number generators. Because the process is not thought to vary significantly in mechanism over time, it is also a valuable tool in estimating the absolute ages of certain materials. For

geological materials, the radioisotopes and some of their decay products become trapped when a rock solidifies, and can then later be used (subject to many well-known qualifications) to estimate the date of the solidification. These qualifications include checking the results of several simultaneous processes and their products against each other, within the same sample. In a similar fashion, and also subject to qualification, the date of formation of organic matter within a certain period related to the isotope's half-life may be estimated from the known rate of formation of carbon-14 in various eras, because the carbon-14 becomes trapped when the organic matter grows and incorporates the new carbon-14 from the air. Thereafter, the amount of carbon-14 in organic matter decreases according to decay processes that may also be independently cross-checked by other means (such as checking the carbon-14 in individual tree rings, for example). [edit] Radioactive decay rates The decay rate, or activity, of a radioactive substance is characterized by: Constant quantities:

half-life (symbol t1/2): the time taken for the activity of a given amount of a radioactive substance to decay to half of its initial value.
mean lifetime (symbol τ): the average lifetime of a radioactive particle.
decay constant (symbol λ): the inverse of the mean lifetime.

Although these are constants, they are associated with the statistically random behavior of populations of atoms. In consequence, predictions using these constants are less accurate for small numbers of atoms. Time-variable quantities:

Total activity (symbol A): the number of decays an object undergoes per second.
Number of particles (symbol N): the total number of particles in the sample.

Specific activity (symbol SA): the number of decays per second per amount of substance. (The "amount of substance" can be in units of either mass or volume.)
These are related as follows:

t1/2 = ln(2)/λ = τ ln(2)
A = -dN/dt = λN
SA · a0 = λN0

where a0 is the initial amount of active substance, that is, substance that has the same percentage of unstable particles as when the substance was formed.
[edit] Activity measurements
The units in which activities are measured are: becquerel (symbol Bq) = one disintegration per second; curie (Ci) = 3.7 × 10^10 Bq. Low activities are also measured in disintegrations per minute (dpm).
[edit] Decay timing
See also: exponential decay
The decay of an unstable nucleus is entirely random and it is impossible to predict when a particular atom will decay. [1] However, it is equally likely to decay at any time. Therefore, given a sample of a particular radioisotope, the number of decay events dN expected to occur in a small interval of time dt is proportional to the number of atoms present.
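The relation A = λN and the unit definitions above can be checked with a short sketch. The sample below (1 gram of radium-226, half-life taken as about 1600 years) is purely illustrative, but it conveniently reproduces the historical definition of the curie as the activity of one gram of radium.

import math

AVOGADRO = 6.022e23
CURIE_IN_BQ = 3.7e10           # 1 Ci = 3.7e10 decays per second

# Illustrative sample: 1 gram of radium-226 (half-life ~1600 years, molar mass ~226 g/mol).
t_half_s = 1600 * 365.25 * 24 * 3600.0
lam = math.log(2) / t_half_s   # decay constant, 1/s
N = (1.0 / 226.0) * AVOGADRO   # number of atoms in 1 g

A_bq = lam * N                 # activity A = lambda * N, in becquerels
print("activity:", round(A_bq / 1e10, 2), "x 1e10 Bq")     # ~3.7e10 Bq
print("activity:", round(A_bq / CURIE_IN_BQ, 2), "Ci")     # ~1 Ci
print("activity:", round(A_bq * 60, 1), "dpm")             # disintegrations per minute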

If N is the number of atoms, then the probability of decay (dN/N) is proportional to dt:

dN/N = -λ dt

Particular radionuclides decay at different rates, each having its own decay constant (λ). The negative sign indicates that N decreases with each decay event. The solution to this first-order differential equation is the following function:

N(t) = N0 e^(-λt)
N(t) = N0 e^(-t/τ)

where N0 is the value of N at time zero (t = 0). The second equation recognizes that the differential decay constant λ has units of 1/time, and can thus also be represented as 1/τ, where τ is a characteristic time for the process. This characteristic time is called the time constant of the process. In radioactive decay, this process time constant is also the mean lifetime for decaying atoms. Each atom "lives" for a finite amount of time before it decays, and it may be shown that this mean lifetime is the arithmetic mean of all the atoms' lifetimes, and that it is τ, which again is related to the decay constant as follows:

τ = 1/λ

Simulation of many identical atoms undergoing radioactive decay, starting with either 4 atoms (left) or 400 (right). The number at the top indicates how many half-lives have elapsed. Note the law of large numbers: with more atoms, the overall decay is less random. The previous exponential function, in general, represents the result of exponential decay. Although the parent decay distribution follows an exponential, observations of decay times will be limited by a finite integer number of N atoms and will follow Poisson statistics as a consequence of the random nature of the process.
[edit] Half-life
A more commonly used parameter is the half-life. Given a sample of a particular radionuclide, the half-life is the time taken for half the radionuclide's atoms to decay. The half-life is related to the decay constant as follows:

t1/2 = ln(2)/λ = τ ln(2)

This relationship between the half-life and the decay constant shows that highly radioactive substances are quickly spent, while those that radiate weakly endure longer. Half-lives of known radionuclides vary widely, from more than 10^19 years (such as for very nearly stable nuclides, e.g., 209Bi) to 10^-23 seconds for highly unstable ones. The factor of ln 2 in the above relations results from the fact that the concept of "half-life" is merely a way of selecting a different base, other than the natural base e, for the lifetime expression. The time constant τ is the "1/e" life (the time until only 1/e ≈ 36.8% remains) rather than the "1/2" life of a radionuclide, where 50% remains (thus, τ is longer than t1/2). Thus, the following equation can easily be shown to be valid:

N(t) = N0 e^(-t/τ) = N0 2^(-t/t1/2)
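A minimal numerical check of the relations just stated, using the carbon-14 half-life that appears in the example below; the specific times are arbitrary.

import math

t_half = 5730.0                  # years (carbon-14)
lam = math.log(2) / t_half       # decay constant, 1/years
tau = 1.0 / lam                  # mean lifetime

print("tau =", round(tau, 1), "years")                       # ~8267 years, longer than t_half
print("fraction left after tau:", round(math.exp(-1), 4))    # 1/e ~ 0.368

# The exponential and half-life forms of the decay law coincide:
for t in (1000.0, 5730.0, 20000.0):
    frac_exp  = math.exp(-t / tau)
    frac_half = 2.0 ** (-t / t_half)
    print(t, round(frac_exp, 6), round(frac_half, 6))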

Since radioactive decay is exponential with a constant probability, each process could as easily be described with a different constant time period that (for example) gave its "1/3-life" (how long until only 1/3 is left) or "1/10-life" (a time period until only 10% is left), and so on. Thus, the choice of τ and t1/2 for

marker-times is only a matter of convenience and convention. They reflect a fundamental principle only in so much as they show that the same proportion of a given radioactive substance will decay during any time-period that one chooses.
[edit] Example
A sample of 14C, whose half-life is 5730 years, has a decay rate of 14 disintegrations per minute (dpm) per gram of natural carbon. An artifact is found to have a radioactivity of 4 dpm per gram of its present carbon. How old is the artifact? Using the above equation, we have:

N/N0 = e^(-t/τ),

where:

N/N0 = 4/14 ≈ 0.286,
τ = t1/2 / ln 2 ≈ 8267 years,
t = τ ln(N0/N) ≈ 10360 years.
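The same arithmetic as a short sketch (the dpm values are those quoted in the example):

import math

t_half = 5730.0                    # years
tau = t_half / math.log(2)         # mean lifetime, ~8267 years
rate_living = 14.0                 # dpm per gram of natural carbon
rate_artifact = 4.0                # dpm per gram measured in the artifact

# Solve rate_artifact = rate_living * exp(-t / tau) for t.
age = tau * math.log(rate_living / rate_artifact)
print(round(age), "years")         # ~10360 years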

[edit] Changing decay rates
The radioactive decay modes of electron capture and internal conversion are known to be slightly sensitive to chemical and environmental effects that change the electronic structure of the atom, which in turn affects the presence of 1s and 2s electrons that participate in the decay process. A small number of mostly light nuclides are affected. For example, chemical bonds can affect the rate of electron capture to a small degree (in general, less than 1%) depending on the proximity of electrons to the nucleus. In 7Be, a difference of 0.9% has been observed between half-lives in metallic and insulating environments.[3] This relatively large effect is because beryllium is a small atom whose valence electrons are in 2s atomic orbitals, which are subject to electron capture in 7Be because (like all s atomic orbitals in all atoms) they naturally penetrate into the nucleus. Rhenium-187 is a more spectacular example. 187Re normally beta decays to 187Os with a half-life of 41.6 × 10^9 years,[4] but studies using fully ionised 187Re atoms (bare nuclei) have found that this can decrease to only 33 years. This is attributed to "bound-state decay" of the fully ionised atom: the electron is emitted into the "K-shell" (1s atomic orbital), which cannot occur for neutral atoms in which all low-lying bound states are occupied.[5] A number of experiments have found that decay rates of other modes of artificial and naturally occurring radioisotopes are, to a high degree of precision, unaffected by external conditions such as temperature, pressure, the chemical environment, and electric, magnetic, or gravitational fields.[citation needed] Comparison of laboratory experiments over the last century, studies of the Oklo natural nuclear reactor (which exemplified the effects of thermal neutrons on nuclear decay), and astrophysical observations of the luminosity decays of distant supernovae (which occurred far away, so the light has taken a great deal of time to reach us) strongly indicate that decay rates have been constant (at least to within the limitations of small experimental errors) as a function of time as well. Recent results suggest the possibility that decay rates might have a weak dependence (0.5% or less) on environmental factors. It has been suggested that measurements of decay rates of silicon-32, manganese-54, and radium-226 exhibit small seasonal variations (of the order of 0.1%), proposed to be related to either solar flare activity or distance from the Sun.[6][7][8] However, such measurements are highly susceptible to systematic errors, and a subsequent paper[9] has found no evidence for such correlations in six other isotopes, and sets upper limits on the size of any such effects.
Radiation Risk

Because the energies of the particles emitted during radioactive processes are extremely high, nearly all such particles fall in the class of ionizing radiation.

Radiation quantities, with the old standard unit and the SI unit for each:
Activity of source: curie (Ci) / becquerel (Bq)
Absorbed dose: rad / gray (Gy)
Biologically effective dose: rem / sievert (Sv)
Intensity: roentgen (R) / ...

Ionizing Radiation The practical threshold for radiation risk is that of ionization of tissue. Since the ionization energy of a hydrogen atom is 13.6 eV, the level around 10 eV is an approximate threshold. Since the energies associated with nuclear radiation are many orders of magnitude above this threshold, in the MeV range, then all nuclear radiation is ionizing radiation. Likewise, x-rays are ionizing radiation, as is the upper end of the ultraviolet range. In addition, the upper end of the electromagnetic spectrum is ionizing radiation.

All nuclear radiation must be considered to be ionizing radiation!
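One quick way to see where the ionizing boundary falls in the electromagnetic spectrum is to convert photon wavelength to energy with E = hc/λ and compare it with the roughly 10 eV threshold mentioned above. The sample wavelengths below are illustrative only.

HC_EV_NM = 1239.84          # h*c in eV*nm
THRESHOLD_EV = 10.0         # approximate ionization threshold discussed above

# Illustrative wavelengths: visible light, deep ultraviolet, an x-ray, a gamma ray.
for name, wavelength_nm in [("green light", 550.0), ("UV-C", 100.0),
                            ("x-ray", 0.02), ("gamma ray", 0.001)]:
    energy_ev = HC_EV_NM / wavelength_nm
    status = "ionizing" if energy_ev > THRESHOLD_EV else "non-ionizing"
    print(f"{name}: {energy_ev:.1f} eV ({status})")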

Activity of Radioactive Source The curie (Ci) is the old standard unit for measuring the activity of a given radioactive sample. It is equivalent to the activity of 1 gram of radium. It is formally defined by:

1 curie = the amount of material that will produce 3.7 × 10^10 nuclear decays per second.
1 becquerel = the amount of material that will produce 1 nuclear decay per second.
1 curie = 3.7 × 10^10 becquerels.

The becquerel is the more recent SI unit for radioactive source activity.

Radiation units
Intensity of Radiation
The roentgen (R) is a measure of radiation intensity of x-rays or gamma rays. It is formally defined as the radiation intensity required to produce an ionization charge of 0.000258 coulombs per kilogram of air. It is one of the standard units for radiation dosimetry, but it is not applicable to alpha, beta, or other particle emission and does not accurately predict the tissue effects of gamma rays of extremely high energies. The roentgen has mainly been used for calibration of x-ray machines.

Absorbed Dose of Radiation

The rad is a unit of absorbed radiation dose in terms of the energy actually deposited in the tissue. The rad is defined as an absorbed dose of 0.01 joules of energy per kilogram of tissue. The more recent SI unit is the gray, which is defined as 1 joule of deposited energy per kilogram of tissue. To assess the risk of radiation, the absorbed dose is multiplied by the relative biological effectiveness of the radiation to get the biological dose equivalent in rems or sieverts.

Biologically Effective Dose

The biologically effective dose in rems is the radiation dose in rads multiplied by a "quality factor" which is an assessment of the effectiveness of that particular type and energy of radiation. For alpha particles the relative biological effectiveness (rbe) may be as high as 20, so that one rad is equivalent to 20 rems. However, for x-rays and gamma rays, the rbe is taken as one so that the rad and rem are equivalent for those radiation sources. The sievert is equal to 100 rems.
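The unit relations above translate directly into a small conversion sketch. The quality factors used below are the representative values quoted in the text (about 20 for alpha particles, 1 for x-rays and gamma rays); actual regulatory values depend on particle energy.

# Conversion factors from the text.
GRAY_PER_RAD = 0.01        # 1 rad = 0.01 J/kg; 1 gray = 1 J/kg
SIEVERT_PER_REM = 0.01     # 1 sievert = 100 rem

QUALITY_FACTOR = {"alpha": 20.0, "beta": 1.0, "gamma": 1.0, "x-ray": 1.0}

def effective_dose_rem(absorbed_dose_rad, radiation):
    """Biologically effective dose (rem) = absorbed dose (rad) x quality factor."""
    return absorbed_dose_rad * QUALITY_FACTOR[radiation]

dose_rad = 1.0
for kind in ("alpha", "gamma"):
    rem = effective_dose_rem(dose_rad, kind)
    print(kind, ":", dose_rad * GRAY_PER_RAD, "Gy,", rem, "rem =", rem * SIEVERT_PER_REM, "Sv")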

Cyclotron
The cyclotron was one of the earliest types of particle accelerators, and is still used as the first stage of some large multi-stage particle accelerators. It makes use of the magnetic force on a moving charge to bend moving charges into a semicircular path between accelerations by an applied electric field. The applied electric field accelerates charged particles between the "dees" of the magnetic field region. The field is reversed at the cyclotron frequency to accelerate the particles back across the gap.


When the cyclotron principle is used to accelerate electrons, it has historically been called a betatron. The cyclotron principle as applied to electrons is illustrated below.

Note: these illustrations are grossly simplified for demonstration of the cyclotron principle. In current practice, sine waves are used for the acceleration and the "dees" are resonant cavities favoring one frequency. The magnetic fields are typically altered to keep the acceleration condition optimized, even when speeds become high enough that relativistic corrections are necessary.

Cyclotron Frequency

A moving charge in a cyclotron will move in a circular path under the influence of a constant magnetic field. If the time to complete one orbit is calculated:

T = 2πr/v = 2πm/(qB)

it is found that the period is independent of the radius. Therefore, if a square wave is applied at angular frequency ω = qB/m, the charge will spiral outward, increasing in speed.

When a square wave of angular frequency

ω = qB/m

is applied between the two sides of the magnetic poles, the charge will be boosted again at just the right time to accelerate it across the gap. Thus the constant cyclotron frequency can continue to accelerate the charge (so long as it is not relativistic).
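Putting numbers into the relations above (period T = 2πm/(qB), so frequency f = qB/(2πm)): the sketch below evaluates the cyclotron frequency for a proton in an assumed 1.5 T field; both the particle and the field strength are illustrative choices.

import math

Q_PROTON = 1.602e-19       # C
M_PROTON = 1.673e-27       # kg
B = 1.5                    # tesla (illustrative field strength)

omega = Q_PROTON * B / M_PROTON          # angular frequency, rad/s
f = omega / (2 * math.pi)                # cyclotron frequency, Hz
T = 1.0 / f                              # orbital period, independent of radius

print("cyclotron frequency:", round(f / 1e6, 1), "MHz")   # ~22.9 MHz
print("orbital period:", T, "s")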


The four laws of thermodynamics summarize the most important facts of thermodynamics. They define fundamental physical quantities, such as temperature, energy, and entropy, to describe thermodynamic systems, and they describe the transfer of energy as heat and work in thermodynamic processes. The experimentally reproducible distinction between heat and work is at the heart of thermodynamics; about processes in which this distinction cannot be made, thermodynamics has nothing to say. The four principles, or laws, of thermodynamics are:[1][2][3][4][5][6]

The zeroth law of thermodynamics recognizes that if two systems are in thermal equilibrium with a third, they are also in thermal equilibrium with each other, thus supporting the notions of temperature and heat. The first law of thermodynamics distinguishes between two kinds of physical process, namely energy transfer as work, and energy transfer as heat. It tells how this shows the existence of a mathematical quantity called the internal energy of a system. The internal energy obeys the principle of conservation of energy but work and heat are not defined as separately conserved quantities. Equivalently, the first law of thermodynamics states that perpetual motion machines of the first kind are impossible.

The second law of thermodynamics distinguishes between reversible and irreversible physical processes. It tells how this shows the existence of a mathematical quantity called the entropy of a system, and thus it expresses the irreversibility of actual physical processes by the statement that the entropy of an isolated macroscopic system never decreases. Equivalently, perpetual motion machines of the second kind are impossible.

The third law of thermodynamics concerns the entropy of a perfect crystal at absolute zero temperature, and implies that it is impossible to cool a system to exactly absolute zero, or, equivalently, that perpetual motion machines of the third kind are impossible.[7] Classical thermodynamics describes the exchange of work and heat between systems. It has a special interest in systems that are individually in states of thermodynamic equilibrium. Thermodynamic equilibrium is a condition of systems which are adequately described by only macroscopic variables. Every physical system, however, when microscopically examined, shows apparently random microscopic statistical fluctuations in its thermodynamic variables of state (entropy, temperature, pressure, etc.). These microscopic fluctuations are negligible for systems which are nearly in thermodynamic equilibrium and which are only macroscopically examined. They become important, however, for systems which are nearly in thermodynamic equilibrium when they are microscopically examined, and, exceptionally, for macroscopically examined systems that are in critical states[8], and for macroscopically examined systems that are far from thermodynamic equilibrium. There have been suggestions of additional laws, but none of them achieve the generality of the four accepted laws, and they are not mentioned in standard textbooks.[1][2][3][4][5][9][10] The laws of thermodynamics are important fundamental laws in physics and they are applicable in other natural sciences.


[edit] Zeroth law The zeroth law of thermodynamics may be stated as follows: If system A and system B are in thermal equilibrium with system C, then system A is in thermal equilibrium with system B. The zeroth law implies that thermal equilibrium, viewed as a binary relation, is a Euclidean relation. If we assume that the binary relationship is also reflexive, then it follows that thermal equilibrium is an equivalence relation. Equivalence relations are also transitive and symmetric. The symmetric relationship allows one to speak of two systems being "in thermal equilibrium with each other", which gives rise to a simpler statement of the zeroth law: If two systems are in thermal equilibrium with a third, they are in thermal equilibrium with each other. However, this statement requires the implicit assumption of both symmetry and reflexivity, rather than reflexivity alone. The law is also a statement about measurability. To this effect the law allows the establishment of an empirical parameter, the temperature, as a property of a system such that systems in equilibrium with each other have the same temperature. The notion of transitivity permits a system, for example a gas thermometer, to be used as a device to measure the temperature of another system. Although the concept of thermodynamic equilibrium is fundamental to thermodynamics, the need to state it explicitly as a law was not widely perceived until Fowler and Planck stated it in the 1930s, long after the first, second, and third laws were already widely understood and recognized. Hence it was numbered the zeroth law. The importance of the law as a foundation to the earlier laws is that it allows the definition of temperature in a non-circular way without reference to entropy, its conjugate variable. [edit] First law The first law of thermodynamics may be expressed by several forms of the fundamental thermodynamic relation: Increase in internal energy of a system = heat supplied to the system + work done on the system

For a thermodynamic cycle the net heat supplied to the system equals the net work done by the system. The net change in internal energy is the energy that flows in as heat minus the energy that flows out as the work that the system performs on its environment. Work and heat are not defined as separately conserved quantities; they refer only to processes of exchange of energy. These statements entail that the internal energy obeys the principle of conservation of energy. The principle of conservation of energy may be stated in several ways: Energy can be neither created nor destroyed. It can only change forms. In any process in an isolated system, the total energy remains the same. [edit] Second law The second law of thermodynamics asserts the existence of a quantity called the entropy of a system and further states that When two isolated systems in separate but nearby regions of space, each in thermodynamic equilibrium in itself (but not necessarily in equilibrium with each other at first) are at some time allowed to interact, breaking the isolation that separates the two systems, allowing them to exchange matter or energy, they will eventually reach a mutual thermodynamic equilibrium. The sum of the entropies of the initial, isolated systems is less than or equal to the entropy of the final combination of exchanging systems. In the process of reaching a new thermodynamic equilibrium, total entropy has increased, or at least has not decreased.

It follows that the entropy of an isolated macroscopic system never decreases. The second law states that spontaneous natural processes increase entropy overall, or in another formulation that heat can spontaneously be conducted or radiated only from a higher-temperature region to a lower-temperature region, but not the other way around. The second law refers to a wide variety of processes, reversible and irreversible. Its main import is to tell about irreversibility. The prime example of irreversibility is in the transfer of heat by conduction or radiation. It was known long before the discovery of the notion of entropy that when two bodies of different temperatures are connected with each other by purely thermal connection, conductive or radiative, then heat always flows from the hotter body to the colder one. This fact is part of the basic idea of heat, and is related also to the so-called zeroth law, though the textbooks' statements of the zeroth law are usually reticent about that, because they have been influenced by Carathéodory's basing his axiomatics on the law of conservation of energy and trying to make heat seem a theoretically derivative concept instead of an axiomatically accepted one. Šilhavý (1997) notes that Carathéodory's approach does not work for the description of irreversible processes that involve both heat conduction and conversion of kinetic energy into internal energy by viscosity (which is another prime example of irreversibility), because "the mechanical power and the rate of heating are not expressible as differential forms in the 'external parameters'".[11] The second law tells also about kinds of irreversibility other than heat transfer, and the notion of entropy is needed to provide that wider scope of the law. According to the second law of thermodynamics, in a reversible heat transfer, an element of heat transferred, δQ, is the product of the temperature (T), both of the system and of the source or destination of the heat, with the increment (dS) of the system's conjugate variable, its entropy (S):

δQ = T dS [1]
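For the irreversible conduction of a quantity of heat Q from a hot body at Th to a cold body at Tc, each body's entropy change can be evaluated with dS = δQ/T, treating both bodies as reservoirs whose temperatures stay fixed. The numbers below are illustrative; the point is that the total entropy change is positive whenever Th > Tc.

Q = 1000.0        # joules of heat conducted
T_hot = 400.0     # K
T_cold = 300.0    # K

dS_hot = -Q / T_hot            # the hot reservoir loses entropy
dS_cold = Q / T_cold           # the cold reservoir gains more entropy
dS_total = dS_hot + dS_cold    # positive for T_hot > T_cold

print(round(dS_hot, 3), round(dS_cold, 3), round(dS_total, 3), "J/K")   # -2.5, 3.333, 0.833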

The second law defines entropy, which may be viewed not only as a macroscopic variable of classical thermodynamics, but may also be viewed as a measure of deficiency of physical information about the microscopic details of the motion and configuration of the system, given only predictable experimental reproducibility of bulk or macroscopic behavior as specified by macroscopic variables that allow the distinction to be made between heat and work. More exactly, the law asserts that for two given macroscopically specified states of a system, there is a quantity called the difference of entropy between them. The entropy difference tells how much additional microscopic physical information is needed to specify one of the macroscopically specified states, given the macroscopic specification of the other, which is often a conveniently chosen reference state. It is often convenient to presuppose the reference state and not to explicitly state it. A final condition of a natural process always contains microscopically specifiable effects which are not fully and exactly predictable from the macroscopic specification of the initial condition of the process. This is why entropy increases in natural processes. The entropy increase tells how much extra microscopic information is needed to tell the final macroscopically specified state from the initial macroscopically specified state.[12] [edit] Third law The third law of thermodynamics is usually stated as follows: The entropy of a perfect crystal at absolute zero is exactly equal to zero. This is explained in statistical mechanics by the fact that a perfect crystal has only one possible microstate (microscopic state) at extremely low temperatures: The locations and energies of every atom in a crystal are known and fixed. (In quantum mechanics, the location of each atom is not exactly fixed, but the wavefunction of each atom is fixed in the unique ground state for its position in the crystal.) Entropy is related to the number of possible microstates, and with only one microstate, the entropy is exactly zero. The third law is also stated in a form that includes non-crystal systems, such as glasses: As temperature approaches absolute zero, the entropy of a system approaches a minimum. The minimum, not necessarily zero, is called the residual entropy of the system.
Chemical kinetics

Reaction rate tends to increase with concentration - a phenomenon explained by collision theory. Chemical kinetics, also known as reaction kinetics, is the study of rates of chemical processes. Chemical kinetics includes investigations of how different experimental conditions can influence the speed of a chemical reaction and yield information about the reaction's mechanism and transition states, as well as the construction of mathematical models that can describe the characteristics of a chemical reaction. In 1864, Peter Waage and Cato Guldberg pioneered the development of chemical kinetics by formulating the law of mass action, which states that the speed of a chemical reaction is proportional to the quantity of the reacting substances. Chemical kinetics deals with the experimental determination of reaction rates from which rate laws and rate constants are derived. Relatively simple rate laws exist for zero-order reactions (for which reaction rates are independent of concentration), first-order reactions, and second-order reactions, and can be derived for others. In consecutive reactions the rate-determining step often determines the kinetics. In consecutive first-order reactions, a steady state approximation can simplify the rate law. The activation energy for a reaction is experimentally determined through the Arrhenius equation and the Eyring equation. The main factors that influence the reaction rate include: the physical state of the reactants, the concentrations of the reactants, the temperature at which the reaction occurs, and whether or not any catalysts are present in the reaction.


[edit] Factors affecting reaction rate [edit] Nature of the reactants Depending upon what substances are reacting, the reaction rate varies. Acid/base reactions, the formation of salts, and ion exchange are fast reactions. When covalent bond formation takes place between the molecules and when large molecules are formed, the reactions tend to be very slow. The nature and strength of bonds in reactant molecules greatly influence the rate of their transformation into products. Reactions that involve less bond rearrangement proceed faster than reactions that involve more bond rearrangement. [edit] Physical state The physical state (solid, liquid, or gas) of a reactant is also an important factor in the rate of change. When reactants are in the same phase, as in aqueous solution, thermal motion brings them into contact. However, when they are in different phases, the reaction is limited to the interface between the reactants. Reaction can occur only at their area of contact; in the case of a liquid and a gas, at the surface of the liquid. Vigorous shaking and stirring may be needed to bring the reaction to completion. This means that the more finely divided a solid or liquid reactant, the greater its surface area per unit volume, and the more contact it makes with the other reactant, thus the faster the reaction. To make an analogy, for example, when one starts a fire, one uses wood chips and small branches; one doesn't start with large logs right away. In organic chemistry, "on water" reactions are the exception to the rule that homogeneous reactions take place faster than heterogeneous reactions. [edit] Concentration Concentration plays a very important role in reactions, because according to the collision theory of chemical reactions, molecules must collide in order to react together. As the concentration of the reactants increases, the frequency of the molecules colliding increases, striking each other more frequently by being in closer contact at any given point in time. Think of two reactants being in a closed container. All the molecules contained within are colliding constantly. Increasing the amount of one or more of the reactants causes these collisions to happen more often, increasing the reaction rate. [edit] Temperature Temperature usually has a major effect on the rate of a chemical reaction. Molecules at a higher temperature have more thermal energy. Although collision frequency is greater at higher temperatures, this alone contributes only a very small proportion to the increase in rate of reaction. Much more

important is the fact that the proportion of reactant molecules with sufficient energy to react (energy greater than activation energy: E > Ea) is significantly higher and is explained in detail by the Maxwell-Boltzmann distribution of molecular energies. The 'rule of thumb' that the rate of chemical reactions doubles for every 10 °C temperature rise is a common misconception. This may have been generalized from the special case of biological systems, where the Q10 (temperature coefficient) is often between 1.5 and 2.5. A reaction's kinetics can also be studied with a temperature jump approach. This involves using a sharp rise in temperature and observing the relaxation time of the return to equilibrium. [edit] Catalysts

Generic potential energy diagram showing the effect of a catalyst in a hypothetical endothermic chemical reaction. The presence of the catalyst opens a different reaction pathway (shown in red) with a lower activation energy. The final result and the overall thermodynamics are the same. A catalyst is a substance that accelerates the rate of a chemical reaction but remains chemically unchanged afterwards. The catalyst increases the reaction rate by providing a different reaction mechanism with a lower activation energy. In autocatalysis, a reaction product is itself a catalyst for that reaction, leading to positive feedback. Proteins that act as catalysts in biochemical reactions are called enzymes. Michaelis-Menten kinetics describe the rate of enzyme-mediated reactions. A catalyst does not affect the position of the equilibrium, as the catalyst speeds up the backward and forward reactions equally. In certain organic molecules, specific substituents can have an influence on reaction rate in neighbouring group participation. Agitating or mixing a solution will also accelerate the rate of a chemical reaction, as this gives the particles greater kinetic energy, increasing the number of collisions between reactants and therefore the possibility of successful collisions. [edit] Pressure Increasing the pressure in a gaseous reaction will increase the number of collisions between reactants, increasing the rate of reaction. This is because the activity of a gas is directly proportional to the partial pressure of the gas. This is similar to the effect of increasing the concentration of a solution. A reaction's kinetics can also be studied with a pressure jump approach. This involves making fast changes in pressure and observing the relaxation time of the return to equilibrium. [edit] Equilibrium While chemical kinetics is concerned with the rate of a chemical reaction, thermodynamics determines the extent to which reactions occur. In a reversible reaction, chemical equilibrium is reached when the rates of the forward and reverse reactions are equal and the concentrations of the reactants and products no longer change. This is demonstrated by, for example, the Haber-Bosch process for combining nitrogen and hydrogen to produce ammonia. Chemical clock reactions such as the Belousov-Zhabotinsky reaction demonstrate that component concentrations can oscillate for a long time before finally attaining the equilibrium. [edit] Free energy In general terms, the free energy change (ΔG) of a reaction determines whether a chemical change will take place, but kinetics describes how fast the reaction is. A reaction can be very exothermic and have a very positive entropy change but will not happen in practice if the reaction is too slow. If a reactant can produce two different products, the thermodynamically most stable one will generally form except in special circumstances when the reaction is said to be under kinetic reaction control. The Curtin-Hammett principle applies when determining the product ratio for two reactants interconverting rapidly, each going to a different product. It is possible to make predictions about reaction rate constants for a reaction from free-energy relationships.
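The temperature and activation-energy effects described above can be made concrete with the Arrhenius expression k = A·exp(-Ea/RT), which the text cites as the experimental route to the activation energy. The pre-exponential factor and activation energies in the sketch below are illustrative values, not data for any particular reaction.

import math

R = 8.314            # J/(mol K)
A = 1.0e13           # illustrative pre-exponential factor, 1/s

def k_arrhenius(Ea_j_per_mol, T_kelvin):
    return A * math.exp(-Ea_j_per_mol / (R * T_kelvin))

Ea = 80000.0         # 80 kJ/mol, uncatalysed
Ea_cat = 60000.0     # 60 kJ/mol, with a catalyst opening a lower-energy pathway

# Raising the temperature by 10 K increases k by far more than the collision frequency alone would.
print("k(298 K)/k(288 K) =", round(k_arrhenius(Ea, 298) / k_arrhenius(Ea, 288), 2))

# A catalyst raises k at fixed temperature without shifting the equilibrium constant,
# because it accelerates the forward and reverse reactions by the same factor.
print("k_cat/k at 298 K =", round(k_arrhenius(Ea_cat, 298) / k_arrhenius(Ea, 298), 1))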

The kinetic isotope effect is the difference in the rate of a chemical reaction when an atom in one of the reactants is replaced by one of its isotopes. Chemical kinetics provides information on residence time and heat transfer in a chemical reactor in chemical engineering and the molar mass distribution in polymer chemistry. [edit] Applications The mathematical models that describe chemical reaction kinetics provide chemists and chemical engineers with tools to better understand and describe chemical processes such as food decomposition, microorganism growth, stratospheric ozone decomposition, and the complex chemistry of biological systems. These models can also be used in the design or modification of chemical reactors to optimize product yield, more efficiently separate products, and eliminate environmentally harmful by-products. When performing catalytic cracking of heavy hydrocarbons into gasoline and light gas, for example, kinetic models can be used to find the temperature and pressure at which the highest yield of heavy hydrocarbons into gasoline will occur.
Thermodynamic free energy


The thermodynamic free energy is the amount of work that a thermodynamic system can perform. The concept is useful in the thermodynamics of chemical or thermal processes in engineering and science. The free energy is the internal energy of a system less the amount of energy that cannot be used to perform work. This unusable energy is given by the entropy of a system multiplied by the temperature of the system. Like the internal energy, the free energy is a thermodynamic state function.


[edit] Overview Free energy is that portion of any first-law energy that is available to perform thermodynamic work; i.e., work mediated by thermal energy. Free energy is subject to irreversible loss in the course of such work.[1] Since first-law energy is always conserved, it is evident that free energy is an expendable, second-law kind of energy that can perform work within finite amounts of time. Several free energy functions may be formulated based on system criteria. Free energy functions are Legendre transformations of the internal energy. For processes involving a system at constant pressure p and temperature T, the Gibbs free energy is the most useful because, in addition to subsuming any entropy change due merely to heat, it does the same for the pdV work needed to "make space for additional molecules" produced by various processes. (Hence its utility to solution-phase chemists, including biochemists.) The Helmholtz free energy has a special theoretical importance since it is proportional to the logarithm of the partition function for the canonical ensemble in statistical mechanics. (Hence its utility to physicists; and to gas-phase chemists and engineers, who do not want to ignore pdV work.) The historically earlier Helmholtz free energy is defined as A = U - TS, where U is the internal energy, T is the absolute temperature, and S is the entropy. Its change is equal to the amount of reversible work done on, or obtainable from, a system at constant T. Thus its appellation "work content", and the designation A from Arbeit, the German word for work. Since it makes no reference to any quantities involved in work (such as p and V), the Helmholtz function is completely general: its decrease is the maximum amount of work which can be done by a system, and it can increase at most by the amount of work done on a system. The Gibbs free energy is G = H - TS, where H is the enthalpy. (H = U + pV, where p is the pressure and V is the volume.) Historically, these energy terms have been used inconsistently. In physics, free energy most often refers to the Helmholtz free energy, denoted by F, while in chemistry, free energy most often refers to the Gibbs free energy. Since both fields use both functions, a compromise has been suggested, using A to denote the Helmholtz function and G for the Gibbs function. While A is preferred by IUPAC, F is sometimes still in use, and the correct free energy function is often implicit in manuscripts and presentations. [edit] The meaning of free In the 18th and 19th centuries, the theory of heat, i.e., that heat is a form of energy having relation to vibratory motion, was beginning to supplant both the caloric theory, i.e., that heat is a fluid, and the four element theory, in which heat was the lightest of the four elements. In a similar manner, during these years, heat was beginning to be distinguished into different classification categories, such as free heat, combined heat, radiant heat, specific heat, heat capacity, absolute heat, latent caloric, free or perceptible caloric (calorique sensible), among others. In 1780, for example, Laplace and Lavoisier stated: "In general, one can change the first hypothesis into the second by changing the words free heat, combined heat, and heat released into vis viva, loss of vis viva, and increase of vis viva."
In this manner, the total mass of caloric in a body, called absolute heat, was regarded as a mixture of two components; the free or perceptible caloric could affect a thermometer, whereas the other component, the latent caloric, could not.[2] The use of the words latent heat implied a similarity to latent heat in the more usual sense; it was regarded as chemically bound to the molecules of the body. In the adiabatic compression of a gas, the absolute heat remained constant, but the observed rise of temperature indicated that some latent caloric had become "free" or perceptible. During the early 19th century, the concept of perceptible or free caloric began to be referred to as free heat or heat set free. In 1824, for example, the French physicist Sadi Carnot, in his famous Reflections on the Motive Power of Fire, speaks of quantities of heat absorbed or set free in different transformations. In 1882, the German physicist and physiologist Hermann von Helmholtz coined the phrase free energy for the expression E - TS, in which the change in F (or G) determines the amount of energy free for work under the given conditions.[3] Thus, in traditional use, the term free was attached to Gibbs free energy, i.e., for systems at constant pressure and temperature, or to Helmholtz free energy, i.e., for systems at constant volume and temperature, to mean available in the form of useful work.[4] With reference to the Gibbs free energy, we add the qualification that it is the energy free for non-volume work.[5]

An increasing number of books and journal articles do not include the attachment free, referring to G as simply Gibbs energy (and likewise for the Helmholtz energy). This is the result of a 1988 IUPAC meeting to set unified terminologies for the international scientific community, in which the adjective free was supposedly banished.[6] This standard, however, has not yet been universally adopted, and many published articles and books still include the descriptive free.[citation needed] [edit] Application The experimental usefulness of these functions is restricted to conditions where certain variables (T, and V or external p) are held constant, although they also have theoretical importance in deriving Maxwell relations. Work other than pdV may be added, e.g., for electrochemical cells, or work in elastic materials and in muscle contraction. Other forms of work which must sometimes be considered are stress-strain, magnetic, as in adiabatic demagnetization used in the approach to absolute zero, and work due to electric polarization. These are described by tensors. In most cases of interest there are internal degrees of freedom and processes, such as chemical reactions and phase transitions, which create entropy. Even for homogeneous "bulk" materials, the free energy functions depend on the (often suppressed) composition, as do all proper thermodynamic potentials (extensive functions), including the internal energy.

Name: Helmholtz free energy. Symbol: F, A. Formula: U - TS. Natural variables: T, V, {Ni}.
Name: Gibbs free energy. Symbol: G. Formula: U + pV - TS. Natural variables: T, p, {Ni}.

Ni is the number of molecules (alternatively, moles) of type i in the system. If these quantities do not appear, it is impossible to describe compositional changes. The differentials for reversible processes are (assuming only pV work):

dF = -S dT - p dV + Σi μi dNi
dG = -S dT + V dp + Σi μi dNi

where μi is the chemical potential for the i-th component in the system. The second relation is especially useful at constant T and p, conditions which are easy to achieve experimentally, and which approximately characterize living creatures.
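At constant T and p the change ΔG = ΔH - TΔS decides whether a process can proceed spontaneously. The sketch below uses illustrative numbers close to the melting of ice (ΔH about +6010 J/mol, ΔS about +22 J/(mol K)), so the sign of ΔG flips near 273 K.

def delta_G(delta_H, T, delta_S):
    """Gibbs free energy change at constant temperature and pressure."""
    return delta_H - T * delta_S

# Illustrative values roughly matching the melting of ice.
for T in (263.0, 273.15, 283.0):
    dG = delta_G(6010.0, T, 22.0)
    print(T, "K:", round(dG, 1), "J/mol ->", "spontaneous" if dG < 0 else "not spontaneous")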

Any decrease in the Gibbs function of a system is the upper limit for any isothermal, isobaric work that can be captured in the surroundings, or it may simply be dissipated, appearing as T times a corresponding increase in the entropy of the system and/or its surroundings. An example is surface free energy, the increase in free energy when the surface area increases by a unit area. The path integral Monte Carlo method is a numerical approach for determining the values of free energies, based on quantum dynamical principles.
Entropy

Ice melting in a warm room is a common example of increasing entropy,[note 1] described in 1862 by Rudolf Clausius as an increase in the disgregation of the molecules of ice.[1]


Entropy is a thermodynamic property that can be used to determine the energy available for useful work in a thermodynamic process, such as in energy conversion devices, engines, or machines. Such devices can only be driven by convertible energy, and have a theoretical maximum efficiency when converting energy to work. During this work, entropy accumulates in the system, which then dissipates in the form of waste heat. In classical thermodynamics, the concept of entropy is defined phenomenologically by the second law of thermodynamics, which states that the entropy of an isolated system always increases or remains constant. Thus, entropy is also a measure of the tendency of a process, such as a chemical reaction, to be entropically favored, or to proceed in a particular direction. It determines that thermal energy always flows spontaneously from regions of higher temperature to regions of lower temperature, in the form of heat. These processes reduce the state of order of the initial systems, and therefore entropy is an expression of disorder or randomness. This picture is the basis of the modern microscopic interpretation of entropy in statistical mechanics, where entropy is defined as the amount of additional information needed to specify the exact physical state of a system, given its thermodynamic specification. The second law is then a consequence of this definition and the fundamental postulate of statistical mechanics. Thermodynamic entropy has the dimension of energy divided by temperature, and a unit of joules per kelvin (J/K) in the International System of Units. The term entropy was coined in 1865 by Rudolf Clausius based on the Greek ἐντροπία [entropía], a turning toward, from ἐν- [en-] (in) and τροπή [tropē] (turn, conversion).[2][note 2] Definitions and descriptions Thermodynamic entropy is more generally defined from a statistical thermodynamics viewpoint, in which the molecular nature of matter is explicitly considered. Alternatively, entropy can be defined from a classical thermodynamics viewpoint, in which the molecular interactions are not considered and instead the system is viewed from the perspective of the gross motion of very large masses of molecules and the behavior of individual molecules is averaged and obscured. Historically, the classical thermodynamics definition developed first, and it has more recently been extended in the area of nonequilibrium thermodynamics. [edit] Carnot cycle

The concept of entropy arose from Rudolf Clausius's study of the Carnot cycle.[18] In a Carnot cycle, heat (Q1) is absorbed from a 'hot' reservoir, isothermally at the higher temperature T1, and given up isothermally to a 'cold' reservoir, Q2, at a lower temperature, T2. According to Carnot's principle, work can only be done when there is a drop in the temperature, and the work should be some function of the difference in temperature and the heat absorbed. Carnot did not distinguish between Q1 and Q2, since he was working under the hypothesis that caloric theory was valid, and hence heat was conserved.[19] Through the efforts of Clausius and Kelvin, it is now known that the maximum work that can be done is the product of the Carnot efficiency

and the heat absorbed at the hot reservoir:

W = (1 - T2/T1) Q1

In order to derive the Carnot efficiency, Kelvin had to evaluate the ratio of the work done to the heat absorbed in the isothermal expansion with the help of the Carnot-Clapeyron equation, which contained an unknown function known as the Carnot function. The fact that the Carnot function could be the temperature, measured from zero, was suggested by Joule in a letter to Kelvin, and this allowed Kelvin to establish his absolute temperature scale.[20] It is also known that the work is the difference in the heat absorbed at the hot reservoir and rejected at the cold one:

W = Q1 - Q2

Since the latter is valid over the entire cycle, this gave Clausius the hint that at each stage of the cycle, work and heat would not be equal, but rather their difference would be a state function that would vanish upon completion of the cycle. The state function was called the internal energy and it became the first law of thermodynamics.[21]
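The two expressions for the work can be checked numerically. The reservoir temperatures and the heat absorbed in the sketch below are illustrative; it simply confirms that, for the reversible cycle, Q1/T1 equals Q2/T2.

T1, T2 = 500.0, 300.0       # hot and cold reservoir temperatures, K
Q1 = 1000.0                 # heat absorbed from the hot reservoir, J

eta = 1.0 - T2 / T1         # Carnot efficiency
W = eta * Q1                # maximum work
Q2 = Q1 - W                 # heat rejected to the cold reservoir

print("efficiency:", eta)                        # 0.4
print("work:", W, "J, heat rejected:", Q2, "J")
print("Q1/T1 =", Q1 / T1, " Q2/T2 =", Q2 / T2)   # equal for the reversible cycle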

Now equating the two expressions gives

Q1/T1 - Q2/T2 = 0

If we allow Q2 to incorporate the algebraic sign, this becomes a sum and implies that there is a function of state which is conserved over a complete cycle. Clausius called this state function entropy. This is the second law of thermodynamics. Then Clausius asked what would happen if there would be less work done than that predicted by Carnot's principle. The right-hand side of the first equation would be the upper bound of the work, which would now be converted into an inequality

W < (1 - T2/T1) Q1

When the second equation is used to express the work as a difference in heats, we get

Q1 - Q2 < (1 - T2/T1) Q1

or

Q2/T2 > Q1/T1

So more heat is given off to the cold reservoir than in the Carnot cycle. If we denote the entropies by Si = Qi/Ti for the two states, then the above inequality can be written as a decrease in the entropy:

S1 - S2 < 0

The wasted heat implies that irreversible processes must have prevented the cycle from carrying out maximum work.
Approaches to understanding entropy
[edit] Order and disorder
Main article: Entropy (order and disorder)
Entropy has often been loosely associated with the amount of order, disorder, and/or chaos in a thermodynamic system. The traditional qualitative description of entropy is that it refers to changes in the status quo of the system and is a measure of "molecular disorder" and the amount of wasted energy in a dynamical energy transformation from one state or form to another.[38] In this direction, several recent authors have derived exact entropy formulas to account for and measure disorder and order in atomic and molecular assemblies.[39][40][41][42] One of the simpler entropy order/disorder formulas is that derived in 1984 by thermodynamic physicist Peter Landsberg, based on a combination of thermodynamics and information theory arguments. He argues that when constraints operate on a system, such that it is prevented from entering one or more of its possible or permitted states, as contrasted with its forbidden states, the measure of the total amount of disorder in the system is given by:[41][42]

Similarly, the total amount of "order" in the system is given by:

In which CD is the "disorder" capacity of the system, which is the entropy of the parts contained in the permitted ensemble, CI is the "information" capacity of the system, an expression similar to Shannon's channel capacity, and CO is the "order" capacity of the system.[40] [edit] Energy dispersal Main article: Entropy (energy dispersal) The concept of entropy can be described qualitatively as a measure of energy dispersal at a specific temperature. [43] Similar terms have been in use from early in the history of classical thermodynamics, and with the development of statistical thermodynamics and quantum theory,

entropy changes have been described in terms of the mixing or "spreading" of the total energy of each constituent of a system over its particular quantized energy levels. Ambiguities in the terms disorder and chaos, which usually have meanings directly opposed to equilibrium, contribute to widespread confusion and hamper comprehension of entropy for most students. [44] As the second law of thermodynamics shows, in an isolated system internal portions at different temperatures will tend to adjust to a single uniform temperature and thus produce equilibrium. A recently developed educational approach avoids ambiguous terms and describes such spreading out of energy as dispersal, which leads to loss of the differentials required for work even though the total energy remains constant in accordance with the first law of thermodynamics[45] (compare discussion in next section). Physical chemist Peter Atkins, for example, who previously wrote of dispersal leading to a disordered state, now writes that "spontaneous changes are always accompanied by a dispersal of energy".[30][46] [edit] Relating entropy to energy usefulness Following on from the above, it is possible (in a thermal context) to regard entropy as an indicator or measure of the effectiveness or usefulness of a particular quantity of energy.[47] This is because energy supplied at a high temperature (i.e. with low entropy) tends to be more useful than the same amount of energy available at room temperature. Mixing a hot parcel of a fluid with a cold one produces a parcel of intermediate temperature, in which the overall increase in entropy represents a loss which can never be replaced. Thus, the fact that the entropy of the universe is steadily increasing, means that its total energy is becoming less useful: eventually, this will lead to the "heat death of the Universe".
