Physics 201 and Physics 214 Study Guide

Topics: Definition, History, People and Branches of Science and Physics; Fundamental Units (Base Units); Scientific Notation; Rounding Off Numbers; Significant Figures; Conversion of Units; Scalar and Vector Quantities

A. Definition, History, People and Branches of Science

Definition of Science
- a systematized body of knowledge based on facts and principles
- a branch of knowledge or study dealing with a body of facts or truths systematically arranged and showing the operation of general laws
- systematic knowledge of the physical or material world gained through observation and experimentation

Divisions of Science:
(1) Natural Science - the division of science that deals with nature. Subdivisions: (1.a) Biological Science - deals with living things, e.g. Biology, Botany, Zoology, Microbiology; (1.b) Physical Science - deals with non-living things, e.g. Physics, Chemistry, Earth Science, Meteorology, Geology
(2) Social Science - the division of science that deals with society, e.g. anthropology, archaeology, business administration, criminology, economics, education, geography, linguistics, political science, government, sociology, international relations, history, law, psychology, and communication
(3) Applied Science - the application of scientific knowledge in the physical environment, e.g. engineering, medicine, and the other health sciences
(4) Abstract Science - e.g. mathematics and philosophy

History of Science

The Scientific Revolution was the period, roughly between 1500 and 1700, during which the foundations of modern science were laid down in Western Europe. Before this period, nothing like science in the modern sense existed. Throughout the Middle Ages, formal attempts to understand the physical world were developed chiefly in the arts and medical faculties of the medieval universities. This natural philosophy, as it was known, derived almost entirely from the teachings of the ancient Greek philosopher Aristotle. Most of the brilliant legacy of ancient Greek thought had been lost to Western Europe after the fall of the Roman Empire in the 5th century. When this legacy began to be recovered from Byzantine and Islamic sources, where it had to some extent been preserved, it was the works of Aristotle that had the most immediate impact and began to dominate Western philosophical thought.

The learning in the two most powerful faculties of the medieval university system, the faculties of divinity and of law, was based on ancient writings: the Bible and Roman law, as codified by the Byzantine emperor Justinian I in the Corpus Juris Civilis (534; Body of Civil Law). The arts and medical faculties tended to follow suit, with the result that study focused not on the natural world itself, nor on the techniques of practical healing, but instead on the writings of Aristotle and Galen, the equivalent medical authority of ancient times. Concentration on the study of texts meant that there was little or no practical study or experimentation within the university curricula. This tendency to avoid practical subjects was reinforced by Aristotle's own teachings on how natural philosophy should be conducted and on the correct way of determining the truth of things. He rejected the use of mathematics in natural philosophy, for example, because he insisted that natural philosophy should explain phenomena in terms of physical causes. Mathematics, being entirely abstract, could not contribute to this kind of physical explanation.
Even those branches of the mathematical sciences that seemed to come close to explaining the physical world, such as astronomy and optics, were disparaged as mixed sciences that tried to combine the principles of one science, geometry, with those of another, physics, in order to explain the behavior of heavenly bodies or rays of light. But the results, according to Aristotle, could not properly explain anything. Although geometry and arithmetic were taught in the university system, they were always regarded as inferior to natural philosophy and could not be used, therefore, to promote more practical approaches to the understanding of nature. Within the universities, even the study of plants and animals tended to be text-based. Students learned their knowledge of flora, for example, from the compilations of herbal and medicinal plants by the Greek physician Pedanius Dioscorides, leaving more localized and practical knowledge to lay experts in herbal lore
outside the university system. Similarly, alchemy and other empirical (based on experimentation and observation) aspects of the natural magic tradition were pursued almost entirely outside the university system. This fragmentation of studies concerned with the workings of nature was reinforced throughout the Middle Ages by the Roman Catholic Church. After some initial problems with non-Christian aspects of Aristotelian teaching, the Church embraced such teaching as a handmaiden to the so-called queen of the sciences, theology. The Church considered Aristotelian natural philosophy to provide support to religious doctrines, but other naturalist pursuits were considered subversive. The Church tended to be suspicious of natural magic, for example, even though natural magic was simply concerned with the demonstrable properties of material bodies (such as the ability of magnets to attract iron or the ability of certain plants or their extracts to cure diseases). One way or another, therefore, the powerful combination of Aristotelian teachings with Church doctrines tended to exclude direct study and analysis of nature.

The situation began to change during the Renaissance, a period of tremendous cultural achievement in Europe that began in the early 14th century and ended about 1600. The scientific revolution can be seen as a major aspect of the sweeping and far-reaching changes of the Renaissance. In broad terms the scientific revolution had four major aspects: (1) the development of the experimental method, (2) the realization that nature obeys mathematical rules, (3) the use of scientific knowledge to achieve practical aims, and (4) the development of scientific institutions.

Development of the Experimental Method

The Renaissance was the period when the experimental method, still characteristic of science today, began to be developed and came increasingly to be used for understanding all aspects of the physical world. Previously, the natural world had been thought to be comprehensible through thoughtful consideration alone. The experimental method holds that understanding comes through hands-on trial and error under controlled conditions. The experimental method was not in itself new; it had been a common aspect of the natural magic tradition from ancient times. For example, all the experimental techniques used by the English physicist William Gilbert, author of what is generally acknowledged to be the earliest example of an experimental study of a natural phenomenon, De Magnete (1600; Of Magnets, Magnetic Bodies, and the Great Magnet of the Earth, 1890), were first developed by Petrus Peregrinus, a renowned medieval magus (magician). Experimentation was a major aspect of the natural magic tradition and was ready for appropriation by Renaissance natural philosophers who recognized its potential.

The experimental methodology used in magic became more acceptable to Renaissance scholars thanks to the rediscovery of ancient magical writings. Religious opposition to magic had less force after the discovery of various writings allegedly written by Hermes Trismegistus, Zoroaster, Orpheus, and other mythical or legendary characters. We now know these texts were written in the early centuries of the Christian Era and deliberately attributed to such legendary authors, but Renaissance scholars believed they were genuinely ancient documents. This gave the texts great authority and led to increased respect for magical approaches.
Increased emphasis on experience and observation complemented the adoption of manipulative experimental techniques. Andreas Vesalius, an innovative professor of surgery at the University of Padua, claimed to have noticed over 200 errors in Galen's anatomical writings when he performed his own dissections. Scholars had previously relied on Galen's works rather than performing their own dissections. Vesalius's emphasis upon a return to anatomical dissection led to major discoveries. William Harvey, who was taught by one of Vesalius's successors at Padua, discovered that blood circulates through the body. Similarly, the discovery of numerous new species of animals and plants in the New World led to a more empirical approach to natural history. Previously, bestiaries (books containing collected descriptions of animals) and herbals (books containing collected descriptions of plants) had included religious symbolism, legends, superstitions, and other nonnatural lore. Since there was no equivalent information about newly discovered species, however, herbals and bestiaries compiled after the Renaissance were more likely to record properties based on actual observation. The advent of printing also played an important part in the transmission of accurate information. When the circulation of texts depended upon handwritten copies, illustrations were often crudely executed by the various
scribes who copied the book. Subsequent copies of the copy could be unrecognizable. In the preparation of a printed edition, however, a skilled illustrator could be called in to prepare a single illustration that would then be mass-produced. The standard of illustrations improved immeasurably. Almost inevitably the illustrations became more realistic and stimulated a concern for proper observation of natural phenomena. Another important aspect of the new focus on experimentation and observation (empiricism) was the invention of new observational instruments. The Italian astronomer Galileo, for example, used the telescope, first developed for commercial purposes, to make astonishing astronomical observations. His exciting success stimulated the development of a whole range of instruments for studying nature, such as the microscope, thermometer, and barometer.

Mathematization of Nature

The scientific revolution has also been characterized as the period of the mathematization of the world picture. Quantitative information and mathematical analysis of the physical world began to be seen to offer more reliable knowledge than the more qualitative and philosophical analyses that had been typical of traditional natural philosophy. The mathematical sciences had their own long history, but thanks to Aristotle's strictures they had always been kept separate from natural philosophy and regarded as inferior to it. Aristotle's authority weakened throughout the Renaissance, however, as the rediscovery of the writings of other ancient Greek philosophers with views widely divergent from those of Aristotle, such as Plato, Epicurus, and the Stoics, made it plain that he was by no means the only ancient authority. As skepticism became credible in light of the remarkable exposures of the failings of traditional intellectual positions, mathematics became an increasingly powerful force. Mathematicians claimed to deal with absolute knowledge, capable of undeniable proof and so immune from skeptical criticisms. The full story of the rise in status of mathematics is complex and crowded. Notable contributors included the Polish astronomer Nicolaus Copernicus, who claimed that, for no other reason than that the mathematics indicated it, Earth must revolve around the Sun, and the German astronomer Johannes Kepler, who reinforced this idea with astronomical measurements vastly more precise than any that had previously been made. Copernicus's moving Earth demanded a new theory of how moving bodies behave. This theory of motion was effectively initiated as a new mathematical science by Galileo and reached its pinnacle a few decades later in the work of Isaac Newton.

Practical Uses of Scientific Knowledge

Experimentalism and mathematization were both stimulated by an increasing concern that knowledge of nature should be practically useful, bringing distinct benefits to its practitioners, its patrons, or even to people in general. Apart from supporting dubious medical ideas, the only use to which natural philosophy had been put throughout the Middle Ages was for bolstering religion. During the scientific revolution the practical usefulness of knowledge, an assumption previously confined to the magical and the mathematical traditions, was extended to natural philosophy. To a large extent this new emphasis was a result of the demands of new patrons, chiefly wealthy princes, who sought some practical benefit from their financial support for the study of nature.
The requirement that knowledge be practically useful was also in keeping, however, with the claims of the Renaissance humanists that the vita activa (active life) was, contrary to the teachings of the Church, morally superior to the vita contemplativa (contemplative life) of the monk because of the benefits an active life could bring to others. The major spokesman for this new focus in natural philosophy was Francis Bacon, one-time Lord Chancellor of England. Bacon promoted his highly influential vision of a reformed empirical knowledge of nature that he believed would result in immense benefits to mankind.

Development of Scientific Institutions

Finally, the scientific revolution was also a period during which new organizations and institutions were established for the study of the natural world. While the universities still tended to maintain the traditional natural philosophy, the new empirical, mathematical, and practical approaches were encouraged in the royal courts of Europe and in meetings of like-minded individuals, such as the informal gatherings of experimental philosophers in Oxford and London that occurred during the 1650s. The Royal Society of London was established on a formal basis in 1660 by attendees of those earlier gatherings. Although nominally under the patronage of Charles II, the Royal Society received no financial support from the monarchy. A similar French society, the
Académie des Sciences de Paris, however, was set up by Jean-Baptiste Colbert, Louis XIV's controller-general of finance, and its fellows were paid from the treasury. Whatever their precise constitution, the proliferation of collaborative scientific societies testifies to the widespread recognition that, as Bacon wrote, knowledge is power, and knowledge of nature is potentially extremely powerful.

People of Science

Research the birth and death information, schooling, contributions, work information, and awards of the following persons: Plato, Socrates, Aristotle, William Gilbert, Galileo Galilei, Isaac Newton, Nicolaus Copernicus, Johannes Kepler, and Tycho Brahe.

B. Definition, History, People and Branches of Physics

Definition of Physics - the branch of science that deals with matter and energy and their interactions. Matter is anything that occupies space (has volume) and has mass (matter has quantity); energy is the capacity to do work.

Branches of Physics:
- Classical Physics - branches that developed and were recognized before 1900, e.g. optics, acoustics, mechanics, thermodynamics, astronomy, electricity, magnetism, electromagnetism
- Modern Physics - branches that developed during the 20th century to the present, e.g. quantum physics, relativistic physics, plasma physics, elementary particle physics, solid state physics, condensed matter physics, molecular physics, atomic physics, and nuclear physics

Father of Classical Physics: ISAAC NEWTON
Father of Modern Physics: ALBERT EINSTEIN

History of Physics

Physics is closely related to the other natural sciences and, in a sense, encompasses them. Chemistry, for example, deals with the interaction of atoms to form molecules; much of modern geology is largely a study of the physics of the earth and is known as geophysics; and astronomy deals with the physics of the stars and outer space. Even living systems are made up of fundamental particles and, as studied in biophysics and biochemistry, they follow the same types of laws as the simpler particles traditionally studied by a physicist.

The Babylonians, Egyptians, and early Mesoamericans observed the motions of the planets and succeeded in predicting eclipses, but they failed to find an underlying system governing planetary motion. Little was added by the Greek civilization, partly because the uncritical acceptance of the ideas of the major philosophers Plato and Aristotle discouraged experimentation. Some progress was made, however, notably in Alexandria, the scientific center of Greek civilization. There, the Greek mathematician and inventor Archimedes designed various practical mechanical devices, such as levers and screws, and measured the density of solid bodies by submerging them in a liquid.
Other important Greek scientists were the astronomer Aristarchus of Samos, who measured the ratio of the distances from the earth to the sun and the moon; the mathematician, astronomer, and geographer Eratosthenes, who determined the circumference of the earth and drew up a catalog of stars; the astronomer Hipparchus, who discovered the precession of the equinoxes; and the astronomer, mathematician, and geographer Ptolemy, who proposed the system of planetary motion that was named after him, in which the earth was the center and the sun, moon, and stars moved around it in circular orbits.

During the Middle Ages little advance was made in physics, or in any other science, other than the preservation of the classical Greek treatises, for which Arab scholars such as Averroës and Al-Quarashi, the latter also known as Ibn al-Nafis, deserve much credit. The founding of the great medieval universities by monastic orders in Europe, starting in the 13th century, generally failed to advance physics or any experimental investigations. The Italian Scholastic philosopher and theologian Saint Thomas Aquinas, for instance, attempted to demonstrate that the works of Plato and Aristotle were consistent with the Scriptures. The English Scholastic philosopher and scientist Roger Bacon was one of the few philosophers who advocated the experimental method
as the true foundation of scientific knowledge and who also did some work in astronomy, chemistry, optics, and machine design.

In the 16th and 17th centuries, the advent of modern science followed the Renaissance and was ushered in by the highly successful attempt by four outstanding individuals to interpret the behavior of the heavenly bodies. The Polish natural philosopher Nicolaus Copernicus propounded the heliocentric system, in which the planets move around the sun. He was convinced, however, that the planetary orbits were circular, and therefore his system required almost as many complicated elaborations as the Ptolemaic system it was intended to replace. The Danish astronomer Tycho Brahe, believing in the Ptolemaic system, tried to confirm it by a series of remarkably accurate measurements. These provided his assistant, the German astronomer Johannes Kepler, with the data to overthrow the Ptolemaic system, and led to the enunciation of three laws that conformed with a modified heliocentric theory. Galileo, having heard of the invention of the telescope, constructed one of his own and, starting in 1609, was able to confirm the heliocentric system by observing the phases of the planet Venus. He also discovered the surface irregularities of the moon, the four brightest satellites of Jupiter, sunspots, and many stars in the Milky Way. Galileo's interests were not limited to astronomy; by using inclined planes and an improved water clock, he had earlier demonstrated that bodies of different weight fall at the same rate (thus overturning Aristotle's dictums), and that their speed increases uniformly with the time of fall. Galileo's astronomical discoveries and his work in mechanics foreshadowed the work of the 17th-century English mathematician and physicist Sir Isaac Newton, one of the greatest scientists who ever lived.

(For Mechanics and Heat students) Starting about 1665, at the age of 23, Newton enunciated the principles of mechanics, formulated the law of universal gravitation, separated white light into colors, proposed a theory for the propagation of light, and invented differential and integral calculus. Newton's contributions covered an enormous range of natural phenomena: He was thus able to show that not only Kepler's laws of planetary motion but also Galileo's discoveries of falling bodies follow from a combination of his own second law of motion and the law of gravitation, and to predict the appearance of comets, explain the effect of the moon in producing the tides, and explain the precession of the equinoxes.

The subsequent development of physics owes much to Newton's laws of motion, notably the second, which states that the force needed to accelerate an object is proportional to its mass times the acceleration. If the force and the initial position and velocity of a body are given, subsequent positions and velocities can be computed, although the force may vary with time or position; in the latter case, Newton's calculus must be applied. This simple law contained another important aspect: Each body has an inherent property, its inertial mass, which influences its motion. The greater this mass, the slower the change of velocity when a given force is impressed. Even today, the law retains its practical utility, as long as the body is not very small, not very massive, and not moving extremely rapidly.
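
To make the computation concrete, the following minimal Python sketch steps Newton's second law forward in time for a mass on a spring; the force law F = -kx and all numerical values are illustrative assumptions, not part of the original guide. When the force is instead constant, the same loop reproduces Galileo's result that the speed of a falling body increases uniformly with time.

```python
# Minimal sketch: integrating Newton's second law, F = m*a, for a mass on a
# spring (illustrative force law F = -k*x; all values chosen arbitrarily).
m = 0.5          # mass in kg (assumed)
k = 2.0          # spring constant in N/m (assumed)
x, v = 1.0, 0.0  # initial position (m) and velocity (m/s)
dt = 0.001       # time step in seconds

for step in range(5000):   # simulate 5 seconds of motion
    a = -k * x / m          # acceleration from F = -k*x and a = F/m
    v += a * dt             # update velocity
    x += v * dt             # update position

print(f"position after 5 s: {x:.3f} m, velocity: {v:.3f} m/s")
```
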
Newton's third law, expressed simply as "for every action there is an equal and opposite reaction," recognizes, in more sophisticated modern terms, that all forces between particles come in oppositely directed pairs, although not necessarily along the line joining the particles. Newton's more specific contribution to the description of the forces in nature was the elucidation of the force of gravity. Today scientists know that in addition to gravity only three other fundamental forces give rise to all observed properties and activities in the universe: those of electromagnetism, the so-called strong nuclear interactions that bind together the neutrons and protons within atomic nuclei, and the weak interactions between some of the elementary particles that account for the phenomenon of radioactivity. Understanding of the force concept, however, dates from the universal law of gravitation, which recognizes that all material particles, and the bodies that are composed of them, have a property called gravitational mass. This property causes any two particles to exert attractive forces on each other (along the line joining them) that are directly proportional to the product of the masses and inversely proportional to the square of the distance between the particles. This force of gravity governs the motion of the planets about the sun and the earth's own gravitational field, and it may also be responsible for gravitational collapse, the final stage in the life cycle of stars. One of the most important observations of physics is that the gravitational mass of a body (which is the source of one of the forces existing between it and another particle) is effectively the same as its inertial mass, the property that determines the motional response to any force exerted on it. This equivalence, now confirmed
experimentally to within one part in 10^13, holds in the sense of proportionality: when one body has twice the gravitational mass of another, it also has twice the inertial mass. Thus, Galileo's demonstrations, which antedate Newton's laws, that bodies fall to the ground with the same acceleration and hence with the same motion, can be explained by the fact that the gravitational mass of a body, which determines the forces exerted on it, and its inertial mass, which determines the response to that force, cancel out. The full significance of this equivalence between gravitational and inertial masses, however, was not appreciated until Albert Einstein, the theoretical physicist who enunciated the theory of relativity, saw that it led to a further implication: the inability to distinguish between a gravitational field and an accelerated frame of reference.

The force of gravity is the weakest of the four forces of nature when elementary particles are considered. The gravitational force between two protons, for example, which are among the heaviest elementary particles, is at any given distance only 10^-36 the magnitude of the electrostatic force between them, and for two such protons in the nucleus of an atom, this force in turn is many times smaller than the strong nuclear interaction. The dominance of gravity on a macroscopic scale is due to two reasons: (1) Only one type of mass is known, which leads to only one kind of gravitational force, which is attractive. The many elementary particles that make up a large body, such as the earth, therefore exhibit an additive effect of their gravitational forces in line with the addition of their masses, which thus become very large. (2) The gravitational forces act over a large range and decrease only as the square of the distance between two bodies. By contrast, the electric charges of elementary particles, which give rise to electrostatic and magnetic forces, are either positive or negative, or absent altogether. Only particles with opposite charges attract one another, and large composite bodies therefore tend to be electrically neutral and inactive. The nuclear forces, on the other hand, both strong and weak, are extremely short range and become hardly noticeable at distances on the order of 1 million-millionth of an inch.
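
The 10^-36 figure can be checked directly from Newton's law of gravitation, F = G*m1*m2/r^2, and Coulomb's electrostatic law, F = k*q1*q2/r^2; the distance cancels in the ratio. A minimal sketch using standard values of the constants:

```python
# Compare gravity and electrostatic repulsion between two protons.
# The separation r cancels in the ratio, since both forces go as 1/r^2.
G = 6.674e-11      # gravitational constant, N*m^2/kg^2
k = 8.988e9        # Coulomb constant, N*m^2/C^2
m_p = 1.673e-27    # proton mass, kg
q_p = 1.602e-19    # proton charge, C

ratio = (G * m_p**2) / (k * q_p**2)   # F_grav / F_elec at any distance
print(f"gravity / electrostatic = {ratio:.1e}")   # about 8e-37, i.e. ~10^-36
```
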
Despite its macroscopic importance, the force of gravity remains so weak that a body must be very massive before its influence is noticed by another. Thus, the law of universal gravitation was deduced from observations of the motions of the planets long before it could be checked experimentally. Not until 1798 did the British physicist and chemist Henry Cavendish confirm it, by using large spheres of lead to attract small masses attached to a torsion pendulum; from these measurements he also deduced the density of the earth.

In the two centuries after Newton, although mechanics was analyzed, reformulated, and applied to complex systems, no new physical ideas were added. The Swiss mathematician Leonhard Euler first formulated the equations of motion for rigid bodies, where Newton had dealt only with masses concentrated at a point, which thus acted like particles. Various mathematical physicists, among them Joseph Louis Lagrange of France and Sir William Rowan Hamilton of Ireland, extended Newton's second law in more sophisticated and elegant reformulations. Over the same period, Euler, the Dutch-born scientist Daniel Bernoulli, and other scientists also extended Newtonian mechanics to lay the foundation of fluid mechanics.

(For Electricity and Magnetism students) Although the ancient Greeks were aware of the electrostatic properties of amber, and the Chinese as early as 2700 BC made crude magnets from lodestone, experimentation with and the understanding and use of electric and magnetic phenomena did not occur until the end of the 18th century. In 1785 the French physicist Charles Augustin de Coulomb first confirmed experimentally that electrical charges attract or repel one another according to an inverse square law, similar to that of gravitation. A powerful theory to calculate the effect of any number of static electric charges arbitrarily distributed was subsequently developed by the French mathematician Siméon-Denis Poisson and the German mathematician Carl Friedrich Gauss.

A positively charged particle attracts a negatively charged particle, tending to accelerate one toward the other. If the medium through which the particle moves offers resistance to that motion, this may be reduced to a constant-velocity (rather than accelerated) motion, and the medium will be heated up and may also be otherwise affected. The ability to maintain an electromotive force that could continue to drive electrically charged particles had to await the development of the chemical battery by the Italian physicist Alessandro Volta in 1800. The
classical theory of a simple electric circuit assumes that the two terminals of a battery are maintained positively and negatively charged as a result of its internal properties. When the terminals are connected by a wire, negatively charged particles will be simultaneously pushed away from the negative terminal and attracted to the positive one, and in the process heat up the wire that offers resistance to the motion. Upon their arrival at the positive terminal, the battery will force the particles toward the negative terminal, overcoming the opposing forces of Coulomb's law. The German physicist Georg Simon Ohm first discovered the existence of a simple proportionality constant between the current flowing and the electromotive force supplied by a battery, known as the resistance of the circuit. Ohm's law, which states that the resistance is equal to the electromotive force, or voltage, divided by the current, is not a fundamental and universally applicable law of physics but rather describes the behavior of a limited class of solid materials.
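
A minimal numerical illustration of Ohm's law, R = V/I, rearranged to find the current; the battery voltage and resistance below are illustrative assumptions:

```python
# Ohm's law: resistance R equals electromotive force (voltage) V divided by
# current I, so the current drawn is I = V / R. Values are arbitrary.
V = 12.0    # battery voltage in volts (assumed)
R = 6.0     # circuit resistance in ohms (assumed)

I = V / R   # current through the wire, in amperes
P = V * I   # power dissipated as heat in the resisting wire, in watts
print(f"current = {I:.2f} A, dissipated power = {P:.1f} W")
```
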
The historical concepts of magnetism, based on the existence of pairs of oppositely charged poles, had started in the 17th century and owe much to the work of Coulomb. The first connection between magnetism and electricity, however, was made through the pioneering experiments of the Danish physicist and chemist Hans Christian Oersted, who in 1819 discovered that a magnetic needle could be deflected by a nearby wire carrying an electric current. Within one week after learning of Oersted's discovery, the French scientist André-Marie Ampère showed experimentally that two current-carrying wires would affect each other like the poles of magnets. In 1831 the British physicist and chemist Michael Faraday discovered that an electric current could be induced (made to flow) in a wire without connection to a battery, either by moving a magnet or by placing another wire carrying an unsteady (that is, rising and falling) current nearby.

The intimate connection between electricity and magnetism, now established, can best be stated in terms of electric or magnetic fields, the forces that will act at a particular point on a unit charge or unit current, respectively, placed at that point. Stationary electric charges produce electric fields; currents, that is, moving electric charges, produce magnetic fields. Electric fields are also produced by changing magnetic fields, and vice versa. Electric fields exert forces on charged particles as a function of their charge alone; magnetic fields will exert an additional force only if the charges are in motion. These qualitative findings were finally put into precise mathematical form by the British physicist James Clerk Maxwell who, in developing the partial differential equations that bear his name, related the space and time changes of electric and magnetic fields at a point with the charge and current densities at that point. In principle, the equations permit the calculation of the fields everywhere and at any time from a knowledge of the charges and currents. An unexpected result arising from their solution was the prediction of a new kind of electromagnetic field, one that was produced by accelerating charges, that was propagated through space with the speed of light in the form of an electromagnetic wave, and that decreased with the inverse square of the distance from the source.

In 1887 the German physicist Heinrich Rudolf Hertz succeeded in actually generating such waves by electrical means, thereby laying the foundations for radio, radar, television, and other forms of telecommunications. The behavior of electric and magnetic fields in these waves is quite similar to that of a very long taut string, one end of which is rapidly moved up and down in a periodic fashion. Any point along the string will be observed to move up and down, or oscillate, with the same period, or with the same frequency, as the source. Points along the string at different distances from the source reach their maximum vertical displacements at different times, that is, at a different phase. Each point along the string will do what its neighbor did, but a little later, the farther it is removed from the vibrating source. The speed with which the disturbance, or the message to oscillate, is transmitted along the string is called the wave velocity; this is a function of the medium, in the case of a string of its mass and tension. An instantaneous snapshot of the string (after it has been in motion for a while) would show equispaced points having the same displacement and motion, separated by a distance known as the wavelength, which is equal to the wave velocity divided by the frequency. In the case of the electromagnetic field, one can think of the electric-field strength as taking the place of the up-and-down motion of each piece of the string, with the magnetic field acting similarly in a direction at right angles to that of the electric field. The electromagnetic-wave velocity away from the source is the speed of light.

(For Wave, Sounds, Light, Radiation and Optics students) The apparent linear propagation of light was known since antiquity, and the ancient Greeks believed that light consisted of a stream of corpuscles. They were, however, quite confused as to whether these corpuscles originated in the eye or in the object viewed. Any satisfactory theory of light must explain its origin and disappearance and its changes in speed and direction
while it passes through various media. Partial answers to these questions were proposed in the 17th century by Newton, who based them on the assumptions of a corpuscular theory, and by the English scientist Robert Hooke and the Dutch astronomer, mathematician, and physicist Christiaan Huygens, who proposed a wave theory. No experiment could be performed that distinguished between the two theories until the demonstration of interference in the early 19th century by the British physicist and physician Thomas Young. The work of the French physicist Augustin Jean Fresnel decisively favored the wave theory.

Interference can be demonstrated by placing a thin slit in front of a light source, stationing a double slit farther away, and looking at a screen spaced some distance behind the double slit. Instead of showing a uniformly illuminated image of the slits, the screen will show equispaced light and dark bands. Particles coming from the same source and arriving at the screen via the two slits could not produce different light intensities at different points, and could certainly not cancel each other to yield dark spots. Light waves, however, can produce such an effect. Assuming, as did Huygens, that each of the double slits acts as a new source, emitting light in all directions, the two wave trains arriving at the screen at the same point will not generally arrive in phase, though they will have left the two slits in phase. Depending on the difference in their paths, positive displacements of one wave arriving at the same time as negative displacements of the other will tend to cancel out and produce darkness, while the simultaneous arrival of either positive or negative displacements from both sources will lead to reinforcement, or brightness. Each apparent bright spot undergoes a timewise variation as successive in-phase waves go from maximum positive through zero to maximum negative displacement and back. Neither the eye nor any classical instrument, however, can detect this rapid flicker, which in the visible-light range has a frequency from 4 × 10^14 to 7.5 × 10^14 Hz, or cycles per second. Although it cannot be measured directly, the frequency can be inferred from wavelength and velocity measurements. The wavelength can be determined from a simple measurement of the distance between the two slits and the distance between adjacent bright bands on the screen; it ranges from 4 × 10^-5 cm (1.6 × 10^-5 in) for violet light to 7.5 × 10^-5 cm (3 × 10^-5 in) for red light, with intermediate wavelengths for the other colors.
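
This determination rests on the standard small-angle fringe relation for the double slit: for slit separation d, slit-to-screen distance D, and adjacent bright-band spacing y, the wavelength is approximately lambda = d*y/D, and the frequency then follows from f = c/lambda. A minimal sketch in which all the "measurements" are assumed, illustrative values:

```python
# Infer the wavelength of light from double-slit fringe spacing, then the
# frequency from the speed of light. All inputs are illustrative assumptions.
d = 0.02e-2        # slit separation: 0.02 cm, expressed in meters
D = 1.0            # distance from slits to screen, in meters
y = 0.25e-2        # spacing of adjacent bright bands: 0.25 cm, in meters
c = 2.99792458e8   # speed of light, m/s

lam = d * y / D    # small-angle fringe relation: lambda = d*y/D
f = c / lam        # frequency from wavelength and wave velocity
print(f"wavelength = {lam*100:.1e} cm, frequency = {f:.2e} Hz")
# gives 5.0e-05 cm (green light) and 6.0e+14 Hz, inside the visible range
```
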
The first measurement of the velocity of light was carried out by the Danish astronomer Olaus Roemer in 1676. He noted an apparent time variation between successive eclipses of Jupiter's moons, which he ascribed to the intervening change in the distance between Earth and Jupiter and to the corresponding difference in the time required for the light to reach the earth. His measurement was in fair agreement with the improved 19th-century observations of the French physicist Armand Hippolyte Louis Fizeau, and with the work of the American physicist Albert Abraham Michelson and his coworkers, which extended into the 20th century. Today the velocity of light is known very accurately as 299,792.458 km/sec (186,282.4 mi/sec) in vacuum. In matter, the velocity is less and varies with frequency, giving rise to a phenomenon known as dispersion.

Maxwell's work contributed several important results to the understanding of light by showing that it was electromagnetic in origin and that electric and magnetic fields oscillate in a light wave. His work predicted the existence of nonvisible light, and today electromagnetic waves or radiations are known to cover the spectrum from gamma rays, with wavelengths of 10^-10 cm (4 × 10^-11 in), through X rays, visible light, microwaves, and radio waves, to long waves of hundreds of kilometers in length (see X Ray). It also related the velocity of light in vacuum and through media to other observed properties of space and matter on which electrical and magnetic effects depend.

Maxwell's discoveries, however, did not provide any insight into the mysterious medium, corresponding to the string, through which light and electromagnetic waves had to travel. Based on the experience with water, sound, and elastic waves, scientists assumed a similar medium to exist: a luminiferous ether without mass, which was all-pervasive (because light could obviously travel through a massless vacuum) and which had to act like a solid (because electromagnetic waves were known to be transverse, with oscillations taking place in a plane perpendicular to the direction of propagation, whereas gases and liquids can only sustain longitudinal waves, such as sound waves). The search for this mysterious ether occupied physicists' attention for much of the last part of the 19th century.

The problem was further compounded by an extension of a simple problem. A person walking forward with a speed of 3.2 km/h (2 mph) in a train traveling at 64.4 km/h (40 mph) appears to move at 67.6 km/h (42 mph) to an observer on the ground. In terms of the velocity of light the question that now arose was: If light travels at about 300,000 km/sec (about 186,000 mi/sec) through the ether, at what velocity should it travel relative to an
observer on earth while the earth also moves through the ether? Or, alternately, what is the earth's velocity through the ether? The famous Michelson-Morley experiment, first performed in 1887 by Michelson and the American chemist Edward Williams Morley using an interferometer, was an attempt to measure this velocity; if the earth were traveling through a stationary ether, a difference should be apparent in the time taken by light to traverse a given distance, depending on whether it travels in the direction of or perpendicular to the earth's motion. The experiment was sensitive enough to detect even a very slight difference by interference; the results were negative. Physics was now in a profound quandary from which it was not rescued until Einstein formulated his theory of relativity in 1905.

(For Mechanics and Heat, and Thermodynamics students) A branch of physics that assumed major stature during the 19th century was thermodynamics. It began by disentangling the previously confused concepts of heat and temperature, by arriving at meaningful definitions, and by showing how they could be related to the heretofore purely mechanical concepts of work and energy. A different sensation is experienced when a hot or a cold body is touched, leading to the qualitative and subjective concept of temperature. The addition of heat to a body leads to an increase in temperature (as long as no melting or boiling occurs), and when two bodies at different temperatures are brought into contact, heat flows from one to the other until their temperatures become the same and thermal equilibrium is reached. To arrive at a scientific measure of temperature, scientists used the observation that the addition or subtraction of heat produced a change in at least one well-defined property of a body. The addition of heat to a column of liquid maintained at constant pressure, for example, increased the length of the column, while the heating of a gas confined in a container raised its pressure. Temperature, therefore, can invariably be measured by one other physical property, as in the length of the mercury column in an ordinary thermometer, provided the other relevant properties remain unchanged.

The mathematical relationship between the relevant physical properties of a body or system and its temperature is known as the equation of state. Thus, for an ideal gas, a simple relationship exists between the pressure p, volume V, number of moles n, and absolute temperature T, given by pV = nRT, where R is the same constant for all ideal gases. Boyle's law, named after the British physicist and chemist Robert Boyle, and Gay-Lussac's law, or Charles's law, named after the French physicists and chemists Joseph Louis Gay-Lussac and Jacques Alexandre César Charles, are both contained in this equation of state.
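
A minimal worked example of the ideal-gas equation of state pV = nRT, with the amount of gas, volume, and temperature chosen purely for illustration:

```python
# Pressure of an ideal gas from pV = nRT, solved for p = n*R*T/V.
# The amount of gas, volume, and temperature are illustrative assumptions.
R = 8.314     # ideal gas constant, J/(mol*K)
n = 1.0       # moles of gas (assumed)
T = 293.15    # absolute temperature in kelvins (20 degrees Celsius)
V = 0.0224    # volume in cubic meters (22.4 liters)

p = n * R * T / V   # pressure in pascals
print(f"pressure = {p/1000:.1f} kPa")   # about 109 kPa, a little over 1 atm
```
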
Until well into the 19th century, heat was considered a massless fluid called caloric, contained in matter and capable of being squeezed out of or into it. Although the so-called caloric theory answered most early questions on thermometry and calorimetry, it failed to provide a sound explanation of many early 19th-century observations. The first true connection between heat and other forms of energy was observed in 1798 by the Anglo-American physicist and statesman Benjamin Thompson, who noted that the heat produced in the boring of cannon was roughly proportional to the amount of work done. In mechanics, work is the product of a force on a body and the distance through which the body moves during its application.

The equivalence of heat and work was explained by the German physicist Hermann Ludwig Ferdinand von Helmholtz and the British mathematician and physicist William Thomson, 1st Baron Kelvin, by the middle of the 19th century. Equivalence means that doing work on a system can produce exactly the same effect as adding heat; thus the same temperature rise can be achieved in a gas contained in a vessel either by adding heat or by doing an appropriate amount of work through a paddle wheel inserted into the container and actuated by falling weights. The numerical value of this equivalent was first demonstrated by the British physicist James Prescott Joule in several heating and paddle-wheel experiments between 1840 and 1849. It was thus recognized that performing work on a system and adding heat to it are both means of transferring energy to the system. Therefore, the amount of energy added by heat or work had to increase the internal energy of the system, which in turn determined the temperature. If the internal energy remains unchanged, the amount of work done on a system must equal the heat given up by it. This is the first law of thermodynamics, a statement of the conservation of energy. Not until the action of molecules in a system was better understood through the development of the kinetic theory could this internal energy be related to the sum of the kinetic energies of all the molecules making up the system. While the first law indicates that energy must be conserved in any interactions between a system and its surroundings, it gives no indication whether all forms of mechanical and thermal energy exchange are possible.
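
In symbols the first law is commonly written Delta-U = Q + W, where Q is the heat added to the system and W is the work done on it (sign conventions vary between texts). A minimal sketch with illustrative numbers:

```python
# First law of thermodynamics: the change in internal energy equals the heat
# added to the system plus the work done on it (convention here: both are
# positive when energy flows INTO the system). Numbers are illustrative.
Q = 150.0   # heat added to the gas, in joules (assumed)
W = -60.0   # work done ON the gas; negative means the gas did 60 J of work

delta_U = Q + W   # change in internal energy
print(f"internal energy change = {delta_U:.0f} J")   # 90 J
```
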

That overall changes in energy proceed in one direction was first formulated by the French physicist and military engineer Nicolas Léonard Sadi Carnot, who in 1824 pointed out that a heat engine (a device that can produce work continuously while only exchanging heat with its surroundings) requires both a hot body as a source of heat and a cold body to absorb the heat that must be discharged. When the engine performs work, heat must be transferred from the hotter to the colder body; to have the inverse take place requires the expenditure of mechanical (or electrical) work. Thus, in a continuously working refrigerator, the absorption of heat from the low-temperature source (the cold space) requires the addition of work (usually as electrical power) and the discharge of heat (usually via fanned coils in the rear) to the surroundings. These ideas, based on Carnot's concepts, were eventually formulated rigorously as the second law of thermodynamics by the German mathematical physicist Rudolf Julius Emanuel Clausius and by Lord Kelvin in various alternate, although equivalent, ways. One such formulation is that heat cannot flow from a colder to a hotter body without the expenditure of work.

From the second law, it follows that in an isolated system (one that has no interactions with the surroundings) internal portions at different temperatures will always adjust to a single uniform temperature and thus produce equilibrium. This can also be applied to other internal properties that may differ initially. If milk is poured into a cup of coffee, for example, the two substances will continue to mix until they are inseparable and can no longer be differentiated. Thus, an initially separate or ordered state is turned into a mixed or disordered state. These ideas can be expressed by a thermodynamic property called the entropy (first formulated by Clausius), which serves as a measure of how close a system is to equilibrium, that is, to perfect internal disorder. The entropy of an isolated system, and of the universe as a whole, can only increase, and when equilibrium is eventually reached, no more internal change of any form is possible. Applied to the universe as a whole, this principle suggests that eventually all temperature in space becomes uniform, resulting in the so-called heat death of the universe.

Locally, the entropy can be lowered by external action. This applies to machines, such as a refrigerator, where the entropy in the cold chamber is being reduced, and to living organisms. This local increase in order is, however, only possible at the expense of an entropy increase in the surroundings, where more disorder must be created. This continued increase in entropy is related to the observed nonreversibility of macroscopic processes. If a process were spontaneously reversible, that is, if, after undergoing a process, both it and all the surroundings could be brought back to their initial state, the entropy would remain constant, in violation of the second law. While this is true for macroscopic processes, and therefore corresponds to daily experience, it does not apply to microscopic processes, which are believed to be reversible. Thus, chemical reactions between individual molecules are not governed by the second law, which applies only to macroscopic ensembles.
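
Carnot's requirement of a hot and a cold body can be quantified by the standard Carnot efficiency bound, a textbook result the guide does not state explicitly: no engine operating between absolute temperatures T_hot and T_cold can convert more than the fraction 1 - T_cold/T_hot of the heat it absorbs into work. A minimal sketch with assumed temperatures:

```python
# Carnot limit on heat-engine efficiency: eta_max = 1 - T_cold / T_hot,
# with both temperatures on the absolute (kelvin) scale. Values assumed.
T_hot = 600.0    # temperature of the heat source, K
T_cold = 300.0   # temperature of the heat sink, K

eta_max = 1.0 - T_cold / T_hot
print(f"maximum possible efficiency = {eta_max:.0%}")   # 50%
```
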
From the promulgation of the second law, thermodynamics went on to other advances and applications in physics, chemistry, and engineering. Most chemical engineering, all power-plant engineering, and air-conditioning and low-temperature physics are just a few of the fields that owe their theoretical basis to thermodynamics and to the subsequent achievements of such scientists as Maxwell, the American physicist J. Willard Gibbs, the German physical chemist Walther Hermann Nernst, and the Norwegian-born American chemist Lars Onsager.

(For Modern Physics, Mechanics and Heat, Radiation and Optics students) Two major new developments during the first third of the 20th century, the quantum theory and the theory of relativity, explained many puzzling experimental findings, yielded new discoveries, and changed the understanding of physics as it is known today.

Relativity: To extend the example of relative velocity introduced with the Michelson-Morley experiment, two situations can be compared. One consists of a person, A, walking forward with a velocity v in a train moving at velocity u. The velocity of A with regard to an observer B stationary on the ground is then simply V = u + v. If, however, the train were at rest in the station and A was moving forward with velocity v while observer B walked backward with velocity u, the relative speed between A and B would be exactly the same as in the first case. In more general terms, if two frames of reference are moving relative to each other at constant velocity, observations of any phenomena made by observers in either frame will be physically equivalent. As already mentioned, the Michelson-Morley experiment failed to confirm the concept of adding velocities, and two
observers, one at rest and the other moving toward a light source with velocity u, both observe the same light velocity, commonly denoted by the symbol c. Einstein incorporated the invariance of c into his theory of relativity. He also demanded a very careful rethinking of the concepts of space and time, showing the imperfection of intuitive notions about them. As a consequence of his theory, it is known that two clocks that keep identical time when at rest relative to each other must run at different speeds when they are in relative motion, and that two rods that are identical in length at rest will differ in length when they are in relative motion. Space and time must be closely linked in a four-dimensional continuum in which the normal three space dimensions are augmented by an interrelated time dimension.

Two important consequences of Einstein's relativity theory are the equivalence of mass and energy and the limiting velocity of the speed of light for material objects. Relativistic mechanics describes the motion of objects with velocities that are appreciable fractions of the speed of light, while Newtonian mechanics remains useful for velocities typical of the macroscopic motion of objects on earth. No material object, however, can have a speed equal to or greater than the speed of light. Even more important is the relation between the mass m and the energy E. They are coupled by the relation E = mc^2, and because c is very large, the energy equivalence of a given mass is enormous. The change of mass giving an energy change is significant in nuclear reactions, as in reactors or nuclear weapons, and in the stars, where a significant loss of mass accompanies the huge energy release.

Einstein's original theory, formulated in 1905 and known as the special theory of relativity, was limited to frames of reference moving at constant velocity relative to each other. In 1915 he generalized his hypothesis to formulate the general theory of relativity, which applies to systems that accelerate with reference to each other. This extension showed gravitation to be a consequence of the geometry of space-time and predicted the bending of light in its passage close to a massive body such as a star, an effect first observed in 1919. General relativity, although less firmly established than the special theory, has deep significance for an understanding of the structure of the universe and its evolution.

Quantum Theory: The quandary posed by the observed spectra emitted by solid bodies was first explained by the German physicist Max Planck. According to classical physics, all molecules in a solid can vibrate, with the amplitude of the vibrations directly related to the temperature. All vibration frequencies should be possible, and the thermal energy of the solid should be continuously convertible into electromagnetic radiation as long as energy is supplied. Planck made a radical assumption by postulating that the molecular oscillator could emit electromagnetic waves only in discrete bundles, now called quanta, or photons (see Photon; Quantum Theory). Each photon has a characteristic wavelength in the spectrum and an energy E given by E = hf, where f is the frequency of the wave. The wavelength λ is related to the frequency by λf = c, where c is the speed of light. With the frequency specified in hertz (Hz), or cycles per second, h, now known as Planck's constant, is extremely small (6.626 × 10^-27 erg·sec).
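
A quick numerical illustration of E = hf and λf = c in SI units (the frequency is an assumed value for green light, and h is quoted in joule-seconds rather than the erg-seconds used above; 1 erg = 10^-7 J):

```python
# Energy and wavelength of a single photon from Planck's relations
# E = h*f and lambda*f = c, worked in SI units.
h = 6.626e-34   # Planck's constant, J*s
c = 2.998e8     # speed of light, m/s
f = 6.0e14      # frequency of green light in Hz (assumed value)

E = h * f       # photon energy in joules
lam = c / f     # wavelength in meters
print(f"E = {E:.2e} J, wavelength = {lam*1e9:.0f} nm")   # ~4e-19 J, 500 nm
```
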
With his theory, Planck reintroduced a partial duality into the theory of light, which for nearly a century had been considered to be wavelike only.

(For Electricity and Magnetism students) In 1911 the New Zealand-born British physicist Ernest Rutherford, making use of the newly discovered radiations from radioactive nuclei, found Thomson's earlier model of an atom with uniformly distributed positive and negative charged particles to be untenable. The very fast, massive, positively charged alpha particles he employed were found to deflect sharply in their passage through matter. This effect required an atomic model with a heavy positive scattering center. Rutherford then suggested that the positive charge of an atom was concentrated in a massive stationary nucleus, with the negative electrons moving in orbits about it, positioned by the electric attraction between opposite charges. This solar-system-like atomic model, however, could not persist according to Maxwell's theory: the revolving electrons should emit electromagnetic radiation and force a total collapse of the system in a very short time. Another sharp break with classical physics was required at this point. It was provided by the Danish physicist Niels Henrik David Bohr, who postulated the existence within atoms of certain specified orbits in which electrons could revolve without emitting electromagnetic radiation. These allowed orbits, or so-called stationary states,
are determined by the condition that the angular momentum J of the orbiting electron must be a positive integral multiple of Planck's constant divided by 2π, that is, J = nh/2π, where the quantum number n may have any positive integer value. This extended quantization to dynamics, fixed the possible orbits, and allowed Bohr to calculate their radii and the corresponding energy levels. The model was confirmed experimentally in 1914 by the German-born American physicist James Franck and the German physicist Gustav Hertz.

Bohr developed his model much further. He explained how atoms radiate light and other electromagnetic waves and proposed that an electron lifted by a sufficient disturbance of the atom from the orbit of smallest radius and least energy (the ground state) into another orbit would soon fall back to the ground state. This falling back is accompanied by the emission of a single photon of energy E = hf, where E is the difference in energy between the higher and lower orbits. Each orbit shift emits a characteristic photon of sharply defined frequency and wavelength; thus the one photon emitted in a direct shift from the n = 3 to the n = 1 orbit is quite different from the two photons emitted in a sequential shift from the n = 3 to the n = 2 orbit and then from there to the n = 1 orbit. This model allowed Bohr to account with great accuracy for the simplest atomic spectrum, that of hydrogen, which had defied classical physics. Although Bohr's model was extended and refined, it could not explain observations for atoms with more than one electron. It could not even account for the intensity of the spectral colors of the simple hydrogen atom. Because it had no more than a limited ability to predict experimental results, it remained unsatisfactory for theoretical physicists.
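
The hydrogen photon energies follow from the Bohr-model energy levels E_n = -13.6 eV / n^2, a standard textbook result not spelled out in the guide. A minimal sketch of the direct n = 3 to n = 1 shift mentioned above:

```python
# Photon emitted when a hydrogen electron falls between Bohr orbits.
# Bohr-model energy levels: E_n = -13.6 eV / n^2 (standard textbook values).
h = 6.626e-34    # Planck's constant, J*s
c = 2.998e8      # speed of light, m/s
eV = 1.602e-19   # joules per electron volt

def level(n):
    """Energy of the n-th Bohr orbit of hydrogen, in joules."""
    return -13.6 * eV / n**2

# Direct shift from n = 3 to n = 1 emits a single photon:
E_photon = level(3) - level(1)   # energy difference, in joules
lam = h * c / E_photon           # wavelength from E = hf and lambda*f = c
print(f"n=3 -> n=1 photon: {E_photon/eV:.2f} eV, {lam*1e9:.1f} nm")
# about 12.1 eV and 103 nm, in the ultraviolet
```
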
The carriers of the nuclear charge were identified as protons, but except for hydrogen, the nuclear mass could be accounted for only if some additional uncharged particles were present. In 1932 the British physicist Sir James Chadwick discovered the neutron, an electrically neutral particle of mass 1.675 x 10^-27 kg, slightly greater than that of the proton. Nuclei could now be understood as consisting of protons and neutrons, collectively called nucleons: the atomic number of an element is simply the number of protons in the nucleus, while the isotope number, also called the atomic mass number, is the sum of the neutrons and protons present. Thus, all atoms of oxygen (atomic number 8) have eight protons, but the three isotopes of oxygen, O-16, O-17, and O-18, contain within their respective nuclei eight, nine, or ten neutrons.

Positive electric charges repel each other, and because atomic nuclei (except for hydrogen) have more than one proton, they would fly apart were it not for a strong attractive force, called the nuclear force or strong interaction, that binds the nucleons to each other. The energy associated with this strong force is very great, millions of times greater than the energies characteristic of electrons in their orbits or of chemical binding. An escaping alpha particle (consisting of two protons and two neutrons) therefore has to overcome this strong interaction to escape from a radioactive nucleus such as uranium. This apparent paradox was explained by the physicists Edward U. Condon, George Gamow, and Ronald W. Gurney, who applied quantum mechanics to the problem of alpha emission in 1928 and showed that the statistical nature of nuclear processes allows alpha particles to leak out of radioactive nuclei even though their average energy is insufficient to overcome the nuclear force. Beta decay was explained as the result of a neutron disruption within the nucleus, the neutron changing into an electron (the beta particle), which is promptly ejected, and a residual proton. The proton left behind gives the daughter nucleus one more proton than its parent and thus increases the atomic number and the position in the periodic table. Alpha or beta emission usually leaves the nucleus with excess energy, which it unloads by emitting a gamma-ray photon. In all these nuclear processes a large amount of energy, given by Einstein's equation E = mc^2, is released: after the process is over, the total mass of the products is less than that of the parent, with the mass difference appearing as energy.

C. Fundamental Quantities

International System of Units (French Le Système International d'Unités) is the name adopted by the Eleventh General Conference on Weights and Measures, held in Paris in 1960, for a universal, unified, self-consistent system of measurement units based on the MKS (meter-kilogram-second) system. The international system is commonly referred to throughout the world as SI, after the initials of Système International. The Metric Conversion Act of 1975 commits the United States to the increasing use of, and voluntary conversion to, the metric system of measurement, further defining the metric system as the International System of Units as interpreted or modified for the United States by the secretary of commerce. At the 1960 conference, standards were defined for six base units and for two supplementary units; a seventh base unit, the mole, was added in 1971.

Over the years, the General Conference on Weights and Measures has replaced all but one of the definitions of its base (fundamental) units based on physical objects (such as standard meter sticks or standard kilogram bars) with definitions based on stable properties of the universe. For example, the second, the base unit of time, is now defined as the period of time in which the waves of radiation emitted by cesium atoms, under specified conditions, display exactly 9 192 631 770 cycles. The meter, the base unit of distance, is defined by stating that the speed of light, a universal physical constant, is exactly 299 792 458 meters per second. These physical definitions allow scientists to reconstruct meter standards or standard clocks anywhere in the world, or even on other planets, without referring to a physical object kept in a vault somewhere. In fact, the kilogram is the only base unit still defined by a physical object. The International Bureau of Weights and Measures (BIPM) keeps the world's standard kilogram in Paris, and all other weight standards, such as those of Britain and the United States, are weighed against this standard kilogram. This one physical standard is still used because scientists can compare masses very accurately: weight standards in other countries can be adjusted to the Paris standard kilogram with an accuracy of one part per hundred million. So far, no one has figured out how to define the kilogram in any other way that can be reproduced with better accuracy than this.
The 21st General Conference on Weights and Measures, meeting in October 1999, passed a resolution calling on national standards laboratories to press forward with research to "link the fundamental unit of mass to fundamental or atomic constants with a view to a future redefinition of the kilogram." The 22nd General Conference, in 2003, renewed this request, and it is possible that a future General Conference will change the definition.

Following are the official definitions of the seven base units, as given by the BIPM:

meter (m), length: "The metre is the length of the path travelled by light in vacuum during a time interval of 1/299 792 458 of a second."

kilogram (kg), mass: "The kilogram is equal to the mass of the international prototype of the kilogram." The international prototype was manufactured in the 1880s of an alloy of 90% platinum and 10% iridium. Four of the six official copies date from the same period. In addition, copies of the international prototype have been manufactured by the BIPM for use as 1 kg national prototypes, the first of which were distributed in 1889. Since the 1880s the BIPM has produced more than eighty 1 kg prototypes in Pt/Ir.

second (s), time: "The second is the duration of 9 192 631 770 periods of the radiation corresponding to the transition between the two hyperfine levels of the ground state of the caesium 133 atom."

ampere (A), electric current: "The ampere is that constant current which, if maintained in two straight parallel conductors of infinite length, of negligible circular cross-section, and placed 1 metre apart in vacuum, would produce between these conductors a force equal to 2 x 10^-7 newton per metre of length."

kelvin (K), temperature: "The kelvin is the fraction 1/273.16 of the thermodynamic temperature of the triple point of water."

mole (mol), amount of substance: "The mole is the amount of substance of a system which contains as many elementary entities as there are atoms in 0.012 kilogram of carbon 12. When the mole is used, the elementary entities must be specified and may be atoms, molecules, ions, electrons, other particles, or specified groups of such particles."

candela (cd), luminous intensity: "The candela is the luminous intensity, in a given direction, of a source that emits monochromatic radiation of frequency 540 x 10^12 hertz and that has a radiant intensity in that direction of 1/683 watt per steradian."

D. Scientific Notation

Scientific notation is simply a method for expressing, and working with, very large or very small numbers. It is a shorthand method for writing numbers, and an easy method for calculations. Numbers in scientific notation are made up of three parts: the coefficient, the base, and the exponent. Observe the example below:

5.67 x 10^5

This is the scientific notation for the standard number 567 000. In this number, 5.67 is the coefficient, 10 is the base, and 5 is the exponent.

In order for a number to be in correct scientific notation, the following conditions must be true:
1. The coefficient must be greater than or equal to 1 and less than 10.
2. The base must be 10.
3. The exponent must be an integer and must show the number of decimal places that the decimal needs to be moved to change the number to standard notation. A negative exponent means that the decimal is moved to the left when changing to standard notation.

Changing numbers from scientific notation to normal notation

Ex. 1 Change 6.03 x 10^7 to normal notation.
LONG METHOD: Remember, 10^7 = 10 x 10 x 10 x 10 x 10 x 10 x 10 = 10 000 000, so 6.03 x 10^7 = 6.03 x 10 000 000 = 60 300 000. Answer: 60 300 000.
SHORTCUT METHOD: Instead of finding the value of the base, we can simply move the decimal seven places to the right because the exponent is 7.

So, 6.03 x 10^7 = 60 300 000.

Or follow the codes: SNPR (Scientific notation to Normal notation, a Positive exponent means moving the decimal point to the RIGHT); SNNL (Scientific notation to Normal notation, a Negative exponent means moving the decimal point to the LEFT).

Let us try one with a negative exponent.

Ex. 2 Change 5.3 x 10^-4 to standard notation.
The exponent tells us to move the decimal four places to the left, so 5.3 x 10^-4 = 0.00053.
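Either method is easy to check with a short program. Below is a minimal Python sketch (the helper name to_standard is our own invention, not a library function); it uses the decimal module so that no floating-point noise creeps into the expanded digits, and scaleb literally "moves the decimal point" by the exponent, just as the shortcut method does.

from decimal import Decimal

def to_standard(coefficient: str, exponent: int) -> str:
    """Expand coefficient x 10^exponent into ordinary (standard) notation."""
    value = Decimal(coefficient).scaleb(exponent)  # move the decimal point
    return format(value, "f")                      # plain, non-scientific string

print(to_standard("6.03", 7))   # 60300000  (SNPR: positive exponent, decimal moves right)
print(to_standard("5.3", -4))   # 0.00053   (SNNL: negative exponent, decimal moves left)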

Changing numbers from normal notation to scientific notation

The codes: NSLP (Normal notation to Scientific notation, when the decimal point is moved to the LEFT the exponent will be Positive); NSRN (Normal notation to Scientific notation, when the decimal point is moved to the RIGHT the exponent will be Negative).

Ex. 3 Change 56 760 000 000 to scientific notation.
Remember, the decimal is at the end of the final zero. The decimal must be moved behind the five to ensure that the coefficient is less than 10 but greater than or equal to one, so the coefficient will read 5.676. The decimal moves 10 places to the left, making the exponent equal to 10. Answer: 5.676 x 10^10.

Now we try a number that is very small.

Ex. 4 Change 0.000000902 to scientific notation.
The decimal must be moved behind the 9 to ensure a proper coefficient, so the coefficient will be 9.02. The decimal moves seven places to the right, making the exponent -7. Answer: 9.02 x 10^-7.

Calculating with Scientific Notation

Not only does scientific notation give us a way of writing very large and very small numbers, it allows us to do calculations easily as well. Calculators are very helpful tools, but unless you can do these calculations without them, you can never check whether your answers make sense. Any calculation should be checked using your logic, so don't just assume an answer is correct. The rules for calculating with scientific notation follow.

Rule for Multiplication - When you multiply numbers in scientific notation, multiply the coefficients together and add the exponents. The base remains 10.

Ex. 1 Multiply (3.45 x 10^7) x (6.25 x 10^5).
First rewrite the problem as (3.45 x 6.25) x (10^7 x 10^5). Then multiply the coefficients and add the exponents: 21.5625 x 10^12. Then change to correct scientific notation and round to the correct significant digits: 2.16 x 10^13. NOTE - we add one to the exponent because we moved the decimal one place to the left. Remember that correct scientific notation has a coefficient that is less than 10 but greater than or equal to one.

Ex. 2 Multiply (2.33 x 10^-6) x (8.19 x 10^3).
Rewrite the problem as (2.33 x 8.19) x (10^-6 x 10^3). Then multiply the coefficients and add the exponents: 19.0827 x 10^-3. Then change to correct scientific notation and round to the correct significant digits: 1.91 x 10^-2. Remember that -3 + 1 = -2.
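The multiplication rule is really a three-step procedure: multiply the coefficients, add the exponents, then renormalize so the coefficient lands in the range [1, 10). Here is a minimal Python sketch of that procedure (the function multiply_sci and the (coefficient, exponent) pair representation are our own illustration; it assumes nonzero coefficients).

def multiply_sci(a, b):
    """a and b are (coefficient, exponent) pairs; returns their product, normalized."""
    coeff = a[0] * b[0]
    exp = a[1] + b[1]
    while coeff >= 10:   # coefficient too big: move decimal left, add 1 to exponent
        coeff /= 10
        exp += 1
    while coeff < 1:     # coefficient too small: move decimal right, subtract 1
        coeff *= 10
        exp -= 1
    return coeff, exp

print(multiply_sci((3.45, 7), (6.25, 5)))    # ≈ (2.15625, 13) -> 2.16 x 10^13 after rounding
print(multiply_sci((2.33, -6), (8.19, 3)))   # ≈ (1.90827, -2) -> 1.91 x 10^-2 after rounding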

Rule for Division - When dividing with scientific notation, divide the coefficients and subtract the exponents. The base remains 10.

Ex. 1 Divide 3.5 x 10^8 by 6.6 x 10^4.
Rewrite the problem as (3.5 x 10^8) / (6.6 x 10^4). Divide the coefficients and subtract the exponents to get 0.530303 x 10^4. Change to correct scientific notation and round to the correct significant digits to get 5.3 x 10^3. Note - we subtract one from the exponent because we moved the decimal one place to the right.

Rule for Addition and Subtraction - When adding or subtracting in scientific notation, you must express the numbers with the same power of 10. This will often involve changing the decimal place of the coefficient.

Ex. 1 Add 3.76 x 10^4 and 5.5 x 10^2.
Move the decimal to change 5.5 x 10^2 to 0.055 x 10^4. Add the coefficients and leave the base and exponent the same: 3.76 + 0.055 = 3.815, so the sum is 3.815 x 10^4. Following the rules for rounding, our final answer is 3.815 x 10^4. Rounding is a little different here because each digit shown in the original problem must be considered significant, regardless of where it ends up in the answer.

Ex. 2 Subtract (4.8 x 10^5) - (9.7 x 10^4).
Move the decimal to change 9.7 x 10^4 to 0.97 x 10^5. Subtract the coefficients and leave the base and exponent the same: 4.8 - 0.97 = 3.83, so the difference is 3.83 x 10^5. Rounded to the correct number of significant digits, the answer is 3.83 x 10^5.

E. Significant Digits

The number of significant digits in the answer to a calculation depends on the number of significant digits in the given data, as discussed in the rules below. Approximate calculations (order-of-magnitude estimates) always result in answers with only one or two significant digits.

Rules for Significant Figures
a. Non-zero digits are always significant. Thus, 22 has two significant digits, and 22.3 has three significant digits. With zeroes, the situation is more complicated:
b. Zeroes placed before other digits are not significant; 0.046 has two significant digits.
c. Zeroes placed between other digits are always significant; 4009 kg has four significant digits.
d. Zeroes placed after other digits and behind a decimal point are significant; 7.90 has three significant digits.
e. Zeroes at the end of a number are significant only if they are behind a decimal point, as in (d). Otherwise, it is impossible to tell whether they are significant. For example, in the number 8200 it is not clear whether the zeroes are significant; the number of significant digits in 8200 is at least two, but could be three or four. To avoid this ambiguity, use scientific notation to place significant zeroes behind a decimal point:
8.200 x 10^3 has four significant digits
8.20 x 10^3 has three significant digits
8.2 x 10^3 has two significant digits

Significant Digits in Multiplication, Division, Trig Functions, etc.
In a calculation involving multiplication, division, trigonometric functions, etc., the number of significant digits in the answer should equal the least number of significant digits in any one of the numbers being multiplied, divided, etc.
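As a rough illustration of rules (a)-(e), here is a small Python sketch (the function count_sig_figs is our own, not a library routine). It works on the number as a string, and it deliberately reports trailing zeroes in a number without a decimal point as ambiguous, exactly as rule (e) says.

def count_sig_figs(s: str) -> str:
    """Count significant digits in a numeric string, following rules (a)-(e)."""
    s = s.lstrip("+-")
    digits = s.replace(".", "")
    stripped = digits.lstrip("0")          # rule (b): leading zeroes never count
    if "." in s:
        return str(len(stripped))          # rules (c), (d): all remaining digits count
    core = stripped.rstrip("0")            # rule (e): trailing zeroes are ambiguous
    if core == stripped:
        return str(len(stripped))
    return f"at least {len(core)}, up to {len(stripped)} (ambiguous trailing zeroes)"

print(count_sig_figs("0.046"))   # 2
print(count_sig_figs("4009"))    # 4
print(count_sig_figs("7.90"))    # 3
print(count_sig_figs("8200"))    # at least 2, up to 4 (ambiguous trailing zeroes)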

Thus in evaluating sin(kx), where k = 0.097 m^-1 (two significant digits) and x = 4.73 m (three significant digits), the answer should have two significant digits.

Note that exact whole numbers have essentially an unlimited number of significant digits. For example, if a hair dryer uses 1.2 kW of power, then 2 identical hair dryers use 2.4 kW:
1.2 kW {2 sig. dig.} x 2 {unlimited sig. dig.} = 2.4 kW {2 sig. dig.}

Significant Digits in Addition and Subtraction
When quantities are added or subtracted, the number of decimal places (not significant digits) in the answer should be the same as the least number of decimal places in any of the numbers being added or subtracted. Example:
5.67 J (two decimal places) + 1.1 J (one decimal place) + 0.9378 J (four decimal places) = 7.7 J (one decimal place)

Keep One Extra Digit in Intermediate Answers
When doing multi-step calculations, keep at least one more significant digit in intermediate results than is needed in your final answer. For instance, if a final answer requires two significant digits, carry at least three significant digits through the calculations. If you round off all your intermediate answers to only two digits, you discard the information contained in the third digit, and as a result the second digit in your final answer might be incorrect. (This phenomenon is known as "round-off error.")

The Two Greatest Sins Regarding Significant Digits
1. Writing more digits in an answer (intermediate or final) than is justified by the number of digits in the data.
2. Rounding off to, say, two digits in an intermediate answer and then writing three digits in the final answer.

F. Rounding Off Numbers

In mathematics, rounding off means writing an answer to a given degree of accuracy. Let's round off 314 to the nearest hundred. You know that 314 is closer to 300 than to 400, so 314 rounded off to the nearest hundred is 300. Now let's round off 483 to the nearest hundred. We know that 483 is between 400 and 500 and is closer to 500, so 483 rounded off to the nearest hundred is 500.

This is another way of looking at it. In 483:
4 is in the hundreds place
8 is in the tens place
3 is in the units (ones) place

We want to round off 483 to the nearest hundred. The digit to the right of the 4 is 8, which is more than 5, so you add 1 to the 4 (making it 5) and change the digits in the tens and units places to zeros, giving 500.
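This digit-by-digit procedure is easy to automate, but be careful: Python's built-in round uses "round half to even," so round(450, -2) gives 400, which is not the rule taught here. The minimal sketch below uses the decimal module to apply the 5-or-more-rounds-up rule instead (the helper name round_half_up and the "1E2" notation for "nearest hundred" are our own choices).

from decimal import Decimal, ROUND_HALF_UP

def round_half_up(x: str, place: str) -> int:
    """Round x to the given place (e.g. '1E2' = nearest hundred); 5 or more rounds up."""
    return int(Decimal(x).quantize(Decimal(place), rounding=ROUND_HALF_UP))

print(round_half_up("483", "1E2"))   # 500: the 8 in the tens place rounds up
print(round_half_up("314", "1E2"))   # 300: the 1 in the tens place rounds down
print(round_half_up("450", "1E2"))   # 500, whereas round(450, -2) would give 400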

To round off a number correct to a given place, we round up (that is, add 1) if the next figure is 5 or more, and we round down (that is, just drop the remaining figures) if the next figure is less than 5. E.g.: Round off 483 to the nearest hundred: 8 is more than 5, so we round up to 500. Round off 314 to the nearest hundred: 1 is less than 5, so we just drop the 14; that is, we round down to 300.

G. Conversion of Units

The most comprehensive online reference (READ THIS PLEASE) is http://oakroadsystems.com/math/convert.htm

In summary, you can convert units easily and accurately with one simple rule: just multiply the old measurement by a carefully chosen form of the number 1. Suppose you want to convert four and a half hours to minutes. Of course you know that

60 minutes = 1 hour

Now divide both sides by 1 hour. (Remember you can do this because you treat the unit hour just like a variable. If you had 60x = 1y, you could certainly divide both sides by 1y.) After dividing, you have

60 min
------ = 1
 1 hr

Why did we do that? Because if (60 min)/(1 hr) = 1, then we can multiply any measurement by that fraction and not change its value. We start from the 4.5 hours that we wanted to convert to minutes. To do the conversion, simply multiply by that well-chosen form of 1:

4.5 hr   60 min
------ x ------
   1      1 hr

Now, x times y/z is the same as xy/z, so our units expression is the same as

4.5 hr x 60 min
---------------
     1 hr

Notice that you have hours (hr) in both top and bottom. Just as you would divide through by x when x was in both top and bottom, so you can divide through by the unit hr:

4.5 x 60 min
------------
     1

which multiplies out to

270 min
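The same multiply-by-a-form-of-1 idea is easy to express in code. Here is a minimal Python sketch (the factor table MINUTES_PER and the convert helper are our own illustration, not something from the reference above): every ratio of two table entries is a carefully chosen form of 1, so multiplying by it changes the units but not the value.

# Each entry says how many of the base unit (minutes) one unit is worth.
MINUTES_PER = {"hr": 60.0, "min": 1.0, "s": 1.0 / 60.0}

def convert(value: float, from_unit: str, to_unit: str) -> float:
    """Multiply by (minutes per from_unit) / (minutes per to_unit), a form of 1."""
    return value * MINUTES_PER[from_unit] / MINUTES_PER[to_unit]

print(convert(4.5, "hr", "min"))   # 270.0, matching the worked example above
print(convert(270, "min", "hr"))   # 4.5, converting back the other way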

H. Vectors and Scalars
