NUMERICAL METHODS
Contents Summary

Title Page
Contents
1. Introduction
2. Key Idea
3. Root finding in one dimension
4. Linear equations
5. Numerical integration
6. First order ordinary differential equations
7. Higher order ordinary differential equations
8. Partial differential equations

Contents

Title Page
1. Introduction
    1.1. Objective
    1.2. Books
        General:
        More specialised:
    1.3. Programming
    1.4. Tools
        1.4.1. Software libraries
        1.4.2. Maths systems
    1.5. Course Credit
    1.6. Versions
        1.6.1. Word version
        1.6.2. Notation in HTML formatted notes
        1.6.3. Copyright
2. Key Idea
3. Root finding in one dimension
    3.1. Why?
    3.2. Bisection
        3.2.1. Convergence
        3.2.2. Criteria
    3.3. Linear interpolation (regula falsi)
    3.4. Newton-Raphson
        3.4.1. Convergence
    3.5. Secant (chord)
        3.5.1. Convergence
    3.6. Direct iteration
        3.6.1. Convergence
    3.7. Examples
        3.7.1. Bisection method
        3.7.2. Linear interpolation
        3.7.3. Newton-Raphson
        3.7.4. Secant method
        3.7.5. Direct iteration
            3.7.5.1. Addition of x
            3.7.5.2. Multiplication by x
            3.7.5.3. Approximating f'(x)
        3.7.6. Comparison
        3.7.7. Fortran program
4. Linear equations
    4.1. Gauss elimination
    4.2. Pivoting
        4.2.1. Partial pivoting
        4.2.2. Full pivoting
    4.3. LU factorisation
    4.4. Banded matrices
    4.5. Tridiagonal matrices
    4.6. Other approaches to solving linear systems
    4.7. Over determined systems
    4.8. Under determined systems
5. Numerical integration
    5.1. Manual method
    5.2. Trapezium rule
    5.3. Midpoint rule
    5.4. Simpson's rule
    5.5. Quadratic triangulation
    5.6. Romberg integration
    5.7. Gauss quadrature
    5.8. Example of numerical integration
        5.8.1. Program for numerical integration
6. First order ordinary differential equations
    6.1. Taylor series
    6.2. Finite difference
    6.3. Truncation error
    6.4. Euler method
    6.5. Implicit methods
        6.5.1. Backward Euler
        6.5.2. Richardson extrapolation
        6.5.3. Crank-Nicholson
    6.6. Multistep methods
    6.7. Stability
    6.8. Predictor-corrector methods
        6.8.1. Improved Euler method
        6.8.2. Runge-Kutta methods
7. Higher order ordinary differential equations
    7.1. Initial value problems
    7.2. Boundary value problems
        7.2.1. Shooting method
        7.2.2. Linear equations
    7.3. Other considerations
        7.3.1. Truncation error
        7.3.2. Error and step control
8. Partial differential equations
    8.1. Laplace equation
        8.1.1. Direct solution
        8.1.2. Relaxation
            8.1.2.1. Jacobi
            8.1.2.2. Gauss-Seidel
            8.1.2.3. Red-Black ordering
            8.1.2.4. Successive Over-Relaxation (SOR)
        8.1.3. Multigrid
        8.1.4. The mathematics of relaxation
            8.1.4.1. Jacobi and Gauss-Seidel for Laplace equation
            8.1.4.2. Successive Over-Relaxation for Laplace equation
            8.1.4.3. Other equations
        8.1.5. FFT
        8.1.6. Boundary elements
        8.1.7. Finite elements
    8.2. Poisson equation
    8.3. Diffusion equation
        8.3.1. Semi-discretisation
        8.3.2. Euler method
        8.3.3. Stability
        8.3.4. Model for general initial conditions
        8.3.5. Crank-Nicholson
        8.3.6. ADI
    8.4. Advection
        8.4.1. Upwind differencing
        8.4.2. Courant number
        8.4.3. Numerical dispersion
        8.4.4. Shocks
        8.4.5. Lax-Wendroff
        8.4.6. Conservative schemes
1. Introduction

These lecture notes are written for the Numerical Methods course as part of the Natural Sciences Tripos, Part IB. The notes are intended to complement the material presented in the lectures rather than replace them.

1.1 Objective

To give an overview of what can be done
To give insight into how it can be done
To give the confidence to tackle numerical solutions

An understanding of how a method works aids in choosing a method. It can also provide an indication of what can and will go wrong, and of the accuracy which may be obtained.

To gain insight into the underlying physics
"The aim of this course is to introduce numerical techniques that can be used on computers, rather than to provide a detailed treatment of accuracy or stability" - Lecture Schedule.

Unfortunately the course is now examinable and therefore the material must be presented in a manner consistent with this.
1.2 Books

General:
Numerical Recipes: The Art of Scientific Computing, by Press, Flannery, Teukolsky & Vetterling (CUP)
Numerical Methods that Work, by Acton (Harper & Row)
Numerical Analysis, by Burden & Faires (PWS-Kent)
Applied Numerical Analysis, by Gerald & Wheatley (Addison-Wesley)
A Simple Introduction to Numerical Analysis, by Harding & Quinney (Institute of Physics Publishing)
Elementary Numerical Analysis, 3rd Edition, by Conte & de Boor (McGraw-Hill)

More specialised:
Numerical Methods for Ordinary Differential Systems, by Lambert (Wiley)
Numerical Solution of Partial Differential Equations: Finite Difference Methods, by Smith (Oxford University Press)

For many people, Numerical Recipes is the bible for simple numerical techniques. It contains not only detailed discussion of the algorithms and their use, but also sample source code for each. Numerical Recipes is available in three flavours: Fortran, C and Pascal, with the source code examples being tailored for each.

1.3 Programming

While a number of programming examples are given during the course, the course and examination do not require any knowledge of programming. Numerical results are given to illustrate a point and the code used to compute them is presented in these notes purely for completeness.

1.4 Tools

Unfortunately this course is too short to be able to provide an introduction to the various tools available to assist with the solution of a wide range of mathematical problems. These tools are widely available on nearly all computer platforms and fall into two general classes:

1.4.1 Software libraries

These are intended to be linked into your own computer program and provide routines for solving particular classes of problems.

NAG
IMSL
Numerical Recipes

The first two are commercial packages providing object libraries, while the final one of these libraries mirrors the content of the Numerical Recipes book and is available as source code.

1.4.2 Maths systems

These provide a shrink-wrapped solution to a broad class of mathematical problems. Typically they have easy-to-use interfaces and provide graphical as well as text or numeric output. Key features include algebraic (analytical) solution. There is fierce competition between the various products available and, as a result, development continues at a rapid rate.

Derive
Maple
Mathcad
Mathematica
Matlab
Reduce

1.5 Course Credit

Prior to the 1995-1996 academic year, this course was not examinable. Since then, however, there have been two examination questions each year. Some indication of the type of exam questions may be gained from earlier Tripos papers and from the later examples sheets. Note that there has, unfortunately, been a tendency to concentrate on the more analytical side of the course in the examination questions.

Some of the topics covered in these notes are not examinable. This situation is indicated by an asterisk at the end of the section heading.

1.6 Versions

These lecture notes are written in Microsoft Word 7.0 for Windows 95. The same Word document is used as the source for both printed and HTML versions. Conversion from Word to HTML is achieved through a combination of custom macros to adjust the formatting and Microsoft Internet Assistant for Word.

1.6.1 Word version

The Word version of the notes is available for those who may wish to alter or print it out. The Word 7.0 file format is interchangeable with Word 6.0.

1.6.2 Notation in HTML formatted notes

The source Word document contains graphics, display equations and inline equations and symbols. All graphics and complex display equations (where the Microsoft Equation Editor has been used) are converted to GIF files for the HTML version. However, many of the simpler equations and most of the inline equations and symbols do not use the Equation Editor as this is very inefficient. As a consequence, they appear as characters rather than GIF files in the HTML document. This has major advantages in terms of document size, but can cause problems with older World Wide Web browsers.

Due to limitations in HTML and many older World Wide Web browsers, Greek and Symbols used within the text and single-line equations may not be displayed correctly. Similarly, some browsers do not handle superscript and subscript. To avoid confusion when using older browsers, all Greek and Symbols are formatted in Green. Thus if you find a green Roman character, read it as the Greek equivalent. Table 1 of the correspondences is given below. Variables and normal symbols are treated in a similar way but are coloured dark Blue to distinguish them from the Greek. The context and colour should distinguish them from HTML hypertext links. Similarly, subscripts are shown in dark Cyan and superscripts in dark Magenta. Greek subscripts and superscripts are the same Green as the normal characters, the context providing the key to whether it is a subscript or superscript. For a similar reason, the use of some mathematical symbols (such as less than or equal to) has been avoided and their Basic computer equivalent used instead.

Fortunately many newer browsers (Microsoft Internet Explorer 3.0 and Netscape 3.0 on the PC, but on many Unix platforms the Greek and Symbol characters are unavailable) do not have the same character set limitations. The colour is still displayed, but the characters appear as intended.

Greek/Symbol character   Name
a    alpha
b    beta
d    delta
D    Delta
e    epsilon
f    phi
F    Phi
l    lambda
m    mu
p    pi
q    theta
s    sigma
y    psi
Y    Psi
<=   less than or equal to
>=   greater than or equal to
<>   not equal to
=~   approximately equal to
vectors are represented as bold

Table 1: Correspondence between colour and characters.
1.6.3 Copyright

These notes may be duplicated freely for the purposes of education or research. Any such reproductions, in whole or in part, should contain details of the author and this copyright notice.
2. Key Idea

The central idea behind the majority of methods discussed in this course is the Taylor Series expansion of a function about a point. For a function of a single variable, we may represent the expansion as

f(x + δx) = f(x) + δx f'(x) + (δx²/2) f''(x) + (δx³/6) f'''(x) + ...   (1)

In two dimensions we have

f(x + δx, y + δy) = f(x,y) + δx ∂f/∂x + δy ∂f/∂y + (δx²/2) ∂²f/∂x² + δx δy ∂²f/∂x∂y + (δy²/2) ∂²f/∂y² + ...   (2)

Similar expansions may be constructed for functions with more independent variables.
3. Root finding in one dimension

3.1 Why?

Solutions x = x0 to equations of the form f(x) = 0 are often required where it is impossible or infeasible to find an analytical expression for the vector x. If the scalar function f depends on m independent variables x1, x2, ..., xm, then the solution x0 will describe a surface in m−1 dimensional space. Alternatively we may consider the vector function f(x) = 0, the solutions of which typically collapse to particular values of x. For this course we restrict our attention to a single independent variable x and seek solutions to f(x) = 0.

3.2 Bisection

This is the simplest method for finding a root to an equation. As we shall see, it is also the most robust. One of the main drawbacks is that we need two initial guesses xa and xb which bracket the root: let fa = f(xa) and fb = f(xb) such that fa fb <= 0. An example of this is shown graphically in figure 1. Clearly, if fa fb = 0 then one or both of xa and xb must be a root of f(x) = 0.

Figure 1: Graphical representation of the bisection method showing two initial guesses (xa and xb bracketing the root).

The basic algorithm for the bisection method relies on repeated application of

Let xc = (xa + xb)/2,
if fc = f(xc) = 0 then x = xc is an exact solution,
else if fa fc < 0 then the root lies in the interval (xa, xc),
else the root lies in the interval (xc, xb).

By replacing the interval (xa, xb) with either (xa, xc) or (xc, xb) (whichever brackets the root), the error in our estimate of the solution to f(x) = 0 is, on average, halved. We repeat this interval halving until either the exact root has been found or the interval is smaller than some specified tolerance.
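A minimal Fortran sketch of this interval halving loop is given below, using the example f(x) = cos x − 1/2 of section 3.7 and an assumed fixed number of halvings; the complete program covering all of the root finding methods appears in section 3.7.7.

      PROGRAM Bisect
C=====Minimal sketch of the bisection loop for f(x) = cos(x) - 1/2,
C=====the example used in section 3.7; 20 halvings reduce the
C=====initial interval by a factor of about a million
      INTEGER*4 i
      REAL*8 x,xa,xb,xc,fa,fc,f
      f(x) = COS(x) - 0.5
      xa = 0.0
      fa = f(xa)
      xb = ASIN(1.0D0)
      DO i=1,20
        xc = (xa + xb)/2.0
        fc = f(xc)
C=======Keep whichever half of the interval still brackets the root
        IF (fa*fc .LT. 0.0) THEN
          xb = xc
        ELSE
          xa = xc
          fa = fc
        ENDIF
      ENDDO
      WRITE(6,*) 'root near ',xc,' interval width ',xb-xa
      END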

3.2.1 Convergence

Since the interval (xa, xb) always brackets the root, we know that the error in using either xa or xb as an estimate for the root at the nth iteration must be en < |xa − xb|. Now since the interval (xa, xb) is halved for each iteration, then

en+1 ~ en/2.   (3)

More generally, if xn is the estimate for the root x* at the nth iteration, then the error in this estimate is

εn = xn − x*.   (4)

In many cases we may express the error at the n+1th time step in terms of the error at the nth time step as

|εn+1| ~ C|εn|^p.   (5)

Indeed this criterion applies to all techniques discussed in this course, but in many cases it applies only asymptotically as our estimate xn converges on the exact solution. The exponent p in equation (5) gives the order of the convergence. The larger the value of p, the faster the scheme converges on the solution, at least provided εn+1 < εn. For first order schemes (i.e. p = 1), |C| < 1 is required for convergence.

For the bisection method we may estimate εn as en. The form of equation (3) then suggests p = 1 and C = 1/2, showing the scheme is first order and converges linearly. Indeed convergence is guaranteed: a root to f(x) = 0 will always be found provided f(x) is continuous over the initial interval.

3.2.2 Criteria

In general, a numerical root finding procedure will not find the exact root being sought (ε = 0), rather it will find some suitably accurate approximation to it. In order to prevent the algorithm continuing to refine the solution for ever, it is necessary to place some conditions under which the solution process is to be finished or aborted. Typically this will take the form of an error tolerance on en = |an − bn|, the value of fc, or both.

For some methods it is also important to ensure the algorithm is converging on a solution (i.e. |εn+1| < |εn| for suitably large n), and that this convergence is sufficiently rapid to attain the solution in a reasonable span of time. The guaranteed convergence of the bisection method does not require such safety checks which, combined with its extreme simplicity, is one of the reasons for its widespread use despite being relatively slow to converge.

3.3 Linear interpolation (regula falsi)

This method is similar to the bisection method in that it requires two initial guesses to bracket the root. However, instead of simply dividing the region in two, a linear interpolation is used to obtain a new point which is (hopefully, but not necessarily) closer to the root than the equivalent estimate for the bisection method. A graphical interpretation of this method is shown in figure 2.

Figure 2: Root finding by the linear interpolation (regula falsi) method. The two initial guesses xa and xb must bracket the root.

The basic algorithm for the linear interpolation method is

Let xc = xa − fa (xb − xa)/(fb − fa), then
if fc = f(xc) = 0 then x = xc is an exact solution,
else if fa fc < 0 then the root lies in the interval (xa, xc),
else the root lies in the interval (xc, xb).

Because the solution remains bracketed at each step, convergence is guaranteed as was the case for the bisection method. The method is first order and is exact for linear f.

3.4 Newton-Raphson

Consider the Taylor Series expansion of f(x) about some point x = x0:

f(x) = f(x0) + (x − x0) f'(x0) + ½(x − x0)² f''(x0) + O(|x − x0|³).   (6)

Setting the quadratic and higher terms to zero and solving the linear approximation of f(x) = 0 for x gives

x1 = x0 − f(x0)/f'(x0).   (7)

Subsequent iterations are defined in a similar manner as

xn+1 = xn − f(xn)/f'(xn).   (8)

Geometrically, xn+1 can be interpreted as the value of x at which a line, passing through the point (xn, f(xn)) and tangent to the curve f(x) at that point, crosses the x axis. Figure 3 provides a graphical interpretation of this.

Figure 3: Graphical interpretation of the Newton-Raphson algorithm.

When it works, Newton-Raphson converges much more rapidly than the bisection or linear interpolation. However, if f' vanishes at an iteration point, or indeed even between the current estimate and the root, then the method will fail to converge. A graphical interpretation of this is given in figure 4.

Figure 4: Divergence of the Newton-Raphson algorithm due to the presence of a turning point close to the root.

3.4.1 Convergence

To study how the Newton-Raphson scheme converges, expand f(x) around the root x = x*,

f(x) = f(x*) + (x − x*) f'(x*) + ½(x − x*)² f''(x*) + O(|x − x*|³),   (9)

and substitute into the iteration formula. This then shows

εn+1 = [f''(x*)/2f'(x*)] εn² + O(εn³),   (10)

since f(x*) = 0. Thus, by comparison with (5), there is second order (quadratic) convergence. The presence of the f' term in the denominator shows that the scheme will not converge if f' vanishes in the neighbourhood of the root.
3.5 Secant (chord)

This method is essentially the same as Newton-Raphson except that the derivative f'(x) is approximated by a finite difference based on the current and the preceding estimate for the root, i.e.

f'(xn) =~ [f(xn) − f(xn−1)]/(xn − xn−1),   (11)

and this is substituted into the Newton-Raphson algorithm (8) to give

xn+1 = xn − f(xn)(xn − xn−1)/[f(xn) − f(xn−1)].   (12)

This formula is identical to that for the Linear Interpolation method discussed in section 3.3. The difference is that rather than replacing one of the two estimates so that the root is always bracketed, the oldest point is always discarded in favour of the new. This means it is not necessary to have two initial guesses bracketing the root, but on the other hand, convergence is not guaranteed. A graphical representation of the method working is shown in figure 5 and failure to converge in figure 6. In some cases, swapping the two initial guesses x0 and x1 will change the behaviour of the method from convergent to divergent.

Figure 5: Convergence on the root using the secant method.

Figure 6: Divergence using the secant method.

3.5.1 Convergence

The order of convergence may be obtained in a similar way to the earlier methods. Expanding around the root x = x* for xn and xn−1 gives

f(xn) = f(x*) + εn f'(x*) + ½εn² f''(x*) + O(|εn|³),   (13a)
f(xn−1) = f(x*) + εn−1 f'(x*) + ½εn−1² f''(x*) + O(|εn−1|³),   (13b)

and substituting into the iteration formula

εn+1 =~ [f''(x*)/2f'(x*)] εn εn−1.   (14)

Note that this expression for εn+1 includes both εn and εn−1. In general we would like it in terms of εn only. The form of this expression suggests a power law relationship. By writing

|εn+1| = C|εn|^p,   (15)

and substituting into the error evolution equation (14) gives

C|εn|^p =~ |f''(x*)/2f'(x*)| |εn| (|εn|/C)^(1/p),   (16)

which we equate with our assumed relationship to show

p = 1 + 1/p,  i.e.  p = (1 + √5)/2 =~ 1.61803.   (17)

Thus the method is of non-integer order 1.61803... (the golden ratio). As with Newton-Raphson, the method may diverge if f' vanishes in the neighbourhood of the root.

3.6 Direct iteration

A simple and often useful method involves rearranging and possibly transforming the function f(x) by T(f(x), x) to obtain g(x) = T(f(x), x). The only restriction on T(f(x), x) is that solutions to f(x) = 0 have a one to one relationship with solutions to g(x) = x for the roots being sought. Indeed, one reason for choosing such a transformation for an equation with multiple roots is to eliminate known roots and thus simplify the location of the remaining roots. The efficiency and convergence of this method depends on the final form of g(x).

The iteration formula for this method is then just

xn+1 = g(xn).   (18)

A graphical interpretation of this formula is given in figure 7.

Figure 7: Convergence on a root using the Direct Iteration method.

3.6.1 Convergence

The convergence of this method may be determined in a similar manner to the other methods by expanding about x*. Here we need to expand g(x) rather than f(x). This gives

g(xn) = g(x*) + εn g'(x*) + ½εn² g''(x*) + O(|εn|³),   (19)

so that the evolution of the error follows

εn+1 = g'(x*) εn + ½g''(x*) εn² + O(|εn|³).   (20)

The method is clearly first order and will converge only if |g'| < 1. The sign of g' determines whether the convergence (or divergence) is monotonic (positive g') or oscillatory (negative g'). Figure 8 shows how the method will diverge if this restriction on g' is not satisfied. Here g' < −1 so the divergence is oscillatory. Obviously our choice of T(f(x), x) should try to minimise g'(x) in the neighbourhood of the root to maximise the rate of convergence. In addition, we should choose T(f(x), x) so that the curvature |g''(x)| does not become too large. If g'(x) < 0, then we get oscillatory convergence/divergence.

Figure 8: The divergence of a Direct Iteration when g' < −1.

3.7 Examples

Consider the equation

f(x) = cos x − 1/2.   (21)

3.7.1 Bisection method

Initial guesses x = 0 and x = π/2.
Expect linear convergence: |εn+1| ~ |εn|/2.

Iteration   Error                    en+1/en
0           0.261799                 0.500001909862
1           0.130900                 0.4999984721161
2           0.0654498                0.5000015278886
3           0.0327250                0.4999969442322
4           0.0163624                0.5000036669437
5           0.00818126               0.4999951107776
6           0.00409059               0.5000110008581
7           0.00204534               0.4999755541866
8           0.00102262               0.5000449824959
9           0.000511356              0.4999139542706
10          0.000255634              0.5001721210794
11          0.000127861              0.4996574405018
12          0.0000638867             0.5006848060707
13          0.0000319871             0.4986322611303
14          0.0000159498             0.50274110020188
15          0.00000801862



3.7.2 Linear interpolation

Initial guesses x = 0 and x = π/2.
Expect linear convergence: |εn+1| ~ c|εn|.

Iteration   Error                    en+1/en
0           0.261799                 0.1213205550823
1           0.0317616                0.0963178807113
2           0.00305921               0.09340810209172
3           0.000285755              0.09312907910623
4           0.0000266121             0.09310313729469
5           0.00000247767            0.09310037252741
6           0.000000230672           0.09310059304987
7           0.0000000214757          0.09310010849472
8           0.00000000199939         0.09310039562066
9           0.000000000186144        0.09310104005501
10          0.0000000000173302       0.09310567679542
11          0.00000000000161354      0.09316100003719
12          0.000000000000150319     0.09374663216227
13          0.0000000000000140919    0.10000070962752
14          0.0000000000000014092    0.1620777746239
15          0.0000000000000002284

Convergence linear, but fast.
3.7.3 Newton-Raphson

Initial guess: x = π/2.
Note that we cannot use x = 0 as the derivative vanishes there.
Expect quadratic convergence: εn+1 ~ cεn².

Iteration   Error               en+1/en               en+1/en²
0           0.0235988           0.00653855280777      0.2770714107399
1           0.000154302         0.0000445311143083    0.2885971297087
2           0.00000000687124    0.000000014553
3           1.0E-15
4           Machine accuracy

Solution found to round-off error (O(10⁻¹⁵)) in three iterations.

3.7.4 Secant method

Initial guesses x = 0 and x = π/2.
Expect convergence: εn+1 ~ cεn^1.618.

Iteration   Error                  en+1/en                 |en+1|/|en|^1.618
0           0.261799               0.1213205550823         0.2777
1           0.0317616              0.09730712558561        0.8203
2           0.00309063             0.009399086917554       0.3344
3           0.0000290491           0.0008898244696049      0.5664
4           0.0000000258486        0.000008384051747483    0.4098
5           0.000000000000216716
6           Machine accuracy

Convergence substantially faster than linear interpolation.

3.7.5 Direct iteration

There are a variety of ways in which equation (21) may be rearranged into the form required for direct iteration.

3.7.5.1 Addition of x

Use

xn+1 = g(xn) = xn + cos xn − 1/2.   (22)

Initial guess: x = 0 (also works with x = π/2).
Expect convergence: εn+1 ~ g'(x*)εn ~ 0.13εn.

Iteration   Error                    en+1/en
0           0.547198                 0.30997006568
1           0.169615                 0.1804233116175
2           0.0306025                0.1417596601585
3           0.00433820               0.1350620072841
4           0.000585926              0.1341210323488
5           0.0000785850             0.1339937647134
6           0.0000105299             0.1339775306508
7           0.00000141077            0.1339750632633
8           0.000000189008           0.1339747523914
9           0.0000000253223          0.1339747969181
10          0.00000000339255         0.1339744440023
11          0.000000000454515        0.1339748963181
12          0.0000000000608936       0.1339759843399
13          0.00000000000815828      0.1339878013503
14          0.00000000000109311      0.1340617138257
15          0.0000000000001465442



3.7.5.2 Multiplication by x

Use

xn+1 = g(xn) = 2xn cos xn.   (23)

Initial guess: x = π/4 (fails with x = 0 as this is a new solution to g(x) = x).
Expect convergence: εn+1 ~ g'(x*)εn ~ −0.81εn.
Iteration   Error          en+1/en
0           0.0635232      0.9577980958138
1           0.0608424      0.6773664418235
2           0.0412126      0.9070721090152
3           0.0373828      0.7297714456916
4           0.0272809      0.8754733164962
5           0.0238837      0.7600455540808
6           0.0181527      0.854809477378
7           0.0155171      0.778843985023
8           0.0120854      0.8410892481838
9           0.0101649      0.7908921878228
10          0.00803934     0.8319464035605
11          0.00668830     0.7987216482514
12          0.00534209     0.8258546748557
13          0.00441179     0.8038528579103
14          0.00354643     0.8218010788314
15          0.00291446

3.7.5.3 Approximating f'(x)

The Direct Iteration method is closely related to the Newton-Raphson method when a particular choice of transformation T(f(x)) is made. Consider

f(x̃) =~ f(x) + (x̃ − x) h(x) = 0.   (24)

Rearranging equation (24) for one of the x variables and labelling the different variables for different steps in the iteration gives

xn+1 = g(xn) = xn − f(xn)/h(xn).   (25)

Now if we choose h(x) such that g'(x) = 0 everywhere (which requires h(x) = f'(x)), then we recover the Newton-Raphson method with its quadratic convergence.

In some situations calculation of f'(x) may not be feasible. In such cases it may be necessary to rely on the first order and secant methods which do not require a knowledge of f'(x). However, the convergence of such methods is very slow. The Direct Iteration method, on the other hand, provides us with a framework for a faster method. To do this we select h(x) as an approximation to f'(x). For the present f(x) = cos x − 1/2 we may approximate f'(x) as

h(x) = 4x(x − π)/π².   (26)

Initial guess: x = π/2 (fails with x = 0 as h(x) vanishes).

Expect convergence: εn+1 ~ g'(x*)εn ~ 0.026εn.

Iteration   Error                     en+1/en
0           0.0235988                 0.02985973863078
1           0.000704654               0.02585084310882
2           0.0000182159              0.02572477890195
3           0.000000468600            0.02572151088348
4           0.0000000120531           0.02572134969427
5           0.000000000310022         0.02572107785899
6           0.00000000000797410       0.02570835580191
7           0.000000000000205001      0.02521207213623
8           0.00000000000000516850
9           Machine accuracy

The convergence, while still formally linear, is significantly more rapid than with the other first order methods. For a more complex example, the computational cost of having more iterations than Newton-Raphson may be significantly less than the cost of evaluating the derivative.

A further potential use of this approach is to avoid the divergence problems associated with f'(x) vanishing in the Newton-Raphson scheme. Since h(x) only approximates f'(x), and the accuracy of this approximation is more important close to the root, it may be possible to choose h(x) in such a way as to avoid a divergent scheme.

3.7.6 Comparison

Figure 9 shows graphically a comparison between the different approaches to finding the roots of equation (21). The clear winner is the Newton-Raphson scheme, with the approximated derivative for the Direct Iteration proving a very good alternative.

Figure 9: Comparison of the convergence of the error in the estimate of the root to cos x = 1/2 for a range of different root finding algorithms.

3.7.7 Fortran program*

The following program was used to generate the data presented for the above examples. Note that this is included as an illustrative example. No knowledge of Fortran or any other programming language is required in this course.

      PROGRAM Roots
      INTEGER*4 i,j
      REAL*8 x,xa,xb,xc,fa,fb,fc,pi,xStar,f,df
      REAL*8 Error(0:15,0:15)
      f(x) = COS(x) - 0.5
      df(x) = -SIN(x)
      pi = 3.141592653
      xStar = ACOS(0.5)
      WRITE(6,*) '#',xStar,f(xStar)
C=====Bisection
      xa = 0
      fa = f(xa)
      xb = pi/2.0
      fb = f(xb)
      DO i=0,15
        xc = (xa + xb)/2.0
        fc = f(xc)
        IF (fa*fc .LT. 0.0) THEN
          xb = xc
          fb = fc
        ELSE
          xa = xc
          fa = fc
        ENDIF
        Error(0,i) = xc - xStar
      ENDDO
C=====Linear interpolation
      xa = 0
      fa = f(xa)
      xb = pi/2.0
      fb = f(xb)
      DO i=0,15
        xc = xa - (xb - xa)/(fb - fa)*fa
        fc = f(xc)
        IF (fa*fc .LT. 0.0) THEN
          xb = xc
          fb = fc
        ELSE
          xa = xc
          fa = fc
        ENDIF
        Error(1,i) = xc - xStar
      ENDDO
C=====Newton-Raphson
      xa = pi/2.0
      DO i=0,15
        xa = xa - f(xa)/df(xa)
        Error(2,i) = xa - xStar
      ENDDO
C=====Secant
      xa = 0
      fa = f(xa)
      xb = pi/2.0
      fb = f(xb)
      DO i=0,15
        IF (fa .NE. fb) THEN
C         If fa = fb then either method has converged (xa = xb)
C         or will diverge from this point
          xc = xa - (xb - xa)/(fb - fa)*fa
          xa = xb
          fa = fb
          xb = xc
          fb = f(xb)
        ENDIF
        Error(3,i) = xc - xStar
      ENDDO
C=====Direct iteration using x + f(x) = x
      xa = 0.0
      DO i=0,15
        xa = xa + f(xa)
        Error(4,i) = xa - xStar
      ENDDO
C=====Direct iteration using x f(x) = 0 rearranged for x
C     Starting point prevents convergence
      xa = pi/2.0
      DO i=0,15
        xa = 2.0*xa*(f(xa) + 0.5)
        Error(5,i) = xa - xStar
      ENDDO
C=====Direct iteration using x f(x) = 0 rearranged for x
      xa = pi/4.0
      DO i=0,15
        xa = 2.0*xa*COS(xa)
        Error(6,i) = xa - xStar
      ENDDO
C=====Direct iteration using 4x(x-pi)/pi/pi to approximate f'
      xa = pi/2.0
      DO i=0,15
        xa = xa - f(xa)*pi*pi/(4.0*xa*(xa - pi))
        Error(7,i) = xa - xStar
      ENDDO
C=====Output results
      DO i=0,15
        WRITE(6,100) i,(Error(j,i),j=0,7)
      ENDDO
  100 FORMAT(1x,i4,8(1x,g12.6))
      END

4. Linear equations

Solving equations of the form Ax = r is central to many numerical algorithms. There are a number of methods which may be used, some algebraically correct, while others are iterative in nature and provide only approximate solutions. Which is best will depend on the structure of A, the context in which it is to be solved and the size compared with the available computer resources.

4.1 Gauss elimination

This is what you would probably do if you were computing the solution of a non-trivial system by hand. For the system

[ a11  a12  ...  a1n ] [ x1 ]   [ r1 ]
[ a21  a22  ...  a2n ] [ x2 ] = [ r2 ]
[  .    .         .  ] [  . ]   [  . ]
[ an1  an2  ...  ann ] [ xn ]   [ rn ]   (27)

we first divide the first row by a11 and then subtract a21 times the new first row from the second row, a31 times the new first row from the third row and an1 times the new first row from the nth row. This gives

(28)

By repeating this process for rows 3 to n, this time using the new contents of element 2,2, we gradually replace the region below the leading diagonal with zeros. Once we have

(29)

the final solution may be obtained by back substitution.

(30)
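As a concrete illustration of this forward elimination and back substitution, a minimal Fortran sketch is given below; the small symmetric test system is an assumption, no pivoting is performed, and non-zero pivots are assumed (see section 4.2 for why that matters). It uses multipliers rather than explicitly normalising each pivot row, which produces the same result.

      PROGRAM Gauss
C=====Minimal sketch of Gauss elimination with back substitution
C=====for a small assumed test system (no pivoting, non-zero pivots)
      INTEGER*4 i,j,k,n
      PARAMETER (n=3)
      REAL*8 a(n,n),r(n),x(n),factor
      DATA a /2.0D0,1.0D0,1.0D0, 1.0D0,3.0D0,1.0D0, 1.0D0,1.0D0,4.0D0/
      DATA r /4.0D0,5.0D0,6.0D0/
C=====Forward elimination: zero the entries below the leading diagonal
      DO i = 1,n-1
        DO j = i+1,n
          factor = a(j,i)/a(i,i)
          DO k = i,n
            a(j,k) = a(j,k) - factor*a(i,k)
          ENDDO
          r(j) = r(j) - factor*r(i)
        ENDDO
      ENDDO
C=====Back substitution, working from the last row upwards
      DO i = n,1,-1
        x(i) = r(i)
        DO j = i+1,n
          x(i) = x(i) - a(i,j)*x(j)
        ENDDO
        x(i) = x(i)/a(i,i)
      ENDDO
      WRITE(6,*) (x(i),i=1,n)
      END

The elimination stage requires O(n³) operations, while the back substitution requires only O(n²).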


If the arithmetic is exact, and the matrix A is not singular, then the answer computed in this manner will be exact (provided no zeros appear on the diagonal: see below). However, as computer arithmetic is not exact, there will be some truncation and rounding error in the answer. The cumulative effect of this error may be very significant if the loss of precision is at an early stage in the computation. In particular, if a numerically small number appears on the diagonal of the row, then its use in the elimination of subsequent rows may lead to differences being computed between very large and very small values with a consequential loss of precision. For example, if a22 − (a21/a11)a12 were very small, 10⁻⁶, say, and both a23 − (a21/a11)a13 and a33 − (a31/a11)a13 were 1, say, then at the next stage of the computation the 3,3 element would involve calculating the difference between 1/10⁻⁶ = 10⁶ and 1. If single precision arithmetic (representing real values using approximately six significant digits) were being used, the result would be simply 1.0 and subsequent calculations would be unaware of the contribution of a23 to the solution. A more extreme case which may often occur is if, for example, a22 − (a21/a11)a12 is zero: unless something is done it will not be possible to proceed with the computation!

A zero value occurring on the leading diagonal does not mean the matrix is singular. Consider, for example, the system

(31)

the solution of which is obviously x1 = x2 = x3 = 1. However, if we were to apply the Gauss Elimination outlined above, we would need to divide through by a11 = 0. Clearly this leads to difficulties!

4.2 Pivoting

One of the ways around this problem is to ensure that small values (especially zeros) do not appear on the diagonal and, if they do, to remove them by rearranging the matrix and vectors. In the example given in (31) we could simply interchange rows one and two to produce

(32)

or columns one and two to give

(33)

either of which may then be solved using standard Gauss Elimination.

More generally, suppose at some stage during a calculation we have

(34)

where the element 2,5 (201) is numerically the largest value in the second row and the element 6,2 (155) the numerically largest value in the second column. As discussed above, the very small 10⁻⁶ value for element 2,2 is likely to cause problems. (In an extreme case we might even have the value 0 appearing on the diagonal: clearly something must be done to avoid a divide by zero error occurring!) To remove this problem we may again rearrange the rows and/or columns to bring a larger value into the 2,2 element.

4.2.1 Partial pivoting

In partial or column pivoting, we rearrange the rows of the matrix and the right hand side to bring the numerically largest value in the column onto the diagonal. For our example matrix the largest value is in element 6,2 and so we simply swap rows 2 and 6 to give

(35)

Note that our variables remain in the same order which simplifies the implementation of this procedure. The right hand side vector, however, has been rearranged. Partial pivoting may be implemented for every step of the solution process, or only when the diagonal values are sufficiently small as to potentially cause a problem. Pivoting for every step will lead to smaller errors being introduced through numerical inaccuracies, but the continual reordering will slow down the calculation.
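A minimal Fortran sketch of Gauss elimination with this row search and swap added at every step is given below. The 3 by 3 test system is an assumption chosen so that, as in (31), a zero sits in the 1,1 element and plain elimination would fail, while the solution is again x1 = x2 = x3 = 1.

      PROGRAM PartPiv
C=====Minimal sketch of Gauss elimination with partial (column)
C=====pivoting for a small assumed test system whose first diagonal
C=====element is zero
      INTEGER*4 i,j,k,n,p
      PARAMETER (n=3)
      REAL*8 a(n,n),r(n),x(n),factor,swap
      DATA a /0.0D0,2.0D0,1.0D0, 1.0D0,1.0D0,2.0D0, 2.0D0,1.0D0,1.0D0/
      DATA r /3.0D0,4.0D0,4.0D0/
      DO i = 1,n-1
C=======Find the row p with the numerically largest value in column i
        p = i
        DO j = i+1,n
          IF (ABS(a(j,i)) .GT. ABS(a(p,i))) p = j
        ENDDO
C=======Swap rows i and p of the matrix and the right hand side
        DO k = 1,n
          swap = a(i,k)
          a(i,k) = a(p,k)
          a(p,k) = swap
        ENDDO
        swap = r(i)
        r(i) = r(p)
        r(p) = swap
C=======Eliminate below the diagonal as before
        DO j = i+1,n
          factor = a(j,i)/a(i,i)
          DO k = i,n
            a(j,k) = a(j,k) - factor*a(i,k)
          ENDDO
          r(j) = r(j) - factor*r(i)
        ENDDO
      ENDDO
C=====Back substitution
      DO i = n,1,-1
        x(i) = r(i)
        DO j = i+1,n
          x(i) = x(i) - a(i,j)*x(j)
        ENDDO
        x(i) = x(i)/a(i,i)
      ENDDO
      WRITE(6,*) (x(i),i=1,n)
      END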

4.2.2 Full pivoting

The philosophy behind full pivoting is much the same as that behind partial pivoting. The main difference is that the numerically largest value in the column or row containing the value to be replaced is used. In our example above the magnitude of element 2,5 (201) is the greatest in either row 2 or column 2 so we shall rearrange the columns to bring this element onto the diagonal. This will also entail a rearrangement of the solution vector x. The rearranged system becomes

(36)

The ultimate degree of accuracy can be provided by rearranging both rows and columns so that the numerically largest value in the submatrix not yet processed is brought onto the diagonal. In our example above, the largest value is 6003 occurring at position 4,6 in the matrix. We may bring this onto the diagonal for the next step by interchanging columns one and six and rows two and four. The order in which we do this is unimportant. The final result is

(37)

Again this process may be undertaken for every step, or only when the value on the diagonal is considered too small relative to the other values in the matrix.

If it is not possible to rearrange the columns or rows to remove a zero from the diagonal, then the matrix A is singular and no solution exists.

4.3 LU factorisation

A frequently used form of Gauss Elimination is LU Factorisation, also known as LU Decomposition or Crout Factorisation. The basic idea is to find two matrices L and U such that LU = A, where L is a lower triangular matrix (zero above the leading diagonal) and U is an upper triangular matrix (zero below the diagonal). Note that this decomposition is under-specified in that we may choose the relative scale of the two matrices arbitrarily. By convention, the L matrix is scaled to have a leading diagonal of unit values. Once we have computed L and U we need solve only Ly = b then Ux = y, a procedure requiring O(n²) operations compared with O(n³) operations for the full Gauss elimination. While the factorisation process requires O(n³) operations, this need be done only once whereas we may wish to solve Ax = b for a whole range of b.

Since we have decided the diagonal elements lii in the lower triangular matrix will always be unity, it is not necessary for us to store these elements and so the matrices L and U can be stored together in an array the same size as that used for A. Indeed, in most implementations the factorisation will simply overwrite A.

The basic decomposition algorithm for overwriting A with L and U may be expressed as
# Factorisation
FOR i = 1 TO n
  FOR p = i TO n

  NEXT p
  FOR q = i+1 TO n

  NEXT q
NEXT i
# Forward Substitution
FOR i = 1 TO n
  FOR q = n+1 TO n+m

  NEXT q
NEXT i
# Back Substitution
FOR i = n-1 TO 1
  FOR q = n+1 TO n+m

  NEXT q
NEXT i

This algorithm assumes the right hand side(s) are initially stored in the same array structure as the matrix and are positioned in the column(s) n+1 (to n+m for m right hand sides). To improve the efficiency of the computation for right hand sides known in advance, the forward substitution loop may be incorporated into the factorisation loop.

Figure 10 indicates how the LU Factorisation process works. We want to find vectors li^T and uj such that aij = li^T uj. When we are at the stage of calculating the ith element of uj, we will already have the i non-zero elements of li^T and the first i−1 elements of uj. The ith element of uj may therefore be chosen simply as uj(i) = aij − li^T uj, where the dot product is calculated assuming uj(i) is zero.

Figure 10: Diagrammatic representation of how LU factorisation works for calculating uij to replace aij where i < j. The white areas represent zeros in the L and U matrices.
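A minimal Fortran sketch of this factorisation (unit diagonal on L, the factors overwriting A, no pivoting) is given below; the small test system and single right hand side are assumptions.

      PROGRAM LUDemo
C=====Minimal sketch of LU factorisation with L having a unit diagonal,
C=====L and U overwriting A, followed by forward and back substitution
C=====for a small assumed test system (no pivoting, non-zero pivots)
      INTEGER*4 i,j,k,n
      PARAMETER (n=3)
      REAL*8 a(n,n),r(n)
      DATA a /2.0D0,1.0D0,1.0D0, 1.0D0,3.0D0,1.0D0, 1.0D0,1.0D0,4.0D0/
      DATA r /4.0D0,5.0D0,6.0D0/
C=====Factorisation: row i of U, then column i of L scaled by 1/u(i,i)
      DO i = 1,n
        DO j = i,n
          DO k = 1,i-1
            a(i,j) = a(i,j) - a(i,k)*a(k,j)
          ENDDO
        ENDDO
        DO j = i+1,n
          DO k = 1,i-1
            a(j,i) = a(j,i) - a(j,k)*a(k,i)
          ENDDO
          a(j,i) = a(j,i)/a(i,i)
        ENDDO
      ENDDO
C=====Forward substitution (L y = r; unit diagonal so no division)
      DO i = 2,n
        DO k = 1,i-1
          r(i) = r(i) - a(i,k)*r(k)
        ENDDO
      ENDDO
C=====Back substitution (U x = y; solution overwrites r)
      DO i = n,1,-1
        DO k = i+1,n
          r(i) = r(i) - a(i,k)*r(k)
        ENDDO
        r(i) = r(i)/a(i,i)
      ENDDO
      WRITE(6,*) (r(i),i=1,n)
      END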

As with normal Gauss Elimination, the potential occurrence of small or zero values on the diagonal can cause computational difficulties. The solution is again pivoting: partial pivoting is normally all that is required. However, if the matrix is to be used in its factorised form, it will be essential to record the pivoting which has taken place. This may be achieved by simply recording the row interchanges for each i in the above algorithm and using the same row interchanges on the right hand side when using L in subsequent forward substitutions.

4.4 Banded matrices

The LU Factorisation may readily be modified to account for banded structure such that the only non-zero elements fall within some distance of the leading diagonal. For example, if elements outside the range ai,i−b to ai,i+b are all zero, then the summations in the LU Factorisation algorithm need be performed only from k = i or k = i+1 to k = i+b. Moreover, the factorisation loop FOR q = i+1 TO n can terminate at i+b instead of n.

One problem with such banded structures can occur if a (near) zero turns up on the diagonal during the factorisation. Care must then be taken in any pivoting to try to maintain the banded structure. This may require, for example, pivoting on both the rows and columns as described in section 4.2.2.

Making use of the banded structure of a matrix can save substantially on the execution time and, if the matrix is stored intelligently, on the storage requirements. Software libraries such as NAG and IMSL provide a range of routines for solving such banded linear systems in a computationally and storage efficient manner.
4.5 Tridiagonal matrices

A tridiagonal matrix is a special form of banded matrix where all the elements are zero except for those on and immediately above and below the leading diagonal (b = 1). It is sometimes possible to rearrange the rows and columns of a matrix which does not initially have this structure in order to gain this structure and hence greatly simplify the solution process. As we shall see later in sections 6 to 8, tridiagonal matrices frequently occur in numerical solution of differential equations.

A tridiagonal system may be written as

ai xi−1 + bi xi + ci xi+1 = ri   (38)

for i = 1, ..., n. Clearly x0 and xn+1 are not required and we set a1 = cn = 0 to reflect this. Solution, by analogy with the LU Factorisation, may be expressed as

# Factorisation
FOR i = 1 TO n
  bi = bi − ai ci−1
  ci = ci/bi
NEXT i
# Forward Substitution
FOR i = 1 TO n
  ri = (ri − ai ri−1)/bi
NEXT i
# Back Substitution
FOR i = n-1 TO 1
  ri = ri − ci ri+1
NEXT i
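A minimal Fortran sketch of this tridiagonal algorithm follows; the test system (a constant-coefficient system whose solution is xi = 1 for all i) is an assumption.

      PROGRAM TriDiag
C=====Minimal sketch of the tridiagonal algorithm above for a small
C=====assumed test system; a holds the sub-diagonal, b the diagonal,
C=====c the super-diagonal and r the right hand side
      INTEGER*4 i,n
      PARAMETER (n=5)
      REAL*8 a(n),b(n),c(n),r(n)
      DO i = 1,n
        a(i) = -1.0D0
        b(i) =  2.0D0
        c(i) = -1.0D0
        r(i) =  0.0D0
      ENDDO
      a(1) = 0.0D0
      c(n) = 0.0D0
      r(1) = 1.0D0
      r(n) = 1.0D0
C=====Factorisation
      DO i = 1,n
        IF (i .GT. 1) b(i) = b(i) - a(i)*c(i-1)
        c(i) = c(i)/b(i)
      ENDDO
C=====Forward substitution
      r(1) = r(1)/b(1)
      DO i = 2,n
        r(i) = (r(i) - a(i)*r(i-1))/b(i)
      ENDDO
C=====Back substitution (the solution overwrites r)
      DO i = n-1,1,-1
        r(i) = r(i) - c(i)*r(i+1)
      ENDDO
      WRITE(6,*) (r(i),i=1,n)
      END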

4.6 Other approaches to solving linear systems

There are a number of other methods for solving general linear systems of equations, including approximate iterative techniques. Many large matrices which need to be solved in practical situations have very special structures which allow solution, either exact or approximate, much faster than the general O(n³) solvers presented here. We shall return to this topic in section 8.1 where we shall discuss a system with a special structure resulting from the numerical solution of the Laplace equation.

4.7 Over determined systems*

If the matrix A contains m rows and n columns, with m > n, the system is probably over determined (unless there are m−n redundant rows). While the solution to Ax = r will not exist in an algebraic sense, it can be valuable to determine the solution in an approximate sense. The error in this approximate solution is then e = Ax − r. The approximate solution is chosen by optimising this error in some manner. Most useful among the classes of solution is the Least Squares solution. In this solution we minimise the residual sum of squares, which is simply rss = e^T e. Substituting for e we obtain

rss = e^T e
    = [x^T A^T − r^T][Ax − r]
    = x^T A^T A x − 2x^T A^T r + r^T r,   (39)

and setting the derivative of this with respect to x to zero gives

A^T A x = A^T r.   (40)

Thus, if we solve the n by n problem A^T A x = A^T r, the solution vector x will give us the solution in a least squares sense.
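As an illustration, a minimal Fortran sketch of this normal equations approach for a straight line fit y = x1 + x2 t is given below; the data points are assumed, and because A^T A is only 2 by 2 here it is solved directly rather than by the Gauss elimination that would be used in general.

      PROGRAM LSQFit
C=====Minimal sketch of the least squares solution via the normal
C=====equations (A**T A) x = A**T r for a straight line fit
C=====y = x(1) + x(2)*t through a small set of assumed data points
      INTEGER*4 i,m
      PARAMETER (m=5)
      REAL*8 t(m),y(m),ata(2,2),atr(2),x(2),det
      DATA t /1.0D0,2.0D0,3.0D0,4.0D0,5.0D0/
      DATA y /2.1D0,2.9D0,4.2D0,5.1D0,5.8D0/
C=====Each row of A is (1, t(i)); accumulate A**T A and A**T r
      ata(1,1) = 0.0D0
      ata(1,2) = 0.0D0
      ata(2,2) = 0.0D0
      atr(1) = 0.0D0
      atr(2) = 0.0D0
      DO i = 1,m
        ata(1,1) = ata(1,1) + 1.0D0
        ata(1,2) = ata(1,2) + t(i)
        ata(2,2) = ata(2,2) + t(i)*t(i)
        atr(1) = atr(1) + y(i)
        atr(2) = atr(2) + t(i)*y(i)
      ENDDO
      ata(2,1) = ata(1,2)
C=====Solve the 2 by 2 normal equations directly
      det = ata(1,1)*ata(2,2) - ata(1,2)*ata(2,1)
      x(1) = (ata(2,2)*atr(1) - ata(1,2)*atr(2))/det
      x(2) = (ata(1,1)*atr(2) - ata(2,1)*atr(1))/det
      WRITE(6,*) 'intercept ',x(1),' slope ',x(2)
      END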

Warning: The matrix A^T A is often poorly conditioned (nearly singular) and can lead to significant errors in the resulting Least Squares solution due to rounding error. While these errors may be reduced using pivoting in combination with Gauss Elimination, it is generally better to solve the Least Squares problem using the Householder transformation, as this produces less rounding error, or better still by Singular Value Decomposition which will highlight any redundant or nearly redundant variables in x.

The Householder transformation avoids the poorly conditioned nature of A^T A by solving the problem directly without evaluating this matrix. Suppose Q is an orthogonal matrix such that

Q^T Q = I,   (41)

where I is the identity matrix and Q is chosen to transform A into

QA = [ R ]
     [ 0 ],   (42)

where R is a square matrix of size n and 0 is a zero matrix of size m−n by n. The right hand side of the system QAx = Qr becomes

Qr = [ b ]
     [ c ],   (43)

where b is a vector of size n and c is a vector of size m−n.

Now the turning point (global minimum) in the residual sum of squares (40) occurs when

(44)

vanishes. For a non-trivial solution, that occurs when

Rx = b.   (45)

This system may be solved to obtain the least squares solution x using any of the normal linear solvers discussed above.

Further discussion of these methods is beyond the scope of this course.

4.8 Under determined systems*

If the matrix A contains m rows and n columns, with m < n, the system is under determined. The solution maps out an n−m dimensional subregion in n dimensional space. Solution of such systems typically requires some form of optimisation in order to further constrain the solution vector.

Linear programming represents one method for solving such systems. In Linear Programming, the solution is optimised such that the objective function z = c^T x is minimised. The "Linear" indicates that the under determined system of equations is linear and the objective function is linear in the solution variable x. The "Programming" arose to enhance the chances of obtaining funding for research into this area when it was developing in the 1960s.

5. Numerical integration

There are two main reasons for you to need to do numerical integration: analytical integration may be impossible or infeasible, or you may wish to integrate tabulated data rather than known functions. In this section we outline the main approaches to numerical integration. Which is preferable depends in part on the results required, and in part on the function or data to be integrated.

5.1 Manual method

If you were to perform the integration by hand, one approach is to superimpose a grid on a graph of the function to be integrated, and simply count the squares, counting only those covered by 50% or more of the function. Provided the grid is sufficiently fine, a reasonably accurate estimate may be obtained. Figure 11 demonstrates how this may be achieved.

Figure 11: Manual method for determining the integral by superimposing a grid on a graph of the integrand. The boxes indicated in grey are counted.

5.2 Trapezium rule

Consider the Taylor Series expansion integrated from x0 to x0+Δx:

∫[x0, x0+Δx] f(x) dx = ½[f(x0) + f(x0+Δx)]Δx − (Δx³/12) f''(x0) + O(Δx⁴).   (46)

The approximation represented by ½[f(x0) + f(x0+Δx)]Δx is called the Trapezium Rule, based on its geometric interpretation as shown in figure 12.

Figure 12: Graphical interpretation of the trapezium rule.

As we can see from equation (46), the error in the Trapezium Rule is proportional to Δx³. Thus, if we were to halve Δx, the error would be decreased by a factor of eight. However, the size of the domain would be halved, thus requiring the Trapezium Rule to be evaluated twice and the contributions summed. The net result is the error decreasing by a factor of four rather than eight. The Trapezium Rule used in this manner is sometimes termed the Compound Trapezium Rule, but more often simply the Trapezium Rule. In general it consists of the sum of integrations over a smaller distance Δx to obtain a smaller error.

Suppose we need to integrate from x0 to x1. We shall subdivide this interval into n steps of size Δx = (x1 − x0)/n as shown in figure 13.

Figure 13: Compound Trapezium Rule.

The Compound Trapezium Rule approximation to the integral is therefore

∫[x0, x1] f(x) dx =~ Δx [½f(x0) + f(x0+Δx) + f(x0+2Δx) + ... + f(x1−Δx) + ½f(x1)].   (47)

While the error for each step is O(Δx³), the cumulative error is n times this or O(nΔx³) ~ O(Δx²) ~ O(n⁻²).

The above analysis assumes Δx is constant over the interval being integrated. This is not necessary and an extension to this procedure to utilise a smaller step size Δxi in regions of high curvature would reduce the total error in the calculation, although it would remain O(Δx²). We would choose to reduce Δx in the regions of high curvature as we can see from equation (46) that the leading order truncation error is scaled by f''.

5.3 Midpoint rule

A variant on the Trapezium Rule is obtained by integrating the Taylor Series from x0−Δx/2 to x0+Δx/2:

∫[x0−Δx/2, x0+Δx/2] f(x) dx = f(x0)Δx + (Δx³/24) f''(x0) + ...   (48)

By evaluating the function f(x) at the midpoint of each interval the error may be slightly reduced relative to the Trapezium rule (the coefficient in front of the curvature term is 1/24 for the Midpoint Rule compared with 1/12 for the Trapezium Rule) but the method remains of the same order. Figure 14 provides a graphical interpretation of this approach.

Figure 14: Graphical interpretation of the midpoint rule. The grey region defines the midpoint rule as a rectangular approximation with the dashed lines showing alternative trapezoidal approximations containing the same area.

Again we may reduce the error when integrating the interval x0 to x1 by subdividing it into n smaller steps. This Compound Midpoint Rule is then

∫[x0, x1] f(x) dx =~ Δx [f(x0+Δx/2) + f(x0+3Δx/2) + ... + f(x1−Δx/2)],   (49)

with the graphical interpretation shown in figure 15. The difference between the Trapezium Rule and Midpoint Rule is greatly diminished in their compound forms. Comparison of equations (47) and (49) shows the only difference is in the phase relationship between the points used and the domain, plus how the first and last intervals are calculated.

Figure 15: Compound Midpoint Rule.

There are two further advantages of the Midpoint Rule over the Trapezium Rule. The first is that it requires one fewer function evaluation for a given number of subintervals, and the second that it can be used more effectively for determining the integral near an integrable singularity. The reasons for this are clear from figure 16.

Figure 16: Applying the Midpoint Rule where the singular integrand would cause the Trapezium Rule to fail.
5.4 Simpson's rule

An alternative approach to decreasing the step size Δx for the integration is to increase the accuracy of the functions used to approximate the integrand. Integrating the Taylor series over an interval 2Δx shows

∫[x0, x0+2Δx] f(x) dx = (Δx/3)[f(x0) + 4f(x0+Δx) + f(x0+2Δx)] + O(Δx⁵).   (50)

Whereas the error in the Trapezium rule was O(Δx³), Simpson's rule is two orders more accurate at O(Δx⁵), giving exact integration of cubics.

To improve the accuracy when integrating over larger intervals, the interval x0 to x1 may again be subdivided into n steps. The three point evaluation for each subinterval requires that there are an even number of subintervals. Hence we must be able to express the number of intervals as n = 2m. The Compound Simpson's rule is then

∫[x0, x1] f(x) dx =~ (Δx/3)[f(x0) + 4f(x0+Δx) + 2f(x0+2Δx) + 4f(x0+3Δx) + ... + 4f(x1−Δx) + f(x1)],   (51)

and the corresponding error O(nΔx⁵) or O(Δx⁴).

5.5 Quadratic triangulation*

Simpson's Rule may be employed in a manual way to determine the integral with nothing more than a ruler. The approach is to cover the domain to be integrated with a triangle or trapezium (whichever is geometrically more appropriate) as is shown in figure 17. The integrand may cross the side of the trapezium (triangle) connecting the end points. For each arc-like region so created (there are two in figure 17) the maximum deviation (indicated by arrows in figure 17) from the line should be measured, as should the length of the chord joining the points of crossing. From Simpson's rule we may approximate the area between each of these arcs and the chord as

area = (2/3) chord × maxDeviation,   (52)

remembering that some increase the area while others decrease it relative to the initial trapezoidal (triangular) estimate. The overall estimate (ignoring linear measurement errors) will be O(l⁵), where l is the length of the (longest) chord.

Figure 17: Quadratic triangulation to determine the area using a manual combination of the Trapezium and Simpson's Rules.
.e
5.6 Romberg integration

With the Compound Trapezium Rule we know from section 5.2 that the error in some estimate T(Δx) of the integral I using a step size Δx goes like cΔx² as Δx → 0, for some constant c. Likewise the error in T(Δx/2) will be cΔx²/4. From this we may construct a revised estimate T^(1)(Δx/2) for I as a weighted mean of T(Δx) and T(Δx/2):

T^(1)(Δx/2) = αT(Δx/2) + (1−α)T(Δx)
            = α[I + cΔx²/4 + O(Δx⁴)] + (1−α)[I + cΔx² + O(Δx⁴)].   (53)

By choosing the weighting factor α = 4/3 we eliminate the leading order (O(Δx²)) error terms, relegating the error to O(Δx⁴). Thus we have

T^(1)(Δx/2) = [4T(Δx/2) − T(Δx)]/3.   (54)

Comparison with equation (51) shows that this formula is precisely that for Simpson's Rule.

This same process may be carried out to higher orders using Δx/4, Δx/8, ... to eliminate the higher order error terms. For the Trapezium Rule the errors are all even powers of Δx and as a result it can be shown that

T^(m)(Δx/2) = [2^2m T^(m−1)(Δx/2) − T^(m−1)(Δx)]/(2^2m − 1).   (55)

A similar process may also be applied to the Compound Simpson's Rule.
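A minimal Fortran sketch of this process follows, applied to the integral of sin(x) from 0 to π used in section 5.8; the table T(j,m) holds the level m estimate built from 2^(j−1) intervals, and the number of levels is an assumption.

      PROGRAM Romberg
C=====Minimal sketch of Romberg integration built on the Compound
C=====Trapezium Rule for the integral of sin(x) from 0 to pi
C=====(the example of section 5.8); T(j,m) is the level m estimate
      INTEGER*4 i,j,m,nx,nlev
      PARAMETER (nlev=5)
      REAL*8 T(nlev,nlev),x0,x1,dx,sum,pi,f,x
      f(x) = SIN(x)
      pi = 2.0D0*ASIN(1.0D0)
      x0 = 0.0D0
      x1 = pi
C=====Compound Trapezium Rule with 1, 2, 4, ... intervals
      nx = 1
      DO j = 1,nlev
        dx = (x1 - x0)/DFLOAT(nx)
        sum = 0.5D0*(f(x0) + f(x1))
        DO i = 1,nx-1
          sum = sum + f(x0 + DFLOAT(i)*dx)
        ENDDO
        T(j,1) = sum*dx
        nx = 2*nx
      ENDDO
C=====Eliminate the even-power error terms using equation (55)
      DO m = 2,nlev
        DO j = m,nlev
          T(j,m) = (2.0D0**(2*(m-1))*T(j,m-1) - T(j-1,m-1))
     &             /(2.0D0**(2*(m-1)) - 1.0D0)
        ENDDO
      ENDDO
      WRITE(6,*) 'Romberg estimate ',T(nlev,nlev),' exact ',2.0D0
      END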

5.7 Gauss quadrature

By careful selection of the points at which the function is evaluated it is possible to increase the precision for a given number of function evaluations. The Midpoint rule is an example of this: with just a single function evaluation it obtains the same order of accuracy as the Trapezium Rule (which requires two points).

One widely used example of this is Gauss quadrature, which enables exact integration of cubics with only two function evaluations (in contrast Simpson's Rule, which is also exact for cubics, requires three function evaluations). Gauss quadrature has the formula

∫[x0, x0+Δx] f(x) dx =~ (Δx/2)[f(x0 + (½ − √3/6)Δx) + f(x0 + (½ + √3/6)Δx)].   (56)

In general it is possible to choose M function evaluations per interval to obtain a formula exact for all polynomials of degree 2M−1 and less.

The Gauss Quadrature accurate to order 2M−1 may be determined using the same approach required for the two point scheme. This may be derived by comparing the Taylor Series expansion for the integral with that for the points x0 + αΔx and x0 + βΔx:

∫[x0, x0+Δx] f(x) dx = Δx f(x0) + (Δx²/2) f'(x0) + (Δx³/6) f''(x0) + O(Δx⁴),
(Δx/2)[f(x0+αΔx) + f(x0+βΔx)] = Δx f(x0) + (Δx²/2)(α+β) f'(x0) + (Δx³/4)(α²+β²) f''(x0) + O(Δx⁴).   (57)

Equating the various terms reveals

α + β = 1,
(α² + β²)/4 = 1/6,   (58)

the solution of which gives the positions stated in equation (56).
5.8Exampleofnumericalintegration

Considertheintegral

(59)
,
whichmaybeintegratednumericallyusinganyofthemethodsdescribedinthe
previoussections.Table2givestheerrorinthenumericalestimatesforthe
TrapeziumRule,MidpointRule,Simpson'sRuleandGaussQuadrature.The
resultsarepresentedintermsofthenumberoffunctionevaluationsrequired.
Thecalculationswereperformedindoubleprecision.

No. of evaluations   Trapezium Rule   Midpoint Rule    Simpson's Rule   Gauss Quadrature
1                                     1.14189790E+00
2                    2.00000000E+00   2.21441469E-01                    6.41804253E-02
4                    4.29203673E-01   5.23443059E-02   9.43951023E-02   3.05477319E-03
8                    1.03881102E-01   1.29090855E-02   4.55975498E-03   1.79666460E-04
16                   2.57683980E-02   3.21637816E-03   2.69169948E-04   1.10640837E-05
32                   6.42965622E-03   8.03416309E-04   1.65910479E-05   6.88965642E-07
64                   1.60663902E-03   2.00811728E-04   1.03336941E-06   4.30208237E-08
128                  4.01611359E-04   5.02002859E-05   6.45300022E-08   2.68818500E-09
256                  1.00399815E-04   1.25499060E-05   4.03225719E-09   1.68002278E-10
512                  2.50997649E-05   3.13746618E-06   2.52001974E-10   1.04984909E-11
1024                 6.27492942E-06   7.84365898E-07   1.57500679E-11   6.55919762E-13
2048                 1.56873161E-06   1.96091438E-07   9.82769421E-13
4096                 3.92182860E-07   4.90228564E-08
8192                 9.80457133E-08   1.22557182E-08
16384                2.45114248E-08   3.06393221E-09
32768                6.12785222E-09   7.65979280E-10
65536                1.53194190E-09   1.91497040E-10
131072               3.82977427E-10   4.78341810E-11
262144               9.57223189E-11   1.19970700E-11
524288               2.39435138E-11   3.03357339E-12
1048576              5.96145355E-12

Table 2: Error in numerical integration of (59) as a function of the number of function evaluations.
5.8.1 Program for numerical integration*

Note that this program is written for clarity rather than speed. The number of function evaluations actually computed may be approximately halved for the Trapezium rule and reduced by one third for Simpson's rule if the compound formulations are used. Note also that this example is included for illustrative purposes only. No knowledge of Fortran or any other programming language is required in this course.

      PROGRAM Integrat
      REAL*8 x0,x1,Value,Exact,pi
      INTEGER*4 i,j,nx
C=====Functions
      REAL*8 TrapeziumRule
      REAL*8 MidpointRule
      REAL*8 SimpsonsRule
      REAL*8 GaussQuad
C=====Constants
      pi = 2.0*ASIN(1.0D0)
      Exact = 2.0
C=====Limits
      x0 = 0.0
      x1 = pi
C=======================================================================
C=    Trapezium rule                                                   =
C=======================================================================
      WRITE(6,*)
      WRITE(6,*) 'Trapezium rule'
      nx = 1
      DO i=1,20
        Value = TrapeziumRule(x0,x1,nx)
        WRITE(6,*) nx,Value,Value-Exact
        nx = 2*nx
      ENDDO
C=======================================================================
C=    Midpoint rule                                                    =
C=======================================================================
      WRITE(6,*)
      WRITE(6,*) 'Midpoint rule'
      nx = 1
      DO i=1,20
        Value = MidpointRule(x0,x1,nx)
        WRITE(6,*) nx,Value,Value-Exact
        nx = 2*nx
      ENDDO
C=======================================================================
C=    Simpson's rule                                                   =
C=======================================================================
      WRITE(6,*)
      WRITE(6,*) 'Simpson''s rule'
      WRITE(6,*)
      nx = 2
      DO i=1,10
        Value = SimpsonsRule(x0,x1,nx)
        WRITE(6,*) nx,Value,Value-Exact
        nx = 2*nx
      ENDDO
C=======================================================================
C=    Gauss Quadrature                                                 =
C=======================================================================
      WRITE(6,*)
      WRITE(6,*) 'Gauss quadrature'
      nx = 1
      DO i=1,10
        Value = GaussQuad(x0,x1,nx)
        WRITE(6,*) nx,Value,Value-Exact
        nx = 2*nx
      ENDDO
      END

      FUNCTION f(x)
C=====parameters
      REAL*8 x,f
      f = SIN(x)
      RETURN
      END

      REAL*8 FUNCTION TrapeziumRule(x0,x1,nx)
C=====parameters
      INTEGER*4 nx
      REAL*8 x0,x1
C=====functions
      REAL*8 f
C=====local variables
      INTEGER*4 i
      REAL*8 dx,xa,xb,fa,fb,Sum
      dx = (x1-x0)/DFLOAT(nx)
      Sum = 0.0
      DO i=0,nx-1
        xa = x0 + DFLOAT(i)*dx
        xb = x0 + DFLOAT(i+1)*dx
        fa = f(xa)
        fb = f(xb)
        Sum = Sum + fa + fb
      ENDDO
      Sum = Sum*dx/2.0
      TrapeziumRule = Sum
      RETURN
      END

      REAL*8 FUNCTION MidpointRule(x0,x1,nx)
C=====parameters
      INTEGER*4 nx
      REAL*8 x0,x1
C=====functions
      REAL*8 f
C=====local variables
      INTEGER*4 i
      REAL*8 dx,xa,fa,Sum
      dx = (x1-x0)/DFLOAT(nx)
      Sum = 0.0
      DO i=0,nx-1
        xa = x0 + (DFLOAT(i)+0.5)*dx
        fa = f(xa)
        Sum = Sum + fa
      ENDDO
      Sum = Sum*dx
      MidpointRule = Sum
      RETURN
      END

      REAL*8 FUNCTION SimpsonsRule(x0,x1,nx)
C=====parameters
      INTEGER*4 nx
      REAL*8 x0,x1
C=====functions
      REAL*8 f
C=====local variables
      INTEGER*4 i
      REAL*8 dx,xa,xb,xc,fa,fb,fc,Sum
      dx = (x1-x0)/DFLOAT(nx)
      Sum = 0.0
      DO i=0,nx-1,2
        xa = x0 + DFLOAT(i)*dx
        xb = x0 + DFLOAT(i+1)*dx
        xc = x0 + DFLOAT(i+2)*dx
        fa = f(xa)
        fb = f(xb)
        fc = f(xc)
        Sum = Sum + fa + 4.0*fb + fc
      ENDDO
      Sum = Sum*dx/3.0
      SimpsonsRule = Sum
      RETURN
      END

      REAL*8 FUNCTION GaussQuad(x0,x1,nx)
C=====parameters
      INTEGER*4 nx
      REAL*8 x0,x1
C=====functions
      REAL*8 f
C=====local variables
      INTEGER*4 i
      REAL*8 dx,xa,xb,fa,fb,Sum,dxl,dxr
      dx = (x1-x0)/DFLOAT(nx)
      dxl = dx*(0.5D0 - SQRT(3.0D0)/6.0D0)
      dxr = dx*(0.5D0 + SQRT(3.0D0)/6.0D0)
      Sum = 0.0
      DO i=0,nx-1
        xa = x0 + DFLOAT(i)*dx + dxl
        xb = x0 + DFLOAT(i)*dx + dxr
        fa = f(xa)
        fb = f(xb)
        Sum = Sum + fa + fb
      ENDDO
      Sum = Sum*dx/2.0
      GaussQuad = Sum
      RETURN
      END

6. First order ordinary differential equations

6.1 Taylor series

The key idea behind numerical solution of odes is the combination of function values at different points or times to approximate the derivatives in the required equation. The manner in which the function values are combined is determined by the Taylor Series expansion for the point at which the derivative is required. This gives us a finite difference approximation to the derivative.

6.2 Finite difference

Consider a first order ode of the form

dy/dt = f(t, y),   (60)

subject to some boundary/initial condition y(t = t0) = c. The finite difference solution of this equation proceeds by discretising the independent variable t to t0, t0+Δt, t0+2Δt, t0+3Δt, ... We shall denote the exact solution at some t = tn = t0+nΔt by yn = y(t = tn) and our approximate solution by Yn. We then look to solve

Y'n = f(tn, Yn)   (61)

at each of the points in the domain.

If we take the Taylor Series expansion for the grid points in the neighbourhood of some point t = tn,

y(tn + iΔt) = y(tn) + iΔt y'(tn) + ½(iΔt)² y''(tn) + ...,   (62)

we may then take linear combinations of these expansions to obtain an approximation for the derivative Y'n at t = tn, viz.

Y'n = (1/Δt) Σi αi Yn+i.   (63)

The linear combination is chosen so as to eliminate the term in Yn, requiring

Σi αi = 0,   (64)

and, depending on the method, possibly some of the terms of higher order. We shall look at various strategies for choosing the αi in the following sections. Before doing so, we need to consider the error associated with approximating yn by Yn.

6.3 Truncation error

The global truncation error at the nth step is the cumulative total of the truncation error at the previous steps and is

En = Yn − yn.   (65)

In contrast, the local truncation error for the nth step is

en = Yn − yn*,   (66)

where yn* is the exact solution of our differential equation but with the initial condition yn−1* = Yn−1. Note that En is not simply the sum of the en. It also depends on the stability of the method (see section 6.7 for details) and we aim for En = O(en).
es

6.4 Euler method

The Euler method is the simplest finite difference scheme to understand and implement. By approximating the derivative in (61) as

Y'n =~ (Yn+1 − Yn)/Δt   (67)

in our differential equation for Yn we obtain

Yn+1 = Yn + Δt f(tn, Yn).   (68)

Given the initial/boundary condition Y0 = c, we may obtain Y1 from Y0 + Δt f(t0, Y0), Y2 from Y1 + Δt f(t1, Y1) and so on, marching forwards through time. This process is shown graphically in figure 18.

Figure 18: Sketch of the function y(t) (dark line) and the Euler method solution (arrows). Each arrow is tangential to the solution of (60) passing through the point located at the start of the arrow. Note that this point need not be on the desired y(t) curve.
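A minimal Fortran sketch of this marching process is given below; the model problem dy/dt = −y with y(0) = 1 (exact solution exp(−t)) and the step size are assumptions chosen purely for illustration.

      PROGRAM Euler
C=====Minimal sketch of the Euler method for the assumed model problem
C=====dy/dt = -y, y(0) = 1, whose exact solution is exp(-t)
      INTEGER*4 n,nsteps
      REAL*8 Y,t,dt,f
      f(t,Y) = -Y
      nsteps = 100
      dt = 0.1D0
      t = 0.0D0
      Y = 1.0D0
C=====March forward: Y(n+1) = Y(n) + dt*f(t(n),Y(n))
      DO n = 1,nsteps
        Y = Y + dt*f(t,Y)
        t = t + dt
      ENDDO
      WRITE(6,*) 'numerical ',Y,' exact ',EXP(-t)
      END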
es

The Euler method is termed an explicit method because we are able to write down an explicit solution for Yn+1 in terms of "known" values at tn. Inspection of our approximation for Y'n shows the error term is of order Δt² in our time step formula. This shows that the Euler method is a first order method. Moreover it can be shown that if Yn = yn + O(Δt²), then Yn+1 = yn+1 + O(Δt²) provided the scheme is stable (see section 6.7).

6.5 Implicit methods

The Euler method outlined in the previous section may be summarised by the update formula Yn+1 = g(Yn, tn, Δt). In contrast implicit methods have Yn+1 on both sides: Yn+1 = h(Yn, Yn+1, tn, Δt), for example. Such implicit methods are computationally more expensive for a single step than explicit methods, but offer advantages in terms of stability and/or accuracy in many circumstances. Often the computational expense per step is more than compensated for by it being possible to take larger steps (see section 6.7).

6.5.1 Backward Euler

The backward Euler method is almost identical to its explicit relative, the only difference being that the derivative Y'n is approximated by

Y'n =~ (Yn − Yn−1)/Δt,   (69)

to give the evolution equation

Yn+1 = Yn + Δt f(tn+1, Yn+1).   (70)

This is shown graphically in figure 19.

Figure 19: Sketch of the function y(t) (dark line) and the backward Euler method solution (arrows). Each arrow is tangential to the solution of (60) passing through the point located at the end of the arrow. Note that this point need not be on the desired y(t) curve.

The dependence of the right hand side on the variables at tn+1 rather than tn means that it is not, in general, possible to give an explicit formula for Yn+1 only in terms of Yn and tn+1. (It may, however, be possible to recover an explicit formula for some functions f.)

As the derivative Y'n is approximated only to the first order, the Backward Euler method has errors of O(Δt²), exactly as for the Euler method. The solution process will, however, tend to be more stable and hence accurate, especially for stiff problems (problems where f' is large). An example of this is shown in figure 20.

Figure 20: Comparison of ordinary differential equation solvers for a stiff problem.

6.5.2 Richardson extrapolation

The Romberg integration approach (presented in section 5.6) of using two approximations of different step size to construct a more accurate estimate may also be used for numerical solution of ordinary differential equations. Again, if we have the estimate for some time t calculated using a time step Δt, then for both the Euler and Backward Euler methods the approximate solution is related to the true solution by Y(t, Δt) = y(t) + cΔt². Similarly an estimate using a step size Δt/2 will follow Y(t, Δt/2) = y(t) + ¼cΔt² as Δt → 0. Combining these two estimates to try and cancel the O(Δt²) errors gives the improved estimate as

Y^(1)(t, Δt/2) = [4Y(t, Δt/2) − Y(t, Δt)]/3.   (71)

The same approach may be applied to higher order methods such as those presented in the following sections. It is generally preferable, however, to utilise a higher order method to start with, the exception being that calculating both Y(t, Δt) and Y(t, Δt/2) allows the two solutions to be compared and thus the truncation error estimated.

6.5.3CrankNicholson

IfweusecentraldifferencesratherthantheforwarddifferenceoftheEuler
methodorthebackwarddifferenceofthebackwardEuler,wemayobtaina
secondordermethodduetocancellationofthetermsofO(t2).Usingthesame
discretisationoftweobtain

(72
Y'n+1/2=~(Yn+1Yn)/t.
)
SubstitutionintoourdifferentialequationforYngives

(Yn+1Yn)/t=~f(tn+1/2,Yn+1/2). (73)
Therequirementforf(tn+1/2,Yn+1/2)isthensatisfiedbyalinearinterpolationforf
betweentn1/2andtn+1/2toobtain

Yn+1Yn=~1/2[f(tn+1,Yn+1)+f(tn,Yn)]t. (74)
As with the backward Euler method, the method is implicit and it is not, in general, possible to write an explicit expression for Yn+1 in terms of Yn.

Formal proof that the Crank-Nicholson method is second order accurate is slightly more complicated than for the Euler and backward Euler methods due to the linear interpolation to approximate f(tn+1/2, Yn+1/2). The overall approach is much the same, however, with a requirement for Taylor Series expansions about tn:

yn+1 = yn + Δty'n + ½Δt²y"n + O(Δt³), (75a)

f(tn+1, yn+1) = y'n+1 = y'n + Δty"n + O(Δt²). (75b)

Substitution of these into the left and right hand sides of equation (74) reveals

yn+1 − yn = Δty'n + ½Δt²y"n + O(Δt³), (76a)

and

½[f(tn+1, yn+1) + f(tn, yn)]Δt = ½[y'n + Δty"n + y'n]Δt + O(Δt³) = Δty'n + ½Δt²y"n + O(Δt³), (76b)

which are equal up to O(Δt³).
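
For the linear test problem y' = λy the implicit relation (74) can again be rearranged explicitly, as in the minimal sketch below (the values of λ and Δt are arbitrary illustrative choices).

program crank_nicholson
  implicit none
  real :: lambda, dt, t, y
  integer :: n, nsteps

  lambda = -2.0
  dt     = 0.1
  nsteps = 50
  y      = 1.0        ! y(0) = 1
  t      = 0.0

  do n = 1, nsteps
     ! Equation (74) with f = lambda*y:
     ! Y(n+1) - Y(n) = (dt/2)*lambda*(Y(n+1) + Y(n))
     y = y*(1.0 + 0.5*lambda*dt)/(1.0 - 0.5*lambda*dt)
     t = t + dt
     print *, t, y, exp(lambda*t)
  end do
end program crank_nicholson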

6.6 Multistep methods

As an alternative, the accuracy of the approximation to the derivative may be improved by using a linear combination of additional points. By utilising only Yn−s+1, Yn−s+2, …, Yn we may construct approximations to the derivatives of orders 1 to s at tn. For example, if s = 2 then

Y'n = fn,
Y"n ≈ (fn − fn−1)/Δt, (77)

and so we may construct a second order method as

Yn+1 = Yn + ΔtY'n + ½Δt²Y"n
     = Yn + ½Δt(3fn − fn−1). (78)

For s = 3 we also have Y'''n and so can make use of a second order one-sided finite difference to approximate Y"n = f'n = (3fn − 4fn−1 + fn−2)/2Δt and include the third order Y'''n = f"n = (fn − 2fn−1 + fn−2)/Δt² to obtain

Yn+1 = Yn + ΔtY'n + ½Δt²Y"n + (1/6)Δt³Y'''n
     = Yn + (1/12)Δt(23fn − 16fn−1 + 5fn−2). (79a)
These methods are called Adams-Bashforth methods. Note that s = 1 recovers the Euler method.
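
A minimal sketch of the two-step scheme (78), assuming the model problem y' = −y; since the formula needs f at two time levels, a single Euler step (one of several possible choices) is used to generate the missing starting value.

program adams_bashforth2
  implicit none
  real :: dt, t, y, f_now, f_old
  integer :: n, nsteps

  dt     = 0.1
  nsteps = 50
  t      = 0.0
  y      = 1.0                               ! y(0) = 1

  f_old = f(t, y)
  y     = y + dt*f_old                       ! one Euler step to start the method
  t     = t + dt

  do n = 2, nsteps
     f_now = f(t, y)
     y     = y + 0.5*dt*(3.0*f_now - f_old)  ! equation (78)
     f_old = f_now
     t     = t + dt
     print *, t, y, exp(-t)
  end do

contains

  real function f(tau, z)
    real, intent(in) :: tau, z
    f = -z                                   ! model problem y' = -y
  end function f
end program adams_bashforth2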

Implicit Adams-Bashforth methods are also possible if we use information about fn+1 in addition to earlier time steps. The corresponding s = 2 method then uses

Y'n = fn,
Y"n ≈ (fn+1 − fn−1)/2Δt,
Y'''n ≈ (fn+1 − 2fn + fn−1)/Δt², (80)

to give

Yn+1 = Yn + ΔtY'n + ½Δt²Y"n + (1/6)Δt³Y'''n
     = Yn + (1/12)Δt(5fn+1 + 8fn − fn−1). (81)
This family of implicit methods is known as Adams-Moulton methods.

6.7 Stability

The stability of a method can be even more important than its accuracy as measured by the order of the truncation error. Suppose we are solving

y' = λy, (82)

for some complex λ. The exact solution is bounded (i.e. does not increase without limit) provided Re(λ) ≤ 0. Substituting this into the Euler method shows

Yn+1 = (1 + λΔt)Yn = (1 + λΔt)²Yn−1 = … = (1 + λΔt)ⁿ⁺¹Y0. (83)

If Yn is to remain bounded for increasing n and given Re(λ) < 0 we require

|1 + λΔt| ≤ 1. (84)

If we choose a time step Δt which does not satisfy (84) then Yn will increase without limit. This condition (84) on Δt is very restrictive if λ << 0 as it demonstrates the Euler method must use very small time steps Δt < 2|λ|⁻¹ if the solution is to converge on y = 0.

The reason why we consider the behaviour of equation (82) is that it is a model for the behaviour of small errors. Suppose that at some stage during the solution process our approximate solution is ŷ = y + ε, where ε is the (small) error. Substituting this into our differential equation of the form y' = f(t,y) and using a Taylor Series expansion gives

ŷ' = y' + ε' = f(t, y + ε) ≈ f(t, y) + ε ∂f/∂y, (85)

so that, to the leading order, the error ε obeys an equation of the form given by (82), with λ = ∂f/∂y. As it is desirable for errors to decrease (and thus the solution remain stable) rather than increase (and the solution be unstable), the limit on the time step suggested by (84) applies for the application of the Euler method to any ordinary differential equation. A consequence of the decay of errors present at one time step as the solution process proceeds is that memory of a particular time step's contribution to the global truncation error decays as the solution advances through time. Thus the global truncation error is dominated by the local truncation error(s) of the most recent step(s) and O(En) = O(en).

In comparison, solution of (82) by the backward Euler method,

Yn+1 = Yn + λΔtYn+1, (86)

can be rearranged for Yn+1 to give

Yn+1 = Yn/(1 − λΔt) = Yn−1/(1 − λΔt)² = … = Y0/(1 − λΔt)ⁿ⁺¹, (87)

which will be stable provided

|1 − λΔt| ≥ 1. (88)

For Re(λ) ≤ 0 this is always satisfied and so the backward Euler method is unconditionally stable.

The Crank-Nicholson method may be analysed in a similar fashion with

(1 − λΔt/2)Yn+1 = (1 + λΔt/2)Yn, (89)

to arrive at

Yn+1 = [(1 + λΔt/2)/(1 − λΔt/2)]ⁿ⁺¹Y0, (90)

with the magnitude of the term in square brackets always less than unity for Re(λ) < 0. Thus, like the backward Euler method, Crank-Nicholson is unconditionally stable.

In general, explicit methods require less computation per step, but are only conditionally stable and so may require far smaller step sizes than an implicit method of nominally the same order.
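
The restriction (84) is easily demonstrated numerically. The sketch below (the model problem and parameter values are illustrative assumptions) applies the explicit Euler method to y' = λy with one time step satisfying (84) and one violating it.

program euler_stability
  implicit none
  real :: lambda
  lambda = -10.0                        ! stability limit is dt < 2/|lambda| = 0.2
  call run(lambda, 0.15)                ! satisfies (84): solution decays
  call run(lambda, 0.25)                ! violates (84): solution grows without limit
contains
  subroutine run(lambda, dt)
    real, intent(in) :: lambda, dt
    real    :: y
    integer :: n
    y = 1.0
    do n = 1, 40
       y = (1.0 + lambda*dt)*y          ! equation (83)
    end do
    print *, 'dt =', dt, ' |1+lambda*dt| =', abs(1.0 + lambda*dt), ' Y after 40 steps =', y
  end subroutine run
end program euler_stability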

6.8 Predictor-corrector methods

Predictor-corrector methods try to combine the advantages of the simplicity of explicit methods with the improved stability and accuracy of implicit methods. They achieve this by using an explicit method to predict the solution Yn+1(p) at tn+1 and then utilise f(tn+1, Yn+1(p)) as an approximation to f(tn+1, Yn+1) to correct this prediction using something similar to an implicit step.

6.8.1 Improved Euler method

The simplest of these methods combines the Euler method as the predictor,

Yn+1(1) = Yn + Δtf(tn, Yn), (91)

and then the backward Euler to give the corrector,

Yn+1(2) = Yn + Δtf(tn+1, Yn+1(1)). (92)

The final solution is the mean of these:

Yn+1 = (Yn+1(1) + Yn+1(2))/2. (93)
To understand the stability of this method we again use y' = λy so that the three steps described by equations (91) to (93) become

Yn+1(1) = Yn + λΔtYn, (94a)

Yn+1(2) = Yn + λΔtYn+1(1)
        = Yn + λΔt(Yn + λΔtYn)
        = (1 + λΔt + λ²Δt²)Yn, (94b)

Yn+1 = (Yn+1(1) + Yn+1(2))/2
     = [(1 + λΔt)Yn + (1 + λΔt + λ²Δt²)Yn]/2
     = (1 + λΔt + ½λ²Δt²)Yn
     = (1 + λΔt + ½λ²Δt²)ⁿ⁺¹Y0. (94c)

Convergence requires |1 + λΔt + ½λ²Δt²| < 1 (for Re(λ) < 0), which in turn restricts Δt < 2|λ|⁻¹. Thus the stability of this method, commonly known as the Improved Euler method, is identical to the Euler method. This is not surprising as it is limited by the stability of the initial predictive step. The accuracy of the method is, however, second order, as may be seen by comparison of (94c) with the Taylor Series expansion.
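
A minimal sketch of the predictor-corrector steps (91) to (93), assuming the model problem y' = −y:

program improved_euler
  implicit none
  real :: dt, t, y, y1, y2
  integer :: n, nsteps

  dt     = 0.1
  nsteps = 50
  t      = 0.0
  y      = 1.0                    ! y(0) = 1

  do n = 1, nsteps
     y1 = y + dt*f(t, y)          ! predictor (91): explicit Euler
     y2 = y + dt*f(t + dt, y1)    ! corrector (92): backward Euler using the prediction
     y  = 0.5*(y1 + y2)           ! mean of the two estimates (93)
     t  = t + dt
     print *, t, y, exp(-t)
  end do

contains

  real function f(tau, z)
    real, intent(in) :: tau, z
    f = -z                        ! model problem y' = -y
  end function f
end program improved_euler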

6.8.2 Runge-Kutta methods
The Improved Euler method is the simplest of a family of similar predictor-corrector methods following the form of a single predictor step and one or more corrector steps. The corrector step may be repeated a fixed number of times, or until the estimate for Yn+1 converges to some tolerance.

One subgroup of this family are the Runge-Kutta methods which use a fixed number of corrector steps. The Improved Euler method is the simplest of this subgroup. Perhaps the most widely used of these is the fourth order method:
k(1) = Δtf(tn, Yn), (95a)
k(2) = Δtf(tn + ½Δt, Yn + ½k(1)), (95b)
k(3) = Δtf(tn + ½Δt, Yn + ½k(2)), (95c)
k(4) = Δtf(tn + Δt, Yn + k(3)), (95d)
Yn+1 = Yn + (k(1) + 2k(2) + 2k(3) + k(4))/6. (95e)

In order to analyse this we need to construct Taylor Series expansions for k(2) = Δtf(tn + ½Δt, Yn + ½k(1)) = Δt[f(tn, Yn) + (Δt/2)(∂f/∂t + f∂f/∂y)], and similarly for k(3) and k(4). This is then compared with a full Taylor Series expansion for Yn+1 up to fourth order, requiring Y" = df/dt = ∂f/∂t + f∂f/∂y, Y'" = d²f/dt² = ∂²f/∂t² + 2f∂²f/∂t∂y + (∂f/∂t)(∂f/∂y) + f²∂²f/∂y² + f(∂f/∂y)², and similarly for Y"". All terms up to order Δt⁴ can be shown to match, with the error coming in at Δt⁵.
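
The fourth order scheme (95a) to (95e) translates directly into code; the sketch below again assumes the model problem y' = −y purely for illustration.

program rk4
  implicit none
  real :: dt, t, y, k1, k2, k3, k4
  integer :: n, nsteps

  dt     = 0.1
  nsteps = 50
  t      = 0.0
  y      = 1.0                                  ! y(0) = 1

  do n = 1, nsteps
     k1 = dt*f(t,          y)                   ! (95a)
     k2 = dt*f(t + 0.5*dt, y + 0.5*k1)          ! (95b)
     k3 = dt*f(t + 0.5*dt, y + 0.5*k2)          ! (95c)
     k4 = dt*f(t + dt,     y + k3)              ! (95d)
     y  = y + (k1 + 2.0*k2 + 2.0*k3 + k4)/6.0   ! (95e)
     t  = t + dt
     print *, t, y, exp(-t)
  end do

contains

  real function f(tau, z)
    real, intent(in) :: tau, z
    f = -z                                      ! model problem y' = -y
  end function f
end program rk4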

7. Higher order ordinary differential equations

7.1 Initial value problems

The discussion so far has been for first order ordinary differential equations. All the methods given may be applied to higher order ordinary differential equations, provided it is possible to write an explicit expression for the highest order derivative and the system has a complete set of initial conditions. Consider some equation

dⁿy/dtⁿ = f(t, y, dy/dt, d²y/dt², …, dⁿ⁻¹y/dtⁿ⁻¹), (96)

where at t = t0 we know the values of y, dy/dt, d²y/dt², …, dⁿ⁻¹y/dtⁿ⁻¹. By writing x0 = y, x1 = dy/dt, x2 = d²y/dt², …, xn−1 = dⁿ⁻¹y/dtⁿ⁻¹, we may express this as the system of equations

x0' = x1,
x1' = x2,
x2' = x3,
…
xn−2' = xn−1,
xn−1' = f(t, x0, x1, …, xn−1), (97)
.e
and use the standard methods for updating each xi for some tn+1 before proceeding to the next time step. A decision needs to be made as to whether the values of xi for tn or tn+1 are to be used on the right hand side of the equation for xn−1'. This decision may affect the order and convergence of the method. Detailed analysis may be undertaken in a manner similar to that for the first order ordinary differential equations.
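
As a minimal sketch, the second order test equation y" = −y (an assumed example, with y(0) = 1, y'(0) = 0) may be written as x0' = x1, x1' = −x0 and advanced with the explicit Euler method; any of the schemes of section 6 could be substituted.

program second_order_ivp
  implicit none
  real :: dt, t, x0, x1, x0_new, x1_new
  integer :: n, nsteps

  dt     = 0.01
  nsteps = 628                     ! roughly one period of the oscillation
  t      = 0.0
  x0     = 1.0                     ! y(0)  = 1
  x1     = 0.0                     ! y'(0) = 0

  do n = 1, nsteps
     ! Advance the system (97) with the explicit Euler method,
     ! using the values at t(n) on the right hand side.
     x0_new = x0 + dt*x1
     x1_new = x1 + dt*(-x0)        ! y" = f(t, y, y') = -y
     x0 = x0_new
     x1 = x1_new
     t  = t + dt
  end do
  print *, t, x0, cos(t)           ! compare with the exact solution y = cos(t)
end program second_order_ivp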

7.2 Boundary value problems

For second (and higher) order ODEs, two (or more) initial/boundary conditions are required. If these two conditions do not correspond to the same point in time/space, then the simple extension of the first order methods outlined in section 7.1 cannot be applied without modification. There are two relatively simple approaches to solving such equations.

7.2.1 Shooting method

Suppose we are solving a second order equation of the form y" = f(t, y, y') subject to y(0) = c0 and y(1) = c1. With the shooting method we apply the y(0) = c0 boundary condition and make some guess that y'(0) = λ0. This gives us two initial conditions so that we may apply the simple time stepping methods already discussed in section 7.1. The calculation proceeds until we have a value for y(1). If this does not satisfy y(1) = c1 to some acceptable tolerance, we revise our guess for y'(0) to some value λ1, say, and repeat the time integration to obtain a new value for y(1). This process continues until we hit y(1) = c1 to the acceptable tolerance. The number of iterations which will need to be made in order to achieve an acceptable tolerance will depend on how good the refinement algorithm for λ is. We may use the root finding methods discussed in section 3 to undertake this refinement.

The same approach can be applied to higher order ordinary differential equations. For a system of order n with m boundary conditions at t = t0 and n−m boundary conditions at t = t1, we will require guesses for n−m initial conditions. The computational cost of refining these n−m guesses will rapidly become large as the dimensions of the space increase.
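
The sketch below illustrates the idea for the assumed linear test problem y" = −y with y(0) = 0 and y(1) = 1, using an explicit Euler integrator for each shot and bisection (section 3.2) to refine the guessed gradient λ.

program shooting
  implicit none
  real :: lam_lo, lam_hi, lam_mid, r_lo, r_mid
  integer :: k

  ! Bracket the unknown initial gradient y'(0) = lambda and bisect on the
  ! mismatch in the far boundary condition.
  lam_lo = 0.0
  lam_hi = 5.0
  r_lo   = residual(lam_lo)

  do k = 1, 40
     lam_mid = 0.5*(lam_lo + lam_hi)
     r_mid   = residual(lam_mid)
     if (r_lo*r_mid <= 0.0) then
        lam_hi = lam_mid
     else
        lam_lo = lam_mid
        r_lo   = r_mid
     end if
  end do
  print *, "y'(0) =", lam_mid, ' (exact value 1/sin(1) =', 1.0/sin(1.0), ')'

contains

  ! Integrate y" = -y from t = 0 to 1 with y(0) = 0, y'(0) = lambda
  ! and return the error in the far boundary condition y(1) = 1.
  real function residual(lambda)
    real, intent(in) :: lambda
    real    :: dt, y, yp, y_new
    integer :: n, nsteps
    nsteps = 1000
    dt     = 1.0/nsteps
    y      = 0.0
    yp     = lambda
    do n = 1, nsteps
       y_new = y + dt*yp
       yp    = yp + dt*(-y)
       y     = y_new
    end do
    residual = y - 1.0
  end function residual
end program shooting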

7.2.2 Linear equations

The alternative is to rewrite the equations using a finite difference approximation with step size Δt = (t1 − t0)/N to produce a system of N+1 simultaneous equations. Consider the second order linear system

y" + ay' + by = c, (98)

with boundary conditions y(t0) = α and y'(t1) = β. If we use the central difference approximations

y'i ≈ (Yi+1 − Yi−1)/2Δt, (99a)

y"i ≈ (Yi+1 − 2Yi + Yi−1)/Δt², (99b)

we can write the system as

Y0 = α,
(1 − ½aΔt)Y0 + (bΔt² − 2)Y1 + (1 + ½aΔt)Y2 = cΔt²,
(1 − ½aΔt)Y1 + (bΔt² − 2)Y2 + (1 + ½aΔt)Y3 = cΔt²,
…
(1 − ½aΔt)YN−2 + (bΔt² − 2)YN−1 + (1 + ½aΔt)YN = cΔt²,
YN − YN−1 = βΔt. (100)
This tridiagonal system may be readily solved using the method discussed in section 4.5.

Higher order linear equations may be catered for in a similar manner and the matrix representing the system of equations will remain banded, but not as sparse as tridiagonal. The solution may be undertaken using the modified LU decomposition introduced in section 4.4.
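
A minimal sketch of this approach for the assumed example y" = 2 with y(0) = 0 and y'(1) = 2 (so the exact solution is y = t²), assembling the system (100) and solving it with the tridiagonal elimination of section 4.5:

program bvp_tridiagonal
  implicit none
  integer, parameter :: nn = 10            ! number of intervals N
  real :: lower(0:nn), diag(0:nn), upper(0:nn), rhs(0:nn), y(0:nn)
  real :: dt, a, b, c, alpha, beta, w
  integer :: i

  a = 0.0;  b = 0.0;  c = 2.0              ! y" + a y' + b y = c  with  y" = 2
  alpha = 0.0                              ! y(0)  = 0
  beta  = 2.0                              ! y'(1) = 2
  dt = 1.0/nn

  ! Assemble the tridiagonal system (100).
  lower(0) = 0.0;  diag(0) = 1.0;  upper(0) = 0.0;  rhs(0) = alpha
  do i = 1, nn-1
     lower(i) = 1.0 - 0.5*a*dt
     diag(i)  = b*dt**2 - 2.0
     upper(i) = 1.0 + 0.5*a*dt
     rhs(i)   = c*dt**2
  end do
  lower(nn) = -1.0;  diag(nn) = 1.0;  upper(nn) = 0.0;  rhs(nn) = beta*dt

  ! Forward elimination and back substitution (tridiagonal solver, section 4.5).
  do i = 1, nn
     w       = lower(i)/diag(i-1)
     diag(i) = diag(i) - w*upper(i-1)
     rhs(i)  = rhs(i)  - w*rhs(i-1)
  end do
  y(nn) = rhs(nn)/diag(nn)
  do i = nn-1, 0, -1
     y(i) = (rhs(i) - upper(i)*y(i+1))/diag(i)
  end do

  do i = 0, nn
     print *, i*dt, y(i), (i*dt)**2         ! numerical and exact solutions
  end do
end program bvp_tridiagonal
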

Nonlinear equations may also be solved using this approach, but will require an iterative solution of the resulting matrix system Ax = b as the matrix A will be a function of x. In most circumstances this is most efficiently achieved through a Newton-Raphson algorithm, similar in principle to that introduced in section 3.4, but where a system of linear equations requires solution for each iteration.

7.3 Other considerations*
7.3.1 Truncation error*
7.3.2 Error and step control*


8. Partial differential equations

8.1 Laplace equation

Consider the Laplace equation in two dimensions,

∂²φ/∂x² + ∂²φ/∂y² = 0, (101)

in some rectangular domain described by x in [x0, x1], y in [y0, y1]. Suppose we discretise the solution onto an m+1 by n+1 rectangular grid (or mesh) given by xi = x0 + iΔx, yj = y0 + jΔy where i = 0,m, j = 0,n. The mesh spacing is Δx = (x1 − x0)/m and Δy = (y1 − y0)/n. Let φij = φ(xi, yj) be the exact solution at the mesh point i,j, and Φij ≈ φij be the approximate solution at that mesh point.

By considering the Taylor Series expansions for φ about some mesh point i,j,

φi+1,j = φij + Δx(∂φ/∂x)ij + ½Δx²(∂²φ/∂x²)ij + O(Δx³), (102a)
φi−1,j = φij − Δx(∂φ/∂x)ij + ½Δx²(∂²φ/∂x²)ij + O(Δx³), (102b)
φi,j+1 = φij + Δy(∂φ/∂y)ij + ½Δy²(∂²φ/∂y²)ij + O(Δy³), (102c)
φi,j−1 = φij − Δy(∂φ/∂y)ij + ½Δy²(∂²φ/∂y²)ij + O(Δy³), (102d)

it is clear that we may approximate ∂²φ/∂x² and ∂²φ/∂y² to the first order using the four adjacent mesh points to obtain the finite difference approximation

(Φi+1,j − 2Φij + Φi−1,j)/Δx² + (Φi,j+1 − 2Φij + Φi,j−1)/Δy² = 0 (103)

for the internal points 0 < i < m, 0 < j < n. In addition to this we will have either Dirichlet, Neumann or mixed boundary conditions to specify the boundary values of Φij. The system of linear equations described by (103), in combination with the boundary conditions, may be solved in a variety of ways.

8.1.1 Direct solution

Provided the boundary conditions are linear in Φ, our finite difference approximation is itself linear and the resulting system of equations may be solved directly using Gauss elimination as discussed in section 4.1. This approach may be feasible if the total number of mesh points (m+1)(n+1) required is relatively small, but as the matrix A used to represent the complete system will have [(m+1)(n+1)]² elements, the storage and computational cost of such a solution will become prohibitive even for relatively modest m and n.

The structure of the system ensures A is relatively sparse, consisting of a tridiagonal core with one non-zero diagonal above and another below this. These non-zero diagonals are offset by either m or n from the leading diagonal. Provided pivoting (if required) is conducted in such a way that it does not place any non-zero elements outside this band, then solution by Gauss elimination or LU decomposition will only produce non-zero elements inside this band, substantially reducing the storage and computational requirements (see section 4.4). Careful choice of the order of the matrix elements (i.e. by x or by y) may help reduce the size of this matrix so that it need contain only O(m³) elements for a square domain.

Because of the widespread need to solve Laplace's and related equations, specialised solvers have been developed for this problem. One of the best of these is Hockney's method for solving Ax = b, which may be used to reduce a block tridiagonal matrix (and the corresponding right hand side) of the form

(104)

into a block diagonal matrix of the form

(105)

where the blocks on the diagonal are themselves block tridiagonal matrices and I is an identity matrix. This process may be performed iteratively to reduce an n-dimensional finite difference approximation to Laplace's equation to a tridiagonal system of equations with n−1 applications. The computational cost is O(p log p), where p is the total number of mesh points. The main drawback of this method is that the boundary conditions must be able to be cast into the block tridiagonal format.

8.1.2 Relaxation

An alternative to direct solution of the finite difference equations is an iterative numerical solution. These iterative methods are often referred to as relaxation methods as an initial guess at the solution is allowed to slowly relax towards the true solution, reducing the errors as it does so. There are a variety of approaches with differing complexity and speed. We shall introduce these methods before looking at the basic mathematics behind them.

8.1.2.1 Jacobi

The Jacobi iteration is the simplest approach. For clarity we consider the special case when Δx = Δy. To find the solution for a two dimensional Laplace equation simply:

1. Initialise Φij to some initial guess.
2. Apply the boundary conditions.
3. For each internal mesh point set

Φ*ij = (Φi+1,j + Φi−1,j + Φi,j+1 + Φi,j−1)/4. (106)

4. Replace the old solution Φ with the new estimate Φ*.
5. If the solution does not satisfy the required tolerance, repeat from step 2.
The coefficients in the expression (here all 1/4) used to calculate the refined estimate are often referred to as the stencil or template. Higher order approximations may be obtained by simply employing a stencil which utilises more points. Other equations (e.g. the biharmonic equation, ∇⁴φ = 0) may be solved by introducing a stencil appropriate to that equation.

While very simple and cheap per iteration, the Jacobi iteration is very slow to converge, especially for larger grids. Corrections to errors in the estimate Φij diffuse only slowly from the boundaries, taking O(max(m,n)) iterations to diffuse across the entire mesh.
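
A minimal sketch of the Jacobi iteration for Laplace's equation on a square mesh, assuming Dirichlet boundary conditions Φ = 1 on y = 1 and Φ = 0 on the other three sides (an arbitrary example) and a simple maximum-change convergence test:

program jacobi_laplace
  implicit none
  integer, parameter :: m = 20
  real :: phi(0:m,0:m), phi_new(0:m,0:m)
  real :: change
  integer :: i, j, iter

  ! Initial guess of zero everywhere; boundary condition phi = 1 on y = 1.
  phi = 0.0
  phi(:,m) = 1.0

  do iter = 1, 10000
     phi_new = phi
     do j = 1, m-1
        do i = 1, m-1
           ! The stencil (106) applied to the old estimate.
           phi_new(i,j) = 0.25*(phi(i+1,j) + phi(i-1,j) + phi(i,j+1) + phi(i,j-1))
        end do
     end do
     change = maxval(abs(phi_new - phi))
     phi    = phi_new
     if (change < 1.0e-5) exit
  end do
  print *, 'iterations =', iter, ' centre value =', phi(m/2, m/2)
end program jacobi_laplace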

8.1.2.2 Gauss-Seidel

The Gauss-Seidel iteration is very similar to the Jacobi iteration, the only difference being that the new estimate Φ*ij is returned to the solution Φij as soon as it is completed, allowing it to be used immediately rather than deferring its use to the next iteration. The advantages of this are:

Less memory required (there is no need to store Φ*).
Faster convergence (although still relatively slow).

On the other hand, the method is less amenable to vectorisation as, for a given iteration, the new estimate of one mesh point is dependent on the new estimates for those already scanned.

8.1.2.3 Red-Black ordering

A variant on the Gauss-Seidel iteration is obtained by updating the solution Φij in two passes rather than one. If we consider the mesh points as a chess board, then the white squares would be updated on the first pass and the black squares on the second pass. The advantages of this are:

No interdependence of the solution updates within a single pass, which aids vectorisation.
Faster convergence at low wave numbers.

8.1.2.4 Successive Over-Relaxation (SOR)

It has been found that the errors in the solution obtained by any of the three preceding methods decrease only slowly and often decrease in a monotonic manner. Hence, rather than setting

Φ*ij = (Φi+1,j + Φi−1,j + Φi,j+1 + Φi,j−1)/4,

for each internal mesh point, we use

Φ*ij = (1 − σ)Φij + σ(Φi+1,j + Φi−1,j + Φi,j+1 + Φi,j−1)/4, (107)

for some value σ. The optimal value of σ will depend on the problem being solved and may vary as the iteration process converges. Typically, however, a value of around 1.2 to 1.4 produces good results. In some special cases it is possible to determine the optimal value analytically.
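
The corresponding sketch for Successive Over-Relaxation, using the same assumed boundary conditions as the Jacobi example above; the new values are used as soon as they are computed (as in Gauss-Seidel), and σ = 1 recovers the Gauss-Seidel iteration.

program sor_laplace
  implicit none
  integer, parameter :: m = 20
  real, parameter :: sigma = 1.5          ! relaxation coefficient (1.0 gives Gauss-Seidel)
  real :: phi(0:m,0:m), old, change
  integer :: i, j, iter

  phi = 0.0
  phi(:,m) = 1.0                          ! boundary condition phi = 1 on y = 1

  do iter = 1, 10000
     change = 0.0
     do j = 1, m-1
        do i = 1, m-1
           old = phi(i,j)
           ! Equation (107): blend the old value with the relaxed update,
           ! writing the result back immediately.
           phi(i,j) = (1.0 - sigma)*old &
                    + sigma*0.25*(phi(i+1,j) + phi(i-1,j) + phi(i,j+1) + phi(i,j-1))
           change = max(change, abs(phi(i,j) - old))
        end do
     end do
     if (change < 1.0e-5) exit
  end do
  print *, 'iterations =', iter, ' centre value =', phi(m/2, m/2)
end program sor_laplace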

8.1.3 Multigrid*

The big problem with relaxation methods is their slow convergence. If σ = 1 then application of the stencil removes all the error in the solution at the wavelength of the mesh for that point, but has little impact on longer wavelengths. This may be seen if we consider the one dimensional equation d²φ/dx² = 0 subject to φ(x=0) = 0 and φ(x=1) = 1. Suppose our initial guess for the iterative solution is that Φi = 0 for all internal mesh points. With the Jacobi iteration the correction to the internal points diffuses only slowly along from x = 1.

Multigrid methods try to improve the rate of convergence by considering the problem on a hierarchy of grids. The longer wavelength errors in the solution are dissipated on a coarser grid, while the shorter wavelength errors are dissipated on a finer grid. For the example considered above, the solution would converge in one complete Jacobi multigrid iteration, compared with the slow asymptotic convergence above.

For linear problems, the basic multigrid algorithm for one complete iteration may be described as follows:

1. Select the initial finest grid resolution p = P0, set b(p) = 0 and make some initial guess at the solution Φ(p).
2. If at the coarsest resolution (p = 0) then solve A(p)Φ(p) = b(p) exactly and jump to step 7.
3. Relax the solution at the current grid resolution, applying the boundary conditions.
4. Calculate the error r = b(p) − A(p)Φ(p).
5. Coarsen the error b(p−1) ← r to the next coarser grid and decrement p.
6. Repeat from step 2.
7. Refine the correction to the next finer grid, Φ(p+1) = Φ(p+1) + Φ(p), and increment p.
8. Relax the solution at the current grid resolution, applying the boundary conditions.
9. If not at the current finest grid (P0), repeat from step 7.
10. If not at the final desired grid, increment P0 and repeat from step 7.
11. If not converged, repeat from step 2.

Typically the relaxation steps will be performed using Successive Over-Relaxation with Red-Black ordering and some relaxation coefficient σ. The factor σ is typically less than unity and effectively damps possible instabilities in the convergence. The refining of the correction to a finer grid will be achieved by (bi)linear or higher order interpolation, and the coarsening may simply be by subsampling or averaging the error vector r.

It has been found that the number of iterations required to reach a given level of convergence is more or less independent of the number of mesh points. As the number of operations per complete iteration for n mesh points is O(n) + O(n/2ᵈ) + O(n/2²ᵈ) + …, where d is the number of dimensions in the problem, it can be seen that the multigrid method may often be faster than a direct solution (which will require O(n³), O(n²) or O(n log n) operations, depending on the method used). This is particularly true if n is large or there are a large number of dimensions in the problem. For small problems, the coefficient in front of the n for the multigrid solution may be relatively large, so that direct solution may be faster.
du
A further advantage of multigrid and other iterative methods when compared with direct solution is that irregularly shaped domains or complex boundary conditions are implemented more easily. The difficulty with this for the multigrid method is that care must be taken in order to ensure consistent boundary conditions in the embedded problems.

8.1.4 The mathematics of relaxation*

In principle, relaxation methods, which are the basis of the Jacobi, Gauss-Seidel, Successive Over-Relaxation and multigrid methods, may be applied to any system of linear equations to iteratively improve an approximation to the exact solution. The basis for this is identical to the Direct Iteration method described in section 3.6. We start by writing the vector function

f(x) = Ax − b, (108)

and search for the vector of roots to f(x) = 0 by writing

xn+1 = g(xn), (109)

where

g(x) = D⁻¹{[A + D]x − b}, (110)

with D a diagonal matrix (zero for all off-diagonal elements) which may be chosen arbitrarily. We may analyse this system by following our earlier analysis for the Direct Iteration method (section 3.6). Let us assume the exact solution is x* = g(x*), then

εn+1 = xn+1 − x*
     = D⁻¹{[A + D]xn − b} − D⁻¹{[A + D]x* − b}
     = D⁻¹[A + D](xn − x*)
     = D⁻¹[A + D]εn
     = {D⁻¹[A + D]}ⁿ⁺¹ε0.
From this it is clear that convergence will be linear and requires

||εn+1|| = ||Bεn|| < ||εn||, (111)

where B = D⁻¹[A + D], for some suitable norm. As any error vector εn may be written as a linear combination of the eigenvectors of our matrix B, it is sufficient for us to consider the eigenvalue problem

Bεn = λεn, (112)

and require max(|λ|) to be less than unity. In the asymptotic limit, the smaller the magnitude of this maximum eigenvalue the more rapid the convergence. The convergence remains, however, linear.

Since we have the ability to choose the diagonal matrix D, and since it is the eigenvalues of B = D⁻¹[A + D] rather than A itself which are important, careful choice of D can aid the speed at which the method converges. Typically this means selecting D so that the diagonal of B is small.

8.1.4.1 Jacobi and Gauss-Seidel for Laplace equation*

The structure of the finite difference approximation to Laplace's equation lends itself to these relaxation methods. In one dimension,

A = tridiag(1, −2, 1), (113)

and both Jacobi and Gauss-Seidel iterations take D as 2I (I is the identity matrix) on the diagonal, to give B = D⁻¹[A + D] as

B = tridiag(½, 0, ½). (114)

The eigenvalues λ of this matrix are given by the roots of

det(B − λI) = 0. (115)

In this case the determinant may be obtained using the recurrence relation

det(B − λI)(n) = −λ det(B − λI)(n−1) − ¼ det(B − λI)(n−2), (116)

where the subscript gives the size of the matrix B. From this we may see

det(B − λI)(1) = −λ,
det(B − λI)(2) = λ² − ¼,
det(B − λI)(3) = −λ³ + ½λ,
det(B − λI)(4) = λ⁴ − ¾λ² + 1/16,
det(B − λI)(5) = −λ⁵ + λ³ − (3/16)λ,
det(B − λI)(6) = λ⁶ − (5/4)λ⁴ + (3/8)λ² − 1/64,
… (117)

which may be solved to give the eigenvalues

λ(1) = 0,
λ²(2) = ¼,
λ²(3) = 0, ½,
λ²(4) = (3 ± √5)/8,
λ²(5) = 0, ¼, ¾,
… (118)
It can be shown that for a system of any size following this general form, all the eigenvalues satisfy |λ| < 1, thus proving the relaxation method will always converge. As we increase the number of mesh points, the number of eigenvalues increases and gradually fills up the range |λ| < 1, with the numerically largest eigenvalues becoming closer to unity. As a result of max|λ| → 1, the convergence of the relaxation method slows considerably for large problems. A similar analysis may be applied to Laplace's equation in two or more dimensions, although the expressions for the determinant and eigenvalues are correspondingly more complex.

The large eigenvalues are responsible for decreasing the error over large distances (many mesh points). The multigrid approach enables the solution to converge using a much smaller system of equations, and hence smaller eigenvalues for the larger distances, bypassing the slow convergence of the basic relaxation method.

8.1.4.2 Successive Over-Relaxation for Laplace equation*

The analysis of the Jacobi and Gauss-Seidel iterations may be applied equally well to Successive Over-Relaxation. The main difference is that D = (2/σ)I, so that

B = D⁻¹[A + D] = (σ/2)A + I = tridiag(σ/2, 1 − σ, σ/2), (119)

and the corresponding eigenvalues λ are related by [(λ + σ − 1)/σ]² being equal to the values tabulated above. Thus if σ is chosen inappropriately, the eigenvalues of B will exceed unity and the relaxation method will diverge. On the other hand, careful choice of σ will allow the eigenvalues of B to be less than those for Jacobi and Gauss-Seidel, thus increasing the rate of convergence.

8.1.4.3 Other equations*

Relaxation methods may be applied to other differential equations or more general systems of linear equations in a similar manner. As a rule of thumb, the solution will converge if the matrix A is diagonally dominant, i.e. the numerically largest values occur on the diagonal. If this is not the case, SOR can still be used, but it may be necessary to choose σ < 1, whereas for Laplace's equation σ ≥ 1 produces a better rate of convergence.

8.1.5 FFT*

One of the most common ways of solving Laplace's equation is to take the Fourier transform of the equation to convert it into wave number space and there solve the resulting algebraic equations. This conversion process can be very efficient if the Fast Fourier Transform algorithm is used, allowing a solution to be evaluated with O(n log n) operations.

In its simplest form the FFT algorithm requires there to be n = 2ᵖ mesh points in the direction(s) to be transformed. The efficiency of the algorithm is achieved by first calculating the transform of pairs of points, then of pairs of transforms, then of pairs of pairs and so on up to the full resolution. The idea is to divide and conquer! Details of the FFT algorithm may be found in any standard text.

8.1.6 Boundary elements*
8.1.7 Finite elements*

8.2 Poisson equation

The Poisson equation ∇²φ = f(x) may be treated using the same techniques as Laplace's equation. It is simply necessary to set the right hand side to f, scaled suitably to reflect any scaling in A.

8.3 Diffusion equation

Consider the two dimensional diffusion equation,

∂u/∂t = D(∂²u/∂x² + ∂²u/∂y²), (120)

subject to u(x,y,t) = 0 on the boundaries x = 0,1 and y = 0,1. Suppose the initial conditions are u(x,y,t=0) = u0(x,y) and we wish to evaluate the solution for t > 0. We shall explore some of the options for achieving this in the following sections.
.e
8.3.1 Semi-discretisation
es

Oneofthesimplestandmostusefulapproachesistodiscretisetheequationin
spaceandthensolveasystemof(coupled)ordinarydifferentialequationsin
ot

timeinordertocalculatethesolution.Usingasquaremeshofstepsize
x=y=1/m,andtakingthediffusivityD=1,wemayutiliseourearlier
en

approximationfortheLaplacianoperator(equation(103))toobtain
io

(12
1)
fortheinternalpointsi=1,m1andj=1,m1.Ontheboundaries(i=0,j),(i=m,j),
(i,j=0)and(i,j=m)wesimplyhaveuij=0.IfUijrepresentsourapproximationofu
atthemeshpointsxij,thenwemustsimplysolvethe(m1)2coupledordinary
differentialequations

(12
. 2)
Inprinciplewemayutiliseanyofthetimesteppingalgorithmsdiscussedin
earlierlecturestosolvethissystem.Asweshallsee,however,careneedstobe
takentoensurethemethodchosenproducesastablesolution.

8.3.2 Euler method

Applying the Euler method Yn+1 = Yn + Δtf(Yn, tn) to our spatially discretised diffusion equation gives

U(n+1)ij = U(n)ij + μ(U(n)i+1,j + U(n)i−1,j + U(n)i,j+1 + U(n)i,j−1 − 4U(n)ij), (123)

where the Courant number

μ = Δt/Δx², (124)

describes the size of the time step relative to the spatial discretisation. As we shall see, stability of the solution depends on μ, in contrast to an ordinary differential equation where it is a function of the time step Δt only.

8.3.3 Stability

Stability of the Euler method solving the diffusion equation may be analysed in a similar way to that for ordinary differential equations. We start by asking the question "does the Euler method converge as t → ∞?" The exact solution will have u → 0 and the numerical solution must also do this if it is to be stable. We choose

U(0)i,j = sin(αi)sin(βj), (125)

for some α and β chosen as multiples of π/m to satisfy u = 0 on the boundaries. Substituting this into (123) gives

U(1)i,j=sin(i)sin(j)+{sin[(i+1)]sin(j)+sin[(i1)]sin(j)
+sin(i)sin[(j+1)]+sin(i)sin[(j1)]4sin(i)sin(j)}
io

=sin(i)sin(j)+{[sin(i)cos()+cos(i)sin()]sin(j)+[sin(i)cos()

cos(i)sin()]sin(j)
+sin(i)[sin(j)cos()+cos(j)sin()]+sin(i)[sin(j)cos()

cos(j)sin()]4sin(i)sin(j)}
=sin(i)sin(j)+2{sin(i)cos()sin(j)+sin(i)sin(j)cos()2

sin(i)sin(j)}
=sin(i)sin(j){1+2[cos()+cos()2]}
=sin(i)sin(j){14[sin (/2)+sin (/2)]}.
2 2
(126)
Applyingthisatconsecutivetimesshowsthesolutionattimetnis

U(n)i,j=sin(i)sin(j){14[sin2(/2)+sin2(/2)]}n, (127)

which then requires |1 − 4μ[sin²(α/2) + sin²(β/2)]| < 1 for this to converge as n → ∞. For this to be satisfied for arbitrary α and β we require μ < 1/4. Thus we must ensure

Δt < Δx²/4. (128)

A doubling of the spatial resolution therefore requires a factor of four more time steps, so overall the expense of the computation increases sixteen-fold.

The analysis for the diffusion equation in one or three dimensions may be computed in a similar manner.
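
A minimal sketch of the Euler update (123), assuming the single-mode initial condition u0 = sin(πx)sin(πy) (so that the numerical decay can be compared with the exact rate) and a Courant number chosen to satisfy (128):

program diffusion_ftcs
  implicit none
  integer, parameter :: m = 20
  real :: u(0:m,0:m), u_new(0:m,0:m)
  real :: dx, dt, mu, pi
  integer :: i, j, n, nsteps

  pi = 4.0*atan(1.0)
  dx = 1.0/m
  mu = 0.2                               ! Courant number; mu < 1/4 for stability (128)
  dt = mu*dx**2
  nsteps = 200

  ! Initial condition u0(x,y) = sin(pi*x)*sin(pi*y); u = 0 on the boundaries.
  do j = 0, m
     do i = 0, m
        u(i,j) = sin(pi*i*dx)*sin(pi*j*dx)
     end do
  end do

  do n = 1, nsteps
     u_new = u
     do j = 1, m-1
        do i = 1, m-1
           ! Explicit Euler update (123).
           u_new(i,j) = u(i,j) + mu*(u(i+1,j) + u(i-1,j) + u(i,j+1) + u(i,j-1) - 4.0*u(i,j))
        end do
     end do
     u = u_new
  end do

  ! This mode decays at the exact rate exp(-2*pi**2*t) for diffusivity D = 1.
  print *, 'numerical centre value ', u(m/2, m/2)
  print *, 'exact value            ', exp(-2.0*pi**2*nsteps*dt)
end program diffusion_ftcs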

8.3.4 Model for general initial conditions
