
PURWANCHAL CAMPUS

DHARAN-8, SUNSARI
NEPAL

A complete lab report on


Numerical methods
SUBMITTED BY:
NAME: JEETENDRA DEV CHHETRI
ROLL NO: 2072/BEL/18
FACULTY: ELECTRICAL
GROUP: A
DATE: 2074/04/25

SUBMITTED TO:
DEPARTMENT OF ELECTRONICS AND COMPUTER ENGINEERING
TITLE: BISECTION METHOD
Newton’s method is a popular technique for solving nonlinear equations, but alternative methods exist that may be preferable in certain situations. The bisection method is another technique for finding a solution of the nonlinear equation f(x) = 0, and it can be used provided that the function f is continuous. The motivation for this technique is drawn from Bolzano’s theorem for continuous functions:

Theorem (Bolzano): If the function f(x) is continuous on [a, b] and f(a)*f(b) < 0 (i.e. the function f has values with different signs at a and b), then a value c ∈ (a, b) exists such that f(c) = 0.

The bisection algorithm attempts to locate the value c where the plot of f crosses zero by checking whether it belongs to either of the two sub-intervals [a, xm] and [xm, b], where xm is the midpoint

xm = (a + b) / 2
Algorithm of BISECTION METHOD:
Step-1. Start of the program.
Step-2. Input the interval endpoints x1, x2.
Step-3. Check f(x1)*f(x2) < 0.
Step-4. If yes, proceed.
Step-5. If no, print an error message and exit.
Step-6. Repeat steps 7-11 until the condition in step 13 is satisfied:
Step-7. x0 = (x1 + x2)/2
Step-8. If f(x0)*f(x1) < 0
Step-9. x2 = x0
Step-10. Else
Step-11. x1 = x0
Step-12. Stopping condition:
Step-13. |(x1 - x2)/x1| < maximum permissible error, or f(x0) = 0
Step-14. Print the output.
Step-15. End of the program.
CODING:
//PROGRAM: BISECTION METHOD.
//2074/03/21
#include<stdio.h>
#include<math.h>
#include<conio.h>
#include<process.h>
#include<string.h>
#define EPS 0.00001
#define F(x) ((x)*log10(x)-1.2)
void Bisect();
int count=1,n;
float root=1;
int main()
{
printf("\n Solution by BISECTION method \n");
printf("\n Equation is ");
printf("\n\t\t\t x*log(x) - 1.2 = 0\n\n");
printf("Enter the number of iterations:");
scanf("%d",&n);
Bisect();
getch();
}
void Bisect()
{
float x0,x1,x2;
float f0,f1,f2;
int i=0;
for(x2=1;;x2++)
{
f2=F(x2);
if (f2>0)
{
break;

}
}
for(x1=x2-1;;x1--)
{
f1=F(x1);
if(f1<0)
{
break;
}
}
printf("\t\t-----------------------------------------");
printf("\n\t\t ITERATIONS\t\t ROOTS\n");
printf("\t\t-----------------------------------------");
for(;count<=n;count++)
{
x0=((x1+x2)/2.0);
f0=F(x0);
if(f0==0)
{
root=x0;
}
if(f0*f1<0)
{
x2=x0;
}
else
{
x1=x0;
f1=f0;
}
printf("\n\t\t ITERATION %d", count);
printf("\t :\t %f",x0);
if(fabs((x1-x2)/x1) < EPS)
{
printf("\n\t\t---------------------------------");
printf("\n\t\t Root = %f",x0);
printf("\n\t\t Iterations = %d\n", count);
printf("\t\t------------------------------------");
getch();
}
}
printf("\n\t\t----------------------------------------");
printf("\n\t\t\t Root = %7.4f",x0);
printf("\n\t\t\t Iterations = %d\n", count-1);
printf("\t\t------------------------------------------");
getch();
}
OUTPUT:
Solution by BISECTION method

Equation is
x*log(x) - 1.2 = 0

Enter the number of iterations: 15


-----------------------------------------
ITERATIONS ROOTS
-----------------------------------------
ITERATION 1 : 2.500000
ITERATION 2 : 2.750000
ITERATION 3 : 2.625000
ITERATION 4 : 2.687500
ITERATION 5 : 2.718750
ITERATION 6 : 2.734375
ITERATION 7 : 2.742188
ITERATION 8 : 2.738281
ITERATION 9 : 2.740234
ITERATION 10 : 2.741211
ITERATION 11 : 2.740723
ITERATION 12 : 2.740479
ITERATION 13 : 2.740601
ITERATION 14 : 2.740662
ITERATION 15 : 2.740631
----------------------------------------
Root = 2.7406
Iterations = 15
-----------------------------------------

THE REGULA FALSI (FALSE POSITION) METHOD:


Most numerical equation-solving methods converge faster than bisection. The price is that some of them (e.g. Newton's method and the secant method) can fail to converge at all, and all of them can sometimes converge much more slowly than bisection, sometimes prohibitively so. None of them matches bisection's reliable, guaranteed rate of convergence. Regula falsi, like bisection, always converges, usually considerably faster than bisection, but sometimes much slower.
When solving an equation manually or by calculator, or when a program has to solve equations so many times that the speed of convergence matters, it can be preferable to try a usually-faster method first, falling back to bisection only if the faster method fails to converge, or fails to converge at a useful rate.
The fact that regula falsi always converges, and has variants that do well at avoiding slowdowns, makes it a good choice when speed is needed but Newton's method does not converge, or when evaluating the derivative is too time-consuming for Newton's method to be useful.

Algorithm of FALSE POSITION or REGULA-FALSI METHOD.

Step-1. Start of the program.
Step-2. Input the variables x0, x1, e (permissible error) and n (maximum iterations).
Step-3. f0 = f(x0)
Step-4. f1 = f(x1)
Step-5. For i = 1; repeat while i <= n:
Step-6. x2 = (x0*f1 - x1*f0)/(f1 - f0)
Step-7. f2 = f(x2)
Step-8. If |f2| <= e
Step-9. print "convergent", x2, f2
Step-10. If sign(f2) != sign(f0)
Step-11. x1 = x2 and f1 = f2
Step-12. Else
Step-13. x0 = x2 and f0 = f2
Step-14. End loop.
Step-15. Print the output.
Step-16. End of the program.

//PROGRAM: FALSE POSITION or REGULA-FALSI METHOD.


#include<stdio.h>
#include<math.h>
#include<conio.h>
#include<string.h>
#include<process.h>
#define EPS 0.00005
#define f(x) cos(x)-x*exp(x)
void FAL_POS();
int main()
{
printf("\n Solution by FALSE POSITION method\n");
printf("\n Equation is ");
printf("\n\t\t\t cos(x)-x*exp(x)=0\n\n");
FAL_POS();
}
void FAL_POS()
{
float f0,f1,f2;
float x0,x1,x2;
int itr;
int i;
printf("Enter the number of iteration:");
scanf("%d",&itr);
for(x1=0.0;;)
{
f1=f(x1);
if(f1<0)
{
break;
}
else
{
x1=x1+0.1;
}
}
x0=x1-0.1;
f0=f(x0);
printf("\n\t\t-----------------------------------------");
printf("\n\t\t ITERATION\t x2\t\t F(x)\n");
printf("\t\t--------------------------------------------");
for(i=0;i<itr;i++)
{
x2=x0-((x1-x0)/(f1-f0))*f0;
f2=f(x2);
if(f0*f2<0)
{
x1=x2;
f1=f2;
}
else
{
x0=x2;
f0=f2;
}
if(fabs(f2)>EPS)
{
printf("\n\t\t%d\t%f\t%f\n",i+1,x2,f2);
}
}
printf("\t\t--------------------------------------------");
printf("\n\t\t\t\tRoot=%f\n",x2);
printf("\t\t-------------------------------------------");
getch();
}
OUTPUT:

Solution by FALSE POSITION method


Equation is
cos(x)-x*exp(x)=0

Enter the number of iteration:15
-----------------------------------------
ITERATION x2 F(x)
--------------------------------------------
1 1.169755 -3.377644
2 0.267211 0.615449
3 0.694865 -0.623979
4 0.427878 0.253484
5 0.573166 -0.176538
6 0.487164 0.090711
7 0.535763 -0.055608
8 0.507540 0.030817
9 0.523678 -0.018102
10 0.514367 0.010284
11 0.519712 -0.005956
12 0.516635 0.003411
13 0.518403 -0.001966
14 0.517386 0.001129
15 0.517971 -0.000650
--------------------------------------------
Root=0.517971
-------------------------------------------
SIMPSON’S 1/3rd RULE:
ALGORITHM OF SIMPSON’S 1/3rd RULE
Step-1. Start of the program.
Step-2. Input Lower limit a
Step-3. Input Upper limit b
Step-4. Input number of subintervals n (n must be even)
Step-5. h = (b - a)/n
Step-6. sum = 0
Step-7. sum = fun(a) + 4*fun(a+h) + fun(b)
Step-8. for i = 3; i < n; i += 2
Step-9. sum += 2*fun(a+(i-1)*h) + 4*fun(a+i*h)
Step-10. End of loop i
Step-11. result=sum*h/3
Step-12. Print Output result
Step-13. End of Program
Step-14. Start of Section fun
Step-15. temp = 1/(1+(x*x))
Step-16. Return temp
Step-17. End of Section fun
CODING:
//PROGRAM: SIMPSON’S 1/3rd METHOD OF NUMERICAL INTEGRATION
#include<stdio.h>
#include<conio.h>
#include<math.h>

#include<process.h>
#include<string.h>
float fun(float);
int main()
{
float result=1;
float a,b;
float sum,h;
int i,j,n;
clrscr();
printf("\n Enter the range - ");
printf("\n Lower Limit a - ");
scanf("%f",&a);
printf("\n Upper limit b - ");
scanf("%f",&b);
printf("\n\n Enter number of sub intervals - ");
scanf("%d",&n);
h=(b-a)/n;
sum=0;
sum=fun(a)+4*fun(a+h)+fun(b);
for(i=3;i<n;i+=2)
{
sum+=2*fun(a+(i-1)*h)+4*fun(a+i*h);
}
result=(sum*h)/3;
printf("\n\nValue of integral is %6.4f\t",result);
getch();
return 0;
}
float fun(float x)
{
float temp;
temp=1/(1+(x*x));
return temp;
}

ALGORITHM OF SIMPSON’S 3/8th RULE


Step-1. Start of the program.
Step-2. Input Lower limit a
Step-3. Input Upper limit b
Step-4. Input number of subintervals n (n must be a multiple of 3)
Step-5. h = (b - a)/n
Step-6. sum = 0
Step-7. sum = fun(a) + fun(b)
Step-8. for i = 1; i < n; i++
Step-9. if i%3 == 0:
Step-10. sum += 2*fun(a + i*h)
Step-11. else:
Step-12. sum += 3*fun(a + i*h)
Step-13. End of loop i
Step-14. result = sum*3*h/8
Step-15. Print Output result
Step-16. End of Program
Step-17. Start of Section fun
Step-18. temp = 1/(1+(x*x))
Step-19. Return temp
Step-20. End of Section fun
//PROGRAM: SIMPSON’S 3/8th METHOD OF NUMERICAL INTEGRATION
#include<stdio.h>
#include<conio.h>
float fun(float);
int main()
{
int n,i;
float a,b,h,sum,result;
printf("\n Enter the range - ");
printf("\n Lower Limit a - ");
scanf("%f",&a);
printf("\n Upper limit b - ");
scanf("%f",&b);
printf("\n\n Enter number of sub intervals (multiple of 3) - ");
scanf("%d",&n);
h=(b-a)/n;
sum=fun(a)+fun(b);
for(i=1;i<n;i++)
{
if(i%3==0)
sum+=2*fun(a+i*h);
else
sum+=3*fun(a+i*h);
}
result=sum*3*h/8;
printf("\n\nValue of integral is %6.4f\t",result);
getch();
return 0;
}
float fun(float x)
{
float val;
val=1.0/(1.0+(x*x));
return val;
}

Gauss Elimination method:


The Gauss elimination method can be adopted to find the solution of the systems of linear simultaneous equations that arise in engineering problems. In this method, the unknowns are eliminated successively from the equations.

Overall, the method reduces the system of linear simultaneous equations to an upper triangular form. Backward substitution is then used to obtain the unknowns. This is the key concept in writing an algorithm or program, or drawing a flowchart, for Gauss elimination.

Partial pivoting or complete pivoting can be adopted in the Gauss elimination method. Because it requires fewer arithmetic operations, the method is generally considered superior to the Gauss-Jordan method.

In the Gauss elimination algorithm and flowchart given below, the elimination process is carried out until only one unknown remains in the last equation. The method is straightforward to program, and partial pivoting can be used to control rounding errors.

GAUSS ELIMINATION ALGORITHM:

1. Start
2. Declare the variables and read the order of the matrix n.
3. Take the coefficients of the linear equation as:
Do for k=1 to n
Do for j=1 to n+1
Read a[k][j]
End for j
End for k
4. Do for k=1 to n-1
Do for i=k+1 to n
Do for j=k+1 to n+1
a[i][j] = a[i][j] – a[i][k] /a[k][k] * a[k][j]
End for j

End for i
End for k
5. Compute x[n] = a[n][n+1]/a[n][n]
6. Do for k=n-1 to 1
sum = 0
Do for j=k+1 to n
sum = sum + a[k][j] * x[j]
End for j
x[k] = 1/a[k][k] * (a[k][n+1] – sum)
End for k
7. Display the result x[k]
8. Stop

CODING:
#include<stdio.h>
int main()
{
int i,j,k,n;
float A[20][20],c,x[10],sum=0.0;
printf("\nEnter the order of matrix: ");
scanf("%d",&n);
printf("\nEnter the elements of augmented matrix row-wise:\n\n");
for(i=1; i<=n; i++)
{
for(j=1; j<=(n+1); j++)
{
printf("A[%d][%d] : ", i,j);
scanf("%f",&A[i][j]);
}
}
for(j=1; j<=n; j++) /* loop for the generation of upper triangular matrix*/
{
for(i=1; i<=n; i++)
{
if(i>j)
{
c=A[i][j]/A[j][j];
for(k=1; k<=n+1; k++)
{
A[i][k]=A[i][k]-c*A[j][k];
}
}
}
}
x[n]=A[n][n+1]/A[n][n];

/* this loop is for backward substitution*/
for(i=n-1; i>=1; i--)
{
sum=0;
for(j=i+1; j<=n; j++)
{
sum=sum+A[i][j]*x[j];
}
x[i]=(A[i][n+1]-sum)/A[i][i];
}
printf("\nThe solution is: \n");
for(i=1; i<=n; i++)
{
printf("\nx%d=%f\t",i,x[i]); /* x1, x2, x3 are the required solutions*/
}
return(0);
}
OUTPUT:
Enter the order of matrix: 3
Enter the elements of augmented matrix row-wise:
A[1][1] : 10
A[1][2] : -7
A[1][3] : 3
A[1][4] : 5
A[2][1] : -6
A[2][2] : 8
A[2][3] : 4
A[2][4] : 7
A[3][1] : 2
A[3][2] : 6
A[3][3] : 9
A[3][4] : -1
The solution is:
x1=-7.809086
x2=-8.690904
x3=7.418178

CONCLUSION AND DISCUSSION:


Numerical methods are algorithms used for computing numeric data. They are used to
provide ‘approximate’ results for the problems being dealt with and their necessity is felt
when it becomes impossible or extremely difficult to solve a given problem analytically.

It is important to recognize under what conditions a method can be applied, and what starting value(s) to choose, in order to ensure that the chosen method will work (converge).

Numerical methods can be used for:

 finding root(s) of equations - Bisection method, Newton-Raphson, Fixed Point iteration, etc.
 solving ODEs - Euler method, Improved Euler, RK methods, Midpoint method, Predictor-Corrector methods, etc.
 finding values of integrals - Midpoint, Trapezoidal, Simpson's rules
 interpolation - Lagrange interpolation, Newton interpolation, Spline interpolation, etc.
Numerical methods are used extensively in the engineering world. We can develop algorithms, code the programs on a computer, and apply them in real-world practice. We can conclude that without numerical methods and computer programming, it would be quite difficult to carry out complex engineering calculations.
