
International Journal of Advances in Engineering & Technology (IJAET)

ISSN: 2231-1963

Volume 5, Issue 1

SMOOTH, SIMPLE AND TIMELY PUBLISHING OF REVIEW AND RESEARCH ARTICLES!

Date: 01-11-2012

Table of Contents (Volume 5, Issue 1, November-2012)

1. Novel 3D Matching Self-Localisation Algorithm (Miguel Pinto, A. Paulo Moreira, Aníbal Matos, Héber Sobreira), pp. 1-12
2. Nanofluid Thermal Conductivity - A Review (Ravi Sankar. B, Nageswara Rao. D, Srinivasa Rao. Ch.), pp. 13-28
3. Improved Performance of Helixchanger over Segmental Baffle Heat Exchanger using Kern's Method (Sunil Kumar Shinde, Mustansir Hatim Pancha and S. Pavithran), pp. 29-39
4. Establishment of an Empirical Model that Correlates Rainfall Intensity-Duration-Frequency for Makurdi Area, Nigeria (Martins Okey Isikwue, Sam Baba Onoja and Kefas J. Laudan), pp. 40-46
5. Fastica Based Blind Source Separation for CT Imaging under Noise Conditions (Rohit Kumar Malik and Ketaki Solanki), pp. 47-55
6. Improvement of Transient Stability Through SVC (V. Ganesh, K. Vasu, K. Venkata Rami Reddy, M. Surendranath Reddy and T. Gowri Manohar), pp. 56-66
7. Simulation of Secure AODV in Gray Hole Attack for Mobile Ad-Hoc Network (Onkar V. Chandure, Aditya P. Bakshi, Saudamini P. Tidke, Priyanka M. Lokhande), pp. 67-76
8. Certain Investigations on Gravity Waves in the Mesospheric Region (Vivekanand Yadav and R. S. Yadav), pp. 77-83
9. Analytical Modelling of Supercharging Diesel Radial Centrifugal Compressors with Vanes-Based Diffuser (Waleed F. Faris, Hesham A. Rakha, Raed M. Kafafy, Moumen Idres, Salah A.M. Elmoselhy), pp. 84-106
10. Hybrid Lean-Agile Design of Mobile Robots (Salah A.M. Elmoselhy), pp. 107-121
11. Pressure Drop of CuO-Base Oil Nanofluid Flow inside an Inclined Tube (Mahdi Pirhayati, Mohammad Ali Akhavan-Behabadi, Morteza Khayat), pp. 122-129
12. A Maximum Power Point Tracking Method based on Artificial Neural Network for a PV System (Abdessamia Elgharbi, Dhafer Mezghani, Abdelkader Mami), pp. 130-140
13. An Elaboration of Quantum Dots and its Applications (Sambeet Mishra, Bhagabat Panda, Suman Saurav Rout), pp. 141-145
14. Analysis and Evaluation of Cognitive Behavior in Software Interfaces using an Expert System (Saad Masood Butt & Wan Fatimah Wan Ahmad), pp. 146-154
15. An Innovative Approach and Design Issues for New Intelligent E-Learning System (Gopal Sakarkar, Shrinivas Deshpande, Vilas Thakare), pp. 155-162
16. Low Transition Test Pattern Generator Architecture for Mixed Mode Built-In-Self-Test (BIST) (P. Sakthivel, K. Nirmal Kumar, T. Mayilsamy), pp. 163-175
17. Cryptography Scheme of an Optical Switching System using Pico/Femto Second Soliton Pulse (I. S. Amiri, M. Nikmaram, A. Shahidinejad, J. Ali), pp. 176-184
18. Comparison of Different Stress-Strain Models for Confined Self Compacting Concrete (SCC) under Axial Compression (P. Srilakshmi and M. V. Seshagirirao), pp. 185-193
19. A Study on Electronic Document Management System Integration Needs in the Public Sector (Toms Leikums), pp. 194-205
20. Special Dynamical Solutions of the Wheels of New Designed Car Body (Khashayar Teimoori and Muhammad Hassani), pp. 206-217
21. An Assessment of Distributed Generation Islanding Detection Methods (Chandra Shekhar Chandrakar, Bharti Dewani, Deepali Chandrakar), pp. 218-226
22. Study of Widely used Treatment Technologies for Hospital Wastewater and their Comparative Analysis (Jafrudeen and Naved Ahsan), pp. 227-240
23. Implementation of the Hybrid Lean-Agile Manufacturing System Strategic Facet in Automotive Sector (Salah A.M. Elmoselhy), pp. 241-258
24. Mechanical Evaluation of Joining Methodologies in Multi Material Car Body (Irfan Dost, Shoukat Alim Khan, Majid Aziz), pp. 259-268
25. Improved AODV based on Link Quality Metrics (Balaji V, V. Duraisamy), pp. 269-275
26. Improvement of Power Quality of a Distributed Generation Power System (Aruna Garipelly), pp. 276-287
27. Finding Critical Buckling Load of Rectangular Plate using Integrated Force Method (G. S. Doiphode and S. C. Patodi), pp. 288-297
28. Influence of Type of Chemical Admixtures on Sand and Cement Content of Ordinary Grade Concrete (M. K. Maroliya), pp. 298-302
29. Enhancement of Safety Performance at Construction Site (Aref Charehzehi, Alireza Ahankoob), pp. 303-312
30. A Modified Swifter Start Algorithm for Evaluating High Bandwidth Delay Product Networks (Ehab Aziz Khalil), pp. 313-325
31. Influence of Soil-Industrial Effluents Interaction on Subgrade Strength of an Expansive Soil - A Comparative Study (A. V. Narasimha Rao, M. Chittaranjan), pp. 326-335
32. Implementation of Browser based IDE to Code in the Cloud (Lakshmi M. Gadhikar, Deepa Vincent, Lavanya Mohan, Megha V. Chaudhari), pp. 336-348
33. Hamming Distance based Compression Techniques with Security (Atul S. Joshi, Prashant R. Deshmukh), pp. 349-353
34. Association Models for Prediction with Apriori Concept (Smitha. T, V. Sundaram), pp. 354-360
35. A Study of Multiple Human Tracking for Visual Surveillance (Shalini Agarwal, Shaili Mishra), pp. 361-374
36. Modelling and Parametric Study of Gas Turbine Combustion Chamber (M. Sadrameli & M. Jafari), pp. 375-386
37. Statistical Techniques in Anomaly Intrusion Detection System (Hari Om & Tanmoy Hazra), pp. 387-398
38. Abrasive Wear Behaviour of Bamboo-Glass Fiber Reinforced Epoxy Composites using Taguchi Approach (Raghavendra Yadav Eagala, Allaka Gopichand, Gujjala Raghavendra, Sardar Ali S), pp. 399-405
39. A Multiple Kernel Fuzzy C-Means Clustering Algorithm for Brain MR Image Segmentation (M. Ganesh and V. Palanisamy), pp. 406-415
40. Feature Extraction using Histogram of Radon Transform for Palmprint Matching (Jitendra Chaudhari, Pradeep M. Patil, Y. P. Kosta), pp. 416-421
41. Study of Mobile Node Based Coverage Recovery Process for WSN Deployed in Large Food Grain Warehouse (Neha Deshpande & A. D. Shaligram), pp. 422-429
42. LBG Algorithm for Fingerprint Classification (Sudeep Thepade, Dimple Parekh, Unnati Thapar, Vandana Tiwari), pp. 430-435
43. Optimal Placement of SVC and STATCOM for Voltage Stability Enhancement under Contingency using Cat Swarm Optimization (G. Naveen Kumar, M. Surya Kalavathi and R. Harini Krishna), pp. 436-447
44. Autonomic Traffic Lights Control using Ant Colony Algorithm (Wadhah Z. Tareq, Rabah N. Farhan), pp. 448-455
45. CPW Fed Slot Coupled Wideband and Multiband Antennas for Wireless Applications (Mahesh A. Maindarkar and Veeresh G. Kasabegoudar), pp. 456-461
46. Design and Implementation of IEEE 802.16 MAC Layer Simulator (H. M. Shamitha, H. M. Guruprasad, Kishore. M, Ramesh. K), pp. 462-469
47. Topology Optimization of Continuum Structures using Optimality Criterion Approach in ANSYS (Dheeraj Gunwant & Anadi Misra), pp. 470-485
48. A Design of Robust PID Controller for Non-Minimum Network Control System (Dewashri Pansari, Balram Timande, Deepali Chandrakar), pp. 486-493
49. Structural and Magnetic Properties of Cu Substituted Ni-Zn Nanocrystalline Ferrite Synthesis by Sol-Gel Auto Combustion Technique (Vidyadhar V. Awati, Maheshkumar L. Mane, Sopan M. Rathod), pp. 494-502
50. Comparative Parametric Analysis for Stability of 6T and 8T SRAM Cell (Manpreet Kaur, Ravi Kumar Sharma), pp. 503-514
51. A Survey on Energy Efficient Server Consolidation through VM Live Migration (Jyothi Sekhar, Getzi Jeba, S. Durga), pp. 515-525
52. Design an Energy Efficient DSDV Routing Protocol for Mobile Ad Hoc Network (Dheeraj Kumar Anand, Shiva Prakash), pp. 526-535
53. Semantic Web in Medical Information Systems (Prashish Rajbhandari, Rishi Gosai, Rabi C Shah, Pramod KC), pp. 536-543
54. Magnetic and Dielectric Properties of CoxZn1-xFe2O4 Synthesized by Metallo-Organic Decomposition Technique (Anshu Sharma, Kusum Parmar, R. K. Kotnala and N. S. Negi), pp. 544-554
55. Application of Metaheuristics in Transmission Network Expansion Planning - An Overview (Bharti Dewani, M. B. Daigavane, A. S. Zadgaonkar), pp. 555-561
56. An Efficient Variable Speed Stand Alone Wind Energy Conversion System & Efficient Control Techniques for Variable Wind Applications (R. Jagatheesan, K. Manikandan), pp. 562-571
57. Fuzzy like PID Controller Tuning by Multi-Objective Genetic Algorithm for Load Frequency Control in Nonlinear Electric Power Systems (M. A. Tammam, M. A. S. Aboelela, M. A. Moustafa, A. E. A. Seif), pp. 572-583
58. Economic Load Dispatch using Simple and Refined Genetic Algorithm (Lily Chopra and Raghuwinder Kaur), pp. 584-590
59. Experimental Investigations on the Performance and Emission Characteristics of Diesel Engine using Preheated Pongamia Methyl Ester as Fuel (Dinesha P., Mohanan P.), pp. 591-600
60. Analytical Model of Surface Potential and Threshold Voltage of Biaxial Strained Silicon nMOSFET including QME (Shiromani Balmukund Rahi and Garima Joshi), pp. 601-607

Members of IJAET Fraternity, pp. A-J


NOVEL 3D MATCHING SELF-LOCALISATION ALGORITHM


Miguel Pinto, A. Paulo Moreira, Aníbal Matos, Héber Sobreira
INESC Porto - Institute for Systems and Computer Engineering of Porto, Department of Electrical and Computer Engineering, Faculty of Engineering, University of Porto, Porto, Portugal

ABSTRACT
A new and fast methodology is discussed as a solution for pinpointing the location of a robot called RobVigil in a robust way, without environment preparation, even in dynamic scenarios. This solution does not require high computational power. Firstly, EKF-SLAM is used to find the location of the vehicle, allowing the surrounding area to be mapped in 3D space. Afterwards, the constructed map is used in a 3D matching algorithm during the normal operation of the vehicle to pinpoint its location. The 3D matching algorithm uses data from a tilting Laser Range Finder. Experimental results on the performance and accuracy of the proposed method are presented in this paper.

KEYWORDS: Mobile Robot, Service Robots, 3D Matching, Simultaneous Localisation and Mapping, Laser Range Finder

I. INTRODUCTION

To be truly autonomous, a robot must be able to pinpoint its location inside dynamic environments, moving in an unlimited area, without needing environment preparation. Localisation systems that require the preparation of the indoor building and a certain setup time become, in some situations, impractical in both aesthetic and functional terms. Furthermore, the associated costs cannot be considered negligible. In order to fulfil this definition of autonomy, the fundamental motivation and opportunity of this work is the implementation of a robust localisation strategy that runs in a short execution time. The developed approach is a three-dimensional map based localisation method, with the objective of solving the problem of the accumulated error when odometry is used, relying on the environment infrastructure, without constraints on the navigation area and with no need to prepare the environment with artificial landmarks or beacons. It is applied to the RobVigil robot, shown in Fig. 1. The main application of the RobVigil is the surveillance of public facilities, i.e. dynamic environments such as shopping malls, hospitals or other service scenarios. The use of a robot of this type in these scenarios allows systematic and uninterrupted inspection routines with minimum human intervention.


Fig. 1. The RobVigil performing a surveillance routine in a shopping mall.

This is a differential wheel traction vehicle, equipped with odometers and a tilting Laser Range Finder, which acquires three-dimensional data about the surrounding environment and is used for three-dimensional mapping and localisation. To perform the surveillance task, the RobVigil is equipped with sensors to detect dangerous situations, such as fires, floods or gas leaks. It is also equipped with three surveillance cameras: an omnidirectional camera, a high-resolution camera and a thermal camera.

II. MANUSCRIPT ORGANIZATION

The paper is organized as follows: a literature review is given in Section III, while the strategy adopted for this localisation methodology is described in Section IV. The adopted localisation method is described in Section V, with its three stages detailed in Sections VI to VIII. The experimental setup is described in Section IX. The experimental results on the accuracy and performance of the localisation algorithm are shown in Section X. Two possible research topics referred to as future work are presented in Section XI. Finally, conclusions about the improvements and contributions made in this work are presented in Section XII.

III. LITERATURE REVIEW

Different sensors and techniques for the localisation of vehicles are described in [1]. These sensors and techniques are divided into absolute and relative localisation. Dead-reckoning sensors are relative localisation sensors, leading to an increase of the error over time. Examples of dead-reckoning sensors are: odometry (the most commonly used), accelerometers, gyroscopes, inertial navigation sensors (INS) or inertial measurement units (IMU), and Doppler-effect sensors (DVL). Due to their high frequency rate, dead-reckoning sensors are commonly fused with more complex localisation techniques or sensors through probabilistic methods, such as Kalman Filters and Particle Filters [2]. Examples of such sensors include infrared sensors, ultrasound sonars, laser range finders, artificial vision, and techniques based on passive or active beacons.
The sensors and techniques of absolute localisation give information about the robot's position in the world frame. Examples are attitude sensors, digital compasses, GPS and passive or active beacons, such as acoustic beacons [1]. The two essential localisation techniques based on active or passive beacons are triangulation and trilateration [3]. Unfortunately, these methods require environment preparation.
The algorithms concerning the localisation of mobile robots can be divided into two large areas: matching algorithms and Simultaneous Localisation and Mapping (SLAM) algorithms.
There are matching algorithms that need prior knowledge about the navigation area, as in the works [4] and [5]. Another example is the Perfect Match described by M. Lauer et al. [6], used in the Robotic Soccer Middle Size League (MSL) at RoboCup. The Perfect Match is a time-saver algorithm. There are other types of matching algorithms, which compute the overlapping zone between consecutive observations to obtain the vehicle displacement. Examples are the family of Iterative Closest Point (ICP) algorithms [7].

In addition to the computational time spent, the ICP approach has another problem: sometimes there is not sufficient overlap between two consecutive laser scans and it is hard to find a correct solution.
The most common solutions for the SLAM problem are the Extended Kalman Filter (EKF-SLAM), as in the works [8] to [10], and the FastSLAM or Rao-Blackwellized Particle Filters, as in the works [11] to [13].
The EKF-SLAM is a variant of the Extended Kalman Filter and uses a single state matrix representing the vehicle state and the landmarks of the feature map. This state matrix is enlarged whenever a new feature is found. On the contrary, the FastSLAM solution can be seen as a robot estimation problem plus a collection of N landmark estimation problems. Each particle has its own pose estimate and small state matrices representing each landmark of the feature map. The EKF-SLAM computational complexity is O(N^2), while the FastSLAM has a lower computational complexity, O(M log N), with M particles and N landmarks.

IV. STRATEGY OF LOCALISATION

The localisation methodology is divided into the following steps:
1) Pre-localisation and mapping: locating the vehicle using a two-dimensional SLAM algorithm. Once the location is obtained, it is possible to build and store the 3D map of the environment. This procedure is performed in a controlled scenario, without dynamic objects or people moving in the navigation area. It is a preparation task, performed offline and executed only once. The aim is to obtain a 3D map of the facility. Regarding the pre-localisation and mapping step, once the SLAM has been applied, a 2D feature map with segments and points is obtained. The same map is afterwards used to determine the vehicle's location (pre-localisation procedure). The SLAM solution is based on the state-of-the-art EKF-SLAM algorithms, as described in [2]. Finally, still offline and with the vehicle's position obtained during the pre-localisation, the three-dimensional map can be built and the respective distance and gradient matrices can be created and stored (mapping procedure). The stored distance and gradient matrices are used as look-up tables by the 3D Matching localisation procedure during normal vehicle operation, as described in the next section. To create the distance matrix, the distance transform is applied in the 3D space, on the occupancy grid of the building. Furthermore, the Sobel filter, again in the 3D space, is applied to obtain the gradient matrices in both the x and y directions (illustrated in the sketch below).
2) Localisation (Section V): the stored 3D map makes it possible to pinpoint the location of the vehicle by comparing the 3D map with the observation module's readings during normal vehicle operation (3D Matching). The data used corresponds to 3D points acquired by the observation module (tilting LRF) on the upper side of the indoor environment, which can be considered almost static (without dynamic objects moving). The upper side/headroom of a building remains static during large periods of time and does not suffer perturbations from people and dynamic objects crossing the navigation area. As the localisation algorithm is applied on the RobVigil, which navigates in public facilities with people and dynamic objects crossing the navigation area, only data and map information about the headroom of the building is used, aiming to improve the methodology's accuracy and robustness.
Videos and downloads about this localisation methodology can be found at [14].
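The look-up tables of the mapping step can be built with standard operators. The following is a minimal sketch in Python (not the authors' code), assuming the 3D occupancy grid is available as a boolean NumPy array and an illustrative voxel size:

```python
import numpy as np
from scipy import ndimage

def build_lookup_tables(occupied, voxel_size=0.05):
    """Build the distance and gradient look-up tables from a 3D occupancy grid.

    occupied: boolean array, True where a voxel is occupied.
    voxel_size: hypothetical grid resolution in meters.
    """
    # Distance transform: for every free voxel, the distance (in meters)
    # to the nearest occupied voxel.
    dist = ndimage.distance_transform_edt(~occupied) * voxel_size
    # Sobel filter along the x and y axes gives the gradient matrices used
    # later by the matching optimisation (RPROP) step.
    grad_x = ndimage.sobel(dist, axis=0)
    grad_y = ndimage.sobel(dist, axis=1)
    return dist, grad_x, grad_y
```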

V. 3D MATCHING LOCALISATION ALGORITHM

The computationally light Perfect Match algorithm, described in [6] by M. Lauer et al., was adapted from the 2D to the 3D space, using Laser Range Finder data and maintaining the low computational requirements. The 3D matching localisation procedure uses the result of the 3D Perfect Match in a position tracking algorithm, which has three fundamental stages: 1) the Kalman Filter Prediction; 2) the 3D Perfect Match procedure; and 3) the Kalman Filter Update. The Kalman Filter equations can be seen in [2].
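The three stages fit together as a simple tracking loop. The outline below is only a structural sketch; kf_predict, perfect_match_3d and kf_update are hypothetical helpers corresponding to Sections VI, VII and VIII (minimal versions of the first and last are sketched in those sections):

```python
def track_pose(state, cov, odometry, scan, tables):
    # 1) Kalman Filter Prediction from odometry (Section VI)
    state_pred, cov_pred = kf_predict(state, cov, *odometry)
    # 2) 3D Perfect Match refines the predicted pose against the stored
    #    distance/gradient look-up tables (Section VII)
    state_match, cov_match = perfect_match_3d(state_pred, scan, tables)
    # 3) Kalman Filter Update fuses the two estimates (Section VIII)
    return kf_update(state_pred, cov_pred, state_match, cov_match)
```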

VI. KALMAN FILTER PREDICTION

The Kalman Filter Prediction stage takes the previously estimated vehicle state, $\hat{X}(k)$, and, using odometry, estimates the next vehicle state, $\hat{X}(k+1)$. The vehicle state is the vector

$$X(k) = \begin{bmatrix} x(k) & y(k) & \theta(k) \end{bmatrix}^T \quad (1)$$

where the variables $x$, $y$ and $\theta$ are the 2D coordinates and the orientation relative to the x direction. Aiming to model the vehicle's kinematic movement, the following transition function $f(X(k), u, w)$ was used:

$$f(X(k), u, w) = \begin{bmatrix} x(k) + (\Delta d + w_{\Delta d})\cos\big(\theta(k) + (\Delta\theta + w_{\Delta\theta})/2\big) \\ y(k) + (\Delta d + w_{\Delta d})\sin\big(\theta(k) + (\Delta\theta + w_{\Delta\theta})/2\big) \\ \theta(k) + \Delta\theta + w_{\Delta\theta} \end{bmatrix} \quad (2)$$

where $\Delta d$ and $\Delta\theta$ represent the linear and angular displacements, respectively, between the two consecutive time steps $(k+1)$ and $(k)$. The kinematic model's error was modelled as additive Gaussian noise with zero mean and covariance $Q$. The noise vector $w$ is

$$w = \begin{bmatrix} w_{\Delta d} & w_{\Delta\theta} \end{bmatrix}^T \quad (3)$$

Therefore, the new vehicle state is

$$\hat{X}(k+1) = f\big(\hat{X}(k), u, w\big), \quad w = 0 \quad (4)$$

where $w = 0$, since $w$ is modelled as Gaussian noise with zero mean. Therefore, the new estimated state is given by:

$$\hat{X}(k+1) = \begin{bmatrix} \hat{x}(k) + \Delta d\,\cos\big(\hat{\theta}(k) + \Delta\theta/2\big) \\ \hat{y}(k) + \Delta d\,\sin\big(\hat{\theta}(k) + \Delta\theta/2\big) \\ \hat{\theta}(k) + \Delta\theta \end{bmatrix}$$

The Kalman prediction stage also uses the previously estimated covariance matrix, $P(k)$, and computes the next one, $P(k+1)$. The estimated covariance after the prediction stage is given by the equation:

$$P(k+1) = \nabla f_X\, P(k)\, \nabla f_X^T + \nabla f_w\, Q\, \nabla f_w^T \quad (5)$$

where the gradient of the model transition function $f$ with respect to the state $X$, represented by $\nabla f_X$, is equal to:

$$\nabla f_X = \begin{bmatrix} 1 & 0 & -\Delta d\,\sin\big(\hat{\theta}(k) + \Delta\theta/2\big) \\ 0 & 1 & \Delta d\,\cos\big(\hat{\theta}(k) + \Delta\theta/2\big) \\ 0 & 0 & 1 \end{bmatrix} \quad (6)$$

The gradient of the transition function with respect to the noise, $\nabla f_w$, is the identity matrix (7). The covariance $Q$ depends on (increases with) the vehicle's translation and rotation ($\Delta d$ and $\Delta\theta$):

$$Q = \begin{bmatrix} (\sigma_{\Delta d}\,\Delta d)^2 & 0 \\ 0 & (\sigma_{\Delta\theta}\,\Delta d)^2 + (\sigma_{\theta}\,\Delta\theta)^2 \end{bmatrix} \quad (8)$$

When the vehicle moves 1 meter forward, the odometry accumulates a translational and a rotational error equal to $\sigma_{\Delta d}$ and $\sigma_{\Delta\theta}$, respectively. When the vehicle rotates 1 radian, the odometry error of the vehicle's rotation is equal to $\sigma_{\theta}$. These parameters were obtained by measuring the difference between the real displacement and rotation of the vehicle and the values estimated with odometry, over 40 samples. The obtained values were: $\sigma_{\Delta d} = 0.18264$ meters/meter, $\sigma_{\Delta\theta} = 0.08961$ radians/meter and $\sigma_{\theta} = 0.02819$ radians/radian.
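The prediction stage translates directly into a few lines of code. A minimal sketch, assuming the measured noise parameters above and the diagonal form of Q reconstructed in equation (8):

```python
import numpy as np

SIGMA_DD, SIGMA_DTH, SIGMA_TH = 0.18264, 0.08961, 0.02819  # values measured in the paper

def kf_predict(state, cov, dd, dth):
    """Prediction stage: state is [x, y, theta]; dd, dth are odometry increments."""
    x, y, th = state
    a = th + dth / 2.0
    state_new = np.array([x + dd * np.cos(a), y + dd * np.sin(a), th + dth])
    # Jacobian of the transition function with respect to the state, eq. (6)
    F = np.array([[1.0, 0.0, -dd * np.sin(a)],
                  [0.0, 1.0,  dd * np.cos(a)],
                  [0.0, 0.0,  1.0]])
    # Odometry noise covariance growing with displacement, eq. (8); it is
    # embedded directly in the 3x3 state space here (an assumption of this
    # sketch, consistent with the paper taking the noise Jacobian as identity)
    Q = np.diag([(SIGMA_DD * dd) ** 2,
                 (SIGMA_DD * dd) ** 2,
                 (SIGMA_DTH * dd) ** 2 + (SIGMA_TH * dth) ** 2])
    cov_new = F @ cov @ F.T + Q   # eq. (5)
    return state_new, cov_new
```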

VII. PERFECT MATCH

Consider the list of points that constitute the scan of the Laser Range Finder, in the laser frame. Each point $i$ has coordinates in the laser referential equal to $(x_i^L, y_i^L, z_i^L)$. The correspondence of this point in the world frame results in the point $PntList(i)$, with coordinates $(x_i, y_i, z_i)$. This point can be computed with the following expression:

$$PntList(i) = R \begin{bmatrix} x_i^L \\ y_i^L \\ z_i^L \end{bmatrix} + \begin{bmatrix} x_v \\ y_v \\ 0 \end{bmatrix}, \quad R = \begin{bmatrix} \cos\theta & -\sin\theta & 0 \\ \sin\theta & \cos\theta & 0 \\ 0 & 0 & 1 \end{bmatrix} \quad (9)$$

where the matrix $R$ is the rotation of the vehicle in relation to the world referential. The developed 3D Perfect Match takes the vehicle state obtained in the Kalman Filter Prediction step, $\hat{X}(k+1)$, and runs the following steps: 1) matching error; 2) optimisation routine Resilient Back-Propagation (RPROP); and 3) second derivative. The first two steps are continuously iterated until the maximum number of iterations is reached. The third and last step is performed only once, after the RPROP's result is obtained.
The distance matrix, stored in memory, is used to compute the matching error. The matching error is computed through the cost value of the list of points of the Laser Range Finder scan:

$$E = \sum_{i=1}^{N} E_i, \quad E_i = 1 - \frac{L_c^2}{L_c^2 + d_i^2} \quad (10)$$

where $d_i$ and $E_i$ are the distance matrix and cost function values corresponding to the point $PntList(i)$, with coordinates $(x_i, y_i, z_i)$, and $N$ is the number of points in the list of three-dimensional points. The parameter $L_c$ is adjustable. Aiming to have a cost function obeying the condition $0.5 \le E_i \le 1$ when the distance of the point $PntList(i)$ is equal to or higher than 1 meter, the value of $L_c$ used in this work was 1 meter. The cost function was designed to achieve a behaviour similar to the quadratic error function for small values of $d_i$, while neglecting points with large values of $d_i$.
After computing the matching error, it is possible to apply the RPROP algorithm to each vehicle state variable. The algorithm takes the previous state of the vehicle, $X(n-1)$, and estimates a new position for the vehicle, $X(n)$, which is used in the next RPROP iteration. The initial state of the vehicle used in the RPROP algorithm is given by the Kalman Filter Prediction stage, $X(0) = \hat{X}(k+1)$. The distance and gradient matrices, stored in memory, are used to compute the cost function derivatives with respect to the vehicle state, used in the RPROP algorithm. The RPROP routine can be described as follows: during a limited number of iterations, the next steps are performed for each variable to be estimated, $x$, $y$ and $\theta$:
1) If the actual derivative, $\partial E/\partial x(n)$, $\partial E/\partial y(n)$ or $\partial E/\partial \theta(n)$ depending on the variable, is different from zero, it is compared with the previous derivative, $\partial E/\partial x(n-1)$, $\partial E/\partial y(n-1)$ or $\partial E/\partial \theta(n-1)$.
2) If the product of the actual and previous derivatives is lower than zero, it means that the algorithm has already passed a local minimum, and then the direction of the convergence needs to be inverted.
3) If the product of the actual and previous derivatives is higher than zero, it means that the algorithm continues to converge to the local minimum, and then the direction of the convergence should be maintained with the same value.

The RPROP algorithm can be seen in more detail in the following pseudo-code, applied at each iteration n to each state variable v in {x, y, theta}, with step s_v:

    if dE/dv(n) != 0:
        if dE/dv(n) * dE/dv(n-1) > 0:
            s_v(n) = eta_plus * s_v(n-1)
        else if dE/dv(n) * dE/dv(n-1) < 0:
            s_v(n) = eta_minus * s_v(n-1)
        v(n+1) = v(n) - sign(dE/dv(n)) * s_v(n)

The values $\eta^+_x$, $\eta^+_y$, $\eta^+_\theta$ and $\eta^-_x$, $\eta^-_y$, $\eta^-_\theta$ are empirical and adjustable values. They were tested in the intervals $\eta^+ \in [1, 2]$ and $\eta^- \in [0, 1]$. The best performance was achieved with $\eta^+_x = \eta^+_y = \eta^+_\theta = 1.2$ and $\eta^-_x = \eta^-_y = \eta^-_\theta = 0.5$. The initial values of the steps, $s_x(0)$, $s_y(0)$ and $s_\theta(0)$, are also empirical and adjustable parameters. The best performance was achieved with $s_x(0) = s_y(0) = 0.01$ meters and $s_\theta(0) = 0.05$ radians.
The limitation of the number of iterations in the RPROP routine makes it possible to guarantee a maximum execution time for the algorithm. Therefore, as it is intended to operate online, it is necessary to ensure that this maximum time is lower than the observation module sample rate (100 milliseconds). The following experiment was conducted to obtain the maximum number of iterations: the 3D Perfect Match algorithm was applied with knowledge of the true vehicle position. For the large majority of the cases, the estimated position reached the true position in less than 8 iterations. On the contrary, in the few cases where the solution did not reach the real position, it was close enough to be achieved in the following cycle (next observation module time step). That way, the maximum number of iterations used in the RPROP routine was set to ten.
The gradient of the cost function with respect to the state is given by the following expression:

$$\frac{\partial E}{\partial X} = \sum_{i=1}^{N} \frac{\partial E_i}{\partial X}, \quad \frac{\partial E_i}{\partial X} = \frac{2\,L_c^2\,d_i}{\big(L_c^2 + d_i^2\big)^2} \cdot \frac{\partial d_i}{\partial X} \quad (11)$$

where $\partial E_i/\partial X$ is the gradient of the cost function of each point $i$ with respect to the vehicle state. The partial derivatives $\partial d_i/\partial X$ are given by the following vector:

$$\frac{\partial d_i}{\partial X} = \left[ \frac{\partial d_i}{\partial x_v};\;\; \frac{\partial d_i}{\partial y_v};\;\; \frac{\partial d_i}{\partial x_i}\frac{\partial x_i}{\partial \theta} + \frac{\partial d_i}{\partial y_i}\frac{\partial y_i}{\partial \theta} \right] \quad (12)$$

Using the equations presented in (9), the vector (12) can be re-written as the following expression:
$$\frac{\partial d_i}{\partial X} = \begin{bmatrix} \nabla_x d(x_i, y_i, z_i) \\ \nabla_y d(x_i, y_i, z_i) \\ \nabla_x d(x_i, y_i, z_i)\big({-x_i^L}\sin\theta - y_i^L\cos\theta\big) + \nabla_y d(x_i, y_i, z_i)\big(x_i^L\cos\theta - y_i^L\sin\theta\big) \end{bmatrix} \quad (13)$$

where $\nabla_x d(x_i, y_i, z_i)$ and $\nabla_y d(x_i, y_i, z_i)$ are the gradient values at the position $(x_i, y_i, z_i)$ of the precomputed gradient matrices, stored in memory, in the x and y directions, respectively.
To completely perform the 3D Perfect Match stage, it is necessary to calculate the second derivative. The analysis of the second derivative allows an error classification to be found for the 3D Perfect Match solution. For an actual set of 3D LRF data, if the cost function E has a perfectly distinctive minimum, the second derivative is high. On the contrary, when for those points the cost function is flat and there is no distinctive minimum, the second derivative is low. Therefore, a higher second derivative represents situations where the confidence in the solution obtained by the 3D Perfect Match algorithm is higher; a lower second derivative represents cases where the confidence is lower. The 3D Perfect Match's covariance matrix represents the error, i.e. the confidence, in the solution obtained by the algorithm:

$$P_{PM} = \mathrm{diag}\!\left( \frac{K_{xy}}{\partial^2 E/\partial x^2},\; \frac{K_{xy}}{\partial^2 E/\partial y^2},\; \frac{K_{\theta}}{\partial^2 E/\partial \theta^2} \right) \quad (14)$$

where $\mathrm{diag}(\cdot,\cdot,\cdot)$ is the 3x3 diagonal matrix. The parameters $K_{xy}$ and $K_{\theta}$ are normalisation values; the algorithm was tested with values of $K_{xy}$ and $K_{\theta}$ spanning several orders of magnitude, and the best-performing values were adopted. To compute the second derivative, the cost function previously defined in equation (10) is replaced by the quadratic cost function (15). This is done in order to obtain a cost function that is positive definite for all laser scan points. The cost function is given by:

$$E^{q} = \sum_{i=1}^{N} E_i^{q}, \quad E_i^{q} = \frac{1}{2} d_i^2 \quad (15)$$

The second derivative of the total cost function is given by:

$$\frac{\partial^2 E^{q}}{\partial X^2} = \sum_{i=1}^{N} \frac{\partial^2 E_i^{q}}{\partial X^2} \quad (16)$$

where, analogously to (13), each component of $\partial^2 E_i^{q}/\partial X^2$ is obtained from the squared gradient-matrix values:

$$\frac{\partial^2 E_i^{q}}{\partial X^2} = \begin{bmatrix} \nabla_x d(x_i, y_i, z_i)^2 \\ \nabla_y d(x_i, y_i, z_i)^2 \\ \big(\nabla_x d(x_i, y_i, z_i)\big({-x_i^L}\sin\theta - y_i^L\cos\theta\big) + \nabla_y d(x_i, y_i, z_i)\big(x_i^L\cos\theta - y_i^L\sin\theta\big)\big)^2 \end{bmatrix} \quad (17)$$
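To make the matching step concrete, the following is a minimal sketch of the cost function (10) and one RPROP step, not the authors' implementation; `dist` is assumed to be the distance look-up table of Section IV, and `pts` the scan points already converted to integer voxel indices:

```python
import numpy as np

LC, ETA_PLUS, ETA_MINUS = 1.0, 1.2, 0.5   # values reported in the paper

def match_cost(pts, dist):
    """pts: (N, 3) integer voxel indices of the transformed scan points."""
    d = dist[pts[:, 0], pts[:, 1], pts[:, 2]]
    return np.sum(1.0 - LC**2 / (LC**2 + d**2))   # eq. (10)

def rprop_step(grad, prev_grad, step):
    # Grow the step while the derivative keeps its sign; shrink it when the
    # sign flips, meaning a local minimum was just passed.
    grow = grad * prev_grad > 0
    shrink = grad * prev_grad < 0
    step = np.where(grow, step * ETA_PLUS,
                    np.where(shrink, step * ETA_MINUS, step))
    return -np.sign(grad) * step, step   # (state increment, updated step)
```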

VIII. KALMAN FILTER UPDATE

The Kalman Filter update stage combines the state estimated using the odometry, $\hat{X}(k+1)$, with the state given by the Perfect Match procedure, $\hat{X}_{PM}(k+1)$. The Kalman Filter equations can be seen in [2]. The observation model $h(X, v)$ in the update stage is equal to the vehicle state:

$$h(X, v) = X + v \quad (20)$$

where $v$ is modelled as white noise, with a Gaussian distribution with zero mean ($\bar{v} = 0$) and covariance matrix $R$. Therefore, in the update stage the observation is equal to the state obtained after the application of the 3D Perfect Match:

$$z = \hat{X}_{PM}(k+1) \quad (21)$$

The estimated observation is equal to the present estimate of the vehicle state, propagated during the Kalman Filter Prediction stage:

$$h(\hat{X}, 0) = \hat{X}(k+1) \quad (22)$$

In that way, the innovation of the Kalman Filter, $\nu(k+1)$, is equal to:

$$\nu(k+1) = z - h(\hat{X}, 0) \quad (23)$$

In this stage, the covariance matrix propagated using odometry, $P(k+1)$, and the covariance matrix computed in the 3D Perfect Match procedure, $P_{PM}(k+1)$, are used to determine the Kalman Filter gain $K(k+1)$:

$$K(k+1) = P(k+1)\,\nabla h_X^T \left( \nabla h_X\, P(k+1)\, \nabla h_X^T + \nabla h_v\, P_{PM}(k+1)\, \nabla h_v^T \right)^{-1} \quad (24)$$

The gradients of the observation model with respect to the vehicle state and to the observation noise, $\nabla h_X$ and $\nabla h_v$ respectively, are identity matrices. Therefore, the previous equation can be re-written as follows:

$$K(k+1) = P(k+1) \left( P(k+1) + P_{PM}(k+1) \right)^{-1} \quad (25)$$

Therefore, after the update stage, the new estimated state, $\hat{X}(k+1)$, is given by the expression:

$$\hat{X}(k+1) = \hat{X}(k+1) + K(k+1)\,\nu(k+1) \quad (26)$$

The new covariance matrix decreases according to the following equation:

$$P(k+1) = \big(I - K(k+1)\big)\, P(k+1) \quad (27)$$

where $I$ is the square identity matrix with dimension 3x3.
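Because the observation model is the identity, the update reduces to a covariance-weighted blend of the two pose estimates. A minimal sketch of equations (21) to (27):

```python
import numpy as np

def kf_update(state_pred, cov_pred, state_match, cov_match):
    innovation = state_match - state_pred                    # eq. (23)
    gain = cov_pred @ np.linalg.inv(cov_pred + cov_match)    # eq. (25)
    state_new = state_pred + gain @ innovation               # eq. (26)
    cov_new = (np.eye(3) - gain) @ cov_pred                  # eq. (27)
    return state_new, cov_new
```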

IX. EXPERIMENTAL SETUP

The RobVigil is a differential wheel traction vehicle, equipped with odometers and a tilting Laser Range Finder. The Laser Range Finder (LRF) Hokuyo URG-04LX-UG01 was used to perceive the environment. To obtain a three-dimensional sensor, a tilting platform was created based on a DC servo motor, the AX-12 Dynamixel Bioloid. The complete LRF solution is shown in Fig. 2 (left image).

The tilting Laser Range Finder is used as a sensor for environment mapping and self-localisation. The AX-12 motor allows the LRF to rotate between the angles of 35° and 90° (angles between the LRF's plane and the horizontal), with a resolution of 0.29°. The LRF has a range angle of 240°, with an angular resolution of 0.35°, and a distance range of 5 meters. A scan of 769 points is obtained every 100 milliseconds. In this experiment the tilting LRF rotates at a speed of 10 rpm, and the RobVigil moves at an average speed of 0.4 m/s. The robot is shown in Fig. 2 on the right, while the developed observation module is shown on the left.

Fig. 2. Image on the left: the observation module developed. Image on the right: the RobVigil robot equipped with the observation module.
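For reference, converting one 2D scan taken at a given tilt angle into 3D points in the vehicle frame is a single rotation. A minimal sketch, assuming the tilt axis coincides with the vehicle's y axis and a hypothetical mounting offset:

```python
import numpy as np

def scan_to_3d(ranges, tilt, offset=np.zeros(3),
               start_angle=np.deg2rad(-120.0), step=np.deg2rad(0.35)):
    """ranges: LRF distances for one scan; tilt: platform angle in radians."""
    angles = start_angle + step * np.arange(len(ranges))
    # Points in the LRF scanning plane
    pts = np.stack([ranges * np.cos(angles),
                    ranges * np.sin(angles),
                    np.zeros_like(ranges)], axis=1)
    c, s = np.cos(tilt), np.sin(tilt)
    R = np.array([[c, 0.0, s],
                  [0.0, 1.0, 0.0],
                  [-s, 0.0, c]])   # rotation about the tilt (y) axis
    return pts @ R.T + offset
```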

X. EXPERIMENTAL RESULTS

Fig. 3 shows a large indoor environment. The mapping of this environment was obtained successfully by applying the strategy described for pre-localisation and mapping in Section IV. In Fig. 3, the red points represent the upper side of the building (above 1.8 meters of height), which is almost static and is used to perform the 3D Matching.

Fig. 3. Occupancy grid of the scenario, with a square shape of 60 x 60 meters.

Aiming to evaluate the accuracy of the 3D matching localisation algorithm presented in Section V, experiments were made with a high precision Ground Truth (GND) positioning system, the SICK NAV350, a commercial solution for autonomous guided vehicles (AGVs). The SICK NAV350 uses reflectors to output its self-location, with a sample rate of 125 milliseconds and an accuracy of 4 to 25 millimetres. Fig. 4 shows the trajectory in a corridor of the building mapped in Fig. 3. The GND system is only available in this part of the building; therefore, accuracy results are shown only for this corridor.


Fig. 4. The black trajectory is the NAV350's estimated location, while the blue trajectory is the 3D Matching estimate. The green circles are the reflectors placed in the scenario. The corridor measures 25 x 8 meters.

In the figure, the circles are the reflectors used by the NAV350, the black line is the true robot trajectory, while the blue line is the trajectory estimated by the localisation algorithm (3D Matching). The average Euclidean distance between the 3D Matching and the NAV350 positions is 0.08 meters, with a standard deviation of 0.086 meters. The average absolute orientation difference is 1.18°, with a standard deviation of 1.72°. The accuracy reached is sufficient for the intended RobVigil application (surveillance of public facilities).
As the number of points acquired by the LRF is limited to 769 points, the maximum time spent in the localisation algorithm is also limited. The maximum time spent is 20 milliseconds, lower than the sample rate imposed by the observation module (100 milliseconds), which allows the algorithm to be used online, with three-dimensional data, on a Mini-ITX EPIA M10000G with a 1.0 GHz processor.
In this three-dimensional map based approach, the time-saver algorithm Perfect Match, described by M. Lauer et al. [6], was adapted to operate in the three-dimensional space with Laser Range Finder data, instead of artificial vision. The computational complexity and needs of the Perfect Match were maintained, even using three-dimensional data. Since the computation time is not compromised, the experiment conducted by M. Lauer et al. [6], which elects the Perfect Match as a faster algorithm when compared with the Particle Filter algorithm, remains valid. In that experiment, while the Perfect Match takes 4.2 milliseconds, the Particle Filter, using 200 and 500 particles, takes 17.9 milliseconds (four times higher) and 48.3 milliseconds (ten times higher), respectively. Furthermore, compared with the localisation method described by M. Lauer et al. [6], the localisation procedure of this three-dimensional map based approach was improved: the Extended Kalman Filter was applied as a multi-sensor fusion system, aiming to join the odometry information and the three-dimensional Perfect Match result.
Comparing the localisation algorithm proposed and described in this paper with the ICP algorithms, it is faster and can be applied online with smaller cycle times, even when using a larger quantity of Laser Range Finder points. The MbICP algorithm described in [7], which already shows improvements over the standard ICP, takes an average time of 76 milliseconds to find the optimal solution, using only 361 Laser Range Finder points, at a sample rate of 200 milliseconds, on a Pentium IV 1.8 GHz. Furthermore, the 3D matching algorithm proposed in this work has a limited execution time, depending on the number of points used. In the ICP algorithms, the point-to-point match step is necessary to obtain the correspondence with the previous scan. Such a step is not limited in execution time and is widely dependent on the overlapping zone between consecutive scans. This overlapping zone also influences the quality of the reached solution.

XI. FUTURE WORK

The 3D matching algorithm needs an initialization of the vehicle's initial pose. This initialization needs to be close enough to the true vehicle location to allow the subsequent correct operation of the 3D matching algorithm. At this moment, this initialization is passed as a parameter of the 3D Matching algorithm and, therefore, is not executed in an autonomous way. As future work it is intended to implement an Initial Position Computation algorithm to initialize the 3D matching autonomously.

The tilting Laser Range Finder has a set of parameters that are used to transform the 2D points of the LRF into 3D points about the surrounding environment. These parameters describe the translation and rotation of the tilting LRF relative to the vehicle frame. They are, at the moment, carefully measured by hand. In the future, it is intended to develop a calibration methodology capable of being executed autonomously.

XII. CONCLUSIONS
The three-dimensional map based localisation algorithm presented here reduces the computational requirements compared to 2D and 3D SLAM algorithms. Furthermore, the time necessary to locate the robot is also reduced compared to Particle Filters and ICP algorithms. The approach described in this paper allows the localisation algorithm to be executed online. The contributions made in this work are: i) adaptation of a computationally light matching algorithm, the Perfect Match, to be used in the 3D space instead of 2D, using Laser Range Finder data and maintaining the low computational requirements; ii) improvement of the fusion system between the matching algorithm described in [6] by M. Lauer et al. and odometry data, using an EKF; iii) use of only 3D data about the upper side of a building (an almost static scenario), making the localisation more robust and reliable even in dynamic scenarios; the use of three-dimensional data about the upper side of a building increases the quantity and quality of the information, especially because it is almost static; iv) a localisation methodology that can be used with any observation module which acquires 3D data: Kinect, 3D camera (MESA), stereo vision or commercial 3D LRF; v) development of a localisation methodology that makes the robot RobVigil an economically practicable robotic platform.

ACKNOWLEDGEMENTS
This work is funded (or part-funded) by the ERDF - European Regional Development Fund through the COMPETE Programme (operational programme for competitiveness) and by National Funds through the FCT - Fundação para a Ciência e a Tecnologia (Portuguese Foundation for Science and Technology) within project FCOMP-01-0124-FEDER-022701. Miguel Pinto acknowledges FCT for his PhD grant (SFRH/BD/60630/2009).

REFERENCES
[1]. J. Borenstein, H. R. Everett, L. Feng and D. Wehe, "Mobile Robot Positioning: Sensors and Techniques", Journal of Robotic Systems, Special Issue on Mobile Robots, Vol. 14, No. 4, pp. 231-249, April 1997.
[2]. S. Thrun, W. Burgard, and D. Fox (2005), Probabilistic Robotics. Cambridge, Massachusetts: The MIT Press.
[3]. Héber Sobreira, A. Paulo Moreira and João Sena Esteves, "Characterization of Position and Orientation Measurement Uncertainties in a Low-Cost Mobile Platform", 9th Portuguese Conference on Automatic Control, Controlo 2010, Coimbra, Portugal, pp. 635-640, 8-10 September, 2010.
[4]. A. Sousa, P. Moreira and P. Costa, "Multi Hypotheses Navigation for Indoor Cleaning Robots", 3rd International Workshop on Intelligent Robotics (IRobot 2008), pp. 71-82, Lisbon, Portugal, October, 2008.
[5]. M. Pinto, A. P. Moreira and A. Matos, "Localization of Mobile Robots Using an Extended Kalman Filter in a LEGO NXT", IEEE Transactions on Education, Vol. 55, No. 1, pp. 135-144, February 2012.
[6]. M. Lauer, S. Lange and M. Riedmiller, "Calculating the perfect match: an efficient and accurate approach for robot self-localization", RoboCup Symposium, pp. 142-153, Osaka, Japan, 13-19 July, 2005.
[7]. J. Minguez, F. Lamiraux, and L. Montesano, "Metric-based scan matching algorithms for mobile robot displacement estimation", IEEE Transactions on Robotics, Vol. 22, No. 5, pp. 1047-1054, October 2006.
[8]. Csorba, M., (1997), Simultaneous Localisation and Map Building, Thesis (PhD), Robotics Research Group, Department of Engineering Science, University of Oxford, Oxford, England.
[9]. Andrea Garulli, Antonio Giannitrapani, Andrea Rossi, Antonio Vicino, "Mobile robot SLAM for line-based environment representation", 44th IEEE Conference on Decision and Control and 2005 European Control Conference (CDC-ECC '05), pp. 2041-2046, Seville, Spain, 12-15 Dec. 2005.

[10]. L. Teslić, I. Škrjanc and G. Klančar, "Using a LRF sensor in the Kalman-filtering-based localization of a mobile robot", ISA Transactions (Elsevier), Vol. 49, No. 1, pp. 145-153, January 2010.
[11]. S. Thrun, W. Burgard and D. Fox, "A Real-Time Algorithm for Mobile Robot Mapping With Applications to Multi-Robot and 3D Mapping", Best Conference Paper Award, IEEE International Conference on Robotics and Automation, San Francisco, Vol. 1, pp. 321-328, April 2000.
[12]. D. Hähnel, W. Burgard, and S. Thrun, "Learning compact 3D models of indoor and outdoor environments with a mobile robot", Robotics and Autonomous Systems (Elsevier), Vol. 44, No. 1, pp. 15-27, July 2003.
[13]. G. Grisetti, C. Stachniss and W. Burgard, "Improved Techniques for Grid Mapping with Rao-Blackwellized Particle Filters", IEEE Transactions on Robotics, Vol. 23, No. 1, pp. 34-46, February 2007.
[14]. Webpage of SLAM for 3D Map building to be used in a Matching 3D algorithm, available at: www.fe.up.pt/~dee09013, June, 2012.

AUTHORS
Miguel Pinto graduated with a M.Sc. degree in Electrical Engineering from the University of Porto, Portugal, in 2009. Since 2009, he has been a Ph.D. student at this department, developing his research within the Robotics and Intelligent Systems Unit of INESC Porto (the Institute for Systems and Computer Engineering of Porto). His main research areas are Process Control and Robotics, and the navigation and localisation of autonomous vehicles.
A. Paulo Moreira graduated with a degree in Electrical Engineering from the University of Porto in 1986. He then pursued graduate studies at the University of Porto, completing a M.Sc. degree in Electrical Engineering - Systems in 1991 and a Ph.D. degree in Electrical Engineering in 1998. From 1986 to 1998 he also worked as an assistant lecturer in the Electrical Engineering Department of the University of Porto. He is currently a lecturer in Electrical Engineering, developing his research within the Robotics and Intelligent Systems Unit of INESC Porto. His main research areas are Process Control and Robotics.
Aníbal Matos completed a B.Sc., an M.Sc. and a Ph.D. degree in Electrical and Computer Engineering at the University of Porto in 1991, 1994, and 2001, respectively. He is currently working as an assistant lecturer at the Electrical and Computer Engineering Department of the University of Porto and he is also a researcher at the Robotics and Intelligent Systems Unit at INESC Porto. His research areas include modelling, navigation and control of autonomous vehicles, nonlinear control systems, and marine robotics.
Héber Sobreira graduated with a M.Sc. degree in Electrical Engineering from the University of Porto in 2009. Since then, he has been a Ph.D. student at the Electrical and Computer Engineering Department of the University of Porto, developing his research within the Robotics and Intelligent Systems Unit of INESC Porto (the Institute for Systems and Computer Engineering of Porto). His main research area is the navigation and control of indoor autonomous vehicles.


NANOFLUID THERMAL CONDUCTIVITY - A REVIEW

Ravi Sankar. B1, Nageswara Rao. D2, Srinivasa Rao. Ch.3
1 Lecturer, Mechanical Engg. Dept., R.V.R & J.C. College of Engg., Guntur, A.P., India
2 Vice-Chancellor, Centurion University, Odisha, India
3 Associate Professor, Mechanical Engineering Department, Andhra University College of Engineering, Visakhapatnam, A.P., India

ABSTRACT
Fluids dispersed with nanoparticles, known as nanofluids, are promising for heat transfer enhancement due to their high thermal conductivity. In the present study, a literature review of nanofluid thermal conductivity is performed. The possible mechanisms are presented for the high thermal conductivity of nanofluids. The effect of some parameters such as particle volume fraction, particle size, and temperature on thermal conductivity is presented. Theoretical models are explained, model predictions are compared with experimental data, and discrepancies are indicated.

KEYWORDS: Thermal conductivity, Volume Fraction, Particle size, Temperature

I. INTRODUCTION

Cooling is one of the most important challenges facing numerous industrial sectors. Despite the considerable amount of research and development focusing on industrial heat transfer requirements, major improvements in cooling capabilities have been lacking because conventional heat transfer fluids have poor heat transfer properties. One of the usual methods used to overcome this problem is to increase the surface area available for heat exchange, which usually leads to impractical or unacceptable increases in the size of the heat management system. Thus there is a current need to improve the heat transfer capabilities of conventional heat transfer fluids. Choi et al. [1] reported that nanofluids (fluids engineered by suspending metallic nanoparticles in conventional heat transfer fluids) were proved to have high thermal conductivities compared to those of currently used heat transfer fluids, leading to enhancement of heat transfer. Choi et al. [2] produced nanofluids by suspending nanotubes in oil and carried out experiments to measure the effective thermal conductivity of the nanofluids. They reported a 150% thermal conductivity enhancement of poly(α-olefin) oil with the addition of multiwalled carbon nanotubes (MWCNT) at 1% volume fraction. The results showed that the measured thermal conductivity was anomalously greater than theoretical predictions and nonlinear with nanotube concentration. When compared to other nanofluids, nanofluids with nanotubes provide the highest thermal conductivity enhancement. Yang et al. [3] addressed the effects of dispersant concentration, dispersing energy, and nanoparticle loading on the thermal conductivity and steady shear viscosity of nanotube-in-oil dispersions. A thermal conductivity enhancement of 200% was observed for poly(α-olefin) oil containing 0.35% (v/v) MWCNT. It was found that fluids with large-scale agglomerates have high thermal conductivities. Dispersion energy, applied by sonication, can decrease agglomerate size, but also breaks the nanotubes, decreasing both the thermal conductivity and viscosity of nanotube dispersions.
In the present work, experimental studies on the thermal conductivity of nanofluids as affected by parameters like volume fraction, particle size and temperature are presented first, followed by theoretical

models. Comparison of theoretical and experimental results is performed and possible mechanisms are explained for the discrepancy.

II. EXISTING STUDIES ON NANOFLUID THERMAL CONDUCTIVITY

Studies regarding the thermal conductivity of nanofluids have shown higher enhancements of thermal conductivity than base fluids. It is possible to obtain large thermal conductivity enhancements with low particle volume fractions [4-8]. Such enhancement values exceed the predictions of theoretical models developed for suspensions with larger particles. This is considered an indication of the presence of additional thermal transport enhancement mechanisms in nanofluids.

2.1. Effect of Particle Volume Fraction

There are many studies in the literature about the effect of the particle volume fraction of a nanofluid, which is the volumetric concentration of the nanoparticles in the fluid, on the thermal conductivity. Eastman et al. [4] prepared Cu-ethylene glycol nanofluids and found that these fluids have much higher effective thermal conductivity than pure ethylene glycol. The effective thermal conductivity of ethylene glycol was shown to be increased by up to 40% with the addition of approximately 0.3 vol.% Cu nanoparticles of mean diameter 10 nm. The addition of dispersant yielded a greater thermal conductivity than the same concentration of nanoparticles in ethylene glycol without the dispersant, and no effect of either particle size or particle thermal conductivity was observed. Jana et al. [5] used conductive nanomaterials such as carbon nanotubes (CNTs), copper nanoparticles (Cu) and gold nanoparticles (Au), as well as their hybrids such as CNT-Cu or CNT-Au, to enhance the thermal conductivity of fluids. They observed a 70% thermal conductivity enhancement for 0.3% (v/v) Cu nanoparticles in water. The results demonstrated that mono-type nanoparticle suspensions have the greatest enhancement in thermal conductivity, among which the enhancement with Cu nanoparticles was the highest. The experimentally measured thermal conductivities of several nanofluids were consistently greater than the theoretical predictions obtained from existing models. Liu et al. [6] dispersed Cu nanoparticles in ethylene glycol, water, and synthetic engine oil using the chemical reduction method (one-step method). Experimental results illustrated that nanofluids with a low concentration of Cu have considerably higher thermal conductivity than the base liquids. For Cu-water at 0.1 vol.%, the thermal conductivity is increased by 23.8%. A strong dependence of thermal conductivity on the measurement time was observed for the Cu-water nanofluid. Murshed et al. [7] prepared nanofluids by dispersing TiO2 nanoparticles (in rod and spherical shapes) in deionized water. The experimental results demonstrated that the thermal conductivity increases with an increase of particle volume fraction. The particle size and shape also have effects on this enhancement of thermal conductivity. Zhu et al. [8] studied the thermal conductivities of Fe3O4 aqueous nanofluids. The results illustrated that Fe3O4 nanofluids have higher thermal conductivities than other oxide aqueous nanofluids at the same volume fraction. The experimental values are higher than those predicted by the existing models. The anomalous thermal conductivities of Fe3O4 nanofluids are attributed to the observed nanoparticle clustering and alignment. Ceylan et al. [9] prepared Ag-Cu alloy nanoparticles by the inert gas condensation (IGC) process. X-ray diffraction (XRD) patterns demonstrated that the particles were phase separated as pure Cu and Ag, with some Cu integrated into the Ag matrix. Thermal transport measurements have shown that there is a limit to the nanoparticle loading for the enhancement of the thermal conductivity. This maximum value was determined to be 0.006 vol.% of Ag-Cu nanoparticles, which led to an enhancement of the thermal conductivity of the pump oil by 33 percent. Zhang et al. [10] measured the effective thermal conductivities and thermal diffusivities of Au/toluene, Al2O3/water, and carbon nanofiber (CNF)/water nanofluids, and the influence of the volume fraction on the thermal conductivity of the nanofluids was discussed.
The measured results demonstrated that the effective thermal conductivities of the nanofluids show no anomalous enhancements. Putnam et al. [11] described an optical beam deflection technique for measurements of the thermal diffusivity of fluid mixtures and suspensions of nanoparticles with a precision of better than 1%. Solutions of C60/C70 fullerenes in toluene and suspensions of alkanethiolate-protected Au nanoparticles were measured up to maximum volume fractions of 0.6% and 0.35 vol.%, respectively.

The largest increase in thermal conductivity they observed was 1.3% ± 0.8% for 4 nm diameter Au particles suspended in ethanol. As seen in Fig. 1, there is a significant discrepancy between experimental and theoretical data. This discrepancy can be explained by the fact that parameters such as the pH of the nanofluid, the dispersant, the severity of clustering, and the method of production of the nanofluid usually differ in each experiment. The experimental results of Wen and Ding [59] are relatively higher than the results of other research groups, and they are predicted best by the model of Koo and Kleinstreuer [57]. However, since the size distribution of particles is not known in detail, it is difficult to reach a conclusion about the validity of the models. The dependency of the data of Lee et al. [21] on particle volume fraction is somewhat low, and none of the models have such a small slope in the figures. The Hamilton and Crosser [54] model is relatively closer to the experimental data of Lee et al. [21] and Das et al. [15]. It was noted that clusters as large as 100 nm were observed in the study of Lee et al. [21]. Therefore, it may be suggested that those samples are closer to the validity range of the Hamilton and Crosser model. However, Das et al. [15] also considered the effect of temperature in their study and indicated that this agreement is just a coincidence.

Figure 1. Comparison of the experimental results of the thermal conductivity ratio for Al2O3/water nanofluid with theoretical models as a function of particle volume fraction (Özerinç et al. [23]).
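The Hamilton and Crosser model [54] referenced throughout this section has a simple closed form, which makes comparisons like Fig. 1 easy to reproduce. A minimal sketch using the standard formula with shape factor n = 3/ψ (ψ is the particle sphericity, n = 3 for spheres); the property values in the example are illustrative, not taken from the paper:

```python
def hamilton_crosser(k_f, k_p, phi, psi=1.0):
    """Effective conductivity of a suspension (Hamilton-Crosser model).

    k_f: base fluid conductivity, k_p: particle conductivity (W/mK),
    phi: particle volume fraction, psi: sphericity (n = 3/psi).
    """
    n = 3.0 / psi
    num = k_p + (n - 1) * k_f + (n - 1) * phi * (k_p - k_f)
    den = k_p + (n - 1) * k_f - phi * (k_p - k_f)
    return k_f * num / den

# Example: 4 vol.% spherical Al2O3 (k ~ 40 W/mK) in water (k ~ 0.613 W/mK)
print(hamilton_crosser(0.613, 40.0, 0.04))  # ~0.686 W/mK, about a 12% enhancement
```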

2.2. Effect of Particle Size

Particle size is another important parameter affecting the thermal conductivity of nanofluids. It is possible to produce nanoparticles of various sizes, generally ranging between 5 and 100 nm. Xie et al. [12] prepared nanofluids containing Al2O3 nanoparticles with diameters in a range of 12 nm to 304 nm. Nanoparticle suspensions containing a small amount of Al2O3 have significantly higher thermal conductivity than the base fluid. The enhanced thermal conductivity increases with an increase in the difference between the pH value of the aqueous suspension and the isoelectric point of the Al2O3 particles. They concluded that there is an optimal particle size which yields the greatest thermal conductivity enhancement. Kim et al. [13] measured the thermal conductivity of water- and ethylene glycol-based nanofluids containing alumina, zinc-oxide, and titanium-dioxide nanoparticles using the transient hot-wire method. Measurements were conducted by varying the particle size and volume fraction. For nanofluids containing 3 vol.% TiO2 in ethylene glycol, the thermal conductivity enhancement for the 10 nm sample (16%) was approximately double the enhancement for the 70 nm sample. The results illustrated that the thermal conductivity enhancement ratio relative to the base fluid increases linearly with decreasing particle size, but no existing empirical or theoretical correlation can explain this behaviour. Li et al. [14] used a steady state technique to evaluate the effective thermal conductivity of Al2O3-distilled water nanofluids with nanoparticle diameters of 36 and 47 nm. Tests were conducted over a temperature range of 27-37°C for volume fractions ranging from 0.5% to 6.0%. It was

observed that aqueous nanofluids containing 36 nm Al2O3 particles showed up to 8% greater thermal conductivity enhancement than nanofluids containing 47 nm Al2O3 particles. The thermal conductivity enhancement of the two nanofluids demonstrated a nonlinear relationship with respect to temperature, volume fraction, and nanoparticle size.

Figure 2. Comparison of the experimental results of the thermal conductivity ratio for Al2O3/water nanofluid with the Hamilton and Crosser model [54] and the Xue and Xu model [46] as a function of the particle size at various values of the particle volume fraction (Özerinç et al. [23]).

The most significant finding was the effect that variations in particle size had on the effective thermal conductivity of the Al2O3-distilled water nanofluids. The largest enhancement difference observed occurred at a temperature of approximately 32°C and at a volume fraction of between 2% and 4%. From the experimental results it can be observed that an optimal size exists for different nanoparticle and base fluid combinations. When Fig. 2 is observed (colours indicate different values of particle volume fraction: red 1%, brown 2%, blue 3%, and black 4%), it is seen that the Hamilton and Crosser model predicts increasing thermal conductivity with increasing particle size. The Hamilton and Crosser model [54] does not take the effect of particle size on thermal conductivity into account, but it becomes slightly dependent on particle size due to the fact that particle thermal conductivity increases with increasing particle size. However, the model still fails to predict experimental data for particle sizes larger than 40 nm, since the particle size dependence diminishes with increasing particle size. The Xue and Xu model [46], on the other hand, illustrates a trend of increasing thermal conductivity with decreasing particle size; this trend is due to the fact that such models are either based on Brownian motion (the Koo and Kleinstreuer [57] and Jang and Choi [31] models) or based on liquid layering around nanoparticles (the Yu and Choi [55], Xie et al. [29], Xue and Xu [46], and Sitprasert et al. [56] models). Although the general trend for Al2O3/water nanofluids was as presented, there is also experimental data for Al2O3/water nanofluids which shows increasing thermal conductivity with decreasing particle size [19, 20, 61, 32, 35, 58]. It should be noted that clustering may increase or decrease the thermal conductivity enhancement. If a network of nanoparticles is formed as a result of clustering, this may enable fast heat transport along the nanoparticles. On the other hand, excessive clustering may result in sedimentation, which decreases the effective particle volume fraction of the nanofluid.


Figure 3. Comparison of the experimental results of the thermal conductivity ratio for Al2O3/water nanofluid with the Koo and Kleinstreuer model [57] and the Jang and Choi model [31] as a function of temperature at various values of particle volume fraction. Colors indicate different values of particle volume fraction: red 1%, brown 2%, blue 3%, and black 4% (Özerinç et al [23]).

2.3. Effect of Temperature


In conventional suspensions of solid particles (with sizes on the order of millimeters or micrometers) in liquids, the thermal conductivity of the mixture depends on temperature only through the temperature dependence of the thermal conductivities of the base liquid and the solid particles. Das et al [15] investigated the increase of thermal conductivity with temperature for nanofluids with water as base fluid and particles of Al2O3 or CuO as suspension material. A temperature oscillation technique was used for the measurement of thermal diffusivity and thermal conductivity. The results indicated an increase of the enhancement with temperature; within the limited temperature range considered, the gradual curve appeared linear. Yang et al [16] studied the temperature dependence of thermal conductivity enhancement in nanofluids containing Bi2Te3 nanorods of 20 nm in diameter and 170 nm in length. A 3ω wire method was developed for the measurement of the thermal conductivity of the nanofluids. The thermal conductivity enhancement of these nanofluids was found experimentally to decrease with increasing temperature, in contrast to the trend observed in nanofluids containing spherical nanoparticles: a decrease in the effective thermal conductivity was observed as the temperature increased from 5 to 50 °C. The contrary trend was attributed mainly to the particle aspect ratio. Honorine et al [17] reported effective thermal conductivity measurements of alumina/water and copper oxide/water nanofluids. The effects of particle volume fraction, temperature and particle size were investigated. Readings at ambient temperature as well as over a relatively large temperature range were made for various particle volume fractions up to 9%. The results clearly illustrated the predicted overall effect of an increase in the effective thermal conductivity with an increase in particle volume fraction and with a decrease in particle size. Furthermore, the relative increase in thermal conductivity was found to be more important at higher temperatures. The experimental results, shown in Fig. 3, suggest that the thermal conductivity ratio increases with temperature. It should be noted that the presented data of Li and Peterson [18] was obtained by using the line fit provided by the authors, since the data points create ambiguity due to fluctuations; in the models, the particle size is selected as 40 nm, since most of the experimental data is close to that value, as explained in the previous sections. It is seen that the temperature dependence of the data of Li and Peterson [18] is much higher than the results of the other two research groups. On the other hand, the results of Chon et al. [32] show somewhat weaker temperature dependence. This might be explained by the fact that the average size of the nanoparticles in that study is larger when compared to the others, since increasing particle size decreases the effect of both Brownian motion and nanolayer formation. It should also be noted that the dependence on particle volume fraction becomes more pronounced with increasing temperature in all of the experimental studies [18]. When it comes to theoretical models, the predictions of the Hamilton and Crosser model [54], Yu and Choi model [55], Xue and Xu model [46], and Xie et al. [29] model do not depend on temperature, except for a very slight decrease in thermal conductivity ratio with temperature due to the increase in the thermal conductivity of water with temperature. Therefore, these models fail to predict the observed trends in the experimental data. Since the predictions of these four models with respect to temperature do not provide any additional information, the associated plots are not shown here. The model proposed by Koo and Kleinstreuer [57] considers the effect of Brownian motion on the thermal conductivity, and the predictions of this model are presented in Fig. 3; this model predicts the trend in the experimental data correctly. The predictions of the model proposed by Jang and Choi [31] are also presented in Fig. 3. It should be noted that this model predicts a nonlinear temperature dependence of the thermal conductivity, whereas the other two models predict linear behavior. The experimental results of Das et al. [15] and Li and Peterson [18] show a nearly linear variation of the thermal conductivity ratio with temperature, which contradicts the model. On the other hand, the results of Chon et al. [32] suggest a nonlinear variation, and the associated trend is somewhat in agreement with the model of Jang and Choi.

III. NANOFLUID THERMAL CONDUCTIVITY MODELS

The thermal conductivity enhancement of nanofluids is higher than that predicted by conventional models for dispersions of larger particles, as shown in Fig. 4. Therefore, different researchers (Keblinski et al [27], Li and Xuan [28], Xie et al [29]) explored the mechanisms of heat transfer in nanofluids and proposed four possible reasons for the anomalous enhancement:
1. Brownian motion of the particles
2. Molecular-level layering of the liquid at the liquid/solid interface
3. The nature of heat transport in nanoparticles
4. The effects of nanoparticle clustering

Figure 4. Comparison of conventional models with the experimental data

Keblinski et al [27] investigated the effect of nanoparticle size on the thermal conductivity of nanofluids. Thermal conductivity was found to increase with a reduction in the grain size of the nanoparticles within the nanofluid. They concluded that the key factors for the thermal properties of nanofluids are the ballistic, rather than diffusive, nature of heat transport in the nanoparticles, combined with direct or fluid-mediated clustering effects that provide paths for rapid heat transport. Krischer [30] developed an empirical model to describe the irregular arrangement of suspended particles. The greater surface area associated with smaller particles promotes heat conduction, while the higher specific surface area of nanoparticles also promotes a greater degree of aggregation than in a suspension of larger particles. Most nanofluid thermal conductivity models were developed based on one or more of these mechanisms.

3.1. Brownian motion


Jang and Choi [31] found that the Brownian motion of nanoparticles at the molecular and nanoscale level is a key mechanism governing the thermal behavior of nanofluids. They developed a theoretical model that accounts for the fundamental role of dynamic nanoparticles in nanofluids. The model not only captures the concentration- and temperature-dependent conductivity, but also predicts a strongly size-dependent conductivity. It is based on a linear combination of contributions from the liquid, the suspended particles, and the Brownian motion of the particles to give:


\[ \frac{k_{eff,m}}{k_1} = (1-\phi) + \beta\,\frac{k_2}{k_1}\,\phi + C_1\,\frac{d_f}{d}\,\mathrm{Re}^2\,\mathrm{Pr}\,\phi \qquad (1) \]

where β is a constant related to the Kapitza resistance, C1 is a proportionality constant, k2 is the thermal conductivity of the nanoparticles, df is the diameter of a fluid molecule, d is the particle diameter, φ is the particle volume fraction, and Re and Pr are the Reynolds and Prandtl numbers of the fluid, respectively. The Reynolds number, Re, is defined by

\[ \mathrm{Re} = \frac{\rho\,k_B T}{3\pi\,\mu^2\,l_f} \qquad (2) \]

where kB is the Boltzmann constant, lf is the mean free path of a fluid molecule, and ρ and μ are the density and viscosity of the fluid, respectively. Their model reflects a strong temperature dependence due to Brownian motion and a simple inverse relationship with the particle diameter. Based on the Jang and Choi model, Chon et al. [32] employed the Buckingham-Pi theorem to develop the following empirical correlation,

\[ \frac{k_{eff,m}}{k_1} = 1 + 64.7\,\phi^{0.7460}\left(\frac{d_f}{d}\right)^{0.3690}\left(\frac{k_2}{k_1}\right)^{0.7476}\mathrm{Pr}^{0.9955}\,\mathrm{Re}^{1.2321} \qquad (3) \]

where the Reynolds and Prandtl numbers are the same as in the Jang and Choi model [31]. The equation was fit to their measurements of aqueous nanofluids containing three sizes of alumina particles. However, their correlation is of limited use, since it was based on measurements over a limited temperature range (20 to 70 °C) and was fit to thermal conductivity data for a single nanoparticle material in a single base fluid. Chon et al [32] did not demonstrate any ability of their model to predict the thermal conductivity of other nanofluids. Other models are available that are fitted to similarly limited nanofluid data and include no consideration of the more conventional thermal conductivity models [33-35]. However, some researchers have used conventional heterogeneous thermal conductivity models as a starting point and extended these to include a particle size dependence based on Brownian motion. Xuan et al [36] adopted the concepts of both the Langevin equation of Brownian motion and the stochastic thermal process to describe the temperature fluctuation of the nanoparticles suspended in base fluids. They developed an extension of the Maxwell equation to include the micro-convective effect of the dynamic particles and the heat transfer between the particles and the fluid to give:

\[ \frac{k_{eff,m}}{k_1} = \frac{k_2 + 2k_1 + 2\phi(k_2 - k_1)}{k_2 + 2k_1 - \phi(k_2 - k_1)} + \frac{18\,\phi\,H A T\,\tau^2}{\pi\,\rho_2 c_2\,d^6\,k_1} \qquad (4) \]

where H is the overall heat transfer coefficient between the particle and the fluid, A is the corresponding heat transfer area, and τ is the comprehensive relaxation time constant. The heat transfer area is proportional to the square of the diameter; thus the effective thermal conductivity is proportional to the inverse of the particle diameter to the fourth power. Such strong particle size dependence has yet to be demonstrated experimentally. Additionally, the equation reduces to the Maxwell equation with increasing particle size. As discussed previously, thermal conductivity enhancements greater than those predicted by the Maxwell equation have been reported for nanofluids containing relatively large nanoparticles (d > 30 nm) [37]. It is therefore obvious that models that reduce to the Maxwell equation at large nanoparticle sizes will not be able to represent the published data. Numerous thermal conductivity models have been developed for heterogeneous systems and specifically for nanofluids. Theoretical models such as those by Maxwell [37] and Bruggeman [38] were derived by assuming a homogeneous or random arrangement of particles. However, these assumptions are not valid for dispersions containing aggregates. Empirical models [20, 22] have been successfully employed to account for the spatial arrangement of particles. More recently, particle size has been incorporated into many models in an attempt to describe the thermal conductivity of nanofluids. Several mechanisms have been described that may affect the thermal conductivity of nanofluids, including Brownian motion of the particles, ordered liquid molecules at the solid/liquid interface, nanoparticle clustering, and interfacial thermal resistance. However, there is no consensus as to which mechanism has the dominant effect on the thermal conductivity.
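To make the structure of these Brownian-motion correlations concrete, the short Python sketch below evaluates the Chon et al. [32] correlation (Eq. 3) together with the Brownian Reynolds number of Eq. (2). It is a minimal illustration only: the water and alumina property values, the molecular diameter and mean free path of water, and the 47 nm particle size are assumptions made here for demonstration, not values taken from the cited measurements.

import math

# Illustrative property values (assumed for demonstration only)
k_f = 0.613         # base fluid (water) thermal conductivity, W/m K
k_p = 40.0          # particle (Al2O3) thermal conductivity, W/m K
rho_f = 996.0       # water density, kg/m^3
mu = 0.798e-3       # water dynamic viscosity near 30 C, Pa s
d_f = 0.3e-9        # assumed diameter of a water molecule, m
d_p = 47e-9         # particle diameter, m
l_f = 0.17e-9       # assumed mean free path of a water molecule, m
k_B = 1.380649e-23  # Boltzmann constant, J/K
T = 300.0           # temperature, K
phi = 0.01          # particle volume fraction

# Brownian Reynolds number, Eq. (2): Re = rho k_B T / (3 pi mu^2 l_f)
Re = rho_f * k_B * T / (3.0 * math.pi * mu**2 * l_f)
Pr = mu * 4178.0 / k_f  # Prandtl number with an assumed c_p = 4178 J/kg K

# Chon et al. correlation, Eq. (3)
ratio = 1.0 + 64.7 * phi**0.7460 * (d_f / d_p)**0.3690 \
        * (k_p / k_f)**0.7476 * Pr**0.9955 * Re**1.2321
print(f"k_eff/k_f = {ratio:.3f}")

With these assumed inputs the sketch predicts an enhancement of a few percent at a 1% volume fraction, consistent in magnitude with the alumina data discussed above.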

3.2. Interfacial layering of liquid molecules


Nan et al [39, 40] addressed the effect of interfacial resistance (Kapitza resistance) on the thermal conductivity of particulate composites due to weak interfacial contact. They set up a theoretical model to predict the thermal conductivity of composites by including the interfacial resistance. According to this model, the effective thermal conductivity should decrease with decreasing nanoparticle size, which is contrary to most of the experimental results for nanofluids. Yu et al [41] reported that molecules of normal liquids close to a solid surface can organize into a layered, solid-like structure. This kind of structure at the interface is a governing factor in heat conduction from the solid surface to the liquid. Choi et al [42] pointed out that this mechanism contributed to the anomalous thermal conductivity enhancement in nanotube dispersions. However, Keblinski et al [27] indicated that the thickness of the interfacial solid-like layer is too small to dramatically increase the thermal conductivity of nanofluids, because a typical interfacial width is only on the order of an atomic distance (1 nm). So this mechanism can only be applied to very small nanoparticles (<10 nm). Xue [45, 46] developed a novel model for the effective thermal conductivity of nanofluids, based on Maxwell theory and average polarization theory, by including the interface effect between the solid particle and the base liquid. In this work, the solid nanoparticle and its interfacial shell (a nanolayer of liquid molecules) were considered together as a complex nanoparticle, and the model was set up on this basis. The theoretical results obtained from this model were in good agreement with the experimental data for alumina nanoparticle dispersions (Xue, Wu et al. [46]) and showed a nonlinear volume fraction dependence of the thermal conductivity enhancement in nanotube dispersions. Ren, Xie et al. [47] and Xie, Fujii et al. [48] investigated the effect of the interfacial layer on the effective thermal conductivity of nanofluids. A model was derived from the general solution of the heat conduction equation and the equivalent hard sphere fluid model representing the microstructure of particle suspensions. Their simulation work showed that the thermal conductivity of nanofluids increases with decreasing particle size and increasing nanolayer thickness. The calculated values were in agreement with some experimental data (Lee, Choi et al. [42]; Eastman, Choi et al. [43]). Recently, a new thermal conductivity model for nanofluids was developed by Yu et al [49]. This model was based on the assumption that monosized spherical nanoparticles are uniformly dispersed in the liquid and located at the vertexes of a simple cubic lattice, with each particle surrounded by an organized liquid layer. A nonlinear dependence of thermal conductivity on particle concentration was shown by this model, and the relationship changed from convex upward to concave upward. In order to find the connection between the nanolayer at the interface and the thermal conductivity of nanofluids, Yu et al. [41] modified the Maxwell equation for spherical particles and the Hamilton-Crosser equation for non-spherical particles to predict the thermal conductivity of a nanofluid by including the effect of this ordered nanolayer. The result was substituted into the Maxwell model and the following expression was obtained.

\[ k_{nf} = \frac{k_{pe} + 2k_f + 2(k_{pe} - k_f)(1+\beta)^3\phi}{k_{pe} + 2k_f - (k_{pe} - k_f)(1+\beta)^3\phi}\;k_f \qquad (5) \]

where kpe is the thermal conductivity of the equivalent nanoparticle:

\[ k_{pe} = \frac{\left[2(1-\gamma) + (1+\beta)^3(1+2\gamma)\right]\gamma}{-(1-\gamma) + (1+\beta)^3(1+2\gamma)}\;k_p \qquad (6) \]

where

\[ \gamma = \frac{k_l}{k_p} \qquad (7) \]

and kl is the thermal conductivity of the nanolayer. β is defined as:


\[ \beta = \frac{t}{r_p} \qquad (8) \]

where t is the nanolayer thickness and rp the nanoparticle radius. Yu and Choi later applied the same idea to the Hamilton and Crosser [54] model and proposed a model for nonspherical particles [65]. Another model that considers non-spherical particles was developed by Xue [66]. Xie et al. [29] also studied the effect of the interfacial nanolayer on the enhancement of thermal conductivity with nanofluids. The nanolayer was modeled as a spherical shell of thickness t around the nanoparticle, similar to Yu et al [41]. However, the thermal conductivity was assumed to change linearly across the radial direction, so that it is equal to the thermal conductivity of the base liquid at the nanolayer-liquid interface and equal to the thermal conductivity of the nanoparticle at the nanolayer-nanoparticle interface. The associated expression for the thermal conductivity of the nanofluid was given as:

\[ \frac{k_{nf} - k_f}{k_f} = 3\Theta\,\phi_T + \frac{3\Theta^2\,\phi_T^2}{1 - \Theta\,\phi_T} \qquad (9) \]

where

\[ \Theta = \beta_{lf}\,\frac{(1+\gamma)^3 - \beta_{pl}/\beta_{fl}}{(1+\gamma)^3 + 2\,\beta_{lf}\,\beta_{pl}} \qquad (10) \]

with

\[ \beta_{lf} = \frac{k_l - k_f}{k_l + 2k_f}, \qquad \beta_{pl} = \frac{k_p - k_l}{k_p + 2k_l}, \qquad \beta_{fl} = \frac{k_f - k_l}{k_f + 2k_l} \]

Here φT is the total volume fraction of nanoparticles and nanolayers and kl is the thermal conductivity of the nanolayer. φT can be determined using

\[ \phi_T = \phi\,(1+\gamma)^3 \]

where

\[ \gamma = \frac{t}{r_p} \]
kl was defined as:

\[ k_l = \frac{M^2\,k_f}{(M - \gamma)\ln(1+M) + \gamma M} \]

where

\[ M = \varepsilon_p(1+\gamma) - 1, \qquad \varepsilon_p = \frac{k_p}{k_f} \]
When the thermal conductivity of the nanolayer is taken as a constant, this model gives the same results as the Yu and Choi [41] model. It was shown that, for a chosen nanolayer thickness, the model is in agreement with only some of the experimental data. As a result, it was concluded that liquid layering around nanoparticles is not the only mechanism that affects the thermal conductivity of nanofluids.
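As an illustration of how Eqs. (5)-(8) fit together, the following Python sketch computes the Yu and Choi renovated Maxwell prediction for an assumed particle/layer/fluid combination; the conductivity values, the 2 nm layer thickness, and the 20 nm particle radius are illustrative assumptions, not values from the original study.

def yu_choi_knf(k_f, k_p, k_layer, phi, t, r_p):
    """Yu and Choi renovated Maxwell model, Eqs. (5)-(8).

    k_f, k_p, k_layer : conductivities of fluid, particle, nanolayer (W/m K)
    phi               : particle volume fraction
    t, r_p            : nanolayer thickness and particle radius (same units)
    """
    beta = t / r_p                       # Eq. (8)
    gamma = k_layer / k_p                # Eq. (7)
    # Equivalent-particle conductivity, Eq. (6)
    k_pe = ((2.0 * (1.0 - gamma) + (1.0 + beta)**3 * (1.0 + 2.0 * gamma)) * gamma
            / (-(1.0 - gamma) + (1.0 + beta)**3 * (1.0 + 2.0 * gamma))) * k_p
    # Renovated Maxwell expression, Eq. (5)
    num = k_pe + 2.0 * k_f + 2.0 * (k_pe - k_f) * (1.0 + beta)**3 * phi
    den = k_pe + 2.0 * k_f - (k_pe - k_f) * (1.0 + beta)**3 * phi
    return num / den * k_f

# Illustrative call with assumed values: a 2 nm layer at roughly twice the
# fluid conductivity around a 20 nm-radius alumina particle, 3 vol. %
print(yu_choi_knf(k_f=0.613, k_p=40.0, k_layer=1.2, phi=0.03, t=2e-9, r_p=20e-9))

For these assumed inputs the model returns roughly a 10% enhancement at a 3% volume fraction, showing how the nanolayer inflates the effective particle volume through the (1+β)³ factor.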

Xue and Xu [46] presented another theoretical study of the effective thermal conductivity of nanofluids. In their derivation, nanoparticles were assumed to have a liquid layer around them with a specific thermal conductivity. First, an expression for the effective thermal conductivity of the complex particle, which was defined as the combination of the nanoparticle and the nanolayer, was determined. Then, by using Bruggeman's effective media theory [38], the effective thermal conductivity of the nanofluid was determined. The resulting implicit expression for the thermal conductivity of nanofluids is

\[ \left(1 - \frac{\phi}{\alpha}\right)\frac{k_{nf} - k_f}{2k_{nf} + k_f} + \frac{\phi}{\alpha}\,\frac{(k_{nf} - k_l)(2k_l + k_p) - \alpha(k_p - k_l)(2k_l + k_{nf})}{(2k_{nf} + k_l)(2k_l + k_p) + 2\alpha(k_p - k_l)(k_l - k_{nf})} = 0 \qquad (11) \]

where the subscript l refers to the nanolayer and α is defined as

\[ \alpha = \left(\frac{r_p}{r_p + t}\right)^3 \qquad (12) \]

where t is the thickness of the nanolayer.

Li et al. [67] considered the effects of Brownian motion, liquid layering around nanoparticles, and clustering together. The effect of temperature on the average cluster size, the Brownian motion, and the nanoparticle thermal conductivity was taken into account. The nanoparticle thermal conductivity is calculated by using the following expression:

\[ k_p = \frac{3r^*/4}{3r^*/4 + 1}\,k_b \qquad (13) \]

Here, kb is the thermal conductivity of the bulk material and r* = rp/Λ, where Λ is the mean free path of phonons, which can be calculated according to the following expression:

\[ \Lambda = \frac{10\,a\,T_m}{\Gamma\,T} \qquad (14) \]

Here, a is the crystal lattice constant of the solid, Γ the Grüneisen constant, T the temperature, and Tm the melting point (in K). It is assumed that the thermal conductivity of the nanolayer is equal to the thermal conductivity of the nanoparticles. As a result, the particle volume fraction is modified according to the expression:

\[ \phi_{eff} = \phi\left(1 + \frac{t}{r_p}\right)^3 \qquad (15) \]

where rp is the particle radius. The expressions presented above are substituted into the Xuan et al. [28] model to obtain:

\[ \frac{k_{nf}}{k_f} = \frac{k_p + 2k_f - 2\phi(k_f - k_p)}{k_p + 2k_f + \phi(k_f - k_p)} + \frac{\rho_p\,\phi\,c_{p,p}}{2k_f}\sqrt{\frac{k_B T}{3\pi\,r_{cl}\,\mu_f}} \qquad (16) \]

Another study regarding the effect of nanolayers was made by Sitprasert et al. [56]. They modified the model proposed by Leong et al. [68] by taking into account the effect of temperature on the thermal conductivity and thickness of the nanolayer. Leong et al.'s static model is as follows:

\[ k_{nf} = \frac{(k_p - k_l)\,\phi\,k_l\left(2\beta_1^3 - \beta^3 + 1\right) + (k_p + 2k_l)\,\beta_1^3\left[\phi\,\beta^3(k_l - k_f) + k_f\right]}{\beta_1^3\,(k_p + 2k_l) - (k_p - k_l)\,\phi\,\left(\beta_1^3 + \beta^3 - 1\right)} \qquad (17) \]

Here, the subscript l refers to the nanolayer, and β and β1 are defined as

\[ \beta = 1 + \frac{t}{r_p}, \qquad \beta_1 = 1 + \frac{t}{2r_p} \]

where t is the thickness of the nanolayer and rp is the radius of the nanoparticles. This model was modified by providing the following relation for the determination of the nanolayer thickness:

\[ t = 0.01\,(T - 273)\,r_p^{0.35} \qquad (18) \]

where T is the temperature in K and rp the particle radius in nanometers. After the determination of the nanolayer thickness, the thermal conductivity of the nanolayer is found according to the expression:

\[ k_l = C\,\frac{t}{r_p}\,k_f \qquad (19) \]

where C is 30 and 110 for Al2O3 and CuO nanoparticles, respectively. It should be noted that the above expressions for the thickness and thermal conductivity of the nanolayer were determined by using experimental data (which is known to have great discrepancies and uncertainties), and no explanation was given regarding the physics of the problem. When the theoretical models based on nanolayer formation around nanoparticles are considered, it is seen that the main challenge is finding the thermal conductivity and thickness of the nanolayer.
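The temperature-dependent nanolayer expressions of Eqs. (18) and (19) are straightforward to evaluate; the minimal Python sketch below does so for an assumed 20 nm-radius alumina particle in water at 300 K, using the empirical constants C = 30 (Al2O3) and C = 110 (CuO) quoted above. The chosen radius, temperature, and fluid conductivity are illustrative assumptions.

def sitprasert_layer(T, r_p_nm, k_f, material="Al2O3"):
    """Temperature-dependent nanolayer thickness and conductivity,
    Eqs. (18)-(19) of Sitprasert et al. [56].

    T       : temperature in K
    r_p_nm  : particle radius in nanometers
    k_f     : base-fluid thermal conductivity (W/m K)
    """
    C = 30.0 if material == "Al2O3" else 110.0   # 110 applies to CuO
    t = 0.01 * (T - 273.0) * r_p_nm**0.35        # layer thickness, nm (Eq. 18)
    k_l = C * (t / r_p_nm) * k_f                 # layer conductivity   (Eq. 19)
    return t, k_l

# Illustrative: 20 nm-radius alumina particle in water at 300 K
t, k_l = sitprasert_layer(300.0, 20.0, 0.613)
print(f"t = {t:.2f} nm, k_l = {k_l:.2f} W/m K")

For these assumed inputs the layer is well under a nanometer thick with a conductivity of the same order as the base fluid, which underlines the sensitivity of such empirically fitted nanolayer parameters.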

3.3. Nature of heat transfer in nanoparticles


Keblinski et al [27] estimated the mean free path of a phonon in an Al2O3 crystal to be ~35 nm. Phonons therefore cannot diffuse within 10 nm particles, but have to move ballistically. For ballistic phonons initiated in one nanoparticle to persist in the liquid and reach another nanoparticle, high packing fractions, soot-like particle assemblies, and Brownian motion of the particles are necessary to keep the separation between nanoparticles small enough. However, Xie et al. [12] found in their research on alumina nanofluids that when the particle size is close to the mean free path of phonons, the thermal conductivity of the nanofluid may decrease with particle size, because the intrinsic thermal conductivity of the nanoparticle is reduced by the scattering of phonons at the particle boundary. However, this result was not in agreement with most of the experimental results from other groups. Choi et al [42] indicated that the sudden transition from ballistic heat conduction in nanotubes to diffusive heat conduction in the liquid would severely limit the contribution of ballistic heat conduction to the overall thermal conductivity of nanotube dispersions. They suggested that both ballistic heat conduction and layering of liquid molecules at the interface contribute to the high thermal conductivity of nanotube dispersions. The nature of heat transport in nanoparticles, or fast ballistic heat conduction, cannot alone explain the thermal conductivity enhancement of nanofluids, due to the barrier caused by slow heat diffusion in the liquid. Other mechanisms need to be combined with it to fully understand the enhancement of the thermal conductivity in nanofluids.

3.4. Nanoparticle clusters


Xuan et al. [28] studied the thermal conductivity of nanofluids by considering the Brownian motion and clustering of nanoparticles. The following equation was proposed to predict the thermal conductivity of nanofluids:

\[ \frac{k_{nf}}{k_f} = \frac{k_p + 2k_f - 2\phi(k_f - k_p)}{k_p + 2k_f + \phi(k_f - k_p)} + \frac{\rho_p\,\phi\,c_{p,p}}{2k_f}\sqrt{\frac{k_B T}{3\pi\,r_{cl}\,\mu_f}} \qquad (18) \]

Here, rcl is the apparent radius of the nanoparticle clusters, which should be determined by experiment, T is the temperature in K, and μf is the dynamic viscosity of the base fluid, which can be calculated from the study of Li and Xuan [36]. The first term on the right-hand side of Eq. (18) is the Maxwell model [37] for the thermal conductivity of suspensions of solid particles in fluids. The second term on the right-hand side of Eq. (18) takes the effect of the random motion of the nanoparticles into account. For the contribution of this term, the following values were presented for a Cu (50 nm)/water nanofluid: for φ = 0.03%, the contribution of the second term is 11% when clustering occurs and 17% when clustering does not occur; for φ = 0.04%, the contribution of the second term is 14% when clustering occurs and 24% when clustering does not occur. It was indicated that the Brownian motion of nanoparticles becomes

more effective with increasing temperature. On the other hand, as nanoparticles (or clusters) become larger, their random motion becomes slower, and this decreases the enhancement in thermal conductivity. It should be noted that the second term on the right-hand side of the equation is not non-dimensional, which indicates a mistake in the analysis. Chen et al. [63] measured the viscosity of TiO2/water and TiO2/ethylene glycol nanofluids and proposed a way of calculating the thermal conductivity of nanofluids by using the data. Two types of nanoparticles were used: spherical particles (25 nm) and cylindrical particles (10 nm in diameter and 100 nm in length). The model was found to be a function of the cluster radius, and the cluster radius values of the sample nanofluids were determined by matching the predictions of the modified model with experimental data. The determined cluster radius values were then used in the proposed thermal conductivity model, which is a modification of the Hamilton and Crosser [54] model:

\[ \frac{k_{nf}}{k_f} = \frac{k_{cl} + (n-1)k_f + (n-1)\,\phi_{cl}\,(k_{cl} - k_f)}{k_{cl} + (n-1)k_f - \phi_{cl}\,(k_{cl} - k_f)} \qquad (19) \]

where kcl and φcl are the thermal conductivity and volume fraction of the clusters, respectively; n was taken as 3 for the spheres and 5 for the cylinders in this work. The cluster volume fraction is

\[ \phi_{cl} = \phi\left(\frac{r_{cl}}{r_p}\right)^{3-D} \]

where rcl and rp are the radii of the clusters and nanoparticles, respectively, and D is the fractal index, which was taken as 1.8 in the viscosity model; the same value might be used here. The rcl/rp values are equal to 2.75 and 3.34 for the TiO2/water (spherical) and TiO2/ethylene glycol (spherical) nanofluids, respectively. For the estimation of kcl, the following expression was proposed for spherical particles [38]:
\[ \frac{k_{cl}}{k_f} = \frac{1}{4}\left[(3\phi_{in} - 1)\frac{k_p}{k_f} + \bigl(3(1-\phi_{in}) - 1\bigr) + \sqrt{\left((3\phi_{in} - 1)\frac{k_p}{k_f} + 3(1-\phi_{in}) - 1\right)^2 + 8\,\frac{k_p}{k_f}}\,\right] \qquad (20) \]

where φin is the solid volume fraction of the clusters, defined as

\[ \phi_{in} = \left(\frac{r_{cl}}{r_p}\right)^{D-3} \]

For the estimation of kcl, the following expression was proposed for nanotubes [64]:

\[ \frac{k_{cl}}{k_f} = \frac{3 + \phi_{in}\left[2\beta_x(1 - L_x) + \beta_z(1 - L_z)\right]}{3 - \phi_{in}\left[2\beta_x L_x + \beta_z L_z\right]} \qquad (21) \]

where

\[ \beta_x = \frac{k_x - k_f}{k_f + L_x(k_t - k_f)} \qquad (22) \]

and

\[ \beta_z = \frac{k_z - k_f}{k_f + L_z(k_t - k_f)} \qquad (23) \]

kx and kz are the thermal conductivities of the nanotube along the transverse and longitudinal directions, respectively, and kt is the isotropic thermal conductivity of the nanotube; kx, kz and kt can be taken to be equal to kp as an approximation. Lx and Lz are defined as:

\[ L_x = \frac{p^2}{2(p^2 - 1)} - \frac{p}{2(p^2 - 1)^{3/2}}\cosh^{-1} p \qquad (24) \]

and

\[ L_z = 1 - 2L_x \]

The rcl/rp values are equal to 5.40 and 12.98 for the TiO2/ethylene glycol (nanotube) and TiO2/water (nanotube) nanofluids, respectively. p is the aspect ratio of the nanotubes, defined as the length of the nanotube divided by its diameter. The modified Hamilton and Crosser [54] model was compared with experimental data for both spherical particles and nanotubes, and a good agreement was observed.
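For spherical particles, the cluster route through Eqs. (19) and (20) can be sketched as follows in Python. Only the fractal index D = 1.8 and the ratio rcl/rp = 2.75 come from the discussion above; the TiO2 conductivity and the 2% volume fraction are assumptions for illustration.

import math

def chen_spherical_knf(k_f, k_p, phi, rcl_over_rp, D=1.8, n=3):
    """Modified Hamilton-Crosser model for clustered spherical particles,
    Eqs. (19)-(20) of this section (after Chen et al. [63])."""
    phi_in = rcl_over_rp**(D - 3.0)          # solid fraction inside a cluster
    phi_cl = phi * rcl_over_rp**(3.0 - D)    # effective cluster volume fraction
    # Bruggeman estimate of the cluster conductivity, Eq. (20)
    a = (3.0 * phi_in - 1.0) * (k_p / k_f) + (3.0 * (1.0 - phi_in) - 1.0)
    k_cl = 0.25 * k_f * (a + math.sqrt(a * a + 8.0 * k_p / k_f))
    # Hamilton-Crosser form with cluster properties, Eq. (19); n = 3 for spheres
    num = k_cl + (n - 1.0) * k_f + (n - 1.0) * phi_cl * (k_cl - k_f)
    den = k_cl + (n - 1.0) * k_f - phi_cl * (k_cl - k_f)
    return num / den * k_f

# Illustrative: TiO2/water spheres with the reported r_cl/r_p = 2.75;
# k_p = 8.4 W/m K is an assumed bulk TiO2 value
print(chen_spherical_knf(k_f=0.613, k_p=8.4, phi=0.02, rcl_over_rp=2.75))

The sketch shows the two opposing effects discussed above: clustering dilutes the conductivity of the dispersed phase (k_cl < k_p) but inflates the effective volume fraction (φ_cl > φ).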

IV. CONCLUSION

The available literature on nanofluids has been thoroughly reviewed in this article. Some of the most relevant experimental results were reported for the thermal conductivity of several nanofluids. Thermal conductivity was found to increase with increasing particle volume fraction of the nanoparticles. However, the effect of particle size on the thermal conductivity of nanofluids is not yet completely understood. It is expected that the Brownian motion of nanoparticles results in higher thermal conductivity enhancement at smaller particle sizes. However, some of the experiments show that the thermal conductivity decreases with decreasing particle size. This contradiction might be due to the uncontrolled clustering of nanoparticles, resulting in larger effective particles. The particle size distribution of the nanoparticles is another important factor, and it is suggested that the average particle size is not sufficient to characterize a nanofluid, due to the nonlinear relations involved between particle size and thermal transport. Temperature dependence is an important parameter in the thermal conductivity of nanofluids, and only limited work has been done on this aspect up to now. Investigation of the thermal performance of nanofluids at high temperatures may widen their possible application areas.

V. SCOPE FOR FUTURE WORK

The experimental results show that there is significant discrepancy in the experimental data for nanofluid properties. An important reason for this discrepancy is the clustering of nanoparticles. Although there are no universally accepted quantitative values, it is known that the level of clustering affects the properties of nanofluids. Since the level of clustering is related to the pH value and the additives used, two nanofluid samples with all other parameters the same can lead to completely different experimental results if their surfactant parameters and pH values are not the same. Therefore, researchers providing experimental results should give detailed information about the additives utilized and the pH values of the samples.

REFERENCES
[1]. S. Choi, Enhancing thermal conductivity of fluids with nanoparticles, in Development and Applications of Non-Newtonian Flows, Ed. D. A. Siginer and H. P. Wang, New York: ASME, 1995, pp. 99-105.
[2]. Choi, S.U.S., Z.G. Zhang, W. Yu, F.E. Lockwood, and E.A. Grulke, Anomalous thermal conductivity enhancement in nanotube suspensions, Applied Physics Letters, 2001, 79(14), pp. 2252-2254.
[3]. Yang, Y., E.A. Grulke, Z.G. Zhang, and G.F. Wu, Thermal and rheological properties of carbon nanotube-in-oil dispersions, Journal of Applied Physics, 2006, 99(11).
[4]. Eastman, J.A., S.U.S. Choi, S. Li, W. Yu, and L.J. Thompson, Anomalously increased effective thermal conductivities of ethylene glycol-based nanofluids containing copper nanoparticles, Applied Physics Letters, 2001, 78(6), pp. 718-720.
[5]. Jana, S., A. Salehi-Khojin, and W.H. Zhong, Enhancement of fluid thermal conductivity by the addition of single and hybrid nano-additives, Thermochimica Acta, 2007, 462(1-2), pp. 45-55.
[6]. Liu, M.S., M.C.C. Lin, C.Y. Tsai, and C.C. Wang, Enhancement of thermal conductivity with Cu for nanofluids using chemical reduction method, International Journal of Heat and Mass Transfer, 2006, 49(17-18), pp. 3028-3033.
[7]. Murshed, S.M.S., K.C. Leong, and C. Yang, Enhanced thermal conductivity of TiO2-water based nanofluids, International Journal of Thermal Sciences, 2005, 44(4), pp. 367-373.
[8]. Zhu, H.T., C.Y. Zhang, S.Q. Liu, Y.M. Tang, and Y.S. Yin, Effects of nanoparticle clustering and alignment on thermal conductivities of Fe3O4 aqueous nanofluids, Applied Physics Letters, 2006, 89(2).
[9]. Ceylan, A., K. Jastrzembski, and S.I. Shah, Enhanced solubility Ag-Cu nanoparticles and their thermal transport properties, Metallurgical and Materials Transactions A: Physical Metallurgy and Materials Science, 2006, 37A(7), pp. 2033-2038.

[10]. Zhang, X., H. Gu, and M. Fujii, Effective thermal conductivity and thermal diffusivity of nanofluids containing spherical and cylindrical nanoparticles, Journal of Applied Physics, 2006, 100(4), p. 044325.
[11]. Putnam, S.A., D.G. Cahill, P.V. Braun, Z.B. Ge, and R.G. Shimmin, Thermal conductivity of nanoparticle suspensions, Journal of Applied Physics, 2006, 99(8).
[12]. Xie, H.Q., J.C. Wang, T.G. Xi, Y. Liu, F. Ai, and Q.R. Wu, Thermal conductivity enhancement of suspensions containing nanosized alumina particles, Journal of Applied Physics, 2002, 91(7), pp. 4568-4572.
[13]. Kim, S.H., S.R. Choi, and D. Kim, Thermal conductivity of metal-oxide nanofluids: Particle size dependence and effect of laser irradiation, Journal of Heat Transfer-Transactions of the ASME, 2007, 129(3), pp. 298-307.
[14]. Li, C.H. and G.P. Peterson, The effect of particle size on the effective thermal conductivity of Al2O3-water nanofluid, Journal of Applied Physics, 2007, 101(4), p. 044312.
[15]. Das, S.K., N. Putra, P. Thiesen, and W. Roetzel, Temperature dependence of thermal conductivity enhancement for nanofluids, Journal of Heat Transfer-Transactions of the ASME, 2003, 125(4), pp. 567-574.
[16]. Yang, B. and Z.H. Han, Temperature-dependent thermal conductivity of nanorod-based nanofluids, Applied Physics Letters, 2006, 89(8).
[17]. Honorine Angue Mintsa, Gilles Roy, Cong Tam Nguyen, and Dominique Doucet, New temperature dependent thermal conductivity data for water-based nanofluids, International Journal of Thermal Sciences, 2009, 48(2), pp. 363-371.
[18]. Li, C.H. and G.P. Peterson, Experimental investigation of temperature and volume fraction variations on the effective thermal conductivity of nanoparticle suspensions (nanofluids), Journal of Applied Physics, 2006, 99(8).
[19]. Eastman, J.A., U.S. Choi, S. Li, L.J. Thompson, and S. Lee, Enhanced thermal conductivity through the development of nanofluids, Materials Research Society Symposium Proceedings, 1997, 457 (Nanophase and Nanocomposite Materials II), pp. 3-11.
[20]. Wang, X.W., X.F. Xu, and S.U.S. Choi, Thermal conductivity of nanoparticle-fluid mixture, Journal of Thermophysics and Heat Transfer, 1999, 13(4), pp. 474-480.
[21]. Lee, S., S.U.S. Choi, S. Li, and J.A. Eastman, Measuring thermal conductivity of fluids containing oxide nanoparticles, Journal of Heat Transfer-Transactions of the ASME, 1999, 121(2), pp. 280-289.
[22]. Hwang, Y.J., Y.C. Ahn, H.S. Shin, C.G. Lee, G.T. Kim, H.S. Park, and J.K. Lee, Investigation on characteristics of thermal conductivity enhancement of nanofluids, Current Applied Physics, 2006, 6(6), pp. 1068-1071.
[23]. Özerinç, S., Kakaç, S., and Yazıcıoğlu, A.G., Enhanced thermal conductivity of nanofluids: a state-of-the-art review, Microfluid. Nanofluid., 2010, 8(2), pp. 145-170.
[24]. Wright, B., D. Thomas, H. Hong, L. Groven, J. Puszynski, E. Duke, X. Ye, and S. Jin, Magnetic field enhanced thermal conductivity in heat transfer nanofluids containing Ni coated single wall carbon nanotubes, Applied Physics Letters, 2007.
[25]. Hong, H.P., B. Wright, J. Wensel, S.H. Jin, X.R. Ye, and W. Roy, Enhanced thermal conductivity by the magnetic field in heat transfer nanofluids containing carbon nanotube, Synthetic Metals, 2007, 157(10-12), pp. 437-440.
[26]. Wensel, J., B. Wright, D. Thomas, W. Douglas, B. Mannhalter, W. Cross, H.P. Hong, J. Kellar, P. Smith, and W. Roy, Enhanced thermal conductivity by aggregation in heat transfer nanofluids containing metal oxide nanoparticles and carbon nanotubes, Applied Physics Letters, 2008.
[27]. Keblinski, P., S.R. Phillpot, S.U.S. Choi, and J.A. Eastman, Mechanisms of heat flow in suspensions of nano-sized particles (nanofluids), International Journal of Heat and Mass Transfer, 2002, 45(4), pp. 855-863.
[28]. Xuan, Y., Li, Q., and Hu, W., Aggregation structure and thermal conductivity of nanofluids, AIChE Journal, 2003, 49(4), pp. 1038-1043.
[29]. Xie, H., Fujii, M., and Zhang, X., Effect of interfacial nanolayer on the effective thermal conductivity of nanoparticle-fluid mixture, International Journal of Heat and Mass Transfer, 2005, 48, pp. 2926-2932.
[30]. Krischer, O., Die wissenschaftlichen Grundlagen der Trocknungstechnik (The Scientific Fundamentals of Drying Technology), 2nd ed., 1963, Berlin: Springer-Verlag.
[31]. Jang, S.P. and S.U.S. Choi, Role of Brownian motion in the enhanced thermal conductivity of nanofluids, Applied Physics Letters, 2004, 84(21), pp. 4316-4318.
[32]. Chon, C.H., K.D. Kihm, S.P. Lee, and S.U.S. Choi, Empirical correlation finding the role of temperature and particle size for nanofluid (Al2O3) thermal conductivity enhancement, Applied Physics Letters, 2005, 87(15).
[33]. Turian, R.M., D.J. Sung, and F.L. Hsu, Thermal conductivity of granular coals, coal-water mixtures and multi-solid/liquid suspensions, Fuel, 1991, 70(10), pp. 1157-1172.
[34]. Patel, H.E., T. Sundararajan, T. Pradeep, A. Dasgupta, N. Dasgupta, and S.K. Das, A micro-convection model for thermal conductivity of nanofluids, Pramana-Journal of Physics, 2005, 65(5), pp. 863-869.

[35]. Patel, H.E., T. Sundararajan, and S.K. Das, A cell model approach for thermal conductivity of nanofluids, Journal of Nanoparticle Research, 2008, 10(1), pp. 87-97.
[36]. Xuan, Y.M., Q. Li, X. Zhang, and M. Fujii, Stochastic thermal transport of nanoparticle suspensions, Journal of Applied Physics, 2006, 100(4).
[37]. Maxwell, J.C., A Treatise on Electricity and Magnetism, 3rd ed., Vol. II, 1892, London: Oxford University Press.
[38]. Bruggeman, D.A.G., Calculation of various physics constants in heterogeneous substances I: Dielectricity constants and conductivity of mixed bodies from isotropic substances, Annalen der Physik, 1935, 24(7), pp. 636-664.
[39]. Nan, C.W., Liu, G., Lin, Y., and Li, M., Interface effect on thermal conductivity of carbon nanotube composites, Applied Physics Letters, 2004, 85, pp. 3549-3551.
[40]. Nan, C.W. and Birringer, R., Effective thermal conductivity of particulate composites with interfacial thermal resistance, Journal of Applied Physics, 1997, 81, pp. 6692-6699.
[41]. Yu, C.J., Richter, A.G., Datta, A., Durbin, M.K., and Dutta, P., Molecular layering in a liquid on a solid substrate: an X-ray reflectivity study, Physica B, 2000, 283, pp. 27-31.
[42]. Choi, S.U.S., Lee, S., Li, S., and Eastman, J.A., Measuring thermal conductivity of fluids containing oxide nanoparticles, Journal of Heat Transfer, 1999, 121, pp. 280-289.
[43]. Eastman, J.A., Choi, S.U.S., Li, S., Thompson, L.J., and Lee, S., Enhanced thermal conductivity through development of nanofluids, Proceedings of the Materials Research Society Symposium, Materials Research Society, Pittsburgh, PA, USA, Boston, MA, USA, 1997, 457, pp. 3-11.
[44]. Xue, L., Keblinski, P., Phillpot, S.R., Choi, S.U.S., and Eastman, J.A., Effect of liquid layering at the liquid-solid interface on thermal transport, International Journal of Heat and Mass Transfer, 2004, 47, pp. 4277-4284.
[45]. Xue, Q.Z., Model for thermal conductivity of carbon nanotube-based composites, Physica B, 2005, 368, pp. 302-307.
[46]. Xue, Q. and Xu, W.M., A model of thermal conductivity of nanofluids with interfacial shells, Materials Chemistry and Physics, 2005, 90, pp. 298-301.
[47]. Ren, Y., Xie, H., and Cai, A., Effective thermal conductivity of nanofluids containing spherical nanoparticles, Journal of Physics D: Applied Physics, 2005, 38, pp. 3958-3961.
[48]. Xie, H., Fujii, M., and Zhang, X., Effects of interfacial nanolayer on the effective thermal conductivity of nanoparticle-fluid mixture, International Journal of Heat and Mass Transfer, 2005, 48, pp. 2926-2932.
[49]. Yu, W. and Choi, S.U.S., An effective thermal conductivity model of nanofluids with a cubical arrangement of spherical particles, Journal of Nanoscience and Nanotechnology, 2007, 5, pp. 580-586.
[50]. Garboczi, E.J., Snyder, K.A., and Douglas, J.F., Geometrical percolation threshold of overlapping ellipsoids, Physical Review E, 1995, 52, pp. 819-828.
[51]. Li, Q. and Xuan, Y., Convective heat transfer and flow characteristics of Cu-water nanofluids, Science in China, Series E: Technological Sciences, 2002, 45, pp. 408-416.
[52]. Wang, B., Zhou, L., and Peng, X., A fractal model for predicting the effective thermal conductivity of liquid with suspension of nanoparticles, International Journal of Heat and Mass Transfer, 2003, 46, pp. 2665-2672.
[53]. Gao, L. and Zhou, X.F., Differential effective medium theory for thermal conductivity in nanofluids, Physics Letters A, 2006, 348, pp. 355-360.
[54]. Hamilton, R.L. and Crosser, O.K., Thermal conductivity of heterogeneous two-component systems, Ind. Eng. Chem. Fund., 1962, 1(3), pp. 187-191.
[55]. Yu, W. and Choi, S.U.S., The role of interfacial layers in the enhanced thermal conductivity of nanofluids: a renovated Maxwell model, J. Nanopart. Res., 2003, 5(1), pp. 167-171.
[56]. Sitprasert, C., Dechaumphai, P., and Juntasaro, V., A thermal conductivity model for nanofluids including effect of the temperature-dependent interfacial layer, J. Nanopart. Res., 2009, 11(6), pp. 1465-1476.
[57]. Koo, J. and Kleinstreuer, C., A new thermal conductivity model for nanofluids, J. Nanopart. Res., 2004, 6(6), pp. 577-588.
[58]. Oh, D., Jain, A., Eaton, J.K., Goodson, K.E., and Lee, J.S., Thermal conductivity measurement and sedimentation detection of aluminum oxide nanofluids by using the 3ω method, Int. J. Heat Fluid Fl., 2008, 29(5), pp. 1456-1461.
[59]. Wen, D. and Ding, Y., Experimental investigation into convective heat transfer of nanofluids at the entrance region under laminar flow conditions, Int. J. Heat Mass Tran., 2004, 47(24), pp. 5181-5188.
[60]. Nimtz, G., Marquardt, P., and Gleiter, H., Size-induced metal-insulator transition in metals and semiconductors, J. Cryst. Growth, 1990, 86(1-4), pp. 66-71.
[61]. Xie, H., Wang, J., Xi, T., Liu, Y., and Ai, F., Dependence of the thermal conductivity of nanoparticle-fluid mixture on the base fluid, J. Mater. Sci. Lett., 2002, 21(19), pp. 1469-1471.
[62]. Beck, M., Yuan, Y., Warrier, P., and Teja, A., The effect of particle size on the thermal conductivity of alumina nanofluids, J. Nanopart. Res., 2009, 11(5), pp. 1129-1136.

[63]. Chen, H., Witharana, S., Jin, Y., Kim, C., and Ding, Y., Predicting thermal conductivity of liquid suspensions of nanoparticles (nanofluids) based on rheology, Particuology, 2009, 7(2), pp. 151-157.
[64]. Nan, C.W., Shi, Z., and Lin, Y., A simple model for thermal conductivity of carbon nanotube-based composites, Chem. Phys. Lett., 2003, 375(5-6), pp. 666-669.
[65]. Yu, W. and Choi, S., The role of interfacial layers in the enhanced thermal conductivity of nanofluids: a renovated Hamilton-Crosser model, J. Nanopart. Res., 2004, 6(4), pp. 355-361.
[66]. Xue, Q., Model for effective thermal conductivity of nanofluids, Phys. Lett. A, 2003, 307(5-6), pp. 313-317.
[67]. Li, Y., Qu, W., and Feng, J., Temperature dependence of thermal conductivity of nanofluids, Chinese Phys. Lett., 2008, 25(9), pp. 3319-3322.
[68]. Leong, K., Yang, C., and Murshed, S., A model for the thermal conductivity of nanofluids - the effect of interfacial layer, J. Nanopart. Res., 2006, 8(2), pp. 245-254.

BIOGRAPHICAL NOTES
B. Ravi Sankar is currently working as a Lecturer in the Department of Mechanical Engineering, R.V.R. & J.C. College of Engineering, Guntur, Andhra Pradesh, India. He graduated in Mechanical Engineering from the same college in 2002 and received his Master's Degree from ANU, India in 2005. He has published 2 research papers in international journals and various papers in international and national conferences.

D. Nageswara Rao worked as a Professor at Andhra University, Visakhapatnam, India for the past 30 years, and is presently working as Vice Chancellor, Centurion University of Technology & Management, Odisha, India. Under his guidance 18 PhDs were awarded. He has undertaken various projects sponsored by UGC, AICTE and NRB. He worked as a coordinator for the Centre for Nanotechnology, Andhra University, Visakhapatnam.

Ch. Srinivasa Rao is currently an Associate Professor in the Department of Mechanical Engineering, Andhra University, Visakhapatnam, India. He graduated in Mechanical Engineering from SVH Engineering College, Machilipatnam, India in 1988, received his Master's Degree from MANIT, Bhopal, India in 1991, and received his PhD from Andhra University in 2004. He has published over 25 research papers in refereed journals and conference proceedings.


IMPROVED PERFORMANCE OF HELIXCHANGER OVER SEGMENTAL BAFFLE HEAT EXCHANGER USING KERN'S METHOD
Sunil Kumar Shinde1, Mustansir Hatim Pancha2 and S. Pavithran3
1&3 Department of Mechanical Engineering, Vishwakarma Institute of Technology, Pune University, Pune, India
2 Masters in Heat Power Engineering, Mechanical Department, Vishwakarma Institute of Technology, Pune University, Pune, India

ABSTRACT
Heat exchangers, being among the most important heat and mass transfer apparatus in industries such as oil refining, chemical engineering, and electric power generation, are designed with precision for optimum performance and long service life. This paper analyses the conventional segmental baffle heat exchanger using the Kern method with varied shell side flow rates; this is a proven method used in the design of heat exchangers with a baffle cut of 25%. The paper also presents the thermal analysis of a Helixchanger (continuous helical baffled heat exchanger) using the Kern method, modified to estimate the results for different flow rates at a fixed helix angle of 25°. The results obtained in this paper show that the properties desired from a heat exchanger, i.e. a high heat transfer coefficient and a low pressure drop, are more effectively obtained in a Helixchanger. The shell side zigzag flow induced by the segmental baffle arrangement is completely eliminated in a Helixchanger: the flow pattern in the shell side of the continuous helical baffle heat exchanger is rotational and helical due to the geometry of the continuous helical baffles. This flow pattern, at a certain fixed helix angle, results in a significant increase in the heat transfer coefficient together with a lower pressure drop.

KEYWORDS: Kern method, Helixchanger, helix angle, increased heat transfer coefficient, reduced pressure drop, shell & tube heat exchanger.

I. INTRODUCTION

Conventional shell and tube heat exchangers with segmental baffles have a low heat transfer coefficient, because the segmental baffle arrangement causes high leakage flow that bypasses the heat transfer surface, and a high pressure drop, because the segmental baffles obstruct the shell side flow completely. This results in higher pumping costs for industries. Hydrodynamic studies of the heat transfer (mean temperature difference) and the pressure drop, carried out on research facilities and industrial equipment, have shown much better performance of helical baffle heat exchangers as compared to the conventional ones: a relatively high value of shell side heat transfer coefficient, low pressure drop, and low shell side fouling. [1]

II. DESIRED FEATURES IN A HEAT EXCHANGER

The desirable features of a heat exchanger are a maximum heat transfer to pressure drop ratio at the least possible operating cost, without compromising reliability.

2.1. Higher heat transfer co-efficient and larger heat transfer area

A high heat transfer coefficient can be obtained by using larger heat transfer surfaces, and surfaces which promote local turbulence for single-phase flow or have special features for two-phase flow. The heat transfer area can be increased by using larger exchangers, but the more cost-effective way is to use a heat exchanger having a large area density per unit exchanger volume.

2.2. Lower Pressure drop


The use of segmental baffles in a heat exchanger results in a high pressure drop, which is undesirable, as pumping costs are directly proportional to the pressure drop within a heat exchanger. Hence, a lower pressure drop means lower operating and capital costs.

III. DEVELOPMENTS IN SHELL AND TUBE EXCHANGER

Developments in shell and tube exchangers focus on better conversion of pressure drop into heat transfer, i.e. a higher heat transfer coefficient to pressure drop ratio, by improving the conventional baffle design. With single segmental baffles, most of the overall pressure drop is wasted in changing the direction of flow. This kind of baffle arrangement also leads to more serious undesirable effects such as dead spots or zones of recirculation, which can cause increased fouling; high leakage flow that bypasses the heat transfer surface, giving rise to a lower heat transfer coefficient; and large cross flow. The cross flow not only reduces the mean temperature difference but can also cause potentially damaging tube vibration. [2]

Figure 1. Helical Baffle Heat Exchanger

3.1. Helical baffle Heat Exchanger


The baffles are of primary importance in improving mixing levels and, consequently, enhancing the heat transfer of shell-and-tube heat exchangers. However, segmental baffles have some adverse effects, such as large back mixing, fouling, high leakage flow, and large cross flow, and the main shortcomings of the segmental baffle design remain. [3] Compared to the conventional segmental baffled shell and tube exchanger, the Helixchanger offers the following general advantages [4]:
- Increased heat transfer rate / pressure drop ratio.
- Reduced bypass effects.
- Reduced shell side fouling.
- Prevention of flow induced vibration.
- Reduced maintenance.


Figure 2. Helical Baffle Heat Exchanger

3.2. Research aspects


Research on the Helixchanger has been focussed on two principal areas:
- Hydrodynamic studies and experimentation on the shell side of the heat exchanger.
- Heat transfer coefficient and pressure drop studies on small scale and full industrial scale equipment.

3.3. Design aspects


An optimal design of a helical baffle arrangement depends largely on the operating conditions of the heat exchanger and can be accomplished by appropriate design of the helix angle, baffle overlapping, and tube layout. The original Kern method is an attempt to correlate data for standard exchangers by a simple equation analogous to equations for flow in tubes. However, this method is restricted to a fixed baffle cut of 25% and cannot adequately account for baffle-to-shell and tube-to-baffle leakages. Nevertheless, although the Kern equation is not particularly accurate, it does allow a very simple and rapid calculation of the shell side coefficient and pressure drop to be carried out, and it has been successfully used since its inception. [5]

Figure 3. Helical Baffle Heat Exchanger pitch

3.4. Important Parameters


- Pressure drop (ΔPs)
- Helical baffle pitch angle (φ)
- Baffle spacing (LB)
- Equivalent diameter (De)
- Heat transfer coefficient (αo)

In designing a helical baffle heat exchanger, the pitch angle, the baffle arrangement, and the space between two baffles at the same angular position are some of the important parameters. The baffle pitch angle (φ) is the angle between the flow and the surface perpendicular to the exchanger axis, and LB is the space between two corresponding baffles at the same position. The optimum design of helical baffle heat exchangers depends on the operating conditions of the heat exchanger; proper design of the baffle pitch angle, the overlapping of baffles, and the tube layout results in an optimized heat exchanger design. In segmental heat exchangers, changing the baffle spacing and baffle cut can create a wide range of flow velocities, while changing the helix pitch angle in a helical baffle system does the same. Also, the overlapping of helical baffles significantly affects the shell side flow pattern.

IV. THERMAL ANALYSIS OF SEGMENTAL BAFFLE HEAT EXCHANGER & HELICAL BAFFLE HEAT EXCHANGER

In the current paper, the thermal analysis has been carried out using Kern's method. The thermal parameters necessary to determine the performance of the heat exchanger have been calculated for the segmental baffle heat exchanger following Kern's method, and suitable modifications to the method then allow us to apply it to the helical baffle heat exchanger, which is the subject area of interest. A comparative analysis of the thermal parameters of the two heat exchangers has also been carried out, which clearly indicates the advantages and disadvantages of the two heat exchangers.

4.1 Heat Exchanger data at the shell side


Table 1. Input data - Shell Side [12]
1. Shell side fluid: Water
2. Volume flow rate (V̇s): 40 to 80 lpm
3. Shell side mass flow rate (ṁs): 0.67 to 1.33 kg/s
4. Shell ID (Dis): 0.153 m
5. Shell length (Ls): 1.123 m
6. Tube pitch (Pt): 0.0225 m
7. No. of passes: 1
8. Baffle cut: 25%
9. Baffle pitch (LB): 0.060 m
10. Shell side nozzle ID: 0.023 m
11. Mean bulk temperature (MBT): 30 °C
12. No. of baffles (Nb): 17
13. Shell side mass velocity / mass flux (GF): kg/(m² s), computed for each flow rate

4.2 Heat Exchanger data at the tube side


Table 2. Input data - Tube Side
1. Tube side fluid: Water
2. Volume flow rate (V̇t): 40 to 80 lpm
3. Tube side mass flow rate (ṁt): 0.67 to 1.33 kg/s
4. Tube OD (Dot): 0.012 m
5. Tube thickness: -
6. Number of tubes: -
7. Tube side nozzle ID: -
8. Mean bulk temperature (MBT): 30 °C

4.3 Fluid Properties

Table 3. Properties of the fluids used in the Heat Exchanger [11]
Property (symbol, unit): Cold water (shell side) / Hot water (tube side)
Specific heat (Cp, kJ/kg·K): 4.178 / 4.178
Thermal conductivity (K, W/m·K): 0.6150 / 0.6150
Viscosity (μ, kg/m·s): 0.001 / 0.001
Prandtl number (Pr, -): 5.42 / 5.42
Density (ρ, kg/m³): 996 / 996

Figure 4. Variation of f with Reynolds number. [5]

4.4. Thermal analysis of Segmental Baffle Heat Exchanger


The thermal analysis has been performed for different flow rates (LPM) of the shell side fluid.

4.4.1 V̇s = 40 lpm
1. Tube clearance (C):
C = Pt - Dot = 0.0225 - 0.012 = 0.0105 m
2. Cross-flow area (As):
As = (Dis × C × LB) / Pt = (0.153 × 0.0105 × 0.06) / 0.0225 = 4.284E-3 m²
3. Equivalent diameter (De):
De = 4 [(Pt² - πDot²/4) / (πDot)] = 4 [(0.0225² - π × 0.012²/4) / (π × 0.012)] = 0.04171 m
4. Maximum velocity (Vmax):
Vmax = V̇s / A = 6.67E-4 / (π/4 × Dis²)   (since V̇s = 40 lpm = 2400 lph = 6.67E-4 m³/s)
= 6.67E-4 / (π/4 × 0.153²) = 0.0362 m/s
5. Reynolds number (Re):
Re = (ρ × Vmax × De) / μ = (996 × 0.0362 × 0.04171) / 0.001

= 1507.136
6. Prandtl number (Pr):
Pr = 5.42   (for MBT = 30 °C and water as the medium)
7. Heat transfer coefficient (αo):
αo = (0.36 × K × Re^0.55 × Pr^0.33 × φR) / De   (where φR = (μ/μw)^0.14 = 1 for water as the medium)
= (0.36 × 0.6150 × 1507.136^0.55 × 5.42^0.33) / 0.04171 = 518.968 W/m²K
8. Number of baffles (Nb):
Nb = Ls / (LB + SB) = 1.123 / (0.06 + 0.005) ≈ 17
9. Pressure drop (ΔPs):
ΔPs = [4 × f × GF² × Dis × (Nb + 1)] / (2 × ρ × De)   (f from the graph of Fig. 4 and GF = ṁs / As)
= (4 × 0.1 × 156.39² × 0.153 × 18) / (2 × 996 × 0.04171) = 324.298 Pa = 0.3243 kPa

4.4.2 V̇s = 60 lpm
1. Tube clearance (C): C = 0.0105 m
2. Cross-flow area (As): As = 4.284E-3 m²
3. Equivalent diameter (De): De = 0.04171 m
4. Maximum velocity (Vmax):
Vmax = V̇s / A = 0.001 / (π/4 × Dis²)   (since V̇s = 60 lpm = 3600 lph = 0.001 m³/s)
= 0.001 / (π/4 × 0.153²) = 0.0544 m/s
5. Reynolds number (Re):
Re = (ρ × Vmax × De) / μ = (996 × 0.0544 × 0.04171) / 0.001 = 2259.948
6. Prandtl number (Pr): Pr = 5.42
7. Heat transfer coefficient (αo):
αo = (0.36 × K × Re^0.55 × Pr^0.33) / De = (0.36 × 0.6150 × 2259.948^0.55 × 5.42^0.33) / 0.04171 = 648.352 W/m²K
8. Number of baffles (Nb): Nb ≈ 17
9. Pressure drop (ΔPs):
ΔPs = [4 × f × GF² × Dis × (Nb + 1)] / (2 × ρ × De)   (f from the graph and GF = ṁs / As)
= (4 × 0.09 × 233.42² × 0.153 × 18) / (2 × 996 × 0.04171) = 650.15 Pa = 0.65 kPa

Similarly,
50 lpm: αo = 585.28 W/m²K, ΔPs = 0.482 kPa
70 lpm: αo = 705.92 W/m²K, ΔPs = 0.885 kPa
80 lpm: αo = 758.56 W/m²K, ΔPs = 1.150 kPa
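A minimal Python sketch of the shell-side calculation above is given below; it reproduces the 40 lpm segmental-baffle case when the friction factor read from Fig. 4 (f = 0.1) is supplied. The function signature and its default values simply collect the data of Tables 1-3 and are not part of Kern's original formulation.

import math

def kern_shell_side(V_dot_lpm, m_dot, f, Dis=0.153, Dot=0.012, Pt=0.0225,
                    LB=0.060, Nb=17, rho=996.0, mu=1.0e-3, k=0.615, Pr=5.42):
    """Shell-side coefficient and pressure drop by Kern's method,
    following the worked 40-80 lpm cases above (f read from Fig. 4)."""
    C = Pt - Dot                                   # tube clearance, m
    As = Dis * C * LB / Pt                         # cross-flow area, m^2
    De = 4.0 * (Pt**2 - math.pi * Dot**2 / 4.0) / (math.pi * Dot)  # equiv. dia., m
    V = (V_dot_lpm / 60000.0) / (math.pi / 4.0 * Dis**2)  # shell velocity, m/s
    Re = rho * V * De / mu                         # Reynolds number
    alpha_o = 0.36 * k * Re**0.55 * Pr**0.33 / De  # phi_R = (mu/mu_w)^0.14 = 1
    G = m_dot / As                                 # mass flux, kg/m^2 s
    dP = 4.0 * f * G**2 * Dis * (Nb + 1) / (2.0 * rho * De)  # pressure drop, Pa
    return alpha_o, dP

# Reproduces the 40 lpm segmental case: about 519 W/m^2 K and 324 Pa
print(kern_shell_side(V_dot_lpm=40, m_dot=0.67, f=0.1))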

4.5. Thermal analysis of Helical Baffle Heat Exchanger

The thermal analysis of the helical baffle heat exchanger is carried out using Kern's method, modified to suit the changed geometry of the heat exchanger, so as to obtain results comparable to the above analysis.

4.5.1 V̇s = 40 lpm
1. Tube clearance (C): C = 0.0105 m
2. Baffle spacing (LB):
LB = π × Dis × tan(φ)   (where φ is the helix angle = 25°)
= π × 0.153 × tan 25° = 0.2241 m
3. Cross-flow area (As):
As = (Dis × C × LB) / Pt = (0.153 × 0.0105 × 0.2241) / 0.0225 = 0.016 m²
4. Equivalent diameter (De): De = 0.04171 m
5. Maximum velocity (Vmax):
Vmax = V̇s / As = 6.67E-4 / 0.016 = 0.0416 m/s
6. Reynolds number (Re):
Re = (ρ × Vmax × De) / μ = (996 × 0.0416 × 0.04171) / 0.001 = 1728.19
7. Prandtl number (Pr): Pr = 5.42
8. Heat transfer coefficient (αo):
αo = (0.36 × K × Re^0.55 × Pr^0.33) / De = (0.36 × 0.6150 × 1728.19^0.55 × 5.42^0.33) / 0.04171 = 559.54 W/m²K
9. Number of baffles (Nb):
Nb = Ls / (LB + SB) = 1.123 / (0.2241 + 0.005) ≈ 5
10. Pressure drop (ΔPs):
ΔPs = [4 × f × GF² × Dis × (Nb + 1)] / (2 × ρ × De) = (4 × 0.09 × 41.87² × 0.153 × 6) / (2 × 996 × 0.04171) = 6.97 Pa

4.5.2 V̇s = 60 lpm
1. Tube clearance (C): C = 0.0105 m
2. Baffle spacing (LB): LB = 0.2241 m
3. Cross-flow area (As): As = 0.016 m²
4. Equivalent diameter (De): De = 0.04171 m
5. Maximum velocity (Vmax):
Vmax = V̇s / As = 1E-3 / 0.016 = 0.0625 m/s
6. Reynolds number (Re):
Re = (ρ × Vmax × De) / μ = (996 × 0.0625 × 0.04171) / 0.001

35

Vol. 5, Issue 1, pp. 29-39

International Journal of Advances in Engineering & Technology, Nov. 2012. IJAET ISSN: 2231-1963
= 2596.44 Pr = 5.42 Heat Transfer Co-efficient (o) o= (0.36 K Re0.55 Pr0.33) / R DE = (0.36 0.6150 2596.440.55 5.420.33) / 0.04171 = 699.94 W/m2K 9. No. of Baffles (Nb) Nb 5 10. Pressure Drop (PS) PS = [4 f F2 Dis (Nb + 1)] / (2 DE) = (4 0.08 62.52 0.153 6) / (2 996 0.04171) = 13.81 Pa Similarly, 50 lpm o = 633.05 W/m2K, Ps = 9.59 Pa. 70 lpm o = 780.15 W/m2K, Ps = 17.62 Pa. 80 lpm o = 819.82 W/m2K, Ps = 21.48Pa. 7. 8.
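For the helical case, only the baffle spacing (and hence the cross-flow area) changes; a sketch of the modified routine, reusing the constants defined in the previous listing, might read:

```python
def helical_shell_side(Q_lpm, f, beta_deg=25.0):
    """Modified Kern's method for the helical baffle: the baffle spacing
    becomes Lb = pi * Dis * tan(beta), which enlarges the cross-flow area."""
    Qs = Q_lpm / 60000.0
    Lb = math.pi * Dis * math.tan(math.radians(beta_deg))      # helical baffle spacing (m)
    As = Dis * (Pt - Dot) * Lb / Pt                            # enlarged cross-flow area (m^2)
    DE = 4 * (Pt**2 - math.pi * Dot**2 / 4) / (math.pi * Dot)  # equivalent diameter (m)
    Vmax = Qs / As
    Re = rho * Vmax * DE / mu
    alpha_o = 0.36 * K * Re**0.55 * Pr**0.33 / DE
    Nb = round(Ls / (Lb + SB))
    G = rho * Qs / As
    dPs = 4 * f * G**2 * Dis * (Nb + 1) / (2 * rho * DE)
    return Re, alpha_o, dPs

print(helical_shell_side(40, f=0.09))  # approx. (1728, 560 W/m^2K, ~7 Pa)
```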

V. RESULTS
The two tables below give the results at the various LPMs for the segmental and helical baffle heat exchangers respectively, as calculated above.

Table 4. Segmental baffle heat exchanger
LPM    H.T. Co-efficient (W/m²K)    Pressure drop (Pa)    αo/ΔPs
40     518.96                       324.3                 1.6
50     585.28                       482                   1.2142
60     648.35                       650                   0.9974
70     705.92                       885                   0.7976
80     758.56                       1150                  0.6596

Table 5. Helical baffle heat exchanger
LPM    H.T. Co-efficient (W/m²K)    Pressure drop (Pa)    αo/ΔPs
40     559.54                       6.97                  80.27
50     633.05                       9.59                  66.01
60     699.94                       13.81                 50.68
70     780.15                       17.62                 44.27
80     819.82                       21.48                 38.16

5.1 Graph Plots


The graphs below have been plotted from the results obtained above. They clearly indicate that using a Helixchanger in place of a segmental baffle heat exchanger yields the higher heat transfer coefficient that is ideally required of a heat exchanger. The graphs were plotted using MS Excel, with different colours used to distinguish the helical baffled heat exchanger from the segmental baffled heat exchanger. The points are plotted at 40, 50, 60, 70 and 80 LPM respectively, and indicate the heat transfer coefficient and pressure drop at these inlet volume flow rates.

Figure 5. Heat transfer coefficient αo for segmental and helical baffle heat exchangers at varying LPM (plotted values as tabulated in Tables 4 and 5).

Figure 6. Pressure drop ΔPs for segmental and helical baffle heat exchangers at varying LPM (plotted values as tabulated in Tables 4 and 5).

Figure 7. αo/ΔPs ratio for segmental and helical baffle heat exchangers at varying LPM (plotted values as tabulated in Tables 4 and 5).


VI. CONCLUSIONS

a) The above results clearly show that the helical baffle heat exchanger has a far better heat transfer coefficient than the conventional segmental heat exchanger in all cases of varying LPM [Graph 1].
b) The results also indicate that the pressure drop ΔPs in a helical baffle heat exchanger is appreciably lower than in a segmental baffle heat exchanger [Graph 2], owing to the increased cross-flow area, which results in a lower mass flux throughout the shell.
c) The ratio of heat transfer coefficient to pressure drop is higher than that of the segmental baffle heat exchanger. Such a ratio is most desired in industry, especially the one obtained at 60 LPM [Graph 3]. This helps reduce the pumping power and in turn enhances the effectiveness of the heat exchanger in a well-balanced way.
d) The Kern method available in the literature applies only to the conventional segmental baffle heat exchanger, but the modified formulae used here to approximate the thermal performance of helical baffle heat exchangers give a clear idea of their efficiency and effectiveness.
e) The ratio of heat transfer coefficient to pressure drop is around 50 at a volume flow rate of 60 LPM, the best balance among the flow rates examined. This is the most desired result for industrial heat exchangers, as it strikes a balance between the heat transfer coefficient and the shell-side pressure drop.

NOMENCLATURE

Symbol   Quantity                           Units
As       Shell cross-flow area              m²
LB       Baffle spacing                     m
Cp       Specific heat                      kJ/kgK
Dot      Tube outer diameter                m
Dis      Shell inner diameter               m
DE       Equivalent diameter                m
αo       Heat transfer coefficient          W/m²K
Nb       Number of baffles                  -
Pr       Prandtl number                     -
PT       Tube pitch                         m
Re       Reynolds number                    -
ΔPS      Total shell-side pressure drop     Pa
μ        Dynamic viscosity                  kg/m·s
ρ        Fluid density                      kg/m³
Vmax     Maximum shell-side velocity        m/s
β        Helix angle                        degrees

ACKNOWLEDGEMENTS
The authors would like to acknowledge BCUD, University of Pune for providing necessary financial support (Grant no. BCUD/OSD/184/09).

REFERENCES
[1]. Andrews Malcolm, Master Bashir, Three Dimensional Modeling of a Helixchanger Heat Exchanger using CFD, Heat Transfer Engineering, 26(6), (2005), 22-31.
[2]. Mukherjee R., Effective Design of Shell & Tube Heat Exchangers, Chemical Engg. Progress, (1998), 1-8.
[3]. Gang Yong Lei, Ya-Ling He, Rui Li, Ya-Fu Gao, Effects of baffle inclination angle on flow and heat transfer of a heat exchanger with helical baffles, ScienceDirect - Chemical Engineering and Processing, (2008), 1-10.

[4]. Peng B., Wang Q.W., Zhang C., An experimental study of shell and tube heat exchangers with continuous helical baffles, ASME Journal of Heat Transfer, (2007).
[5]. Hewitt G.F., Shires G.L., Bott T.B., Process Heat Transfer, (CRC Press), 275-285.
[6]. Master B.I., Chunagad K.S., Boxma A.J., Kral D., Stehlik P., Most frequently used heat exchangers from pioneering research to worldwide applications, vol. no. 6, (2006), 1-8.
[7]. Prithiviraj, M., and Andrews, M. J., Three Dimensional Numerical Simulation of Shell-and-Tube Heat Exchangers. Foundation and Fluid Mechanics, Numerical Heat Transfer Part A - Applications, 33(8), 799-816, (1998).
[8]. Serth Robert W., Process Heat Transfer Principles and Application, ISBN: 0123735882, 246-270, (2007).
[9]. Wadekar Vishwas, Enhanced and Compact Heat Exchangers - A Perspective from Process Industry, 5th International Conference on Enhanced and Compact Heat Exchangers: Science, Engineering and Technology, 35-41, (2005).
[10]. Wang Qui, Dong Chen Gui, Xu Jing, Peng Ji Yan, Second-Law Thermodynamic Comparison and Maximal Velocity Ratio Design of Shell-and-Tube Heat Exchanger with Continuous Helical Baffles, ASME Journal, 132/101801, 1-8, (2010).
[11]. Sunil S. Shinde, Samir Joshi et al., Performance improvement in single phase tubular heat exchanger using continuous helical baffles, IJERA Journal, (Jan-Feb 2012). www.ijera.com/papers/Vol2_issue1/GB2111411149.pdf
[12]. Sunil S. Shinde, P.V. Hadgekar et al., Comparative thermal analysis of helixchanger with segmental heat exchanger using Bell-Delaware method, IJAET Journal, (May 2012). http://www.archives-ijaet.org/media/24I8-IJAET0805801-COMPARATIVE-THERMAL-ANALYSIS.pdf

Authors

S. S. Shinde is an Assistant Professor at Vishwakarma Institute of Technology, Pune. He completed his Masters in Mechanical Engineering at College of Engineering, Pune, specializing in Heat Power Engineering, and is currently pursuing a Ph.D. in Mechanical Engineering at the University of Pune, India. His research includes design, simulation and experimentation in tubular heat exchangers. He has 15 years of teaching and research experience in various subjects of Mechanical Engineering, has guided about 15 M.E. theses and a number of B.E. projects, and has 6 publications in national and international journals and conference proceedings.

Mustansir H. Pancha is a 2nd year M.Tech student at Vishwakarma Institute of Technology, Pune, specializing in Heat Power Engineering. He completed his Bachelors in Engineering under Mumbai University with distinction. With a year of experience in the HVAC&R field as well as in the industrial automation industry, he is currently working as a Research Assistant under guide and mentor Prof. S. S. Shinde.

S. Pavithran is a Professor in Mechanical Engineering at Vishwakarma Institute of Technology, Pune. He received his Bachelors degree from the Indian Institute of Technology, Madras, then completed his Masters (MS) at the University of Minnesota and his PhD at the University of Southern California. He has 8 years of teaching experience in various areas of Mechanical Engineering and 6 years of industrial experience. He has guided a number of Masters theses, and six PhD students are currently working under his supervision. He has around 24 publications to his credit in various international journals.


ESTABLISHMENT OF AN EMPIRICAL MODEL THAT CORRELATES RAINFALL- INTENSITY- DURATIONFREQUENCY FOR MAKURDI AREA, NIGERIA
Martins Okey Isikwue1, Sam Baba Onoja1 and Kefas J. Laudan2
1 Department of Agricultural and Environmental Engineering, University of Agriculture, Makurdi, Nigeria
2 Department of Agricultural Engineering, Federal Polytechnic, Bauchi, Nigeria

ABSTRACT
Rainfall records for 30 years (1979-2009) were used to establish an empirical model that correlates rainfall intensity, duration and frequency for Makurdi area. The records were analyzed by sorting out the maximum monthly rainfall depths with their corresponding durations; these were ranked in descending order of magnitude and their return periods computed. All ranked rainfall depths for each return period were converted to rainfall intensities. Sherman's mathematical method was first employed to develop station constants and IDF curves for Makurdi. The station constants for Makurdi area were found to be c = 21.429, m = 0.6905 and b = 0.2129. Values of rainfall duration were substituted into each regression equation to compute the corresponding rainfall intensity for each return period. Graphs of rainfall intensities against corresponding durations for each return period were plotted, with the subsequent derivation of IDF empirical equations for Makurdi. These were used to develop the IDF curves for 2, 5, 10, 25, 50 and 100 year return periods. It was observed that for a given return period, the IDF curves decreased with increasing time interval. Rainfall intensity-duration-frequency (IDF) data were produced and a model expressing the relationship between rainfall intensity, duration and frequency was developed for Makurdi area. These will be useful for the construction of hydrologic structures such as dams and other drainage systems in Makurdi metropolis.

KEYWORDS: (IDF) formula, Station constants, Makurdi, Nigeria.

I. INTRODUCTION

Rainfall Intensity-Duration-Frequency curves (IDF curves) are graphical representations of the amount of rain that falls within a given period of time. These curves are used to help predict when an area will be flooded, or to pinpoint when a certain rainfall rate or a specific volume of runoff will recur in the future [1]. The first step in designing a flood control structure is to determine the probable recurrence of storms of different intensities and durations so that an economic size of structure can be provided. For most purposes it is not feasible to provide a structure that will withstand the greatest rainfall that has ever occurred; it is more economical to accept a periodic failure than to design for a very intense storm. Where human life is endangered, however, the design should handle runoff from storms even greater than those recorded. For these purposes, data providing return periods of storms of various intensities and durations are essential [2]. Precipitation in the form of rainfall, though life-supporting, is not without problems. Too little of it causes drought and desertification, and too much of it causes flood disasters with devastating consequences that may lead to loss of lives and property. Efforts are continually being made by scientists and engineers, using different irrigation and flood mitigation techniques, towards ameliorating such problems. Rainfall intensity-duration-frequency (IDF) curves are used for the estimation of the peak discharge of runoff in a catchment area, by the rational method, for subsequent sizing of hydraulic channels and other waterways. For a selected storm duration and frequency, the design rainfall intensity is normally estimated from a set of statistically derived IDF curves appropriate to that region [3].

The extreme value type 1 (Gumbel) method was applied by [4] to the annual extreme rainfall data sets generated by eleven rainfall zones to estimate the parameters and hence the intensity-duration-frequency (IDF) curves for Nigeria. In an effort to develop IDF curves for some parts of Nigeria and the country as a whole, [4] divided the country into ten principal rainfall zones and applied the extreme value type 1 (Gumbel) method to develop IDF curves from the historic rainfall records of 1951 to 1978. Using the same rainfall records, [5] carried out the first comprehensive non-empirical IDF studies for Nigeria with a return period of 50 or more years. [6] developed a new technique for the analysis of extreme rainfall applicable to Lagos metropolis, which was applied by [7] for the analysis of extreme rainfall for Nigeria. [8] presented IDF curves for Kano and Oshogbo for rainfall records of the periods 1951-1954 and 1956-1964 respectively. The lapse observed in the works carried out by [4, 5, 6, 7] was their use of the same rainfall data for the period 1951-1978. The data used were very old compared with the current global climatic and environmental changes that are adversely affected by global warming and man's activities. Any attempt to use such IDF curves to design a hydrological structure for current use may lead to an inappropriate design and failure of the structure. An empirical equation for the IDF curves of Imo River Basin and environs in Imo State of Nigeria was developed by [9]. The daily rainfall data used in that study were collected from the Nigerian Meteorological Department at Oshodi for the following stations: Enugu (1916-1965), Onitsha (1906-1966), Awka (1942-1965) and Adani (1948-1965). These rainfall records are also very old, and using the developed empirical equation to design hydrological structures may lead to failures.

The IDF relationships can be used in conjunction with the rational method to determine the peak discharge from a catchment area for the design of hydrological structures. The quantity of storm runoff or discharge may be computed from the correlation between rainfall intensity and surface flow using the expression in equation (1):

q = CiA    (1)

where q = the design peak runoff rate in m³/s, C = the runoff coefficient, i = rainfall intensity in mm/hr for the design return period and for a duration equal to the time of concentration of the watershed, and A = the watershed area in m².

Previous IDF curves were developed from past rainfall data up to 1978. Some researchers or designers of hydraulic structures might have used such curves to design culverts and other channels to contain runoff with a return period of 30 years or less. Certainly, new IDF curves derived from the most recent rainfall data are needed for designing current hydraulic structures, which are expected to have higher capacity than those designed using the very old data-based IDF curves.

Experience has shown that some hydraulic structures such as open channels, culverts and small dams constructed 20-30 years ago now have far less capacity, due to siltation and increased runoff, than is needed to contain the runoff produced in the watershed. The excessive runoff tends to overrun the banks, with subsequent collapse of such structures. Also, some areas might now be affected by an erosion menace that requires a new set of hydraulic structures for its control. That calls for the use of the most current and improved IDF curves. Given the observed lapses, up-to-date daily rainfall records for the period of 30 years (1979-2009) were used to develop IDF curves and empirical formulae for use in designing modern hydraulic structures such as channels, culverts, dams and bridges and for the control of soil erosion by water in Makurdi area. The objective of this study is to develop empirical equations, station constants and IDF curves for Makurdi area.

II. MATERIALS AND METHOD

2.1 Study Area

Makurdi is characterized by an undulating rolling plain with irregular river valleys and ridges with steep slopes. It lies within the humid zone with little seasonal temperature variation throughout the year. There are two main seasons, the rainy season (April-October) and the dry season (November-March). The average annual temperature is 31.5 °C, and the relative humidity ranges between 65-69%. The rainfall varies between 1000 mm and 2500 mm [10]. Makurdi is located in the North-Central zone of Nigeria between latitudes 7°45' and 7°52' N and longitudes 8°35' and 8°41' E.

2.2 Data Acquisition


Maximum weekly rainfall records with their corresponding durations for a period of 30 years (1979-2009) were collected from Makurdi meteorological station (Air Force Base) for statistical analysis. The essence of collecting 30 years or more of rainfall records was to capture, as far as possible, the most severe or extreme rainfall events that could be handled by hydraulic structures over a certain design period (return period). The maximum rainfall depth for each month and year corresponding to specified durations was sorted out. All rainfall depths that fell under each of the stated durations were ranked in descending order of magnitude to determine the return period for each duration. The return periods (recurrence intervals) of the ranked rainfall depths were computed using Weibull's equation, as shown in equation (2).

T = (n + 1) / m    (2)

where T is the return period in years, n is the number of years of record and m is the rank of the rainfall depth.

Setting rainfall intensity (i) as the dependent variable (y) and duration (t) as the independent variable (x), the rainfall intensities with their corresponding durations for each return period were subjected to regression analysis. The regression method correlates the means of the two variables, y (dependent) and x (independent), by plotting them on the x and y axes. The regression line equation (3) is given by [11] as:

y = a + bx    (3)

where ȳ = Σy/n, x̄ = Σx/n, a = ȳ − b x̄, and

b = [Σxy − (Σx)(Σy)/n] / [Σx² − (Σx)²/n] = Sxy / Sxx

The IDF data were generated by substituting values of rainfall duration from 5 to 1440 min (24 hrs) and return periods of 2, 5, 10, 25, 50 and 100 yrs into the developed empirical equation to obtain the corresponding rainfall intensities (mm/hr).
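A minimal sketch of the ranking and regression steps, assuming an annual-maximum rainfall series as input (NumPy; the function names are our own illustrative choices):

```python
import numpy as np

def weibull_return_periods(depths):
    """Rank rainfall depths in descending order and assign Weibull
    return periods T = (n + 1) / m, eq. (2)."""
    depths = np.sort(np.asarray(depths, float))[::-1]  # descending order
    n = len(depths)
    ranks = np.arange(1, n + 1)                        # m = 1 for the largest depth
    return depths, (n + 1) / ranks

def regression_line(x, y):
    """Least-squares fit y = a + b*x, eq. (3)."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    n = len(x)
    b = (np.sum(x * y) - x.sum() * y.sum() / n) / (np.sum(x**2) - x.sum()**2 / n)
    a = y.mean() - b * x.mean()
    return a, b
```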

III. RESULTS
The empirical equation for the development of IDF curves for Makurdi area was found to be equation (4):

i = 21.429 T^0.6905 / t^0.2129    (4)

The station constants were found to be c = 21.429, m = 0.6905 and b = 0.2129. This shows that rainfall intensity (i) is directly proportional to the return period (T) and inversely proportional to the duration of rainfall (t). Table 1 shows the IDF data while Fig. 1 shows the IDF curves for Makurdi area.
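Equation (4) is straightforward to evaluate; the following short Python wrapper (an illustrative helper of our own) reproduces entries of Table 1, for example the 60-minute row:

```python
def idf_intensity(T_years, t_minutes, c=21.429, m=0.6905, b=0.2129):
    """Rainfall intensity (mm/hr) from the Makurdi empirical model, eq. (4)."""
    return c * T_years**m / t_minutes**b

for T in (2, 5, 10, 25, 50, 100):
    print(T, round(idf_intensity(T, 60), 2))  # 14.46, 27.23, 43.95, 82.74, 133.53, 215.49
```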

Table 1. Results of Generated Final IDF Data (Source: [12])

Time (min)   Rainfall intensities (mm/hr) for return periods of:
             2 yrs     5 yrs     10 yrs    25 yrs    50 yrs    100 yrs
5            24.55     46.22     74.59     140.43    226.64    365.76
10           21.18     39.88     64.36     121.17    195.54    315.57
15           19.43     36.58     59.04     111.14    179.37    289.48
20           18.28     34.41     55.53     104.54    168.71    272.28
25           17.43     32.81     52.95     99.69     160.89    259.65
30           16.76     31.56     50.94     95.90     154.76    249.76
35           16.22     30.54     49.29     92.80     149.76    241.70
40           15.77     29.69     47.91     90.20     145.57    234.92
45           15.38     28.95     46.72     87.97     141.96    229.10
50           15.04     28.31     45.69     86.01     138.81    224.02
55           14.73     27.74     44.77     84.29     136.02    219.52
60           14.46     27.23     43.95     82.74     133.53    215.49
90           13.27     24.98     40.31     75.90     122.48    197.67
120          12.48     23.50     37.92     71.39     115.21    185.93
150          11.90     22.41     36.16     68.08     109.86    177.30
180          11.45     21.55     34.78     65.48     105.68    170.55
210          11.08     20.86     33.66     63.37     102.27    165.04
240          10.77     20.27     32.72     61.59     99.40     160.42
270          10.50     19.77     31.91     60.07     96.94     156.45
300          10.27     19.33     31.20     58.74     94.79     152.98
330          10.06     18.94     30.57     57.56     92.89     149.90
360          9.88      18.60     30.01     56.50     91.18     147.15
420          9.56      17.99     29.04     54.68     88.24     142.40
780          8.38      15.77     25.46     47.92     77.34     124.82
840          8.25      15.53     25.06     47.17     76.13     122.86
900          8.13      15.30     24.69     46.49     75.02     121.07
960          8.02      15.09     24.35     45.85     74.00     119.42
1020         7.91      14.90     24.04     45.26     73.05     117.89
1080         7.82      14.72     23.75     44.72     72.16     116.46
1140         7.73      14.55     23.48     44.20     71.34     115.13
1200         7.64      14.39     23.22     43.72     70.56     113.88
1260         7.56      14.24     22.98     43.27     69.83     112.70
1320         7.49      14.10     22.76     42.85     69.15     111.59
1380         7.42      13.97     22.54     42.44     68.50     110.54
1440         7.35      13.84     22.34     42.06     67.88     109.54

Fig 1. IDF curves for Makurdi area based on 1979 to 2009 rainfall records (log-log plot of rainfall intensity (mm/hr) against duration (min) for the 2, 5, 10, 25, 50 and 100 yr return periods).

Figure 2 shows a flow chart of steps used to develop the IDF curves for Makurdi area:

Data acquisition (1979-2009 rainfall records) → Sorting, ranking and determination of return periods → Determination of rainfall intensities and durations → Correlation of rainfall intensity, duration and frequency → Determination of station constants and empirical equation → Substitution of the station constants into the empirical equation → Generation of IDF curves.

Fig 2: Flow chart of steps used to develop the IDF curves for Makurdi area.

IV. DISCUSSION

The IDF curves produced have non-uniform negative slopes. The trend-line regression equations for the various return periods obtained by the least squares method also have non-uniform negative slopes. This shows that the rainfall intensities generally decrease with increase in duration for a given return period. The empirical equation for Makurdi area compares favourably with that for Imo River Basin and environs

in Nigeria, except for differences in station constants. This implies that station constants vary from one geographical region to another. The empirical equation is used directly to compute the rainfall intensity (i), which, when substituted into the rational method, leads to the computation of the peak discharge of storm water for which hydrological structures can be designed. It can be deduced from Figure 1 that for a given duration, the rainfall intensities increase with increase in return period. This explains why larger hydrological structures such as dams and bridges are designed for higher return periods while small hydrological structures such as culverts and drainage gutters are designed for low return periods. Also, for a given return period, rainfall intensities decrease with increase in duration. This implies that highly intensive rainfall of short duration can have highly devastating runoff consequences for the environment. Figure 3 gives the flow chart for the usage of IDF curves in the design of hydrological structures.

Fig 3: Flow chart for the use of the IDF curves in the design of hydrological structures

V. CONCLUSION

The graphical fitting method was first employed in the analysis of rainfall in an attempt to develop IDF curves for Makurdi area. Graphs of the logarithms of rainfall intensities were plotted against the logarithms of rainfall durations. The least squares method was used with a view to bringing the scattered points close to each other. The empirical equations developed by the least squares method agreed with earlier researchers' empirical equations. The station constants for Makurdi area were found to be c = 21.429, m = 0.6905 and b = 0.2129. It was concluded that there was a linear relationship between the rainfall amounts and their corresponding durations. Statistically, there was no significant difference between the rainfall amounts and rainfall intensities.

REFERENCES
[1]. Dupont and Allen (2000): Revision of the Rainfall-Intensity-Duration Curves for the Commonwealth of Kentucky.
[2]. Schwab, G. O., Frevert, R. K., Edmenster, T. W. and Barnas, K. K. (2004). Soil and Water Engineering, Fifth edition. John Wiley & Sons, New York. pp. 66-73.
[3]. Gordon, M. F., John, C. G. and Okun, D. A. (2005): Elements of Water Supply and Water Disposal. Fourth edition. John Wiley and Sons Inc., Canada. pp. 48-60.
[4]. Oyebande, L. (1980): Rainfall intensity-duration-frequency curves and maps for Nigeria. Occasional Paper Series, No. 2, Department of Geography, Univ. of Lagos.

[5]. Oyebande, Lekan (1982): Deriving rainfall-intensity-duration-frequency relationships and estimates for regions with inadequate data. Hydrological Sciences Journal - des Sciences Hydrologiques, 27, 3, 9. http://www.iahs.infor/hysj-27-03-0353.pdf. Accessed on 20/3/2009.
[6]. Oyegoke, E. S. and Sonuga, J. O. (1983): A new technique for the analysis of extreme rainfall with particular application to Lagos metropolis. Nordic Hydrol., pp. 127-138.
[7]. Oyegoke, S. O. and Oyebande, L. (2008): A New Technique for Analysis of Extreme Rainfall for Nigeria. Environmental Research Journal, 2(1), 7-14. http://207.56.141/fulltext/erj/2008/1-14pdf. Accessed on 14/5/2009.
[8]. Ayoade, J. O. (1988): Tropical Hydrology and Water Resources. First edition. MacMillan Publishers Ltd., London and Basingstoke. pp. 36-43.
[9]. Ojukwu, S. C. (1983): Hydrological Computation for Water Resources Development within Imo River Basin, Nigeria. http://www.cig.ensmp.fr/ iahs/rebooks/9140/iahs-140-0409.pdf.
[10]. Egboramy (1989): University of Makurdi Master Planners, Architects, Town Planners, Engineering Consultants Ltd.
[11]. Schirley, D., Stanley, W. and Daniel, C. (2006). Statistics for Research. 3rd edition, Wiley Interscience, John Wiley & Sons Inc. pp. 211-217.
[12]. Laudan, J. K. (2012). Determination of Rainfall Intensity-Duration-Frequency Curves for Makurdi and Bauchi Areas. Unpublished Master of Engineering Thesis, University of Agriculture, Makurdi, Nigeria.

Martins Okey Isikwue is a Senior Lecturer in the Department of Agricultural and Environmental Engineering, University of Agriculture, Makurdi, Nigeria. He also holds a Certificate in Management of Advanced Irrigation Systems, Israel. His research interests include watershed management, irrigation and drainage engineering, and water and environmental quality engineering.

Sam Baba Onoja is a Senior Lecturer in the Department of Agricultural and Environmental Engineering, University of Agriculture Makurdi Nigeria. His research interests are Irrigation and Drainage Engineering, Water and Environmental Quality Engineering, Development Studies and Hydrology.

Kefas J. Laudan is a staff member of Federal Polytechnic, Bauchi, Nigeria. He is currently a doctoral student in the Department of Agricultural and Environmental Engineering, University of Agriculture, Makurdi, Nigeria. His research interest is soil and water engineering.


FASTICA BASED BLIND SOURCE SEPARATION FOR CT IMAGING UNDER NOISE CONDITIONS
Rohit Kumar Malik1 and Ketaki Solanki2
1 Applications Engineer, Oracle India Private Limited, Bangalore, Karnataka, India
2 M. Tech Student, Department of Electronics and Communication Engg., Guru Gobind Singh Indraprastha University, Delhi, India

ABSTRACT
A novel blind source separation method based on fast independent component analysis (FastICA) is proposed in this paper. The proposed method is extended from the existing FastICA algorithm for one-dimensional signals. The existing FastICA is not suitable for signals under noise. To solve this problem, we combine image denoising and source separation on medical images under noise conditions. The performance of the proposed method is tested on the National Electrical Manufacturers Association (NEMA) computed tomography (CT) image database. The results show that the proposed method can separate every independent component effectively under different noise conditions of the images.

KEYWORDS: Blind source separation, FastICA, CT imaging.

I. INTRODUCTION

Since the beginning of the last decade, extensive research has been devoted to the problem of blind source separation (BSS). The attractiveness of this particular problem is essentially due to both its applicative and theoretically challenging aspects. This research has given rise to the development of many methods aiming to solve this problem (see [1] and [2] for an overview). An interesting aspect of this emerging field, which is still open to more research, is the fact that the theoretical development evolves in pair with real-world application specificities and requirements. Extracting components and time courses of interest from fMRI data [3], [4] is a representative illustration of this statement. BSS can be analyzed with two dual approaches: source separation as a source reconstruction problem or source separation as a decomposition problem. In the first approach, one assumes that during an experiment, the collected data x_{1...T} = {x_1, ..., x_T} are not a faithful copy of the original process of interest s_{1...T} under study (the sources). In other words, the observed data x_{1...T} are some transformation F of the sources s_{1...T} corrupted with a stochastic noise n_{1...T} reflecting either the modeling uncertainty or the superposition of real undesirable signals:

x_{1...T} = F(s_{1...T}) ∘ n_{1...T}    (1)

where ∘ is the operator modeling the noise superposition. Given the data x_{1...T}, our objective is the recovery of the original sources s_{1...T}. The second approach to the source separation problem is to consider it as a decomposition on a basis enjoying some particular statistical properties. For instance, principal component analysis (PCA) relies on the decorrelation between the decomposed components, and independent component analysis (ICA) relies on their statistical independence. The decomposition approach can be considered dual to the reconstruction approach (see Fig. 1), as the existence of an original process is not required.


Fig. 1: Duality of reconstruction and decomposition approaches.

Based on independent and identically distributed (i.i.d.) source modeling, many proposed algorithms are designed to linearly demix the observations x_{1...T}. The separation principle in these methods is based on the statistical independence of the reconstructed sources (ICA) [5]-[9]. However, ICA is designed to work efficiently in the noiseless case. In addition, with the i.i.d. assumption, the separation necessarily relies on higher-order statistics, and treating the noisy case with the maximum likelihood approach leads to complicated algorithms [10]-[12]. Discarding the i.i.d. assumption, source separation can be achieved with second-order statistics. For instance, second-order correlation diversity in the time domain [13], frequency domain [14], or time-frequency domain [15] has successfully been used to blindly separate sources. Nonstationary second-order-based methods have also been proposed in [16]-[20] (see [21] and the references therein for a synthetic introduction to these concepts). Stationarity and nonstationarity can approximately be seen as dual under Fourier transformation. For instance, based on the circular approximation, it is shown in [22] that a finite-sample correlated temporally stationary signal has a Fourier transform with nonstationary decorrelated samples. We recently proposed a maximum likelihood method to separate noisy mixtures of Gaussian stationary sources exploiting this temporal/spectral duality [23], [24]. The Gaussian model of the sources allows an efficient implementation of the expectation-maximization (EM) algorithm [25]. In this paper, a fast algorithm for blind source separation of CT images based on ICA is introduced. Experimental results show that the proposed approach can separate every independent component effectively. Our method is an extension of the existing FastICA for signals [26]. This paper is organized as follows. Section II gives an overview of the ICA algorithm. Section III discusses FastICA. Section IV presents experimental results and discussions. Finally, Section V concludes the paper.

II. ICA ALGORITHM

2.1. ICA Definition


We assume there is an N-dimensional signal x(t):

x(t) = As(t) + n(t);  t = 1, 2, ...    (2)

where x(t) is the N-dimensional vector (x_1, ..., x_N) of the observed signal at discrete time instant t, A is an unknown transfer (mixing) matrix, s(t) is the vector of M (M ≤ N) independent unknown source signal components, and n(t) is the observed noise vector. ICA's basic idea is to estimate or separate the source signal s(t) from the mixed observed signals x(t), which is equivalent to estimating the matrix A. We can assume that there is a matrix W, the separating (inverse) matrix of A, such that s(t) = Wx(t). The algorithm rests on the following assumptions:

1. The number of observed signals x(t) must be greater than or equal to the number of source signals; for convenience we take them equal, i.e. the mixing matrix A is a full-rank square matrix.
2. The components of s(t) are statistically independent.
3. At most one source signal component is allowed to have a Gaussian distribution, because a linear mixture of several Gaussian signals is still Gaussian and cannot be separated.
4. There is no noise or only low additive noise; that is, n(t) in Eq. (2) approaches zero.


III. FAST ICA ALGORITHM

At present, conventional ICA model estimation algorithms mainly include information maximization, mutual information minimization and maximum likelihood estimation. Their main problems are slow convergence and a large amount of computation. FastICA, by contrast, is based on a fixed-point iteration scheme that has most of the advantages of neural algorithms, such as parallelism, distributed computation, fast convergence, low computational cost and small memory requirements.

Data Pre-processing. When using a fast fixed-point algorithm for independent component analysis, an appropriate pre-treatment of the observed data is usually required; this pre-treatment improves convergence during the calculation. An important step in the pre-treatment is to whiten the data. Whitening refers to a linear transformation of the data that makes the components of the new vector uncorrelated and its covariance matrix equal to the identity matrix; such a vector is called spatially white. Assuming that x(t) is zero mean, the pre-whitening of x(t) is achieved as

P(t) = Qx(t)    (3)

where P(t) is the whitened vector and Q is the whitening matrix, selected so that the components of the whitened vector are uncorrelated and have unit variance. Therefore, the correlation (covariance) matrix of P(t) becomes the identity matrix, i.e. E{PP^T} = I. Then Eq. (3) becomes Eq. (4):

P(t) = Qx(t) = QAs(t) = Ks(t)    (4)

where the matrix K = QA is called the separation matrix, an M×M orthogonal matrix; then

E{PP^T} = K E{ss^T} K^T = I    (5)

Therefore,

s(t) = K^T P(t)    (6)

Determination of the Objective Function. The fast fixed-point algorithm (FastICA) is a rapid neural algorithm that seeks local extrema of the fourth-order cumulant (kurtosis) of a linear combination of the observed variables. The kurtosis can therefore be used to obtain the separation matrix. Kurtosis is a higher-order statistic of the signal and a typical measure of non-Gaussianity. For a zero-mean random variable y, the kurtosis is defined as:

kurt[y] = E{y^4} − 3[E{y^2}]^2    (7)

If y is a Gaussian random signal, its kurtosis is zero. When a random signal has a super-Gaussian distribution, its kurtosis is positive; when it has a sub-Gaussian distribution, its kurtosis is negative. For two independent random signals y1 and y2, kurt[y1 + y2] = kurt[y1] + kurt[y2], and for a scalar constant α, kurt[αy] = α^4 kurt[y]. After pre-whitening the observed signal x(t) to obtain the vector P, we consider a linear combination W^T P, with the norm of W bounded and ||W|| = 1, for which the kurtosis is maximal or minimal. Defining Z = K^T W, and because K is an orthogonal matrix, ||Z|| = 1. From the properties of kurtosis,

kurt(W^T P) = kurt(W^T Ks) = kurt(Z^T s) = Σ_{i=1}^{n} z_i^4 kurt(s_i)    (8)

Eq. (8) is the objective function we seek to extremize.
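A compact numerical sketch of the kurtosis-based fixed-point iteration on whitened data is given below (pure NumPy; the deflation scheme and helper names are our own and are meant as an illustration, not as the exact implementation of [26]):

```python
import numpy as np

def whiten(X):
    """Zero-mean and whiten observations X (channels x samples), eq. (3)."""
    X = X - X.mean(axis=1, keepdims=True)
    d, E = np.linalg.eigh(np.cov(X))
    Q = E @ np.diag(d ** -0.5) @ E.T              # whitening matrix Q
    return Q @ X, Q

def fastica(X, n_iter=200, tol=1e-8):
    """Kurtosis-based FastICA with deflation; returns the estimated sources."""
    P, _ = whiten(X)
    n = P.shape[0]
    W = np.zeros((n, n))
    for i in range(n):
        w = np.random.randn(n)
        w /= np.linalg.norm(w)
        for _ in range(n_iter):
            # fixed-point update w+ = E{P (w'P)^3} - 3w (unit-norm w, whitened P)
            w_new = (P * (w @ P) ** 3).mean(axis=1) - 3 * w
            w_new -= W[:i].T @ (W[:i] @ w_new)    # deflate already-found rows
            w_new /= np.linalg.norm(w_new)
            if abs(abs(w_new @ w) - 1) < tol:
                break
            w = w_new
        W[i] = w_new
    return W @ P                                   # estimated sources s(t)
```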

IV. IMAGE DENOISING USING MULTI-SCALE RIDGELET TRANSFORM

4.1. Multiscale Ridgelet Transform

Multiscale ridgelets are based on the ridgelet transform combined with a spatial bandpass filtering operation to isolate different scales, as shown in [27].

Algorithm:
1. Apply the à trous algorithm with J scales [28].
2. Apply the Radon transform to the detail sub-bands of the J scales.
3. Calculate the ridgelet coefficients by applying a 1-D wavelet transform to the Radon coefficients.
4. Obtain the multiscale ridgelet coefficients for the J scales.

Fig. 2: Relations between transforms.

4.2. Image Denoising


Suppose that one is given noisy data of the form:

Ĩ(x, y) = I(x, y) + σZ(x, y)    (9)

where Z(x, y) is unit-variance, zero-mean Gaussian noise. Denoising is a way to recover I(x, y) from the noisy image Ĩ(x, y) as well as possible. Rayudu et al. [29] proposed hard thresholds for ultrasound image denoising as follows. Let y be the noisy ridgelet coefficients (y = MRT·Ĩ). They used the following hard-thresholding rule for estimating the unknown ridgelet coefficients:

ŷ = y,  if |y| ≥ kσ    (10)
ŷ = 0,  otherwise

In their experiments, a scale-dependent value of k was chosen: k = 4 for the first scale (j = 1) and k = 3 for the others (j > 1).

Algorithm:
1. Apply the multiscale ridgelet transform to the noisy image and obtain the scaling coefficients and multiscale ridgelet coefficients.
2. Choose the threshold by Eq. (10) and apply thresholding to the multiscale ridgelet coefficients (leaving the scaling coefficients alone).
3. Reconstruct from the scaling coefficients and the thresholded multiscale ridgelet coefficients to obtain the denoised image.
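The thresholding rule itself is simple to express; the sketch below applies eq. (10) to an arbitrary coefficient array (the multiscale ridgelet forward and inverse transforms are assumed to be provided elsewhere, e.g. by the à trous + Radon + 1-D wavelet pipeline of Section 4.1):

```python
import numpy as np

def hard_threshold(coeffs, sigma, scale_index):
    """Scale-dependent hard thresholding, eq. (10): k = 4 on the first
    detail scale (j = 1) and k = 3 on the coarser ones, as in [29]."""
    k = 4 if scale_index == 1 else 3
    out = coeffs.copy()
    out[np.abs(out) < k * sigma] = 0.0   # zero everything below the threshold
    return out
```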

4.3. Proposed Algorithm


The algorithm of the proposed blind source separation under different noise conditions is given below.

Algorithm:
1. Load the two images for source mixing.
2. Apply Gaussian noise of zero mean and 0.05 standard deviation.
3. Apply the noise removal algorithm using the multiscale ridgelet transform.
4. Apply the source separation algorithm using FastICA.
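Putting the pieces together, a hypothetical end-to-end run of steps 1-4 might look as follows (the random stand-in images, the mixing matrix and the fastica() helper from the Section III sketch are all our own; step 3 is indicated only as a comment, since the ridgelet codec sits outside this sketch):

```python
import numpy as np

s1 = np.random.rand(64, 64)                      # stand-ins for the two CT source images
s2 = np.random.rand(64, 64)
S = np.vstack([s1.ravel(), s2.ravel()])          # sources as rows
A = np.array([[0.7, 0.3], [0.4, 0.6]])           # arbitrary mixing matrix
X = A @ S + np.random.normal(0, 0.05, S.shape)   # step 2: zero-mean Gaussian noise
# step 3: denoise each row of X (e.g. multiscale ridgelet hard thresholding)
Y = fastica(X)                                   # step 4: separated source estimates
```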


Fig. 3: Flowchart of Discrete ridgelet transform

V. EXPERIMENTAL RESULTS AND DISCUSSIONS


In order to evaluate the proposed method, we tested it on CT images [30]. The digital imaging and communications in medicine (DICOM) standard was created by the National Electrical Manufacturers Association (NEMA) [30] to aid the distribution and viewing of medical images, such as computed tomography (CT) scans, MRIs, and ultrasound. For this experiment, we collected 7 CT scans of different parts of the human body; the results are presented as follows. Figs. 4 to 8 illustrate the results of the proposed algorithm. From Figs. 4 to 8, it is clear that the proposed approach can separate every independent component effectively.

Fig. 4: Results of proposed method on CT images


Fig. 5: Results of proposed method on CT images

Fig. 6: Results of proposed method on CT images


Fig. 7: Results of proposed method on CT images

Fig. 8: Results of proposed method on CT images

VI. CONCLUSIONS

A novel combined FastICA and denoising algorithm has been proposed in this paper for CT image blind source separation under different noise conditions. The proposed method is extended from the existing FastICA for signals. Its performance was tested on the NEMA CT image database. Experimental results show that the proposed approach can separate every independent component effectively.


REFERENCES
[1] A. Hyvärinen, J. Karhunen, and E. Oja, Independent Component Analysis. New York: Wiley, 2001.
[2] A. Cichocki and S. Amari, Adaptive Blind Signal and Image Processing: Learning Algorithms and Applications. New York: Wiley, 2003.
[3] M. McKeown, S. Makeig, G. Brown, T. Jung, S. Kindermann, A. Bell, and T. Sejnowski, Analysis of fMRI data by blind separation into independent spatial components, Hum. Brain Mapp., vol. 6, pp. 160-188, 1998.
[4] V. Calhoun, T. Adali, L. Hansen, J. Larsen, and J. Pekar, ICA of functional MRI data: An overview, in Proc. Fourth Int. Symp. Independent Component Anal. Blind Source Separation, Nara, Japan, Apr. 2003, pp. 281-288.
[5] C. Jutten and J. Herault, Blind separation of sources. I. An adaptive algorithm based on neuromimetic architecture, Signal Process., vol. 24, no. 1, pp. 1-10, 1991.
[6] P. Comon, Independent component analysis, a new concept?, Signal Process., Special Issue on Higher Order Statistics, vol. 36, no. 3, pp. 287-314, Apr. 1994.
[7] A. Hyvärinen and E. Oja, A fast fixed-point algorithm for independent component analysis, Neural Comput., vol. 9, no. 7, pp. 1483-1492, 1997.
[8] J.-F. Cardoso and A. Souloumiac, Blind beamforming for non-Gaussian signals, Proc. Inst. Elect. Eng. F, vol. 140, no. 6, pp. 362-370, Dec. 1993.
[9] A. J. Bell and T. J. Sejnowski, An information maximization approach to blind separation and blind deconvolution, Neural Comput., vol. 7, no. 6, pp. 1129-1159, 1995.
[10] E. Moulines, J. Cardoso, and E. Gassiat, Maximum likelihood for blind separation and deconvolution of noisy signals using mixture models, in ICASSP, Munich, Germany, Apr. 1997.
[11] H. Attias, Independent factor analysis, Neural Comput., vol. 11, pp. 803-851, 1999.
[12] H. Snoussi and A. Mohammad-Djafari, Fast joint separation and segmentation of mixed images, J. Electron. Imaging, vol. 13, no. 2, pp. 349-361, Apr. 2004.
[13] A. Belouchrani, K. Abed-Meraim, J.-F. Cardoso, and E. Moulines, A blind source separation technique based on second order statistics, IEEE Trans. Signal Process., vol. 45, no. 2, pp. 434-444, Feb. 1997.
[14] K. Rahbar and J. Reilly, Blind source separation of convolved sources by joint approximate diagonalization of cross-spectral density matrices, in Proc. ICASSP, 2001.
[15] A. Belouchrani and M. Amin, Blind source separation using time-frequency distributions: Algorithm and asymptotic performance, in Proc. ICASSP, Munchen, Germany, 1997, pp. 3469-3472.
[16] E. Weinstein, M. Feder, and A. Oppenheim, Multi-channel signal separation by decorrelation, IEEE Trans. Speech, Audio Process., vol. 1, no. 4, pp. 405-413, Oct. 1993.
[17] K. Matsuoka, M. Ohya, and M. Kawamoto, A neural net for blind separation of nonstationary sources, Neural Networks, vol. 8, no. 3, pp. 411-419, 1995.
[18] S. Choi and A. Cichocki, Blind separation of nonstationary sources in noisy mixtures, Electron. Lett., vol. 36, no. 9, pp. 848-849, Apr. 2000.
[19] A. Souloumiac, Blind source detection and separation using second order nonstationarity, in Proc. ICASSP, 1995, pp. 1912-1915.
[20] D.-T. Pham and J. Cardoso, Blind separation of instantaneous mixtures of nonstationary sources, IEEE Trans. Signal Process., vol. 49, no. 9, pp. 1837-1848, Sep. 2001.
[21] J. Cardoso, The three easy routes to independent component analysis; contrasts and geometry, in Proc. ICA Workshop, Dec. 2001.
[22] B. R. Hunt, A matrix theory proof of the discrete convolution theorem, IEEE Trans. Automat. Control, vol. AC-19, pp. 285-288, 1971.
[23] H. Snoussi, G. Patanchon, J. Macías-Pérez, A. Mohammad-Djafari, and J. Delabrouille, Bayesian blind component separation for cosmic microwave background observations, in Proc. Bayesian Inference Maximum Entropy Methods Workshop, Amer. Inst. Phys., R. L. Fry, Ed., Aug. 2001, pp. 125-140.
[24] J. Cardoso, H. Snoussi, J. Delabrouille, and G. Patanchon, Blind separation of noisy Gaussian stationary sources. Application to cosmic microwave background imaging, in Proc. EUSIPCO, Toulouse, France, Sep. 2002.

[25] A. P. Dempster, N. M. Laird, and D. B. Rubin, Maximum likelihood from incomplete data via the EM algorithm, J. R. Statist. Soc. B, vol. 39, pp. 1-38, 1977.
[26] Liu Yang and Zhang Ming, Blind source separation based on FastICA, 9th International Conference on Hybrid Intelligent Systems, pp. 475-479, 2009.
[27] Jean-Luc Starck, Emmanuel J. Candès, and David L. Donoho, The curvelet transform for image denoising, IEEE Trans. Image Processing, vol. 11, no. 6, pp. 670-684, 2002.
[28] Yong-bing Xu, Chang-Sheng Xie, and Cheng-Yong Zheng, An application of the à trous algorithm in detecting infrared targets, IEEE Conf. on Wavelet Analysis and Pattern Recognition, Beijing, China, 2-4 Nov. 2007, pp. 1015-1019.
[29] D. K. V. Rayudu, Subrahmanyam Murala, and Vinod Kumar, Denoising of ultrasound images using curvelet transform, The 2nd International Conference on Computer and Automation Engineering (ICCAE 2010), Singapore, vol. 3, pp. 447-451, 2010.
[30] ftp://medical.nema.org/medical/Dicom/Multiframe/.

AUTHORS
Rohit Kumar Malik: I have 4+ years of experience in product development. I started my professional career with Tally Solutions Pvt. Limited, where I worked as a Software Engineer in Research and Development. I am currently working with Oracle India Pvt. Ltd. I completed my B.Tech in Electronics and Communication Engineering and subsequently obtained a postgraduate diploma in Embedded Systems.

Ketaki Solanki: I have 5+ years of experience. I started my professional career as an Assistant Professor in Electronics & Communication Engineering. I completed my M.Tech in Electronics and Communication Engineering. Image processing and digital signal processing are my areas of research.


IMPROVEMENT OF TRANSIENT STABILITY THROUGH SVC


V. Ganesh1, K. Vasu2, K. Venkata Rami Reddy3, M. Surendranath Reddy4 and T. Gowri Manohar5

1 Dept. of Electrical and Electronics Engineering, JNT University, Anantapur, A.P., India
2 Dept. of Electrical and Electronics Engineering, MITS, Madanapalle, A.P., India
3 Dept. of Electrical and Electronics Engineering, SKIT, Srikalahasti, A.P., India
4 Dept. of Electrical and Electronics Engineering, VIT, Proddatur, A.P., India
5 Dept. of Electrical and Electronics Engineering, SV University, Tirupati, A.P., India

ABSTRACT
With the growing stress on today's power systems, many utilities increasingly need to include security analysis capabilities in their energy management systems. Transient stability analysis is the evaluation of the ability of a power system to withstand a set of severe but credible contingencies and to survive the transition to an acceptable steady-state condition. The performance of a power system during a transient period can be obtained from the network performance equations. In transient stability studies a load flow calculation is made first to obtain the system conditions prior to the disturbance. After the load flow calculation, the admittance matrix of the network must be modified to reflect the changes in the representation of the network. FACTS technology is a collection of controllers which can be applied individually or in coordination with others to control one or more of the interrelated system parameters: voltage, impedance and phase angle. The Static Var Compensator (SVC) is a FACTS device which can control the voltage at the required bus by means of reactive power compensation, thereby improving the voltage profile of the system. SVCs have been used for high-performance steady-state and transient voltage control compared with classical shunt compensation. The effectiveness of the proposed method is analysed on the IEEE 14-bus test system.

KEYWORDS: FACTS, SVC, Transient Stability, Optimal Power Flow

I. INTRODUCTION

A power system is a complex network comprising numerous generators, transmission lines, a variety of loads and transformers. As a consequence of the increase in demand for power, some transmission lines are more heavily loaded than was planned when they were built. With the increased power transfer, transient stability has also become increasingly important for secure operation. Transient stability evaluation of large-scale power systems is an extremely intricate and highly non-linear problem. An important function of transient evaluation is to appraise the capability of the power system to withstand a serious contingency in time, so that emergency or preventive control can be carried out to prevent system breakdown. In practical operation, a correct assessment of transient stability for given operating states is necessary and valuable for power system operation. Transient stability of a system refers to its stability [1] when subjected to large disturbances such as faults and switching of lines. The resulting system response involves large excursions of the generator rotor angles and is influenced by the nonlinear power-angle relationship. Stability depends upon both the initial operating conditions of the system and the severity of the disturbance. The voltage stability and the steady-state and transient stabilities of a complex power system can be effectively improved by the use of FACTS devices. Transient stability studies play an important role in power systems, providing information on the capability of a power system to remain in synchronism during major disturbances

resulting from either the loss of generation or transmission facilities or sudden or sustained load changes, reflected in the voltages, currents, powers, speeds and torques of the machines of the power system, as explained in [2]. For most faults in a multi-machine system [3], it has been observed that only one machine (or a small group of machines) becomes severely disturbed; it is called the critical machine (or critical group). The critical machine (or critical group) is usually responsible for initiating instability in an unstable situation. FACTS devices [4, 5] are capable of controlling the network condition very quickly, and this unique feature can be exploited to enlarge the decelerating area and hence improve the first-swing stability limit of a system. The SVC and STATCOM are members of the FACTS family that are connected in shunt with the system. Continuous and discontinuous types of control are very commonly used for shunt FACTS devices to improve the transient stability and damping of a power system [6]. The transient stability of a generator [8] depends on the difference between mechanical and electrical power. During a fault, the electrical power is reduced suddenly while the mechanical power remains constant, thereby accelerating the rotor. To maintain transient stability, the generator must transfer the excess energy toward the system; for this purpose, existing FACTS devices can be employed. With a FACTS device placed in the main power transfer path of the critical machine, the output power of the machine, and hence its first-swing stability limit, can be increased by operating the FACTS device at its full capacitive rating [9]. Such operation should continue until the machine speed reaches a reasonable negative value during the first return journey. Control strategies based on local input signals have been proposed for series and shunt compensator devices (TCSC and SVC) to damp power swings [10, 11]; using these strategies, the series and shunt compensators can be located at several locations. In transient stability studies a load flow calculation is made first to obtain the system conditions prior to the disturbance. In this calculation, the network is composed of system buses, transmission lines and transformers. The network representation for transient stability studies includes, in addition to those components, equivalent circuits for machines and static impedances or admittances to ground for loads. A transient stability analysis is performed by combining a solution of the algebraic equations describing the network with a numerical solution of the differential equations. In this paper the Runge-Kutta method is used for the solution of the differential equations in the transient stability studies. Transient stability analysis, fault analysis and power-angle characteristics have been calculated without FACTS devices, and the improvements in transient stability after inserting the SVC are discussed. The modelling of the SVC and the formulation of the transient stability solution for single and multi-machine systems are discussed in Section II. Section III discusses transient stability results for the IEEE 14-bus power system for faults at different buses, without and with the SVC. Finally, Section IV concludes the paper.

II. MODELLING OF SVC AND TRANSIENT STABILITY SOLUTION FORMATION

2.1. Modelling of SVC


SVC is a Shunt FACTS device which is considered a variable impedance type device. The SVC uses conventional thyristors to achieve fast control of shunt-connected capacitors and reactors. The configuration of the SVC is shown in Fig.1, which basically consists of a fixed Capacitor (C) and a thyristor controlled reactor (L). The firing angle control of the thyristor banks determines the equivalent shunt admittance presented to the power system.


Fig. 1 SVC connected to a transmission line

The current drawn by the SVC is

I_SVC = jB_SVC V_m    (1)

and the reactive power injected at bus m is

Q_SVC = Q_m = −V_m² B_SVC    (2)

where

B_SVC = { X_L − (X_C/π) [ 2(π − α_SVC) + sin(2α_SVC) ] } / (X_C X_L)    (3)

A Jacobian matrix that accounts for the SVC, with the firing angle α_SVC as the state variable, is given as equation (4), where

∂Q_m/∂α_SVC = (2V_m²/X_L) [ cos(2α_SVC) − 1 ]    (5)

Δα_SVC is found from inversion of the Jacobian matrix. The variable is then updated by

α_SVC^(n+1) = α_SVC^n + Δα_SVC^n    (6)

The control strategy of the SVC is considered as equation (7). Here ωmax is the maximum speed of the machine, usually occurring at fault clearing, ε is a small positive constant, and K is a positive gain whose value depends on the SVC rating.
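A small numerical sketch of equations (3) and (5), with illustrative per-unit reactances of our own choosing:

```python
import math

def svc_susceptance(alpha, XL, XC):
    """Equivalent SVC susceptance B_SVC as a function of firing angle
    alpha (radians, pi/2 <= alpha <= pi), eq. (3)."""
    return (XL - (XC / math.pi) * (2 * (math.pi - alpha) + math.sin(2 * alpha))) / (XC * XL)

def dQm_dalpha(alpha, Vm, XL):
    """Sensitivity of the injected reactive power to the firing angle, eq. (5)."""
    return 2 * Vm**2 / XL * (math.cos(2 * alpha) - 1)

print(svc_susceptance(math.radians(120), XL=0.288, XC=1.07))  # illustrative values only
```

As a sanity check on the formula: at alpha = pi the TCR is blocked and B_SVC reduces to 1/X_C (fully capacitive), while at alpha = pi/2 it reduces to 1/X_C − 1/X_L.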

2.2. Development of Transient Stability Solution


In order to determine the angular displacement between the machines of a power system during transient conditions, it is necessary to solve the differential equations describing the motion of the machine rotors. The swing equation for a single synchronous machine connected to infinite busbars is the nonlinear differential equation

M d²δ/dt² = P_m − P_e    (8)

where P_e = P_max sin δ, so that

M d²δ/dt² = P_m − P_max sin δ    (9)

To determine the load flow solution, the Newton-Raphson method is employed. A fault at or near a bus is simulated by appropriately changing the self-admittance of the bus. For a three-phase fault, the fault impedance is zero and the faulted bus has the same potential as the ground. This involves placing an infinite shunt admittance, so that the bus voltage is in effect zero. The fault is removed by restoring the shunt admittance to the appropriate value depending on the post-fault system configuration. In the application of the fourth-order Runge-Kutta approximation, the changes in the internal voltage angles and machine speeds, again for the simplified machine representation, are determined from

δ_i(t + Δt) = δ_i(t) + (1/6)(k_1i + 2k_2i + 2k_3i + k_4i)

ω_i(t + Δt) = ω_i(t) + (1/6)(l_1i + 2l_2i + 2l_3i + l_4i)    (10)

where i = 1, 2, ..., number of generators. The k's and l's are the changes in δ_i and ω_i respectively, which are obtained using derivatives evaluated at predetermined points. For this procedure the network equations have to be solved four times.
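A minimal sketch of the Runge-Kutta update of Eq. (10) applied to the single-machine swing equation of Eq. (9); the inertia constant, power levels and fault-clearing time are assumed values chosen only for illustration.

import numpy as np

f0, H = 50.0, 5.0
M = H / (np.pi * f0)                      # inertia coefficient of Eq. (9), assumed machine data
Pm = 0.8                                  # mechanical power input (pu)

def derivs(state, pmax):
    delta, omega = state                  # rotor angle (rad) and speed deviation (rad/s)
    return np.array([omega, (Pm - pmax * np.sin(delta)) / M])

def rk4_step(state, dt, pmax):
    # One step of Eq. (10): the k's and l's are packed into the two state components
    k1 = derivs(state, pmax)
    k2 = derivs(state + 0.5 * dt * k1, pmax)
    k3 = derivs(state + 0.5 * dt * k2, pmax)
    k4 = derivs(state + dt * k3, pmax)
    return state + (dt / 6.0) * (k1 + 2 * k2 + 2 * k3 + k4)

dt, t_clear = 0.001, 0.4                  # time step and fault-clearing time (s)
state = np.array([np.arcsin(Pm / 2.0), 0.0])    # pre-fault equilibrium for Pmax = 2.0 pu
for step in range(int(2.0 / dt)):
    pmax = 0.4 if step * dt < t_clear else 1.5   # during-fault and post-fault curves, assumed
    state = rk4_step(state, dt, pmax)
print(f"rotor angle after 2 s: {np.degrees(state[0]):.1f} deg")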

2.3. Transient Stability Analysis for Multi-machine Systems


In the multi-machine case, two preliminary steps are required:
a. The steady-state pre-fault conditions for the system are calculated using a load flow.
b. The pre-fault network representation is determined and then modified to account for the fault and for the post-fault conditions.

From the first step the values of power, reactive power and voltage at each generator terminal and load bus are known, with all angles measured with respect to the swing bus. The transient internal voltage of each generator is calculated using

E′ = V_t + jX′_d I    (11)

where V_t is the terminal voltage and I is the output current. Each load is converted into a constant admittance to ground at its bus using the equation

Y_L = (P_L - jQ_L) / |V_L|²    (12)

The bus admittance matrix used for the pre-fault load flow calculation is augmented to include the transient reactances of the generators and the shunt load admittances. The second step determines the modified bus admittance matrices corresponding to the faulted and post-fault conditions. Since only the generator internal buses have injections, all other buses can be eliminated to reduce the size of the matrix to the number of generators. For elimination of the nth bus,

Y_ij(new) = Y_ij(old) - Y_in Y_nj / Y_nn    (13)

During and after the fault, the power flow into the network from each generator is given by

P_ei = Σ_j |E_i| |E_j| |Y_ij| cos(δ_i - δ_j - θ_ij)    (14)

where Y_ij = |Y_ij|∠θ_ij is the admittance between the ith and jth nodes. The swing equation representing the motion of each rotor during the fault and post-fault periods is

(2H_i / ω_s) d²δ_i/dt² = P_ai = P_mi - P_ei    (15)
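The elimination of Eq. (13) can be applied bus by bus until only the generator internal buses remain, as in the following sketch; the 3-bus admittance matrix is a made-up example, not a case from this paper.

import numpy as np

def kron_reduce(Y, keep):
    # Repeatedly apply Eq. (13): Yij(new) = Yij(old) - Yin*Ynj/Ynn for each eliminated bus n
    Y = np.array(Y, dtype=complex)
    for n in sorted(set(range(len(Y))) - set(keep), reverse=True):
        Y = Y - np.outer(Y[:, n], Y[n, :]) / Y[n, n]
        Y = np.delete(np.delete(Y, n, axis=0), n, axis=1)
    return Y

Y = np.array([[-5j, 2j, 3j],
              [2j, -4j, 2j],
              [3j, 2j, -6j]])            # illustrative bus admittance matrix (pu)
print(kron_reduce(Y, keep=[0, 1]))       # eliminate bus 2, which carries no injection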

In a multi-machine system a common system base must be chosen. Let G_mech = machine rating (base) and G_system = system base. The machine inertia constant referred to the system base is H_system = H_machine (G_mech / G_system), and the swing equation can be written as

(H_system / πf) d²δ/dt² = P_m - P_e  pu on the system base    (16)

Considering the swing equations of n machines on the common base,

(H_eq / πf) d²δ/dt² = P_m - P_e    (17)

where

P_m = P_m1 + P_m2 + ... + P_mn,  P_e = P_e1 + P_e2 + ... + P_en,  H_eq = H_1 + H_2 + ... + H_n    (18)

Machines swinging coherently are thus reduced to a single machine.
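A small worked example of Eqs. (16)-(18): the inertia constants are first referred to the common system base and the coherent machines are then aggregated. The ratings and H values are invented for illustration.

machines = [(5.0, 200.0), (4.0, 150.0), (6.0, 100.0)]   # (H on machine base, MVA rating), assumed
Gsystem = 100.0                                         # common system base (MVA), assumed

H_system = [H * Gmech / Gsystem for H, Gmech in machines]
H_eq = sum(H_system)                                    # Eq. (18): coherent machines add
print(H_system, H_eq)                                   # [10.0, 6.0, 6.0] -> Heq = 22.0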

III.

RESULTS AND ANALYSIS-CASE STUDIES AND DISCUSSIONS

3.1. IEEE 14-Bus model


A three-phase fault is considered at two locations in the IEEE 14-bus model [10]. One of them is on line 2-4, close to generating station 2, which has the smallest inertia value. The second is on line 13-14, very close to bus 13 and far away from all the generating stations. Thus, the effects of the distance between the fault location and the generating stations and of the fault clearing time are analysed.

3.2 Critical clearing time

3.2.1 When fault on line 2-4 without FACTS devices
The study is performed with the intention of analyzing the effect of fault location in conjunction with the fault clearing time. A three-phase fault at bus 2, near the generating station on line 2-4, is simulated with a clearing time of 0.4 sec. It is observed from Fig. 2 that generator 2 is severely disturbed. The angle differences of the machines in the system when the fault on line 2-4 is cleared in 0.5 sec are shown in Fig. 3.
Fig. 2 Plots of angle differences for machines 2, 3, 4 & 5 when fault on bus 2 cleared at 0.4sec


Fig. 3 Plots of angle differences for machines 2, 3, 4 & 5 when fault on bus 2 cleared at 0.5sec.

Comparing the swing curves in Fig. 2 and Fig. 3, it is clear that the critical clearing time of the system is 0.4 sec. Since after 0.4 sec the machines go out of step, the fault should be cleared within 0.4 sec. Fig. 3 shows that the machines fall out of synchronism if the fault is cleared at 0.5 sec, as the critical clearing time here is 0.4 sec.

3.2.2 When fault on line 2-4 with FACTS device (SVC)


The swing curves for all five generators, represented by classical models, when the SVC is placed in line 2-4 are shown in Fig. 4, from which the clearing time is determined. Fig. 4 shows the angle differences of the machines in the system when a three-phase fault occurs on line 2-4 after the SVC is placed; the clearing time of the fault increases to 0.6 sec. Comparing the swing curves of Fig. 4 and Fig. 5, the critical clearing time of the system is 0.6 sec. With the SVC in place, the permissible clearing time is longer than without any device in the earlier situation.
Fig. 4 Plots of angle differences for machines 2, 3, 4 & 5 when Fault on bus 2 cleared at 0.6sec

Fig. 5 Plots of angle differences for machines 2, 3, 4 & 5 when fault on bus 2 cleared at 0.65sec

Now if the fault is cleared after 0.6 sec, the machines go out of synchronism, as shown in Fig. 5. Therefore the critical clearing time is increased to 0.6 sec when the SVC is placed in line 2-4, whereas the clearing time of the fault was 0.4 sec before the device was placed.
Table 1: Critical clearing time without and with FACTS device when fault on line 2-4

Fault on line 2-4            Critical clearing time (sec)
Without FACTS device         0.4
With SVC                     0.6

Table 1 gives the critical clearing time of the fault with and without the FACTS device. When the SVC is placed, the permissible clearing time is longer than without a FACTS device, so the device helps protect the system until the fault is cleared.

3.2.3 When fault on line 13-14 without FACTS devices


This study is performed with the intention of analyzing the effect of fault location in conjunction with the fault clearing time. Two faults located on two different lines are considered: one closer to the generating stations and the other far from them. Now consider the fault located far from the generating stations. Fig. 6 shows the angle differences of the machines in the system when the fault occurs on line 13-14; the critical clearing time is 0.6 sec.
Fig. 6 Plots of angle differences for machines 2, 3, 4 & 5 when fault on bus 13 cleared at 0.6sec.

3.2.4 Fault on line 13-14 with FACTS device (SVC)
The swing curves for all five generators, represented by classical models, when the SVC is placed in line 13-14 are shown in Fig. 7, from which the clearing time is determined. Fig. 7 shows the angle differences of the machines in the system when the fault occurs on line 13-14 after the SVC is placed; the clearing time of the fault is 0.7 sec. With no device the clearing time is 0.6 sec, but with the SVC it increases to 0.7 sec.
Fig. 7 Plots of angle differences for machines 2, 3, 4 & 5 when fault on bus 13 cleared at 0.7sec

Table 2: Critical clearing time without and with FACTS device when fault on line 13-14

Fault on line 13-14          Critical clearing time (sec)
Without FACTS device         0.6
With SVC                     0.7

From Table 2 it is observed that the critical clearing time of the fault is longer when the FACTS device is placed, so the device protects the system until the fault is cleared. There are many factors affecting the critical clearing time; here, the effect of the distance between the fault location and the generating stations is studied. Two fault locations with the same values of the damping and inertia constants are considered: one on line 2-4, very close to bus 2, which is connected to machine 2, and the second on line 10-11, very close to bus 10. Machines 4 and 5 have the smallest inertia values, so they are expected to go out of step first. The fault that is closer to the generating station must be cleared more rapidly than the fault on the line far from the generating station. Rapid clearing of faults promotes power system stability.

3.3 Power Angle Characteristics


The curve of electrical power versus rotor angle (δ) is known as the power angle curve. The power angle curves describe the performance before the fault, during the fault and after the fault, and determine the critical clearing angle, as illustrated by the sketch below.
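The critical clearing angles read from Figs. 8 and 9 below follow from equating the accelerating and decelerating areas. The following Python sketch evaluates the standard equal-area formula for assumed pre-fault, during-fault and post-fault curves P = Pmax1,2,3 sin δ; the numeric values are illustrative, not the paper's system data.

import numpy as np

def critical_clearing_angle(Pm, Pmax1, Pmax2, Pmax3):
    d0 = np.arcsin(Pm / Pmax1)              # initial operating angle on the pre-fault curve
    dmax = np.pi - np.arcsin(Pm / Pmax3)    # maximum swing angle on the post-fault curve
    cos_dcr = (Pm * (dmax - d0) + Pmax3 * np.cos(dmax) - Pmax2 * np.cos(d0)) / (Pmax3 - Pmax2)
    return np.degrees(np.arccos(cos_dcr))

print(critical_clearing_angle(Pm=1.0, Pmax1=2.0, Pmax2=0.4, Pmax3=1.5))   # about 67 degrees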

3.3.1 Without FACTS device


Fig. 8 shows the power angle characteristics, with the equal area criterion applied to a critically cleared system when no FACTS device is placed.

[Equal area criterion applied to a critically cleared system: accelerating and decelerating areas among the before-fault, during-fault and after-fault power curves; critical clearing angle = 77.8124 degrees.]
Fig. 8 Power angle characteristics without FACTS devices

The system behaviour before, during and after the fault is shown in Fig. 8, together with the initial rotor angle, the new operating angle and the maximum rotor angle swing.

3.3.2 With FACTS Device - SVC


Fig. 9 shows the power angle characteristics, with the equal area criterion applied to a critically cleared system when the FACTS device SVC is placed; the power angle curves correspond to the before-fault, during-fault and after-fault conditions. When the SVC is placed, the clearing time of the fault increases and the critical clearing angle also increases. The accelerating and decelerating areas, the system behaviour before, during and after the fault, the initial rotor angle, the new operating angle and the maximum rotor angle swing are all shown in Fig. 9.
[Equal area criterion applied to a critically cleared system with the SVC placed: accelerating and decelerating areas among the before-fault, during-fault and after-fault power curves; critical clearing angle = 119.2591 degrees.]

Fig. 9 Power angle characteristics with FACTS device SVC

Table 3: Critical clearing angle without and with FACTS device

Power angle curve            Critical clearing angle (degrees)
Without FACTS device         77.81
With SVC                     119.26

From Table 3, the critical clearing angle is given with and without the FACTS device. When the SVC is placed, both the permissible clearing time and the critical clearing angle are larger than without a FACTS device, so the device protects the system until the fault is cleared.

IV.

CONCLUSIONS

The developed program with the SVC has been tested for several cases. The major conclusions are: transient stability is improved by the reduction of the first swing with FACTS devices; FACTS devices help to improve transient stability by increasing the critical clearing time; the fault that is closer to the generating station must be cleared more rapidly than a fault on a line far from the generating station; and the FACTS device must be placed in the main power transfer path of the critical machine. The effectiveness of the proposed method has been shown on the IEEE 14-bus system by comparing the critical clearing time and critical clearing angle without and with the SVC device. From the results it has been observed that there is an improvement in both the critical clearing time and the critical clearing angle with the help of the SVC device.

V.

FUTURE WORK

The present work is analysed with respect to the SVC, but various other types of FACTS devices are available; they can be modelled for transient stability analysis, and their effectiveness during faults in the power system can be observed.

REFERENCES
[1]. A.A. Fouad, V. Vittal, T.K. Oh, "Critical energy for the direct transient stability assessment of multi-machine power systems", IEEE Trans. on PAS, Vol. 103, No. 8, 1984, pp. 2199-2106.
[2]. G.T. Heydt, "Computer analysis methods for power systems", Macmillan Publishing Company, New York, 1986.
[3]. M.H. Haque and A.H.M.A. Rahim, "Determination of first swing stability limit of a multi-machine power system", IEE Proc., Part-C, Vol. 136, No. 6, 1989, pp. 373-379.
[4]. E. Lerch, D. Povh and L. Xu, "Advanced SVC control for damping power system oscillations", IEEE Trans. on PS, Vol. 6, No. 2, 1991, pp. 524-531.
[5]. L. Angquist, B. Lundin and J. Samuelsson, "Power oscillation damping using controlled reactive power compensation - A comparison between series and shunt approaches", IEEE Trans. on PS, Vol. 8, No. 2, 1993, pp. 687-695.
[6]. E.Z. Zhou, "Application of static var compensators to increase power system damping", IEEE Trans. on Power Systems, Vol. 8, No. 2, 1993, pp. 655-661.
[7]. L. Gyugyi, "Dynamic compensation of AC transmission lines by solid-state synchronous voltage sources", IEEE Trans. on Power Delivery, Vol. 9, No. 2, 1994, pp. 904-911.
[8]. N.G. Hingorani and L. Gyugyi, "Understanding FACTS: Concepts and technology of flexible AC transmission systems", IEEE Press, NY, 1999.
[9]. L. Gyugyi, "Converter-based FACTS technology: Electric power transmission in the 21st century", Proc. of the International Power Electronics Conference, 3-7 April 2000, Tokyo, Japan.
[10]. M. Noroozian, M. Ghandhari, G. Andersson, J. Gronquist and I. Hiskens, "A robust control strategy for shunt and series reactive compensators to damp electromechanical oscillations", IEEE Trans. on PD, Vol. 16, No. 4, pp. 812-817, 2001.
[11]. M.H. Haque, "Improvement of first swing stability limit by utilizing full benefit of shunt FACTS devices", IEEE Trans. on PS, Vol. 19, No. 4, pp. 1894-1902, 2004.
[12]. H. Haque, "Use of series and shunt FACTS devices to improve first swing stability limit", Power Engineering Conference, 2005, IPEC 2005.


AUTHORS
V. Ganesh completed his B.Tech at JNT University, his M.Tech at S.V. University, Tirupathi, India, and his Ph.D at J.N.T. University Anantapur, Andhra Pradesh, India. He is presently working as Associate Professor in the Department of Electrical Engineering, J.N.T. University, Anantapur, Andhra Pradesh, India. His areas of interest are renewable energy sources and their effects on power systems, the smart grid and its applications, and genetic algorithms and their applications to electrical distribution systems and automation.

K. Vasu completed his M.Tech at NIT, Calicut, India. He is presently working as Assistant Professor in the Department of Electrical & Electronics Engineering, Madanapalle Institute of Technology, Madanapalle, Andhra Pradesh, India. His areas of interest include the development of intelligent controllers for FACTS devices, fractional controller design and non-linear control systems.

K. Venkata Rami Reddy is presently working as Assistant Professor in the Department of Electrical & Electronics Engineering, Srikalahasteesara Institute of Technology, Srikalahasti, Andhra Pradesh, India. His areas of interest include FACTS and deregulation of power systems. M. Surendranath Reddy is presently working as Assistant Professor in the Department of Electrical & Electronics Engineering, Vagdevi Institute of Technology, Proddatur, Andhra Pradesh, India. His areas of interest include FACTS, deregulation of power systems and electrical distribution automation. T. Gowri Manohar completed his M.Tech and Ph.D at S.V. University, Tirupathi, Andhra Pradesh, India. He is presently working as Associate Professor in the Department of Electrical Engineering, S.V. University, Tirupathi, Andhra Pradesh, India. His areas of interest are economic dispatch, FACTS and its applications, and deregulation.


SIMULATION OF SECURE AODV IN GRAY HOLE ATTACK FOR MOBILE AD-HOC NETWORK
Onkar V. Chandure1, Aditya P. Bakshi2, Saudamini P. Tidke3, Priyanka M. Lokhande4
1,2 Asst. Prof., Department of I.T., J.D. Institute of Engg. & Technology, Yavatmal, India
3 M.Tech IT (Scholar), TIT Engg., Bhopal, India
4 ME CSE (Scholar), Sipna COET, Amravati, India

ABSTRACT
A MANET is an autonomous system of mobile nodes. The system may operate in isolation, or may have gateways and interfaces with a fixed network. Its nodes are equipped with wireless transmitters and receivers using antennas which may be omnidirectional (broadcast), highly directional (point-to-point), or some combination thereof. At a given time, the system can be viewed as a random graph due to the movement of the nodes, their transmitter/receiver coverage patterns, the transmission power levels, and the co-channel interference levels. In this paper we focus on the concept of the gray hole attack in ad-hoc networks and its impact on the network. A gray hole is a node that selectively drops and forwards data packets after advertising itself as having the shortest path to the destination node in response to a route request message. Because of a gray hole attack, different performance metrics of the network, such as PDR, end-to-end delay and throughput, are degraded. Our mechanism helps to protect the network by detecting and reacting to the malicious activities of any node. Simulation is carried out using a network simulator tool so as to address the problem of detection and prevention of the gray hole attack in mobile ad-hoc networks.

KEYWORDS: Mobile ad hoc network, Routing Protocol, Security in MANET, Gray Hole node.

I.

INTRODUCTION

A mobile ad-hoc network is a network [1] formed without any central administration, consisting of mobile nodes that use a wireless interface to send packet data. Attacks on such networks can be classified into two categories: attacks on Internet connectivity and attacks on the mobile ad hoc network itself. An ad-hoc network [2] is a wireless network without any fixed infrastructure: each mobile node moves arbitrarily and acts as both a router and a host. A wireless ad-hoc network consists of a collection of "peer" mobile nodes that are capable of communicating with each other without help from a fixed infrastructure, and the interconnections between nodes are capable of changing on a continual and arbitrary basis. Nodes within each other's radio range communicate directly via wireless links, while those that are far apart use other nodes as relays. Nodes usually share the same physical media; they transmit and acquire signals at the same frequency band. However, due to its inherent characteristics of dynamic topology and lack of centralized management security, a MANET is vulnerable to various kinds of attacks. Ad-hoc networks are more vulnerable than wired ones: wireless networks are typically much easier to snoop on, as signals travel through the air and only physical proximity is required to gain access to the medium. A mobile ad hoc network (MANET) is a class of wireless network with no fixed infrastructure (or base stations), formed on an ad hoc basis.

Peer-to-peer routing is done in these networks; the absence of any central authority makes MANETs more vulnerable to various forms of attack than a typical wireless network. The impromptu nature of MANET formation makes it hard to distinguish between trusted and untrusted nodes, and the dynamic nature of MANETs means the trust relationships between nodes also change. Routing is one of the most basic networking functions in mobile ad hoc networks; hence, an adversary can easily paralyze the operation of the network by attacking the routing protocol. This has been realized by many researchers, and several secure routing protocols have been proposed for ad hoc networks; however, the security of those protocols has mainly been analyzed by informal means only. In this paper, we focus on the concept of the gray hole attack in ad-hoc networks. A gray hole is a node that selectively drops and forwards data packets after advertising itself as having the shortest path to the destination node in response to a route request message. MANET security can be classified into five layers: application layer, transport layer, network layer, link layer and physical layer. Here the focus is on the network layer, which mainly concerns the security issues of protecting the ad hoc routing and forwarding protocols. From the security design perspective, MANETs have no clear line of defense. Unlike wired networks that have dedicated routers, each mobile node in an ad hoc network may function as a router and forward packets for other peer nodes, and the wireless channel is accessible to both legitimate network users and malicious attackers. In order to achieve security, the approach should provide overall protection that spans the entire protocol stack. But sometimes the security protocol may not be able to meet the requirements stated above, and packet forwarding misbehavior results.

Figure 1: Basic Idea about the MANET Structure

II.

LITERATURE REVIEW & RELATED WORK

Extensive research has been done in the MANET area. Reliable network connectivity in wireless networks is achieved if countermeasures are taken against malicious attacks on data packet forwarding in MANET, and a lot of research has taken place to defeat malicious attackers. In this section we mainly focus on analyzing and defending the system against the malicious impact of different attacks on MANET. Secure ad hoc routing protocols have been proposed as a technique to enhance security in MANET. S. Ramaswamy et al. [3] presented an algorithm to prevent cooperative black hole attacks in ad hoc networks. This algorithm is based on a trust relationship between the nodes, and hence it cannot tackle gray hole attacks. According to their algorithm, instead of sending the total data traffic at once, they divide it into small sized blocks, in the hope that malicious nodes can be detected and removed during the transmission. Marti et al. [4] proposed to trace malicious nodes by using a watchdog/pathrater. In the watchdog scheme, when a node forwards a packet, the node's watchdog verifies that the next node in the path also forwards the packet by promiscuously listening to the next node's transmissions. Gonzalez et al. [5] present a methodology for detecting packet forwarding misbehavior which is based on the principle of flow conservation in a network. The problem of security and cooperation enforcement has received considerable attention from researchers in the ad hoc network community. Mechanisms and techniques to protect the routing layer from malicious attacks by securing a MANET with cryptographic techniques have been proposed by Y. Hu, Perrig and Johnson [6],

Papadimitratos and Hass [7], and Snazgiri et al. [8]. Buttyan and Hubaux [9] have presented a self-organized PGP-based mechanism to authenticate nodes using chains of certificates and transitivity of trust. Zeshan [10] proposed a two-fold approach for the detection and isolation of nodes that drop data packets. Usha and Radha [11] proposed an extension to the TWOACK scheme, in which each node must send back a normal Ack to its immediate source node after receipt of any kind of packet; this scheme requires an end-to-end Ack packet (i.e. Nack) to be sent between the source and the destination. S. Banerjee et al. [12] have also proposed an algorithm for the detection and removal of black/gray holes. According to their algorithm, instead of sending the total data traffic at once, they divide it into small sized blocks, in the hope that malicious nodes can be detected and removed during the transmission. The flow of traffic is monitored by the neighbors of each node, and the source node uses the acknowledgement sent by the destination to check for data loss and in turn evaluate the possibility of a black hole. However, in this mechanism false positives may occur, and the algorithm may report that a node is misbehaving when in fact it is not.

III.

ROUTING PROTOCOLS

The primary goal of routing protocols in an ad-hoc network is to establish an optimal path (minimum hops) between source and destination with minimum overhead and minimum bandwidth consumption, so that packets are delivered in a timely manner. A MANET protocol should function effectively over a wide range of networking contexts, from small ad-hoc groups to large mobile multihop networks. Fig. 2 shows the categorization of these routing protocols: they can be divided into proactive, reactive and hybrid protocols, depending on the routing topology.

Figure 2: Hierarchy of Routing Protocols

3.1 Reactive Routing Protocol


Reactive routing protocols [13] are on-demand protocols. These protocols do not attempt to maintain correct routing information on all nodes at all times; routing information is collected only when it is needed, and route determination depends on sending route queries throughout the network. The primary advantage of reactive routing is that the wireless channel is not subject to routing overhead for routes that may never be used. While reactive protocols do not have the fixed overhead required to maintain continuous routing tables, they may have considerable route discovery delay, and reactive search procedures can also add a significant amount of control traffic to the network due to query flooding. Because of these weaknesses, reactive routing is less suitable for real-time traffic or for scenarios with a high volume of traffic among a large number of nodes.

3.2 Proactive Routing Protocol


In a network utilizing a proactive routing protocol, every node maintains one or more tables representing the entire topology of the network. These tables are updated regularly in order to maintain up-to-date routing information from each node to every other node. To maintain up-to-date routing information, topology information needs to be exchanged between the nodes on a regular basis, leading to relatively high overhead on the network. On the other hand, routes will always be available on request. Many proactive protocols stem from conventional link state routing, including the Optimized Link State Routing protocol (OLSR).

3.3 Hybrid Routing Protocol
Wireless hybrid routing is based on the idea of organizing nodes in groups and then assigning nodes different functionalities inside and outside a group [13]. Both routing table size and update packet size are reduced by including in them only part of the network (instead of the whole); thus, control overhead is reduced. The most popular way of building a hierarchy is to group nodes geographically close to each other into explicit clusters, each cluster having a leading node (cluster head) that communicates to other nodes on behalf of the cluster. An alternative is an implicit hierarchy, in which each node has a local scope; different routing strategies are used inside and outside the scope, and communications pass across overlapping scopes. More efficient overall routing performance can be achieved through this flexibility. Since mobile nodes have only a single omnidirectional radio for wireless communications, this type of hierarchical organization will be referred to as a logical hierarchy, to distinguish it from a physically hierarchical network structure.

IV.

GRAY HOLE ATTACK

A gray hole attack [14] may occur due to a malicious node which deliberately misbehaves, as well as due to a damaged node interface. A gray hole attack is a variation of the black hole attack, in which the malicious node is not initially malicious but turns malicious some time later. The gray hole attack has two phases. In the first phase, a malicious node exploits the AODV protocol to advertise itself as having a valid route to a destination node, with the intention of intercepting packets, even though the route is spurious. In the second phase, the node drops the intercepted packets with a certain probability. This attack is more difficult to detect than the black hole attack, where the malicious node drops the received data packets with certainty. A gray hole may exhibit its malicious behavior in different ways: it may drop packets coming from (or destined to) certain specific nodes in the network while forwarding all packets for other nodes, or it may behave maliciously for some time by dropping packets and then switch to normal behavior later, as sketched below.
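The following Python sketch mimics this behaviour: the node forwards packets normally until it turns malicious, after which each intercepted packet is dropped with a fixed probability. The drop probability and activation time are arbitrary assumptions for illustration.

import random

def gray_hole_forward(packets, p_drop=0.4, turns_malicious_at=50):
    delivered = []
    for t, pkt in enumerate(packets):
        if t >= turns_malicious_at and random.random() < p_drop:
            continue                      # packet silently dropped by the gray hole node
        delivered.append(pkt)
    return delivered

random.seed(1)
out = gray_hole_forward(list(range(200)))
print(f"forwarded {len(out)} of 200 packets")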

Figure 3: Example of Gray Hole Attack

Fig. 3 shows an example of a gray hole attack on an ad-hoc network. In this figure node 1 acts as the source node and node 8 as the destination node; node 4 is the gray hole node, which takes packets from its neighboring node and drops certain packets during transmission.

4.1 Impact of Gray Hole Attack on Adhoc Network


When a gray hole attack occurs in an ad-hoc network, the performance of the network decreases. A gray hole attack degrades performance metrics of the network such as packet delivery ratio, end-to-end delay and packet loss ratio. Packet delivery ratio (PDR) is the ratio of the packets received at the destination to the packets sent at the source: PDR = P_r / P_s. End-to-end delay (e2e) refers to the time taken for a packet to be transmitted across the network from source to destination: D = T_d - T_s, where T_d is the time the packet is received at the destination and T_s is the time the packet is sent at the source node.

Packet loss occurs where network traffic fails to reach the destination in a timely manner. The number of packets dropped/lost is P_d = P_s - P_a, where P_a is the number of packets that actually arrive at the destination.
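These metrics can be computed from simple per-packet send and receive records, as in the sketch below; the function and variable names are illustrative and not part of any simulator API.

def packet_delivery_ratio(sent, received):
    # PDR = packets received at the destination / packets sent by the source
    return received / sent if sent else 0.0

def mean_end_to_end_delay(send_times, recv_times):
    # D = Td - Ts, averaged over the packets (keyed by packet id) that actually arrived
    delays = [recv_times[p] - send_times[p] for p in recv_times]
    return sum(delays) / len(delays) if delays else 0.0

def packets_dropped(sent, received):
    # Pd = Ps - Pa
    return sent - received

print(packet_delivery_ratio(1000, 864))   # about 0.86, cf. Table III for 10 nodes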

4.2 Method for Detecting a Gray Hole or Suspected Node


This method is used to find the suspected or malicious behavior of any node in the ad-hoc network; it helps in recognizing, as well as preventing, the suspected node. (DSN: Destination Sequence Number; NID: Node Id; MN-ID: Malicious Node Id.)

Level 1: Initialization phase of the process
- Retrieve the current time.
- Add the current time to BCAST_ID_SAVE.

Level 2: Storing process
- Store the DSN and NID of all route replies in the RR-Table.
- Repeat the above until the time limit is exceeded.

Level 3: Identify and remove the malicious/suspected/gray hole node
- Retrieve the first entry from the RR-Table.
- The rrep_lookup function looks any RREP message up to check whether it exists.
- The rrep_remove function removes any record for an RREP message that arrived from a defined node.
- The rrep_purge function periodically deletes entries from the list if they have expired.
- Discard (remove) the suspect entry from the RR-Table, store its NID, and update the table.

Level 4: Proper selection of the node
- Select the NID having the highest DSN among the RR-Table entries.

Level 5: Continue the default process
- Call the Receive Reply method of the default AODV protocol.

The procedure starts from the initialization process: first the waiting time for the source node to receive the route replies coming from other nodes is set, and the current time is added to the waiting time. In the storing process, the Destination Sequence Number (DSN) and Node Id of every route reply are stored in the RR-Table until the computed time is exceeded. Generally the first route reply will be from the malicious node, with a high destination sequence number, which is stored as the first entry in the RR-Table. The first destination sequence number is then compared with the source node's sequence number; if there is a very large difference between them, that node is surely the malicious node, and its entry is immediately removed from the RR-Table. This is how the malicious node is identified and removed. The final step selects the node id with the highest remaining destination sequence number, obtained by sorting the RR-Table on the DSEQ-NO column; its packet is sent to the Receive Reply method in order to continue the default operation of the AODV protocol. A sketch of this procedure follows.
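A compact Python sketch of the five levels, assuming the route replies are available as (node id, destination sequence number) pairs; the collection window and the DSN-gap threshold are assumed values, since the paper does not fix them.

import time

BCAST_ID_SAVE = 0.03          # RREP collection window in seconds, assumed
DSN_GAP_THRESHOLD = 100       # "much larger" DSN difference treated as malicious, assumed

def choose_next_hop(replies, source_dsn):
    deadline = time.time() + BCAST_ID_SAVE              # Level 1: initialise the timer
    rr_table = []
    for node_id, dsn in replies:                        # Level 2: store RREP entries
        if time.time() > deadline:
            break
        rr_table.append((node_id, dsn))
    suspects = [e for e in rr_table                     # Level 3: flag suspected gray holes
                if e[1] - source_dsn > DSN_GAP_THRESHOLD]
    rr_table = [e for e in rr_table if e not in suspects]
    if not rr_table:
        return None, suspects
    best = max(rr_table, key=lambda e: e[1])            # Level 4: highest remaining DSN
    return best[0], suspects                            # Level 5: hand off to normal AODV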

V.

EXPERIMENTAL RESULTS & DISCUSSION

We mainly focus on the issue of detection and prevention of the gray hole attack and of malicious node behavior. The results are analyzed by comparing the performance metrics of normal AODV, AODV under the gray hole attack, and SAODV.

5.1 Simulation Parameters for AODV, Gray Hole & SAODV


Evaluation is done by keeping the total simulation time constant and varying the number of mobile nodes used in the network. For example, if the total simulation time is 200 s, this time is held constant while only the number of nodes is varied.
Table I: Simulation Parameters for Ad-Hoc Network

Simulator                 Ns-2 (Version 2.32)
Simulation Time           200 (s)
Number of mobile nodes    10, 20, 30, 40, 50
Topology                  700 * 700 (m)
Routing Protocol          AODV
Traffic                   CBR (constant bit rate)
Transport Protocol        TCP & UDP
Packet Size               512 bytes

5.1.1 Evaluation of Packet Delivery Ratio for Normal AODV

The packet delivery ratio is calculated for the normal AODV protocol with different numbers of mobile nodes (simulation time 200 s).

Table II: Number of Nodes & PDR for AODV

Nodes    PDR
10       91.60
20       95.89
30       97.03
40       95.20

Figure 4: Number of Nodes & PDR for AODV

5.1.2 Evaluation of Packet Delivery Ratio for Gray Hole Node

The packet delivery ratio is calculated under the gray hole attack with different numbers of mobile nodes (simulation time 200 s).

Table III: Number of Nodes & PDR for Gray Hole Node

Nodes    PDR
10       86.41
20       80.70
30       94.00
40       95.20

From the table it can clearly be seen that the packet delivery ratio degrades after the gray hole attack on the ad-hoc network, because the gray hole node drops data packets during transmission, with no fixed probability of losing them. If the simulation time is changed, the PDR values change again; if the number of nodes is increased, more time is required to simulate the network.
5.1.3 Evaluation of Packet Delivery Ratio for SAODV

The packet delivery ratio is calculated with different numbers of mobile nodes (simulation time 200 s). Here the packet delivery ratio of the network is improved and the network is secured against the gray hole attack; the results show the improvement in PDR compared with the gray hole attack.

Table IV: Number of Nodes & PDR for SAODV

Nodes    PDR
10       95.66
20       81.39
30       94.46
40       98.40

5.1.4 Comparison of Packet Delivery Ratio for AODV, Gray Hole & SAODV (200 s)
Table V: Number of Nodes & PDR for AODV, Gray Hole & SAODV

Nodes    AODV     Gray Hole    SAODV
10       91.60    86.41        95.66
20       95.89    80.70        81.39
30       97.03    94.00        94.46
40       95.20    95.20        98.40

From Table V it can clearly be seen that there is a considerable increase in the PDR values: packets are dropped during the gray hole attack, but with the SAODV procedure the PDR values rise again and the performance of the network improves.

Figure 5: Number of Nodes & PDR for AODV, Gray Hole & SAODV

5.1.5 Evaluation of End-to-End Delay for AODV, Gray Hole & SAODV

The end-to-end delay is calculated for the three cases with different numbers of mobile nodes (simulation time 200 s).

Table VI: E2e Delay for Normal AODV, Gray Hole & SAODV

Nodes    AODV          Gray Hole     SAODV
10       0.00724547    0.00763422    0.0111809
20       0.00315586    0.00801343    0.0121467
30       0.0315781     0.0139475     0.0173508
40       0.0166477     0.0166477     0.0200695

Figure 6: Number of Nodes & E2e for AODV, Gray Hole & SAODV

5.1.6 Evaluation of Throughput for AODV, Gray Hole & SAODV (Kbps)

Table VII: Throughput for Normal AODV, Gray Hole & SAODV

Nodes    AODV     Gray Hole    SAODV
10       36.72    34.64        38.33
20       38.43    32.35        32.60
30       38.89    37.68        37.86
40       38.16    38.16        39.44

5.1.7 Basic Parameters for the Ad-hoc Network and Graphical Representation of the Gray Hole Attack with Different Mobile Nodes

The basic parameters of the ad-hoc network consist of: simulator used, topology area, simulation time, number of nodes, routing protocol, traffic, pause time, transport protocol and packet size.

Figure 7: Gray Hole attack with 10 mobile nodes

Figure 8: Gray Hole with 10 Mobiles Nodes with Drop of Certain Packets


Figure 9: Single Gray Hole Node with Circle Represents RREQ & RREP

VI.

CONCLUSION

Mobile ad-hoc networks have been an active research area over the past few years due to their applications in military and civilian communication, but they are vulnerable to various types of attacks. Misbehavior of nodes causes damage to both nodes and packets. The gray hole attack damages the network and is also difficult to detect. In this paper, we proposed an algorithm for the detection and prevention of the gray hole attack and of malicious node behavior. From the experimental results, the algorithm is efficient in improving the performance metrics of the ad-hoc network: once the gray hole node is detected and prevented, there is a considerable increase in the packet delivery ratio, as well as in the end-to-end delay. By implementing the secure technique (SAODV) in the algorithm, the performance of the network increases and the network is secured against the gray hole attack. In order to further improve accuracy in the ad-hoc network, additional features can be included in the simulation parameters, so that reliability and accuracy in the network can be achieved; that will be the further direction.

VII.

FUTURE WORK

Many problems in ad-hoc networks remain to be investigated. The method for detection and prevention of a gray hole or malicious node is efficient in detecting and preventing the gray hole attack and malicious node behavior. Because of the different attacks on an ad-hoc network, the performance of the network decreases. Future work will involve new additional features or parameters that further improve the performance metrics of the network, and will try to avoid the different attacks which occur on the network using the different routing protocols available in MANET. We also intend to develop simulations to analyze the performance of the proposed solution based on the performance metrics, concentrating mainly on keeping the amount of packet loss during transmission to a minimum.

REFERENCES
[1] L. Zhou and Z. Haas, "Securing ad hoc networks", IEEE Network Magazine, Special issue on network security, Vol. 13, No. 6, November/December 1999, pp. 24-30.
[2] Poongothai T. and Jayarajan K., "A non-cooperative game approach for intrusion detection in Mobile Ad-hoc networks", International Conference on Computing, Communication and Networking (ICCC), 18-20 Dec 2008, St. Thomas, VI, pp. 1-4.
[3] Sanjay Ramaswamy, Huirong Fu, Manohar Sreekantaradhya, John Dixon, and Kendall Nygard, "Prevention of Cooperative Black Hole Attack in Wireless Ad Hoc Networks", in Proceedings of the 2003 International Conference on Wireless Networks (ICWN'03), Las Vegas, Nevada, USA, pp. 570-575.
[4] S. Marti, T. J. Giuli, K. Lai, and M. Baker, "Mitigating Routing Misbehavior in Mobile Ad Hoc Networks", Proceedings of the 6th Annual International Conference on Mobile Computing and Networking (MOBICOM), Boston, Massachusetts, United States, 2000, pp. 255-265.

[5] Oscar F. Gonzalez, Michael Howarth, and George Pavlou, "Detection of Packet Forwarding Misbehavior in Mobile Ad-Hoc Networks", Center for Communications Systems Research, University of Surrey, Guildford, UK, Integrated Network Management, 2007, IM '07, 10th IFIP/IEEE International Symposium, May 21, 2007.
[6] Y. Hu, A. Perrig, and D. Johnson, "Ariadne: A secure on demand routing protocol for ad-hoc networks", in Proceedings of the 8th Annual International Conference on Mobile Computing and Networking (MobiCom 2002), pp. 12-23, ACM, Atlanta, GA, September 2002.
[7] P. Papadimitratos and Z. Haas, "Secure routing for mobile ad hoc networks", in Proceedings of the SCS Communications Networks and Distributed Systems Modeling and Simulation Conference (CNDS 2002), San Antonio, TX, January 2002.
[8] K. Snazgiri, B. Dahill, B. Levine, C. Shields, and E.A. Belding-Royer, "Secure routing protocol for ad hoc networks", in Proceedings of the International Conference on Network Protocols (ICNP), Paris, France, November 2002.
[9] L. Buttyan and J. Hubaux, "Enforcing cooperation in self organizing mobile ad hoc networks", in Proceedings of the IEEE/ACM Workshop on Mobile Ad Hoc Networks, Technical report DSC/2001/046, EPFL-DI-ICA, August 2002.
[10] Muhammad Zeshan, Shoab A. Khan, Ahmad Raza Cheema and Attique Ahmed, "Adding Security against Packet Dropping Attack in Mobile Ad Hoc Networks", in 2008 International Seminar on Future Information Technology and Management Engineering, November 2008, pp. 568-572.
[11] S. Usha, S. Radha, "Co-operative Approach to Detect Misbehaving Nodes in MANET Using Multi-hop Acknowledgement Scheme", in 2009 International Conference on Advances in Computing, Control, and Telecommunication Technologies, December 2009, pp. 576-578.
[12] Sukla Banerjee, "Detection/Removal of Cooperative Black and Gray Hole Attack in Mobile Ad-Hoc Networks", Proceedings of the World Congress on Engineering and Computer Science 2008, WCECS 2008, October 22-24, 2008, San Francisco, USA.
[13] Imrich Chlamtac, Marco Conti, Jennifer J.-N. Liu, "Mobile ad hoc networking: imperatives and challenges", School of Engineering, University of Texas at Dallas, Dallas, TX, USA, 2003.
[14] J. Sen, M.G. Chandra, S.G. Harihara, H. Reddy, and P. Balamuralidhar, "A mechanism for detection of gray hole attack in mobile Ad Hoc networks", in Proc. of the 6th International Conference on Information, Communications & Signal Processing, December 2007, pp. 1-5.

AUTHORS
Onkar V. Chandure received his Bachelor's Degree in Information Technology with distinction from Amravati University, Amravati, India, in 2008 and his Master's Degree in Information Technology in 2012 from Sant Gadge Baba Amravati University, Amravati, India. He is currently working towards his Ph.D. He is an Assistant Professor in the Information Technology Department, J.D. Institute of Engineering & Technology, Yavatmal, India. His fields of interest include mobile ad-hoc networks. Aditya P. Bakshi received his Bachelor's Degree in Computer Science & Engineering from Amravati University, Amravati, India, in 2008 and his Master's Degree in Computer Engineering in 2012 from Sant Gadge Baba Amravati University, Amravati, India. He is currently working towards his Ph.D. He is an Assistant Professor in the Information Technology Department, J.D. Institute of Engineering & Technology, Yavatmal, India. His fields of interest include image processing. Saudamini P. Tidke received her Bachelor's Degree in Information Technology from Amravati University, Amravati, India, in 2010. She is pursuing a Master's Degree in Information Technology from TIT Engg, Bhopal, RGPV University, Bhopal, India. Her fields of interest include mobile ad-hoc networks.

Priyanka M. Lokhande received her Bachelor's Degree in Information Technology from Amravati University, Amravati, India, in 2010. She is pursuing a Master's Degree in Computer Science & Engineering at Sipna COET, Amravati, under Sant Gadge Baba Amravati University, Amravati, India. Her fields of interest include mobile ad-hoc networks.


CERTAIN INVESTIGATIONS ON GRAVITY WAVES IN THE MESOSPHERIC REGION


Vivekanand Yadav and R. S. Yadav
Department of Electronics and Communication Engineering, J K Institute for Applied Physics and Technology, University of Allahabad, Allahabad, India

ABSTRACT
This paper is concerned with the effect, in the mesosphere, of diabatic processes due to photochemical heating on long-period gravity waves. A linear diabatic gravity wave model is prepared and compared with the model of pure dynamical adiabatic gravity waves. The detailed influences on gravity waves in the mesospheric region of (i) the adiabatic condition, (ii) the cooling process, and (iii) cooling together with photochemical heating are studied.

KEYWORDS: Instability, Gravity wave, photochemical reaction, adiabatic condition.

I.

INTRODUCTION

Under present circumstances, rigorous study of the coupling between the mesosphere and thermosphere is very important. The nature of the mesopause region is investigated with the help of MST radar and laser radar. In the mesospheric region, the heating budget is composed of solar heating, exothermic chemical reactions, infrared cooling, turbulent heating and other possible heat sources. Gravity waves are a very common phenomenon in the atmosphere; Lindzen [1], Fritts [2], Garcia and Solomon [3] and Lubken [4] have recognized the essential role of gravity waves in the large-scale circulation and chemical composition of the mesospheric region. Wave amplitude is an important factor in wave saturation. In recent years, several researchers, such as McDade and Llewellyn [5], Mlynczak and Solomon [6], Riese et al. [7], Meriwether and Mlynczak [8], Xu Jiyao [9], Offermann [14] and Wang Yongmei et al. [10], have studied photochemical heating in the mesopause region. In a realistic situation the nonadiabatic process of photochemical heating is of great importance in investigating wave propagation in this region. The purpose of this paper is to build a unified gravity wave model for the stratosphere, mesosphere and lower thermosphere, considering the coupling between the photochemistry and the dynamics, and to investigate the influence of photochemical processes on gravity waves in these regions. The accuracy of the results of this paper is verified with the help of a MATLAB simulation setup. The paper is divided into sections: photochemical gravity wave model, investigations, results and discussions, conclusion and future work.

II.

PHOTOCHEMICAL GRAVITY WAVE MODEL

Here, for the effect of photochemistry on gravity wave propagation, the diabatic processes of photochemical heating and cooling and the atmospheric constituents are included in the model, together with the gravity wave fluctuations of wind, temperature and the mixing ratios of the atmospheric constituents. The following linear inertia internal gravity wave model is used to study the influence of photochemistry on gravity waves in the mesospheric region:


Writing D/Dt = ∂/∂t + u0 ∂/∂x + v0 ∂/∂y + w0 ∂/∂z for the linearized material derivative following the background wind,

Du′/Dt - f v′ + ∂Φ′/∂x = (1/ρ0) ∂/∂z (ρ0 K_m ∂u′/∂z)    (1)

Dv′/Dt + f u′ + ∂Φ′/∂y = (1/ρ0) ∂/∂z (ρ0 K_m ∂v′/∂z)    (2)

∂u′/∂x + ∂v′/∂y + (1/ρ0) ∂(ρ0 w′)/∂z = 0    (3)

D(∂Φ′/∂z)/Dt + N² w′ = (R/(c_p H)) Q′ + (1/ρ0) ∂/∂z (ρ0 K_h ∂(∂Φ′/∂z)/∂z)    (4)

Dr_i/Dt + w′ ∂(ln γ_i0)/∂z = (P_i - L_i)′/n_i0 + (1/ρ0) ∂/∂z (ρ0 K_c ∂r_i/∂z)    (5)

Here u0, v0 and w0 are the background wind speeds in the x, y and z directions respectively, and T0 is the background temperature as a function of height. f is the Coriolis parameter, f = 2Ω sin φ. Φ′ is the geopotential perturbation, ρ0 is the atmospheric density, H is the scale height, H = R T0/g, N is the Brunt-Väisälä frequency, c_p is the specific heat at constant pressure and R is the gas constant. n_i0 is the background number density of the ith trace species, γ_i0 is the background mixing ratio of the ith trace species, and P_i and L_i are the production and loss rates of the ith trace species. u′, v′, w′ and T′ are the perturbation fields of u, v, w and T. r_i = γ′_i/γ_i0 is the relative perturbation of γ_i, where γ′_i is the perturbation of the ith species mixing ratio. K_m, K_h and K_c are the eddy diffusion coefficient of momentum, the thermal eddy diffusion coefficient and the eddy diffusion coefficient of the chemical constituents respectively. Q represents the net diabatic heating rate, including solar heating, chemical reaction heating and the atmospheric infrared radiation cooling rate.

In the adiabatic condition, J. R. Holton [11] has given the pure dynamical linear inertia internal gravity wave equations. When the diabatic processes of photochemical heating and cooling are considered (Q′ ≠ 0 and (P_i - L_i)′ ≠ 0), the linear inertia internal gravity wave equations are given by

Du′/Dt - f v′ + ∂Φ′/∂x = (1/ρ0) ∂/∂z (ρ0 K_m ∂u′/∂z)    (6)

Dv′/Dt + f u′ + ∂Φ′/∂y = (1/ρ0) ∂/∂z (ρ0 K_m ∂v′/∂z)    (7)

∂u′/∂x + ∂v′/∂y + (1/ρ0) ∂(ρ0 w′)/∂z = 0    (8)

D(∂Φ′/∂z)/Dt + N² w′ = (R/(c_p H)) Q′ + (1/ρ0) ∂/∂z (ρ0 K_h ∂(∂Φ′/∂z)/∂z)    (9)

Dr_i/Dt + w′ ∂(ln γ_i0)/∂z = (P_i - L_i)′/n_i0 + (1/ρ0) ∂/∂z (ρ0 K_c ∂r_i/∂z)    (10)

for i = 1, 2, ..., J. Equation (10) is the photochemical reaction continuity equation for species i. Xun Zhu and J. R. Holton [12] considered only the ozone continuity equation in their model; the effects of nitrogen, hydrogen and chlorine compounds were accounted for by adjusting the reaction rate of O + O₃ = 2O₂. In the present calculation, oxygen compounds (O₃, O(³P), O(¹D)), hydrogen compounds (H, OH, HO₂), nitrogen compounds (N, NO, NO₂, N₂O, NO₃, HNO₃) and chlorine compounds (Cl, ClO, HCl, HOCl) are considered. r_i is the relative perturbation for species i. The term Q′ represents the net heating rate perturbation, Q′ = Q′_heat + Q′_cool, where the heating rate includes solar heating and chemical reaction heating, and Q′_cool is the atmospheric infrared radiation cooling rate. Assuming the existence of wave solutions of the form

ψ′ = ψ̂ e^(z/2H) cos(kx + ly + mz - ωt)    (11)

Here ψ′ represents any one of the fluctuations u′, v′, w′, T′ and r_i (i = 1, 2, ..., J). The factor e^(z/2H) expresses the exponential growth of the gravity wave with height due to the decreasing atmospheric density. k = 2π/λ_x, l = 2π/λ_y and m = 2π/λ_z are the wave numbers in the x, y and z directions respectively. ω is the wave frequency, of the form ω = ω_r + iω_i, where ω_i is defined as the growth rate of the wave: if ω_i < 0 the wave is damped, and if ω_i > 0 the wave is enhanced. Substituting Eq. (11) into Eqs. (6)-(10) yields a set of coupled equations composed of J + 4 equations. After eliminating w′ and Φ′, the system becomes

i ŵ y = A y    (12)

where A is a square matrix with dimension equal to J + 3 = 19 and y is a vector whose elements are the amplitudes of u′, v′, T′ and r_i (i = 1, 2, ..., J). Here ŵ = ω - k u0 - l v0 is the Doppler-shifted frequency. The unknown quantities of the coupled equations can be solved by calculating the eigenvalues of the matrix A.
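Once the matrix A of Eq. (12) has been assembled, the growth rates follow directly from its eigenvalues, as the Python sketch below shows; a random stand-in matrix is used here purely to demonstrate the mechanics, not the actual 19 x 19 model matrix.

import numpy as np

J = 16                                   # number of species continuity equations
rng = np.random.default_rng(0)
A = rng.standard_normal((J + 3, J + 3)) * 1e-4   # placeholder for the assembled model matrix

eigvals = np.linalg.eigvals(A)           # from i*w_hat*y = A*y: lambda = i*w_hat
w_hat = -1j * eigvals                    # so w_hat = -i*lambda
print("largest growth rate w_i =", w_hat.imag.max())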

III.

INVESTIGATIONS

Case 1. The gravity wave in the adiabatic condition

In this case the coupling between the dynamics and the photochemistry is lost, and equation (9) becomes (J. R. Holton [11])

D(∂Φ′/∂z)/Dt + N² w′ = 0    (13)

From Eqs. (6)-(8) and Eq. (13) we can obtain the dispersion relation

ŵ² = f² + N²(k² + l²) / (m² + 1/(4H²))    (14)

Relation (14) shows that gravity waves are dispersive waves and indicates that the growth rate of the gravity wave is zero. Thus, for adiabatic conditions, atmospheric gravity waves are stable.

Case 2. The effect of the cooling process on the gravity wave

In this case (P_i - L_i)′ = 0 and the net heating rate reduces to the Newtonian cooling term; following R. E. Dickinson [13], equation (9) is revised as

D(∂Φ′/∂z)/Dt + N² w′ = -α ∂Φ′/∂z    (15)

The dispersion relation derived from Eqs. (6)-(8) and Eq. (15) is

(ŵ² - f²)(ŵ + iα) = [N²(k² + l²) / (m² + 1/(4H²))] ŵ    (16)

where α is the Newtonian cooling coefficient. Equation (16) shows that the wave frequency is complex. When the wave frequency is much larger than f, from Eq. (16) we obtain

ω_r² ≈ f² + N²(k² + l²) / (m² + 1/(4H²))    (17)

ω_i ≈ -α/2    (18)

Expression (18) shows that the atmospheric cooling process always damps atmospheric gravity waves; the damping rate is equal to half of the Newtonian cooling coefficient.
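The adiabatic relation (14) and the cooled relation (16) can be checked numerically as below; all parameter values (latitude, N, H, the wavelengths and the cooling coefficient α) are typical mesospheric numbers assumed for illustration.

import numpy as np

f = 2 * 7.292e-5 * np.sin(np.deg2rad(45.0))   # Coriolis parameter at 45 deg latitude
N, H = 0.02, 6.0e3                            # Brunt-Vaisala frequency (1/s) and scale height (m)
k, l = 2 * np.pi / 500e3, 0.0                 # horizontal wave numbers (500 km wavelength)
m = 2 * np.pi / 10e3                          # vertical wave number (10 km wavelength)
G = N**2 * (k**2 + l**2) / (m**2 + 1.0 / (4 * H**2))

w_adiabatic = np.sqrt(f**2 + G)               # Eq. (14): real frequency, zero growth rate
print(f"adiabatic intrinsic frequency = {w_adiabatic:.3e} 1/s")

alpha = 2.0e-6                                # Newtonian cooling coefficient (1/s), assumed
# Eq. (16) rearranged: (w^2 - f^2)(w + i*alpha) - G*w = 0, a cubic in w
roots = np.roots([1.0, 1j * alpha, -(f**2 + G), -1j * alpha * f**2])
w = roots[np.argmax(roots.real)]              # the forward-propagating root
print(f"cooled mode: w_i = {w.imag:.3e} 1/s, compare -alpha/2 = {-alpha / 2:.1e}")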

Case 3. The effect of cooling and photochemical heating on the gravity wave


Fig. 1 The profiles of wave growth rate in the stratosphere and lower mesosphere, and in the upper mesosphere

Fig. 2 The profiles of wave growth rate in the stratosphere and lower mesosphere, and in the lower thermosphere

In Fig. 1 and Fig. 2, the dashed line is the result when only the atmospheric cooling process is considered; the solid line shows the result when the atmospheric cooling and heating processes are considered simultaneously.

Fig. 3 Three temperature profiles; the mesopause temperature (at 90 km) is 110 K, 130 K and about 190 K (U.S. Standard temperature) respectively.


Fig. 4 Wave growth rate (ω_i) profiles corresponding to the three temperature profiles.

The temperature at the mesopause is about 190 K, and the calculation indicates that the growth rate is about 3; this result is similar to that of Leovy. For the other two cases the mesopause temperatures are 130 K and 110 K respectively; these temperature profiles are shown in Fig. 3. Fig. 4 shows the gravity wave growth rate profiles in the upper mesosphere and lower thermosphere corresponding to these temperature profiles. The calculations indicate that the maximum growth rates are about 2 and 5 respectively. Therefore, the lower the temperature, the greater the wave growth rate in the mesopause region.

IV.

RESULTS AND DISCUSSIONS

(i) For the gravity wave under the adiabatic condition, atmospheric gravity waves are dispersive and stable. (ii) For the effect of the cooling process on the gravity wave, the photochemical heating process can induce a comparatively strong enhancement of gravity waves at the mesopause at lower temperatures; in the summer polar mesopause region, this growth rate may be greater by about one order of magnitude than the growth rate of gravity waves at other seasons and locations. (iii) For the influence of cooling and photochemical heating on the gravity wave, the photochemistry has a damping effect on gravity waves in most regions of the mesosphere, but a destabilizing effect on gravity waves in the mesopause region. Considering the related works, new observations and experimental calculations, these investigations lead to the results of this paper.

V.

CONCLUSION

On the basis of the model of Xun Zhu and J. R. Holton [12], a gravity wave model has been prepared. This model deals with the continuity equations of oxygen, hydrogen and other gases and with the photochemical reactions in the mesospheric region. The diabatic effect of the photochemistry influences the propagation of the gravity waves. For the adiabatic condition, atmospheric gravity waves are stable. The atmospheric cooling process damps atmospheric gravity waves, with a damping rate approximately equal to half of the Newtonian cooling coefficient. In the mesospheric region, photochemical reaction heating increases the damping of the gravity wave. A nonadiabatic process of photochemical heating is very important; these chemical processes enhance the gravity wave. The amplifying effect of photochemistry on the gravity waves is equivalent to the amplifying effect due to the decreasing of atmospheric density in the mesospheric region. The chemical heating is closely related to the density of atomic oxygen: the mixing ratio of atomic oxygen increases with height, and the photochemical heating rate is in proportion to the mixing ratio of atomic oxygen.

VI. FUTURE WORK

(1) Interaction between waves is very common in the mesospheric region. The background field of a wave can be modulated by other waves, and the propagation of one wave is influenced by the other waves. Our linear model is limited in studying this process; this process of non-linear wave interaction is well worth future study.
(2) The gravity wave is one kind of atmospheric fluctuation with a wide range of frequencies. It will be necessary to investigate in detail the spectral features of the gravity wave instability induced by photochemistry.
(3) Numerous studies show that there are big differences in minor gas constituent distributions between model calculations and actual observations; many model calculations underestimate the ozone and atomic oxygen in the mesopause and lower thermosphere region. This is an open question at present for future work.

ACKNOWLEDGMENT
The authors are grateful to the referee for the constructive criticism and valuable suggestions for the improvement of this paper.

REFERENCES
[1] Lindzen, R. S., "Turbulence and stress owing to gravity wave and tidal breakdown," J. Geophys. Res., 86, 9707-9714, 1981.
[2] Fritts, D. C., "Gravity wave saturation in the middle atmosphere: a review of theory and observations," Reviews of Geophysics and Space Physics, 22, 275-308, 1984.
[3] Garcia, R. R. and S. Solomon, "The effect of breaking gravity waves on the dynamical and chemical composition of the mesosphere and lower thermosphere," J. Geophys. Res., 90, 3850-3868, 1985.
[4] Lubken, F.-J., "Seasonal variation of turbulent energy dissipation rates at high latitudes as determined by in situ measurements of neutral density fluctuations," J. Geophys. Res., 102(D12), 13441-13456, 1997.
[5] McDade, I. C. and E. J. Llewellyn, "An assessment of the H + O3 heating efficiencies in the nighttime mesopause region," Ann. Geophysicae, 11, 47-51, 1993.
[6] Mlynczak, M. G. and S. Solomon, "A detailed evaluation of the heating efficiency in the middle atmosphere," J. Geophys. Res., 98(D6), 10517-10541, 1993.
[7] Riese, M., D. Offermann, and G. Brasseur, "Energy released by recombination of atomic oxygen and related species at mesopause heights," J. Geophys. Res., 99, 14585-14594, 1994.
[8] Meriwether, J. W. and M. G. Mlynczak, "Is chemical heating a major cause of the mesosphere inversion layer?," J. Geophys. Res., 100(D1), 1379-1387, 1995.
[9] Xu Jiyao, "The study of the gravity wave instability induced by photochemistry in summer polar mesopause region," Chinese Science Bulletin, 45(3), 267-270, 2000.
[10] Wang Yongmei, Wang Yingjian, Xu Jiyao, "The influence of non-linear gravity wave on atmospheric oxygen and hydrogen compounds," Science in China (in Chinese), Ser. A, 30(supp.), 84-87, 2000.
[11] Holton, J. R., An Introduction to Dynamic Meteorology, Chapter 9, pp. 161-183, Academic Press, Inc., 1972.
[12] Xun Zhu and J. R. Holton, "Photochemical damping of inertio-gravity waves," J. Atmos. Sci., 43, 2578-2584, 1986.
[13] Dickinson, R. E., "A method of parameterization of infrared cooling between altitudes of 30 km and 70 km," J. Geophys. Res., 78, 4451-4457, 1973.
[14] Offermann, D., "Long-term trends and solar cycle variations of mesospheric temperature and dynamics," J. Geophys. Res., vol. 115, D18127, 19 pp., 2010.

AUTHORS
Vivekanand Yadav is currently pursuing a D.Phil at Allahabad University, Allahabad-211002 (India). He obtained his B.E. in ECE from Dr. B.R.A. University, Agra, India, and his M.Tech (EC) at HBTI Kanpur from UPTU, Lucknow, India. His areas of interest are filter design, digital signal processing and atmospheric dynamics.

R. S. Yadav is presently working as a Reader at Allahabad University, Allahabad-211002, India. He obtained his D.Phil from Allahabad University, Allahabad-211002, India. His areas of interest are digital electronics and atmospheric dynamics.


ANALYTICAL MODELLING OF SUPERCHARGING DIESEL RADIAL CENTRIFUGAL COMPRESSORS WITH VANES-BASED DIFFUSER
Waleed F. Faris1, Hesham A. Rakha2, Raed M. Kafafy1, Moumen Idres1, Salah A.M. Elmoselhy1

1 Department of Mechanical Engineering, International Islamic University Malaysia, Gombak, Kuala Lumpur, 53100 Malaysia
2 Virginia Tech Transportation Institute, Virginia Polytechnic Institute and State University, 3500 Transportation Research Plaza, Blacksburg, VA 24061, USA

ABSTRACT
The supercharging diesel radial centrifugal compressor with vanes-based diffuser is a key element in diesel power trains that has been extensively, yet often inaccurately, modelled. This paper presents and validates an analytical model of this type of compressor. The study developed analytical models of the shaft torque, the power required to drive the rotor, the velocities at the diffuser, and the efficiency of a supercharging diesel radial centrifugal compressor with vanes-based diffuser. These analytically developed models are widely valid, follow entirely from the principles of physics, and yield results with explainable mathematical trends. The present models can help in accurately analyzing the performance of supercharging diesel radial centrifugal compressors with vanes-based diffuser with respect to steady-state response. Having addressed flaws in corresponding models presented in key references in this research area, the present analytical models can also help in developing and assessing supercharging diesel radial centrifugal compressor technologies.

KEYWORDS: Radial centrifugal compressor, diesel powertrain, modeling, intelligent transportation systems.

I. INTRODUCTION

Constructing an internal combustion engine with a compressor was originally investigated in the development of British turbojet engines [1]. The use of superchargers has been increasing in response to strengthened automotive exhaust emission and fuel consumption regulations for global environmental protection, through reduced engine weight due to reduced engine displacement relative to naturally aspirated engines. This has been generating a demand for a centrifugal compressor that provides wide and stable operation [2]. In diesel power trains, a compressor may be required by the engine either for good scavenging in two-stroke engines, or as a means of raising the power output in four-stroke engines [3, 4]. As the key tool to develop diesel power trains, modelling has been contributing significantly to diesel powertrain development [5]. Modelling can reduce acquisition cycles and reduce costs by enabling simulation to effectively create a virtual development environment in the design, prototyping and testing phases. This is particularly true in Intelligent Transportation

Systems (ITS) applications [6]. The most widely used models of automotive engine compressors are mean value models. Wahlström and Eriksson [7] developed a mean value model of a diesel engine with VGT and EGR. The model describes the dynamics in the manifold pressures, turbocharger, EGR, and actuators with a few states, in order to have shorter simulation times than the GT-POWER and WAVE models. With Dymola it is possible to create a suitable vehicle model, yet not all inputs are controllable in simulation [8]. Thus, in order to make it easy to use, a graphical user interface (GUI) is needed. In the STARS (Scania Truck And Road Simulation) model, MATLAB is used to manage the exchangeable datasets and to build the GUI [8]. From the GUI it is possible to change all parameters, initial values and datasets used in the model. In the STARS model, it is possible to choose from a variety of datasets on numerous aspects, such as road characteristics and powertrain characteristics.
Towards small mass flow rates, the operation of a compressor is limited by flow and system instabilities such as surge and stall [9]. In an endeavour to better understand how to suppress these instabilities, which can damage the supercharging compressor, the operation in these regimes and the inception mechanisms of rotating stall and surge in radial compressors were studied profoundly in [10-15]. The specific design process and requirements for centrifugal compressors in various applications are given in detail in [16, 17]. The expansion of the operational range, and the internal flow in centrifugal compressors that is associated with three-dimensional and unsteady flow phenomena near the surging range, have been the focus of numerous studies. Empirically, Ibaraki et al. [18] studied, through detailed flow measurements, the centrifugal compressor flow phenomena caused by its complex blade geometry. Lawless [19] reviewed centrifugal compressor instability behaviour and the experiments performed for detecting rotating stall phenomena leading to a surge condition in centrifugal compressors. Empirically and numerically, Mitsubishi Heavy Industries, Ltd. (MHI) has developed a centrifugal compressor with an operational range wider than that of conventional units, based on experimental and numerical analysis of the flow phenomena and control of the tip leakage vortices at the impeller blades [2]. In that study, MHI measured pressure fluctuations and used unsteady numerical analysis in order to expand the operational range of supercharging compressors to cope with the stringent exhaust emission and fuel consumption regulations. The centrifugal compressor developed at MHI improves engine torque by increasing the boost pressure for acceleration at the operational point of a small flow rate [2].
Analytical models emerged as a tool that provides widely valid modelling of compressors. Originally, Moss et al. [20] developed an analytical model of the rotor of a compressor by analyzing the fluid flow through the rotor. In that research, analytical models had been developed for the shaft torque, the power required to drive the rotor of the compressor, and the efficiency of the compressor. Yet, they had not paid enough attention to the stagnation states throughout the compressor and their potential influence on such a model. More recently, Taylor [21] developed an analytical model of the power required to drive the rotor of the compressor and of the efficiency of the compressor. However, he did not differentiate clearly in his analytical model between the isentropic pressure and the pressure in real processes. Recently, Heywood [22] modelled analytically the shaft torque, the power required to drive the rotor of the compressor, and the efficiency of the compressor. Yet, his analytical model of the power required to drive the rotor of the compressor is inaccurate. Therefore, the present research presents an analytical model of radial centrifugal compressors that addresses and corrects these flaws.
This paper presents an analytical model of the shaft torque, the power required to drive the rotor, the velocities at the diffuser, and the efficiency of a supercharging diesel radial centrifugal compressor with vanes-based diffuser. The paper presents an analysis of the fluid flow through the compressor and elucidates an analytical model developed entirely from the principles of physics. It starts with developing an analytical model of the torque and power required to drive the supercharging diesel radial centrifugal compressor. Following from this, the velocities at the diffuser of the supercharging diesel radial centrifugal compressor are analytically developed. Finally, the study ends with developing an analytical model of the efficiency of the supercharging diesel radial centrifugal compressor.


II. SUPERCHARGING DIESEL RADIAL CENTRIFUGAL COMPRESSOR WITH VANES-BASED DIFFUSER ANALYTICAL MODEL

The requirements for small radial compressors used in automotive supercharging applications are ambitious [9]. For over a century now, internal combustion (IC) engines have been coupled with superchargers to achieve improved performance. The operating characteristics of a given supercharged IC engine rely on the proper selection of supercharging compressors [23]. A wide operating range and high efficiency are required throughout the operation envelope [9]. Most powertrain designers prefer centrifugal compressors to axial compressors [21]. Radial centrifugal compressors are particularly widely used in automotive power trains [21]. These compressors are relatively simple in construction, small in size, and cheap to manufacture [21]. In addition, they usually generate a moderate level of noise and have very good efficiencies in the range of pressure ratios from 1.5 to 3, where many superchargers are designed to operate [21]. The main disadvantage of radial centrifugal compressors is that their performance range is limited by surge and choking [21]. Surging, contrary to choking, occurs when the flow rate of a centrifugal compressor is reduced; this generates an overall system pulsation and leads to an operational limit. In order to expand the operational range of a centrifugal compressor, it is necessary to lower the flow rate limit at which surging occurs [2]. Mechanical supercharging of radial centrifugal compressors is advantageous in light of the facts that it can operate over a wide range of speeds and is suitable at all engine operating conditions, including engine start-up, when turbo-lag would occur in the case of turbocharging [22]. The schematic configuration of the supercharging diesel compressor in diesel power trains is shown in Figure 1.

Fig 1.0 Schematic diagram of supercharging diesel compressor in diesel powertrain equipped with an electronic throttle control (ETC) [24]

The particular type of supercharged diesel compressor under investigation in this study, i.e. the supercharging diesel radial centrifugal compressor with vanes-based diffuser, is schematically shown in Figure 2. The states indicated in Figure 2 are elaborated on the h-s diagram of the supercharging diesel radial centrifugal compressor with vanes-based diffuser, as depicted in Figure 3. The velocity diagrams of the diesel radial centrifugal compressor at the inlet of the impeller with pre-whirl and at the exit of the diffuser are shown in Figure 4 and Figure 5, respectively.


Fig 2.0 Schematic diagram of supercharging diesel radial centrifugal compressor with vanes-based diffuser

Fig 3.0 h-s diagram of the supercharging diesel radial centrifugal compressor with vanes-based diffuser. Notation: Px denotes static pressure, P0x stagnation pressure, and P0xs isentropic stagnation pressure.

Fig 4.0 The velocity diagram of the diesel radial centrifugal compressor at the entrance of the impeller with pre-whirl


Fig 5.0 The velocity diagram of the diesel radial centrifugal compressor at the exit of diffuser

Since air can be treated as a non-viscous fluid, the flow in diesel engines is non-viscous and the gas flow can thus be conveniently treated as one-dimensional [25, 26]. Hence, a one-dimensional analysis of air flow is adopted in this paper, so that one vector can represent all particles of the air stream throughout the compressor. In the analysis of the air flow throughout the compressor, the following assumptions are made in this study [3]:
1. Since there is no change of phase in the air flow throughout the compressor and the minimum temperature of the air flow is far above the critical point of atmospheric air, air is treated as an ideal gas;
2. Since the air flow into the compressor is turbulent and the change in the flow rate of momentum through the rotor of the compressor equals the resultant of the forces acting upon the stream of air, the air flow is steady and the rotor velocity is uniform based on average speed;
3. There is no leakage of air from the compressor, i.e. the principle of conservation of mass applies to the air flow.
In order to analytically model the supercharging diesel radial centrifugal compressor with vanes-based diffuser, the following approach is adopted:
a. Analytical modelling of the torque and power required to drive the supercharging diesel radial centrifugal compressor;
b. Analytical modelling of the velocities at the diffuser of the supercharging diesel radial centrifugal compressor;
c. Analytical modelling of the efficiency of the supercharging diesel radial centrifugal compressor.

2.1 Analytical sub-model of the torque and power required to drive the supercharging diesel radial centrifugal compressor with vanes-based diffuser
Since the forces acting upon the rotor of the compressor must be in balance with the change in momentum flow rate, the following equation follows infinitesimally, neglecting body forces such as the weight of the air charge:

$$dF_{RW} - d(pA) = \dot{m}_{air}\, dc \qquad (1)$$

By applying equation (1) to the states at the entrance and exit of the rotor of the compressor, i.e. states 1 and 2 on Figure 2, the following follows:

$$F_{RW} - (p_2 A_2 - p_1 A_1) = \dot{m}_{air}\,(c_2 - c_1) \qquad (2)$$

Each pressure-area vector has three components: (a) an axial force, denoted by subscript a in Figure 4, causing end thrust; (b) a radial force, denoted by subscript r in Figure 4 and Figure 5, causing compressive stresses; (c) a tangential component, denoted by subscript t in Figure 4 and Figure 5, which vanishes in the rotor since the net pressure force in the tangential direction is zero [3]. Since the axial and radial components of the air flow velocities produce thrusts which result in no displacements, the only component of the air flow velocities that contributes to energy transfer in the compressor is the tangential component. Therefore, following from equation (2), the resultant force of the rotor on the air flow in the tangential direction can be modelled as:

$$F_{RW} = \dot{m}_{air}\,(c_{t2} - c_{t1}) \qquad (3)$$

Thus, following from Newton's third law of motion, the reaction exerted by the air flow to this force, Ra, becomes:

$$R_a = \dot{m}_{air}\,(c_{t1} - c_{t2}) \qquad (4)$$

Since moments about the axis of the rotor must balance as well following from the principle of the conservation of angular momentum, equation (4) hence leads to the following:

$$T_C = \dot{m}_{air}\,(c_{t2}\, r_2 - c_{t1}\, r_1) \qquad (5)$$

Thus, the thermodynamic power required to drive the rotor of the compressor, ẆC, can be modelled as follows, following from equation (5):

$$\dot{W}_C = \dot{m}_{air}\, \omega_c\,(c_{t2}\, r_2 - c_{t1}\, r_1) \qquad (6)$$

In order to investigate analytically the relation between TC and ẆC, modelled in equations (5) and (6), respectively, and the velocity components illustrated in Figure 4 and Figure 5, let us investigate the analytical relation between the angular speed of the rotor, ωc, and the tangential velocity of the rotor, U:

$$U = \omega_c\, r \qquad (7)$$

Thus, by combining equations (5) and (7), the developed analytical model of TC can be rewritten as:

$$T_C = \frac{\dot{m}_{air}}{\omega_c}\,(c_{t2}\, U_2 - c_{t1}\, U_1) \qquad (8)$$

Likewise, by combining equations (6) and (7), the developed analytical model of ẆC can be rewritten as:

$$\dot{W}_C = \dot{m}_{air}\,(c_{t2}\, U_2 - c_{t1}\, U_1) \qquad (9)$$

Equations (8) and (9) represent the thermodynamic torque and power required to drive the rotor of the compressor based on the velocity diagrams indicated in Figure 4 and Figure 5 without taking into account the bearing friction losses and fanning losses. This is particularly true in light of the fact that

the velocity at the exit of the rotor, i.e. at state 2, reflects the effect of losses due to heat transfer and due to fluid friction. Since the inlet velocity to the rotor is axial, the tangential velocity component at state 1 in Figure 2, i.e. ct1, becomes zero. Thus, the developed analytical model of the thermodynamic torque required to drive the rotor, TC, indicated in equation (8) can be rewritten as:

$$T_C = \frac{\dot{m}_{air}\, c_{t2}\, U_2}{\omega_c} \qquad (10)$$

Likewise, the developed analytical model of ẆC in equation (11) can be rewritten as:

$$\dot{W}_C = \dot{m}_{air}\, c_{t2}\, U_2 \qquad (11)$$

Since the radial velocity of the air flow at the diffuser, cr2, is easier to measure than the tangential velocity of the air flow at the diffuser, ct2, for instance using a miniature X-wire probe [27], equations (10) and (11) can be rewritten as functions of cr2 rather than ct2, using the trigonometric relations in the velocity diagram shown in Figure 5. It can be conceived from Figure 5 that:

$$U_2 = c_{t2} + c_{r2}\,\cot\beta_2 \qquad (12)$$

Hence, equation (12) can be rearranged as follows:

$$c_{t2} = U_2\left(1 - \frac{c_{r2}}{U_2}\,\cot\beta_2\right) \qquad (13)$$

Now, by combining equations (13) and (10), the developed analytical model of the thermodynamic torque required to drive the rotor, TC, indicated in equation (10) can be rewritten as:
$$T_C = \frac{\dot{m}_{air}\, U_2^2}{\omega_c}\left(1 - \frac{c_{r2}}{U_2}\,\cot\beta_2\right) \qquad (14)$$

Likewise, by combining equations (13) and (11), the developed analytical model of the thermodynamic power required to drive the rotor, ẆC, indicated in equation (11) can be rewritten as:

$$\dot{W}_C = \dot{m}_{air}\, U_2^2\left(1 - \frac{c_{r2}}{U_2}\,\cot\beta_2\right) \qquad (15)$$

If the slip of the fluid becomes negligible by sufficiently increasing the number of the blades of the impeller, i.e. the relative velocity of the air flow to the rotor becomes entirely in the radial direction as dictated by the blade shape, characterised by the impeller blade angle β1 indicated in Figure 4, then the tangential velocity of the air flow at the diffuser, ct2, equals the tangential velocity of the rotor at the diffuser, U2. Therefore, the developed analytical model of the thermodynamic torque required to drive the rotor, TC, indicated in equation (10) can be rewritten as a function of the compressor air mass flow rate, the tangential velocity of the rotor at the diffuser, and the rotor angular speed as follows:

$$T_C = \frac{\dot{m}_{air}\, U_2^2}{\omega_c} \qquad (16)$$

Likewise, the developed analytical model of the thermodynamic power required to drive the rotor of the compressor, ẆC, in equation (11) can be rewritten as a function of the compressor air mass flow rate and the tangential velocity of the rotor at the diffuser as follows:
$$\dot{W}_C = \dot{m}_{air}\, U_2^2 \qquad (17)$$
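As a numerical illustration of equations (14)-(17), the following short Python sketch evaluates the thermodynamic torque and power for an assumed operating point; the function name and all numerical values are illustrative assumptions, not data from this paper.

```python
import math

def euler_torque_power(m_dot, u2, omega_c, beta2_deg=None, cr2=0.0):
    """Thermodynamic torque and power to drive the rotor, eqs. (14)-(17).

    With no backsweep term (beta2_deg=None), the no-slip, axial-inlet
    case of eqs. (16) and (17) is recovered: W_C = m_dot * U2**2.
    """
    if beta2_deg is None:
        factor = 1.0                                  # eqs. (16)/(17)
    else:
        cot_b2 = 1.0 / math.tan(math.radians(beta2_deg))
        factor = 1.0 - (cr2 / u2) * cot_b2            # eqs. (14)/(15)
    power = m_dot * u2**2 * factor                    # in W
    torque = power / omega_c                          # in N m
    return torque, power

# Assumed operating point: 0.1 kg/s, U2 = 300 m/s, rotor at 8000 rad/s
T_C, W_C = euler_torque_power(m_dot=0.1, u2=300.0, omega_c=8000.0)
print(f"T_C = {T_C:.3f} N m, W_C = {W_C / 1000:.1f} kW")
```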

By applying the principle of conservation of energy to the compressor, i.e. the first law of thermodynamics, to a control volume around the compressor, the steady flow energy equation on this system is
$$\dot{W}_C - \dot{Q}_C = \dot{m}_{air}\left[\left(h + \frac{c^2}{2} + g z\right)_{out} - \left(h + \frac{c^2}{2} + g z\right)_{in}\right] \qquad (18)$$

Since the change in altitude between inlet and outlet of a diesel radial centrifugal compressor is negligible, the term gz in equation (18) is negligible. Because the enthalpies referred to in equation (18) are measured in compressor passages in which the velocity is considerable, rather than in large tanks for instance, their stagnation values should be used [21]. The stagnation (total) pressure is defined as the pressure attained if the gas is isentropically brought to rest [22]. Hence, it follows from equation (18) that:

$$\dot{W}_C - \dot{Q}_C = \dot{m}_{air}\,(h_{02} - h_{01}) \qquad (19)$$

Since the air flow throughout the compressor is treated as an ideal gas, equation (19) can be rewritten as follows, following from Figure 3 and Figure 6:

$$\dot{W}_C - \dot{Q}_C = \dot{m}_{air}\, c_P\,(T_{02} - T_{01}) \qquad (20)$$

Fig 6.0 T-S diagram of the supercharging diesel radial centrifugal compressor with vanes-based diffuser

By recalling the derived analytical isentropic relation between temperature and pressure from another research paper [28], equation (20) can be rewritten as:

$$\dot{W}_C - \dot{Q}_C = \dot{m}_{air}\, c_P\, T_{01}\left(r_P^{\frac{k-1}{k}} - 1\right) \qquad (21)$$

Since the heat transfer mode in the compressor is forced convection, the thermodynamic power required to drive the rotor of the compressor, ẆC, can therefore be modelled as a function of the compressor air mass flow rate, temperature, and pressure ratio, following from equation (21), as:

$$\dot{W}_C = h_C\,(T_{02} - T_W) + \dot{m}_{air}\, c_P\, T_{01}\left(r_P^{\frac{k-1}{k}} - 1\right) \qquad (22)$$

Thus, the thermodynamic torque required to drive the rotor, TC, can also be modelled as a function of the compressor air mass flow rate, temperature, and pressure ratio following from equation (22) as follows:
$$T_C = \frac{1}{\omega_c}\left[h_C\,(T_{02} - T_W) + \dot{m}_{air}\, c_P\, T_{01}\left(r_P^{\frac{k-1}{k}} - 1\right)\right] \qquad (23)$$
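The forced-convection formulation of equations (22) and (23) can likewise be sketched in Python. The lumped heat-transfer coefficient hC (taken here in W/K) and the operating values below are assumed for illustration only.

```python
def drive_power_torque(m_dot, T01, T02, Tw, rp, omega_c,
                       cp=1005.0, k=1.4, h_c=15.0):
    """Power and torque to drive the rotor including the forced-convection
    heat loss, eqs. (22) and (23); h_c (W/K) lumps the geometry-dependent
    heat-transfer characteristics and is an assumed value here."""
    q_loss = h_c * (T02 - Tw)                                # heat lost, W
    w_flow = m_dot * cp * T01 * (rp**((k - 1.0) / k) - 1.0)  # flow work, W
    power = q_loss + w_flow                                  # eq. (22)
    torque = power / omega_c                                 # eq. (23)
    return power, torque

# Assumed operating point: pressure ratio r_P = 2, ambient stagnation inlet
W_C, T_C = drive_power_torque(m_dot=0.1, T01=298.0, T02=380.0, Tw=350.0,
                              rp=2.0, omega_c=8000.0)
print(f"W_C = {W_C / 1000:.2f} kW, T_C = {T_C:.3f} N m")
```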

Since the kinetic energy that generates the velocities at the diffuser of the supercharging diesel radial centrifugal compressor is key to estimating the thermodynamic power required to drive the rotor of the compressor, ẆC, an analytical sub-model of the velocities at the diffuser of the supercharging diesel radial centrifugal compressor with vanes-based diffuser is presented in the next section.

2.2 Analytical sub-model of the velocities at the diffuser of the supercharging diesel radial centrifugal compressor
In order to analytically model c2, cr2 , and ct2 as a function of the diesel radial centrifugal compressor input parameters and characteristic parameters, let us revisit the trigonometric relations between the velocities indicated in Figure 4. Trigonometrically, it can be conceived from Figure 4 that:

$$c_{r1}^2 = c_{a1}^2 + (U_1 - c_{t1})^2 \qquad (24)$$

Equation (24) can be rewritten as:


$$c_{r1}^2 = c_{a1}^2 + c_{t1}^2 + U_1^2 - 2\,U_1\, c_{t1} \qquad (25)$$

Following from the trigonometric relations between the velocities indicated in Figure 4, equation (25) can be simplified as:

$$c_{r1}^2 = c_1^2 + U_1^2 - 2\,U_1\, c_{t1} \qquad (26)$$

Similarly, the following holds true at the exit of the rotor:

$$c_{r2}^2 = c_2^2 + U_2^2 - 2\,U_2\, c_{t2} \qquad (27)$$

Substituting equations (26) and (27) into equation (9) leads to:

$$\dot{W}_C = \frac{\dot{m}_{air}}{2}\left[(c_2^2 - c_1^2) + (c_{r1}^2 - c_{r2}^2) + (U_2^2 - U_1^2)\right] \qquad (28)$$

If the slip of the fluid becomes negligible by sufficiently increasing the number of the blades of the impeller, the tangential velocity of the air flow at the diffuser, ct2, then equals the tangential velocity of the rotor at the diffuser, U2, and thus equation (28) can be rewritten as:
$$\dot{W}_C = \frac{\dot{m}_{air}}{2}\left[(c_2^2 - c_1^2) + (c_{r1}^2 - c_{r2}^2) + (c_{t2}^2 - U_1^2)\right] \qquad (29)$$

Now, in order to analytically formulate the radial velocity of the air flow at the diffuser, cr2, let us combine equations (29) and (22) together as follows:
$$c_{r2}^2 = c_2^2 - c_1^2 + c_{r1}^2 + c_{t2}^2 - U_1^2 - 2\, w_C \qquad (30)$$

where, for compactness, $w_C \equiv \dot{W}_C/\dot{m}_{air} = h_C\,(T_{02} - T_W)/\dot{m}_{air} + c_P\, T_{01}\left(r_P^{\frac{k-1}{k}} - 1\right)$ denotes the specific work input of equation (22).

Let us now combine equations (15) and (22) together as follows:


$$c_{r2} = \frac{c_{t2}}{\cot\beta_2} - \frac{w_C}{c_{t2}\,\cot\beta_2} \qquad (31)$$

The absolute velocity of air flow at the exit of the diffuser, c2, can be thus analytically formulated by combining equations (30) and (31) together as elucidated next:
$$c_2 = \sqrt{\left(\frac{c_{t2}}{\cot\beta_2} - \frac{w_C}{c_{t2}\,\cot\beta_2}\right)^2 + c_1^2 - c_{r1}^2 - c_{t2}^2 + U_1^2 + 2\, w_C} \qquad (32)$$

Since the following can be conceived from the trigonometric relations between the velocities indicated in Figure 4:

$$c_2^2 = c_{t2}^2 + c_{r2}^2 \qquad (33)$$

It follows thus from combining equations (32), (33), and (31) that:
$$c_{t2}^2 + \left(\frac{c_{t2}}{\cot\beta_2} - \frac{w_C}{c_{t2}\,\cot\beta_2}\right)^2 = \left(\frac{c_{t2}}{\cot\beta_2} - \frac{w_C}{c_{t2}\,\cot\beta_2}\right)^2 + c_1^2 - c_{r1}^2 - c_{t2}^2 + U_1^2 + 2\, w_C \qquad (34)$$

Hence, simplifying equation (34) leads to the following analytical formulation of the tangential velocity of the air flow at the diffuser, ct2:
$$c_{t2} = \sqrt{\frac{U_1^2 + c_1^2 - c_{r1}^2}{2} + \frac{h_C\,(T_{02} - T_W)}{\dot{m}_{air}} + c_P\, T_{01}\left(r_P^{\frac{k-1}{k}} - 1\right)} = \sqrt{\frac{U_1^2 + c_1^2 - c_{r1}^2}{2} + w_C} \qquad (35)$$

By substituting equations (35) in (31):


$$c_{r2} = \frac{1}{\cot\beta_2}\left(c_{t2} - \frac{w_C}{c_{t2}}\right), \qquad c_{t2} = \sqrt{\frac{U_1^2 + c_1^2 - c_{r1}^2}{2} + w_C} \qquad (36)$$

By substituting equations (36) in (33):


$$c_2 = \sqrt{c_{t2}^2 + \frac{1}{\cot^2\beta_2}\left(c_{t2} - \frac{w_C}{c_{t2}}\right)^2} \qquad (37)$$

with ct2 as in equation (36).
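A minimal Python sketch of equations (35)-(37), under the same assumed illustrative values as above; wC is the specific work input of equation (22) per unit mass flow.

```python
import math

def diffuser_velocities(m_dot, u1, c1, cr1, beta2_deg,
                        T01, T02, Tw, rp, cp=1005.0, k=1.4, h_c=15.0):
    """Diffuser velocities from eqs. (35)-(37); w_c is the specific work
    input of eq. (22) per unit mass flow. All inputs are assumed values."""
    w_c = h_c * (T02 - Tw) / m_dot + cp * T01 * (rp**((k - 1.0) / k) - 1.0)
    ct2 = math.sqrt((u1**2 + c1**2 - cr1**2) / 2.0 + w_c)   # eq. (35)
    cot_b2 = 1.0 / math.tan(math.radians(beta2_deg))
    cr2 = (ct2 - w_c / ct2) / cot_b2                        # eq. (36)
    c2 = math.hypot(ct2, cr2)                               # eq. (37) via (33)
    return ct2, cr2, c2

ct2, cr2, c2 = diffuser_velocities(m_dot=0.1, u1=150.0, c1=120.0, cr1=180.0,
                                   beta2_deg=60.0, T01=298.0, T02=380.0,
                                   Tw=350.0, rp=2.0)
print(f"c_t2 = {ct2:.1f} m/s, c_r2 = {cr2:.1f} m/s, c_2 = {c2:.1f} m/s")
```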

As a matter of fact, supercharging compressors do not change the mass flow rate that goes into the intake manifold and cylinders, due to the conservation of mass [21, 29]. The performance of these compressors is usually presented as a map of air mass flow rate versus pressure ratio, showing lines of constant efficiency and constant compressor impeller speed [30]. Therefore, the present research proposes that, instead of using an empirical formula for estimating the compressor air mass flow rate, an analytical model of the mass flow rate of air is used that was analytically derived from the first principles of physics in another research paper, by linking the intake manifold to the engine cylinders rather than to the supercharging compressor [24]. Hence, for a given intercooler efficiency, ηIntercooler, and for a given volumetric efficiency of the engine, ηV, the air mass flow rate, ṁair, can be analytically modelled as follows:


$$\dot{m}_{air} = \frac{P_i\, \eta_V\, V_d\, N_m}{60\, R\, T_i\, n} \qquad (38)$$

Since in reality much of the compressor rotor exit kinetic energy pressure head, i.e. (P02 − P2) on Figure 3, is usually dissipated as a heat loss and is not converted into pressure rise, and because the stagnation enthalpy at state 2, h02, equals the stagnation enthalpy at state 3, h03, as conceived from Figure 3, the following follows [22]:


$$P_i = P_2 \qquad (39)$$

$$T_i = T_2 - \eta_{Intercooler}\,(T_2 - T_{RefAmb}) \qquad (40)$$

Therefore, substituting equations (39) and (40) into equation (38) leads to the following:

$$\dot{m}_{air} = \frac{P_2\, \eta_V\, V_d\, N_m}{60\, R\, n\,\left[T_2 - \eta_{Intercooler}\,(T_2 - T_{RefAmb})\right]} \qquad (41)$$
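Equation (41) is straightforward to evaluate. The sketch below assumes a 2.0-litre four-stroke engine (n = 2) and treats R as the specific gas constant of air (about 287 J/(kg K)), since the mass-flow form requires the specific constant; all values are illustrative assumptions.

```python
def air_mass_flow(P2, eta_v, Vd, Nm, n, T2, eta_ic, T_ref=298.0, R=287.0):
    """Speed-density air mass flow of eq. (41): the intake manifold is fed
    at compressor exit pressure P2 (eq. 39) and at the intercooled
    temperature of eq. (40). Units are SI, except Nm in rev/min and
    Vd in m^3 per cycle; R is assumed to be the specific constant of air."""
    Ti = T2 - eta_ic * (T2 - T_ref)                       # eq. (40)
    return (P2 * eta_v * Vd * Nm) / (60.0 * R * Ti * n)   # eqs. (38), (41)

# Assumed example: 2.0 L four-stroke engine (n = 2) at 2500 rev/min
m_dot = air_mass_flow(P2=180e3, eta_v=0.9, Vd=2.0e-3, Nm=2500.0, n=2.0,
                      T2=380.0, eta_ic=0.7)
print(f"m_dot_air = {m_dot:.4f} kg/s")
```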

Now, by substituting equation (41) into (35), ct2 can be rewritten to analytically formulate the tangential velocity of the air flow at the diffuser as:
$$c_{t2} = \sqrt{\frac{U_1^2 + c_1^2 - c_{r1}^2}{2} + \frac{60\, R\, n\, h_C\left[T_2 - \eta_{Intercooler}(T_2 - T_{RefAmb})\right](T_{02} - T_W)}{P_2\, \eta_V\, V_d\, N_m} + c_P\, T_{01}\left(r_P^{\frac{k-1}{k}} - 1\right)} \qquad (42)$$

Similarly, by substituting equation (41) into (36), cr2 can be rewritten to analytically formulate the radial velocity of the air flow at the diffuser as:
$$c_{r2} = \frac{1}{\cot\beta_2}\left(c_{t2} - \frac{\tilde{w}_C}{c_{t2}}\right) \qquad (43)$$

where ct2 is given by equation (42) and $\tilde{w}_C$ denotes the specific work input wC evaluated with ṁair from equation (41), i.e. the sum of the last two terms under the square root in equation (42).

Likewise, by substituting equation (41) into (37), c2 can be rewritten to analytically formulate the absolute velocity of the air flow at the exit of the diffuser as:
$$c_2 = \sqrt{c_{t2}^2 + c_{r2}^2} \qquad (44)$$

with ct2 and cr2 given by equations (42) and (43), respectively.

Since efficiency plays a key role in accurately estimating the thermodynamic power required to drive the rotor of the compressor, ẆC, the following section presents the third and last sub-model in this paper, which investigates analytically the efficiency of the supercharging diesel radial centrifugal compressor with vanes-based diffuser.

2.3 Analytical sub-model of efficiency of supercharging diesel radial centrifugal compressor with vanes-based diffuser
In order to account for the bearing friction losses and fanning losses in the compressor, the efficiency of the compressor is modelled analytically as well in this study. The compressor total-to-total isentropic efficiency, ηCTT, is by definition given as follows, following from the second law of thermodynamics [22, 31]:

$$\eta_{CTT} = \frac{\text{reversible thermodynamic power requirement}}{\text{actual thermodynamic power requirement}} \qquad (45)$$

Thus, following from Figure 3 and equation (22), equation (45) can be rewritten as:
$$\eta_{CTT} = \frac{\dot{m}_{air}\,(h_{02s} - h_{01})}{h_C\,(T_{02} - T_W) + \dot{m}_{air}\, c_P\, T_{01}\left[\left(\frac{p_{02}}{p_{01}}\right)^{\frac{k-1}{k}} - 1\right]} \qquad (46)$$

Hence, equation (46) can be rewritten as:

$$\eta_{CTT} = \frac{\dot{m}_{air}\, c_P\,(T_{02s} - T_{01})}{h_C\,(T_{02} - T_W) + \dot{m}_{air}\, c_P\, T_{01}\left[\left(\frac{p_{02}}{p_{01}}\right)^{\frac{k-1}{k}} - 1\right]} \qquad (47)$$

For the isentropic process between states 01 and 02s, equation (47) can be rewritten as follows by recalling the derived analytical isentropic relation between temperature and pressure from another research paper [24]:
$$\eta_{CTT} = \frac{\dot{m}_{air}\, c_P\, T_{01}\left[\left(\frac{p_{02}}{p_{01}}\right)^{\frac{k-1}{k}} - 1\right]}{h_C\,(T_{02} - T_W) + \dot{m}_{air}\, c_P\, T_{01}\left[\left(\frac{p_{02}}{p_{01}}\right)^{\frac{k-1}{k}} - 1\right]} \qquad (48)$$

Since the compressor in supercharged diesel powertrains feeds the engine through a large manifold, much of the rotor exit kinetic energy pressure head, i.e. (P02 − P2) on Figure 3, is usually dissipated as a heat loss and is not converted into pressure rise [22]. Therefore, equation (48) should be rewritten as follows in order to reflect this fact:
$$\eta_{CTT} = \frac{\dot{m}_{air}\, c_P\, T_{01}\left[\left(\frac{p_{2}}{p_{01}}\right)^{\frac{k-1}{k}} - 1\right]}{h_C\,(T_{02} - T_W) + \dot{m}_{air}\, c_P\, T_{01}\left[\left(\frac{p_{02}}{p_{01}}\right)^{\frac{k-1}{k}} - 1\right]} \qquad (49)$$
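A short sketch of the total-to-total isentropic efficiency of equation (49), with assumed illustrative pressures and temperatures:

```python
def eta_ctt(m_dot, T01, T02, Tw, p2, p01, p02, cp=1005.0, k=1.4, h_c=15.0):
    """Total-to-total isentropic efficiency per eq. (49): the useful
    pressure rise is taken to the static exit pressure p2, since the
    kinetic-energy head (P02 - P2) is largely dissipated. Assumed values."""
    e = (k - 1.0) / k
    useful = m_dot * cp * T01 * ((p2 / p01)**e - 1.0)
    actual = h_c * (T02 - Tw) + m_dot * cp * T01 * ((p02 / p01)**e - 1.0)
    return useful / actual

print(f"eta_CTT = {eta_ctt(0.1, 298.0, 380.0, 350.0, 1.8e5, 1.0e5, 2.0e5):.3f}")
```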

By incorporating the influence of ηCTT on the thermodynamic power required to drive the rotor of the compressor, ẆC, equations (49) and (22) should be combined together as follows:

$$\dot{W}_C = \frac{\dot{m}_{air}\, c_P\, T_{01}\left[\left(\frac{p_{2}}{p_{01}}\right)^{\frac{k-1}{k}} - 1\right]}{\eta_{CTT}} = h_C\,(T_{02} - T_W) + \dot{m}_{air}\, c_P\, T_{01}\left[\left(\frac{p_{02}}{p_{01}}\right)^{\frac{k-1}{k}} - 1\right] \qquad (50)$$

In order to analytically evaluate the total power required to drive the compressor, ẆCT, the mechanical efficiency of the compressor, ηCM, should be incorporated into equation (50) as elucidated next:

$$\dot{W}_{CT} = \frac{\dot{W}_C}{\eta_{CM}} = \frac{h_C\,(T_{02} - T_W) + \dot{m}_{air}\, c_P\, T_{01}\left[\left(\frac{p_{02}}{p_{01}}\right)^{\frac{k-1}{k}} - 1\right]}{\eta_{CM}} \qquad (51)$$

In order to analytically model the mechanical efficiency of the compressor, ηCM, indicated in equation (51), let us recall the definition of mechanical efficiency in its mathematical form [22]:

$$\eta_{CM} = 1 - \frac{P_{CF}}{\dot{W}_C} \qquad (52)$$

In order to analytically model the power loss due to friction in the compressor, PCF, indicated in equation (52), let us draw on the fundamental relation between the power loss due to friction in the compressor, PCF, and the total friction mean effective pressure in the compressor, PCTFME [22]:

$$P_{CTFME} = \frac{P_{CF}\, n}{V_{Cd}\, \omega_c} \qquad (53)$$

The angular speed of the rotor of the compressor in revolutions per minute, NCM, can be expressed in terms of the angular speed of the rotor of the compressor in radians per second, ωc, as follows:

$$\omega_c = \frac{2\pi\, N_{CM}}{60} \qquad (54)$$

The total friction mean effective pressure in the compressor, PCTFME, is by definition given by the following expression [32]:

$$P_{CTFME} = \frac{2\pi\, I_C\, \alpha_C}{V_{Cd}} \qquad (55)$$

Now, by combining equations (50), (53), and (55), equation (52) can be rewritten, expressing analytically the mechanical efficiency of the compressor, ηCM, as:

$$\eta_{CM} = 1 - \frac{2\pi\, I_C\, \alpha_C\, \omega_c}{n\left\{h_C\,(T_{02} - T_W) + \dot{m}_{air}\, c_P\, T_{01}\left[\left(\frac{p_{02}}{p_{01}}\right)^{\frac{k-1}{k}} - 1\right]\right\}} \qquad (56)$$

The total power required to drive the compressor, ẆCT, can now thus be analytically modelled by combining equations (51) and (56) as follows:

$$\dot{W}_{CT} = \frac{h_C\,(T_{02} - T_W) + \dot{m}_{air}\, c_P\, T_{01}\left[\left(\frac{p_{02}}{p_{01}}\right)^{\frac{k-1}{k}} - 1\right]}{1 - \dfrac{2\pi\, I_C\, \alpha_C\, \omega_c}{n\left\{h_C\,(T_{02} - T_W) + \dot{m}_{air}\, c_P\, T_{01}\left[\left(\frac{p_{02}}{p_{01}}\right)^{\frac{k-1}{k}} - 1\right]\right\}}} \qquad (57)$$

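The mechanical-efficiency chain of equations (52)-(55) feeding equation (51) can be sketched as follows. This is a simplified reading under assumed values for the rotor inertia IC and deceleration αC, not the paper's fully expanded equation (57).

```python
import math

def total_drive_power(W_c, I_c, alpha_c, omega_c, n):
    """Total power to drive the compressor: the thermodynamic power W_c
    divided by the mechanical efficiency of eq. (52), with the friction
    power built from eqs. (53)-(55). All inputs below are assumed."""
    P_cf = 2.0 * math.pi * I_c * alpha_c * omega_c / n   # friction power, W
    eta_cm = 1.0 - P_cf / W_c                            # eq. (52)
    return W_c / eta_cm                                  # eq. (51)

# Assumed: 7 kW thermodynamic power, I_C = 5e-4 kg m^2, alpha_C = 50 rad/s^2
W_ct = total_drive_power(W_c=7000.0, I_c=5e-4, alpha_c=50.0,
                         omega_c=8000.0, n=2.0)
print(f"W_CT = {W_ct / 1000:.2f} kW")
```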
Hence, the total torque required to drive the compressor, TCT, can be analytically modelled following from equation (57) as follows:

$$T_{CT} = \frac{\dot{W}_{CT}}{\omega_c} = \frac{1}{\omega_c}\cdot\frac{h_C\,(T_{02} - T_W) + \dot{m}_{air}\, c_P\, T_{01}\left[\left(\frac{p_{02}}{p_{01}}\right)^{\frac{k-1}{k}} - 1\right]}{1 - \dfrac{2\pi\, I_C\, \alpha_C\, \omega_c}{n\left\{h_C\,(T_{02} - T_W) + \dot{m}_{air}\, c_P\, T_{01}\left[\left(\frac{p_{02}}{p_{01}}\right)^{\frac{k-1}{k}} - 1\right]\right\}}} \qquad (58)$$

Therefore, for a given intercooler efficiency, ηIntercooler, and for a given volumetric efficiency of the engine, ηV, the analytical model of the compressor total-to-total isentropic efficiency, ηCTT, can be rewritten by substituting equation (41) into equation (49) as follows:

$$\eta_{CTT} = \frac{\dfrac{P_2\,\eta_V\,V_d\,N_m}{60\,R\,n\left[T_2-\eta_{Intercooler}(T_2-T_{RefAmb})\right]}\; c_P\,T_{01}\left[\left(\frac{p_2}{p_{01}}\right)^{\frac{k-1}{k}}-1\right]}{h_C\,(T_{02}-T_W)+\dfrac{P_2\,\eta_V\,V_d\,N_m}{60\,R\,n\left[T_2-\eta_{Intercooler}(T_2-T_{RefAmb})\right]}\; c_P\,T_{01}\left[\left(\frac{p_{02}}{p_{01}}\right)^{\frac{k-1}{k}}-1\right]} \qquad (59)$$
Also, for a given intercooler efficiency, ηIntercooler, and for a given volumetric efficiency of the engine, ηV, the analytical model of the compressor mechanical efficiency, ηCM, can be rewritten by substituting equation (41) into equation (56) as follows:

$$\eta_{CM} = 1-\frac{2\pi\,I_C\,\alpha_C\,\omega_c}{n\left\{h_C\,(T_{02}-T_W)+\dfrac{P_2\,\eta_V\,V_d\,N_m}{60\,R\,n\left[T_2-\eta_{Intercooler}(T_2-T_{RefAmb})\right]}\; c_P\,T_{01}\left[\left(\frac{p_{02}}{p_{01}}\right)^{\frac{k-1}{k}}-1\right]\right\}} \qquad (60)$$

Thus, for a given intercooler efficiency, ηIntercooler, and for a given volumetric efficiency of the engine, ηV, the analytical model of the total power required to drive the compressor, ẆCT, can be rewritten as well by substituting equation (41) into equation (57) as follows:

$$\dot{W}_{CT} = \frac{B}{1-\dfrac{2\pi\,I_C\,\alpha_C\,\omega_c}{n\,B}},\qquad B = h_C\,(T_{02}-T_W)+\frac{P_2\,\eta_V\,V_d\,N_m}{60\,R\,n\left[T_2-\eta_{Intercooler}(T_2-T_{RefAmb})\right]}\, c_P\,T_{01}\left[\left(\frac{p_{02}}{p_{01}}\right)^{\frac{k-1}{k}}-1\right] \qquad (61)$$

Finally, for a given intercooler efficiency, ηIntercooler, and for a given volumetric efficiency of the engine, ηV, the analytical model of the total torque required to drive the compressor, TCT, can also be rewritten by substituting equation (41) into equation (58) as follows:

$$T_{CT} = \frac{B}{\omega_c\left[1-\dfrac{2\pi\,I_C\,\alpha_C\,\omega_c}{n\,B}\right]} \qquad (62)$$

with B as defined in equation (61).
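Finally, the sub-models can be composed end to end: the mass flow of equation (41) feeds the efficiency of equation (59) and the power chain of equations (51)-(57). The sketch below does this with assumed, illustrative inputs only.

```python
import math

def run_compressor_model():
    """End-to-end evaluation: eq. (41) for air mass flow, eq. (59) for
    eta_CTT, and the eqs. (52)-(57) chain for total power. All numbers
    are assumed for illustration; R is the specific constant of air."""
    cp, k, R = 1005.0, 1.4, 287.0
    # Engine/intercooler side, eq. (41)
    m_dot = (180e3 * 0.9 * 2.0e-3 * 2500.0) / (60.0 * R * 2.0 *
             (380.0 - 0.7 * (380.0 - 298.0)))
    # Efficiency side, eq. (59) with assumed p2, p01, p02
    e = (k - 1.0) / k
    T01, T02, Tw, h_c = 298.0, 380.0, 350.0, 15.0
    p2, p01, p02 = 1.8e5, 1.0e5, 2.0e5
    W_actual = h_c * (T02 - Tw) + m_dot * cp * T01 * ((p02 / p01)**e - 1.0)
    eta_ctt = m_dot * cp * T01 * ((p2 / p01)**e - 1.0) / W_actual
    # Mechanical side, eqs. (52)-(57)
    P_cf = 2.0 * math.pi * 5e-4 * 50.0 * 8000.0 / 2.0
    eta_cm = 1.0 - P_cf / W_actual
    W_ct = W_actual / eta_cm
    return m_dot, eta_ctt, eta_cm, W_ct

print(run_compressor_model())
```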

III. RESULTS

In the analytical sub-model of the torque and power required to drive the compressor, analytical models of the thermodynamic torque required to drive the compressor rotor, TC, and of the thermodynamic power required to drive the rotor of the compressor, ẆC, have been developed in equations (10) and (11), respectively. These two analytical models of TC and ẆC have been rewritten as a function of the compressor air mass flow rate, the tangential velocity of the rotor at the diffuser, and the rotor angular speed, as indicated in equations (16) and (17), respectively. By taking into account the heat transfer mode in the compressor, TC and ẆC have been modelled as a function of compressor air mass flow rate, temperature, and pressure ratio, as shown in equations (23) and (22), respectively.
Since the kinetic energy that generates the velocities at the diffuser of the supercharging diesel radial centrifugal compressor plays the key role in shaping the thermodynamic power required to drive the rotor of the compressor, ẆC, as indicated in equation (28), the second sub-model developed in this paper has been the analytical model of the velocities at the diffuser of the supercharging diesel radial centrifugal compressor. The tangential velocity of the air flow at the diffuser, ct2, the radial velocity of the air flow at the diffuser, cr2, and the absolute velocity of the air flow at the diffuser, c2, have been analytically modelled as functions of the compressor input parameters and characteristic parameters, as indicated in equations (35), (36), and (37), respectively. In addition, instead of using an empirical formula for estimating the compressor air mass flow rate, an analytical model of the mass flow rate of air is used in this study that has been analytically derived from the first principles of physics in another research paper, by linking the intake manifold to the engine cylinders rather than to the supercharging compressor [24]. Therefore, for a given intercooler efficiency, ηIntercooler, and for a given volumetric efficiency, ηV, of the engine, ct2, cr2, and c2 have been analytically modelled as functions of ηIntercooler and ηV instead of the mass flow rate of air, as presented in equations (42), (43), and (44).
In the third and last sub-model in this paper, which is the analytical model of the efficiency of the supercharging diesel radial centrifugal compressor with vanes-based diffuser, the compressor total-to-total efficiency, ηCTT, has been analytically modelled as shown in equation (49). By incorporating the influence of ηCTT on ẆC, the developed analytical model of the thermodynamic power required to drive the rotor of the compressor, ẆC, has been further developed as indicated in equation (50). The mechanical efficiency of the compressor, ηCM, has been analytically modelled as well in this study, as indicated in equation (56). Following from this, the total power required to drive the compressor, ẆCT, and the total torque required to drive the compressor, TCT, have been analytically modelled as presented in equations (57) and (58), respectively. In addition, by introducing the use of the analytically developed model of the mass flow rate of air, for a given intercooler efficiency, ηIntercooler, and for a given volumetric efficiency of the engine, ηV, the analytically developed models of ηCTT, ηCM, ẆCT, and TCT have been further extended as indicated in equations (59), (60), (61), and (62).

IV. DISCUSSION

An analytical model of the supercharging diesel radial centrifugal compressor with vanes-based diffuser has been developed in this study. The model has been divided into three parts: (1) an analytical sub-model of the torque and power required to drive the compressor; (2) an analytical sub-model of the velocities at the diffuser of the compressor; (3) an analytical sub-model of the efficiency of the compressor.
In the first sub-model, analytical models of the thermodynamic torque required to drive the compressor rotor, TC, and of the thermodynamic power required to drive the rotor of the compressor, ẆC, have been developed. The analytical models of TC and ẆC show that the thermodynamic torque and power required to drive the rotor of the compressor do not depend upon pressure changes in the air flow, as might be thought, but rather on the mass flow rate of air and on the rotor speed of the compressor, as indicated in equations (10), (11), (16), and (17). In this sub-model, ẆC is proportional to hC, T02, ṁair, cP, and T01, as indicated in equation (22). It can also be conceived from this sub-model that ẆC is rootedly less proportional to rP, as indicated in equation (22). This sub-model shows as well that TC is proportional to hC, T02, ṁair, cP, and T01, and is inversely proportional to ωc, as indicated in equation (23). It also shows that TC is rootedly less proportional to rP, as indicated in equation (23).
In the second sub-model developed in this paper, the velocities at the diffuser of the supercharging diesel radial centrifugal compressor have been analytically modelled using an analytical model of the mass flow rate of air, instead of using an empirical formula for estimating the compressor air mass flow rate. Therefore, for a given ηIntercooler and ηV, the analytical models of ct2, cr2, and c2 have been developed as functions of ηIntercooler and ηV instead of the mass flow rate of air, as presented in equations (42), (43), and (44). In this velocities sub-model, it can be conceived from equation (31) that cr2 is proportional to hC, T02, cP, and T01; cr2 is inversely proportional to ṁair and β2; and cr2 is rootedly less proportional to rP. It can also be conceived from equation (35) that ct2 is rootedly proportional to hC, T02, cP, and T01; ct2 is inversely rootedly proportional to ṁair; and ct2 is rootedly less proportional to rP. In addition, equation (33) in this sub-model shows that c2 is proportional to hC, T02, cP, and T01; c2 is inversely proportional to ṁair and β2; and c2 is rootedly less proportional to rP.
In the third and last sub-model in this paper, an analytical model of the efficiency of the supercharging diesel radial centrifugal compressor with vanes-based diffuser has been developed using the analytically developed model of the mass flow rate of air. Hence, for a given ηIntercooler and ηV, the analytically developed models of ηCTT, ηCM, ẆCT, and TCT have been further extended as indicated in equations (59), (60), (61), and (62). In this last sub-model, ηCTT is proportional to cP and T01; ηCTT is inversely proportional to hC, T02, and ṁair; and ηCTT is rootedly less proportional to p2/p01 and rootedly less inversely proportional to p02/p01, following from equation (49). Also, ηCM is proportional to n, hC, T02, and ṁair; ηCM is inversely proportional to IC, αC, ωc, cP, and T01; ηCM is rootedly less inversely proportional to p2/p01; and ηCM is rootedly less proportional to p02/p01, as presented in equation (56).

V. CONCLUSION

The present study has analytically modelled the supercharging diesel radial centrifugal compressor with vanes-based diffuser. This research has presented analytical models of the torque and power required to drive the compressor, the velocities at the diffuser of the compressor, and the efficiency of the compressor. For a given intercooler efficiency, ηIntercooler, and for a given volumetric efficiency of the engine, ηV, the analytically developed models of ηCTT, ηCM, ẆCT, and TCT have been presented in equations (59), (60), (61), and (62). In addition, for a given ηIntercooler and ηV, the analytically developed models of ct2, cr2, and c2 have been expressed as functions of ηIntercooler and ηV instead of the mass flow rate of air, as presented in equations (42), (43), and (44).
The study has shown that ẆC is proportional to hC, T02, ṁair, cP, and T01, and is rootedly less proportional to rP. The study has highlighted as well that TC is proportional to hC, T02, ṁair, cP, and T01, is inversely proportional to ωc, and is rootedly less proportional to rP. The paper has also shown that cr2 is proportional to hC, T02, cP, and T01, is inversely proportional to ṁair and β2, and is rootedly less proportional to rP. The paper has demonstrated as well that ct2 is rootedly proportional to hC, T02, cP, and T01, is inversely rootedly proportional to ṁair, and is rootedly less proportional to rP. The study has also shown that c2 is proportional to hC, T02, cP, and T01, is inversely proportional to ṁair and β2, and is rootedly less proportional to rP. The study has highlighted as well that ηCTT is proportional to cP and T01, is inversely proportional to hC, T02, and ṁair, is rootedly less proportional to p2/p01, and is rootedly less inversely proportional to p02/p01. Finally, the paper has demonstrated that ηCM is proportional to n, hC, T02, and ṁair, is inversely proportional to IC, αC, ωc, cP, and T01, is rootedly less inversely proportional to p2/p01, and is rootedly less proportional to p02/p01.
These developed analytical models can help in accurately analyzing, based entirely on the principles of physics and with explainable mathematical trends, the performance of supercharging diesel radial centrifugal compressors with vanes-based diffuser with respect to steady-state response. The developed analytical models in this study address flaws that exist in corresponding models presented in key references in this research area, such as [22, 21]. These developed models have been derived step by step from the principles of physics as a way of validating them. In addition, the fact that these developed analytical formulae are dimensionally correct supports their validity. They have two key advantages: (1) they are widely valid models that are not restricted to a specific dataset; (2) they can help in developing and assessing supercharging diesel radial centrifugal compressor technologies as well as diesel powertrain technologies.

VI. FUTURE WORK

This study suggests future research on the following points:
(1) validating the developed analytical models experimentally;
(2) validating the developed analytical models against computational fluid dynamics models;
(3) developing a simplified version of the proposed analytical models for ITS and control applications;
(4) cost-effectively broadening the operation range of centrifugal compressors by shifting the stall line without generating complex 3D flow structures, loss generation mechanisms, impeller/diffuser interactions, stalling mechanisms, and further mechanical stresses.

NOMENCLATURE
FRW: Reaction of the internal wall of the compressor to the pressure forces of the air flow.
p: Air flow pressure.
A: Exposed area in the rotor to air flow.
ṁair: Mass flow rate of air.
c: Air speed.
TC: Thermodynamic torque required to drive the rotor of the compressor.
r1: Effective radius of the impeller.
r2: Effective radius of the diffuser.
ωc: Angular speed of the rotor of the compressor (rad/s).
ct1: Tangential velocity of the air flow at the entrance of the impeller.
ct2: Tangential velocity of the air flow at the diffuser.
r: Effective radius of the rotor.
β2: Backsweep angle of the diffuser of the compressor, as indicated in Figure 5.
Q̇C: Heat flow rate lost by the compressor.
h: Specific enthalpy per unit mass of fluid flow.
z: Altitude.
cP: Specific heat capacity of air at constant pressure.
T02: Stagnation temperature at state 2 on Figure 3, i.e. at the outlet of the diffuser on Figure 2.
T01: Stagnation temperature at state 1 on Figure 3, i.e. at the inlet of the impeller on Figure 2.
rP: Compression ratio of the compressor, i.e. p02/p01.
k: Specific heat ratio of air.
hC: Heat transfer coefficient of the compressor, which depends on the characteristic geometry of the compressor.
TW: Temperature of the compressor wall internal surface.
c1: Absolute velocity of air flow at the entrance of the impeller.
c2: Absolute velocity of air flow at the exit of the diffuser.
Pi: Intake manifold pressure.
ηV: Volumetric efficiency of filling in the engine cylinders.
Vd: Displaced volume of the engine cylinders (m³/cycle).
Nm: Crankshaft rotational speed (rev/min).
n: Number of crank revolutions for each power stroke per cylinder (two for 4-stroke cycles; one for 2-stroke cycles).
R: Universal gas constant.
Ti: Intake manifold temperature.
TRefAmb: Ambient reference temperature, i.e. 298 K.
PCF: Power loss due to friction in the compressor.
VCd: Displaced volume of air by the rotor in one revolution of the rotor of the compressor (m³/cycle).
IC: Moment of inertia of the compressor as an integrated system, i.e. of the compressor shaft, impeller, and diffuser collectively.
αC: Compressor speed deceleration (rad/s²).

ACKNOWLEDGMENT
The financial support provided by the International Islamic University Malaysia (IIUM) for this research under research grant # RMGS 09-10 is thankfully acknowledged. The technical support provided by the Center for Sustainable Mobility at Virginia Polytechnic Institute and State University (Virginia Tech) is thankfully acknowledged as well.

REFERENCES
[1]. Cox, H.R., (1985) "The beginnings of jet propulsion," The Royal Society of Arts Journal, pp. 705-723.
[2]. Ibaraki, S., Tomita, I., Ebisu, M., Shiraishi, T., (2012) "Development of a wide-range centrifugal compressor for automotive turbochargers," Mitsubishi Heavy Industries Technical Review, vol. 49, no. 1, pp. 68-73.
[3]. Obert, E.F., (1973) Internal Combustion Engines and Air Pollution, Harper & Row Publishers, Inc.
[4]. Sahasrabudhe, A.B., Notani, S.S., Purohit, T.M., Patil, T.U., and Joshi, S.V., (2011) "Measurement of Carbonyl emissions from exhaust of engines fueled using Biodiesel-Ethanol-Diesel blend and development of a catalytic converter for their mitigation along with CO, HCs and NOx," Int. J. Advances in Engineering & Technology, vol. 1, no. 5, pp. 254-266.
[5]. Faris, W.F., Rakha, H.A., Kafafy, R.M., Idres, M. and Elmoselhy, S.A.M., (2011) "Vehicle fuel consumption and emission modelling: an in-depth literature review," Int. J. Vehicle Systems Modelling and Testing, vol. 6, nos. 3/4, pp. 318-395.
[6]. Rakha, H.A., Ahn, K., Faris, W., and Moran, K.S., (2012) "Simple vehicle powertrain model for modeling intelligent vehicle applications," IEEE Trans. on Intelligent Transportation Systems, vol. 13, no. 2.
[7]. Wahlström, J., and Eriksson, L., (2009) "Modeling of a diesel engine with VGT and EGR capturing sign reversal and non-minimum phase behaviors," Technical Report No. LiTH-R-2882, Department of Electrical Engineering, Linköpings Universitet.
[8]. Sandberg, T., (2001) "Heavy truck modeling for fuel consumption simulations and measurements," Master Thesis, Linköping University.
[9]. Schleer, M.W., (2006) "Flow structure and stability of a turbocharger centrifugal compressor," PhD Dissertation, Swiss Federal Institute of Technology (ETH), Zurich.
[10]. Emmons, H., Pearson, C., and Grant, H., (1955) "Compressor surge and stall propagation," Transactions of the ASME, vol. 79, pp. 455-469.
[11]. Jansen, W., (1964) "Steady fluid flow in a radial vaneless diffuser," Journal of Basic Engineering, Transactions of the ASME, vol. 86, pp. 607-619.
[12]. Senoo, Y., Kinoshita, Y., (1977) "Influence of inlet flow conditions and geometries of centrifugal vaneless diffusers on critical flow angle for reverse flow," Journal of Fluids Engineering, Transactions of the ASME, vol. 99, no. 1, pp. 98-103.
[13]. Fink, D.A., Cumpsty, N.A., Greitzer, E.M., (1992) "Surge dynamics in a free-spool centrifugal-compressor system," Journal of Turbomachinery, Transactions of the ASME, vol. 114, no. 2, pp. 321-332.
[14]. Longley, J.P., (1994) "A review of nonsteady flow models for compressor stability," Journal of Turbomachinery, Transactions of the ASME, vol. 116, no. 2, pp. 202-215.
[15]. Dalbert, P., Ribi, B., Kmecl, T., Casey, M.V., (1999) "Radial compressor design for industrial compressors," Proceedings of the Institution of Mechanical Engineers, Part C, vol. 213.
[16]. Came, P.M., Robinson, C.J., (1999) "Centrifugal compressor design," Proceedings of the Institution of Mechanical Engineers, Part C, Journal of Mechanical Engineering Science, vol. 213, no. 2, pp. 139-155.
[17]. Dalbert, P., Ribi, B., Kmecl, T., and Casey, M.V., (1999) "Radial compressor design for industrial compressors," Proceedings of the Institution of Mechanical Engineers, Part C, Journal of Mechanical Engineering Science, vol. 213, no. 1, pp. 71-83.
[18]. Ibaraki, S., Higashimori, H., and Mikogami, T., (1998) "Flow investigation of a centrifugal compressor for automotive turbochargers," SAE Technical Paper 980771.
[19]. Lawless, P.B., (2000) "Experimental evaluation of precursors to centrifugal compressor instability," International Journal of Turbo and Jet Engines, vol. 17, pp. 279-288.
[20]. Moss, S., Smith, C., and Foote, W., (1942) "Energy transfer between a fluid and rotor," ASME Transactions, vol. 64, pp. 567-585.
[21]. Taylor, C.F., (1985) The Internal-Combustion Engine in Theory and Practice, The MIT Press.
[22]. Heywood, J.B., (1988) Internal Combustion Engine Fundamentals, McGraw-Hill, New York.
[23]. Baines, N.C., (2005) Fundamentals of Turbocharging, Concepts ETI, Inc.
[24]. Faris, W.F., Rakha, H.A., Kafafy, R.M., Idres, M., and Elmoselhy, S.A.M., (2011) "Diesel Powertrain Intake Manifold Analytical Model," Internal Report, The International Islamic University Malaysia.
[25]. Smits, J.J.M., (2006) "Modeling of a Fluid Flow in an Internal Combustion Engine," Research Report No. WVT 2006.22, Eindhoven University of Technology.
[26]. Fung, Y.C., (1969) A First Course in Continuum Mechanics, Prentice-Hall, Englewood Cliffs, NJ.
[27]. Ahmed, S.A., (2010) "An experimental investigation of the flow field between two radial plates," Canadian Aeronautics and Space J., vol. 65, no. 1.
[28]. Faris, W.F., Rakha, H.A., Kafafy, R.M., Idres, M. and Elmoselhy, S.A.M., (2012) "Diesel engine analytical model," Int. J. Scientific & Engineering Research, vol. 3, no. 8.
[29]. Omran, R., Younes, R., and Champoussin, J., (2007) "Neural networks for real-time nonlinear control of a variable geometry turbocharged diesel engine," Int. J. Robust and Nonlinear Control, DOI: 10.1002/rnc.1264.
[30]. Wahlström, J., Eriksson, L., (2011) "Modeling diesel engines with a variable-geometry turbocharger and exhaust gas recirculation by optimization of model parameters for capturing non-linear system dynamics," Proceedings of the Institution of Mechanical Engineers, Part D, Journal of Automobile Engineering, vol. 225, no. 7.
[31]. Watson, N., and Janota, M.S., (1982) Turbocharging the Internal Combustion Engine, Wiley-Interscience Publications, John Wiley, New York.
[32]. Harari, R., and Sher, E., (1995) "Measurement of engine friction power by using inertia tests," Proc. SAE Int. Congress & Exposition, Detroit, USA, SAE Paper 950028.

AUTHORS BIOGRAPHIES:
Waleed F. Faris is a Professor at the Mechanical Engineering Department of the International Islamic University Malaysia (IIUM), affiliated with this institution since 2004. He was a Visiting Scholar at the Virginia Tech Transportation Institute (VTTI) in 2008. He obtained his BSc in Mechanical Engineering with specialisation in Construction Equipment and Off-Road Vehicles from Zagazig University, Egypt, in 1989, his MSc in Applied Mechanics from the same university in 1996, and his PhD in Non-linear Dynamics from Virginia Tech, USA, in 2003. He has to his credit five books and more than 140 technical papers in reputed journals and refereed conferences in vehicle dynamics and control and NVH. He is a member of the Japanese Society of Automotive Engineers, is on the editorial board of three journals on automotive engineering and two other journals on management and applied sciences, and is a technical committee member and reviewer for several international journals and conferences worldwide.

Hesham A. Rakha is a Professor at the Charles E. Via Jr. Department of Civil and Environmental Engineering and the Director of the Center for Sustainable Mobility at the Virginia Tech Transportation Institute. He is a Professional Engineer in Ontario and a member of the Institute of Transportation Engineers (ITE), the American Society of Civil Engineers (ASCE), the Institute of Electrical and Electronics Engineers (IEEE), and the Transportation Research Board (TRB). He is on the Editorial Board of the Journal of Transportation Letters and an Associate Editor for the IEEE Transactions on ITS and the Journal of ITS. He has authored/co-authored a total of 200 refereed publications in journals and conferences in the areas of traffic flow theory, traffic modelling and simulation, dynamic traffic assignment, traffic control, transportation energy and environmental modelling, and transportation safety modelling.

Raed Ismail Kafafy is an Assistant Professor at the International Islamic University Malaysia (IIUM). He received his BSc and MSc from Cairo University, Egypt, and his PhD from Virginia Polytechnic Institute and State University, USA. His research areas include advanced space propulsion, computational engineering, plasma and gas physics, and space systems.

Moumen Idres is an Assistant Professor at the International Islamic University Malaysia (IIUM). He received his BSc and MSc from Cairo University, Egypt. In addition, he received his PhD from Old Dominion University, USA. His research area includes aerospace propulsion, acoustics and noise analysis, computational fluid mechanics, flight mechanics and control, and renewable energy.

Salah A.M. Elmoselhy is a PhD Candidate in mechanical engineering working with the International Islamic University Malaysia (IIUM) and the Center for Sustainable Mobility at Virginia Polytechnic Institute and State University (Virginia Tech). He holds an MS in mechanical design and production engineering from Cairo University, as well as an MBA in international manufacturing business from Maastricht School of Management (MSM). He has ten years of industrial experience in CAD/CAM and robotised manufacturing systems. He has recently been a researcher at the Engineering Department and Fitzwilliam College of Cambridge University, from which he received a Diploma of postgraduate studies in engineering design.


HYBRID LEAN-AGILE DESIGN OF MOBILE ROBOTS


Salah A.M. Elmoselhy
Fitzwilliam College, Cambridge CB3 0DG, UK

ABSTRACT
Determining how and when value is added in the mobile robot design process is problematic. Lean design and agile design paradigms have been proposed for designing robots; yet neither of them has struck a balance between cost-effectiveness and a short design process duration without compromising quality of performance. The present research therefore empirically identifies the mobile robot design activities and strategies that most influence mobile robot performance, and statistically identifies those most positively correlated with mobile robot performance. The study has shown that 65% of typical mobile robot design activities and strategies are affiliated with the lean design paradigm, while the remaining 35% are affiliated with the agile design paradigm. In addition, it has been found, with 99% reliability, that 22% of the lean and 25% of the agile mobile robot design activities and strategies are among the design activities and strategies most positively and significantly correlated with improving mobile robot performance; these particular design activities and strategies have proved to significantly improve mobile robot performance by more than 10%. The study thus shows that hybrid lean-agile design is an applicable and viable mobile robot design paradigm.

KEYWORDS: Mobile Robots, Design Process, Lean Design, Agile Design

I. INTRODUCTION

Lean and agile approaches have been adopted by the designers of robots for years. Lean design is value optimization through minimizing waste in the design process [1]; it usually leads to cost reduction. According to Womack et al. [2, 3], significant interest has been shown in recent years in the idea of lean operations. More recently, a growing awareness has been established that lean principles can be readily transferred to the design sector [1]. The lean design process would only be successful if the sole success criterion were cost. On the other hand, agile design is a design system with flexible technology, qualified and trained human resources, and shared information that responds quickly to continuous and unpredicted changes in customers' needs and desires and in market demand [4]. This ability can make the mobile robot design process successful if the success criterion is short lead time [5, 6]. The current challenge in the design process of robots is to improve the value added to the customer while shortening the robot design process duration [7, 8]. Therefore, robot designers face a dilemma: they need to strike a balance between robot design duration and robot performance in the most cost-effective way. Chalupnik et al. [9, 10] recently reported that minimizing variations in performance caused by variations in uncontrollable external noise parameters, or by variations in design parameters, has been investigated extensively. Aravinth et al. [11] more recently proposed minimizing variations in performance caused by variations in internal factors in the design process of complex products, such as failure modes. Yet internal factors in the robot design process, such as design activities and strategies, and their relation to robot performance have not yet been investigated empirically. Browning [12] reported that determining how and when value is added in the design process is problematic. Thus, the present research aims to help novel designers of mobile robots in resolving this dilemma by empirically identifying the mobile robot design activities and strategies that most influence robot performance and by identifying the most efficient mobile robot design paradigm. This research investigates how and when value is added in the design process of mobile robots. The paper starts with identifying the technical attributes and design specifications of mobile robots. A quasi experiment on the design of mobile robots is then presented as a case study. Following from this experiment, the design activities and strategies typically implemented in the mobile robot design process and their lean/agile classification are investigated. The study then identifies the lean and agile mobile robot design activities and strategies that most influence mobile robot performance. Finally, the paper investigates the most efficient design paradigm in the design process of mobile robots.

II. MOBILE ROBOT PERFORMANCE AND TECHNICAL ATTRIBUTES

Mobile robots are industrially sought for their advantages, which range from reducing operating costs, improving product quality and consistency as well as the quality of work for employees, to increasing production output rates, increasing product manufacturing flexibility, reducing raw materials waste, and increasing yield [13]. The recent trend in robot design is towards mobile robots [14]; thus, the design of a mobile robot has been chosen as the basis of the design experiment in the present research. The quality of mobile robots is measured against the following mobile robot performance attributes, which were used as the rubric of evaluation in this design experiment and which were extracted from the industrially adopted set of technical attributes of a quality mobile robot [13, 15, 16]: (1) minimum floor space requirements for agile motion; (2) adaptability to the surroundings and the capability of making decisions accordingly, such as in the case of path irregularity, missing a junction, or encountering obstacles [17]; (3) capability of recognizing the position of an object; (4) capability of controlling the force used to grip an object; (5) flexibility in picking and depositing loads to a variety of station types and elevations; (6) capability of following a non-straightforward path; (7) fast response; (8) stability [18]; (9) accuracy; (10) payload capacity; (11) reliability; (12) maintainability; (13) safety. The degree of striking a balance between these competing technical attributes shapes the mobile robot performance, which in turn should be realized cost-effectively within the shortest possible duration of the robot design process [19, 20]. These attributes have been observed in the mobile robot design quasi experiment.

III. MOBILE ROBOT DESIGN QUASI EXPERIMENT

A quasi experiment on mobile robot design was conducted based on a contest among groups of novel designers, who were undergraduates at the Engineering Department of Cambridge University, to design, build, and test a mobile robot such that the robot carries out a set of tasks successfully within a certain timeframe. In this experiment, the novel designers were observed while designing, building, and testing their mobile robots, and were asked to respond to a questionnaire on the design activities and strategies they adopted. The independent variables in this experiment were the design activities and strategies, and the dependent variable was the robot performance. The data analyzed were collected from observations and from the responses to the questionnaire. The following subsections elaborate on the mobile robot design experiment setup and specification, the performance evaluation criteria, the method of analysis of the experiment results, and the experimental observations and responses to the questionnaire.

3.1. Mobile Robot Design Experiment Setup and Specification


In the experimental setup of this quasi experiment, each design team was divided into three sub-teams: one for mechanical aspects, one for electronics aspects, and one for software aspects. The design experiment was to design and build a mobile robot able to collect six pallets from a conveyor belt, B, and to transport them to one of two delivery points, D1 or D2, depending on the type of pallet, within five minutes, as illustrated in Figure 1 [21]. The task would continue until half a dozen pallets were transferred or the time limit was reached. In this experiment, the following conditions applied: (a) the conveyor, indicated in light green in Figure 1, could be started/stopped and reversed using Light Dependent Resistor (LDR) optical switches; (b) an adjustable Light Emitting Diode (LED)-based beam suitable for driving an optical sensor, mounted just above the conveyor belt, was available [21].

Figure 1. Mobile robot design experiment contest area topography [21]

As to the mechanical sub-system in this experiment, a set of resources was available to the competing design groups: (1) transmission components such as wheels, (2) castors, (3) D.C. motors, (4) gearboxes, (5) pneumatic actuators, (6) a swivel connector, (7) a pneumatic valve assembly, (8) pneumatic hoses and connectors, (9) fasteners, (10) springs, (11) spur and bevel gears, (12) gear racks, (13) bearings, (14) adhesives, (15) lubricants, and (16) structural materials, with a workshop available for processing these structural materials [21].

The competing design groups were also provided with a set of resources for the electronics sub-system: (a) an I2C bus, (b) transducers, (c) LEDs, (d) ICs, (e) diodes, (f) MOSFETs, (g) capacitors, (h) resistors, (i) infrared emitter/detector assemblies, (j) an infrared detector amplifier assembly, (k) potentiometers, (l) D plugs, (m) a data sheet providing data on the I/O ports of the D.C. motors, (n) PCBs, (o) soldering and circuit construction equipment, (p) a 5 V power supply lead for prototyping, (q) a motor/gearbox to PCB header lead, (r) a microcontroller to PCB 12 V lead, and (s) a thermistor assembly and thermocouple materials [21].

As to the software sub-system, the available resources included: (I) a C++ compiler, (II) a 32-bit microcontroller, (III) a software library, and (IV) a power supply unit plus output lead [21].

In the simulation setup, the competing design groups were provided, for the mechanical sub-system, with computers, a CAD system, and a CAM system, and, for the software sub-system, with computers, a sensor simulation PCB, and an I2C cable [21]. Having seen this, let us now investigate the performance evaluation criteria adopted in this design experiment.

3.2. Mobile Robot Design Experiment Performance Evaluation Criteria


The robot performance evaluation criteria are aggregated into a mobile robot performance score based on conducting a set of tasks within a specific timeframe; accomplishing these tasks requires meeting the mobile robot performance attributes mentioned in Section II. The task to be performed by the robot was to collect six pallets from a conveyor belt, B, and to transport them to one of two delivery points, D1 or D2 as illustrated in Figure 1, depending on the type of pallet, within five minutes. The task would continue until half a dozen pallets were transferred or the time limit was reached. Now, let us have a look at the method of analysis of the experiment results.

3.3 Method of Analysis of the Mobile Robot Design Experiment Results


The analysis of the mobile robot design experiment results uses both descriptive statistics and inferential statistics. In the inferential statistics part, the non-parametric-statistics tool in the Statistical Package for the Social Sciences (SPSS) is used in order to avoid making assumptions about the population's parameters and consequently to improve the validity of the statistical analysis results. In order to identify which design team implemented which design activity and strategy, and to what extent, a questionnaire listing the design activities and strategies and the degree of their implementation was constructed. In order to investigate the possible binding between some design activities and strategies, a dependency analysis was conducted. The assessment of how effective the implementation of the design activities and strategies was is implied by the correlation between the extent of their implementation and the performance score achieved by the design team. The adopted statistical approach has the following five attributes: (1) the population is novel designers of mobile robots; (2) the sampling frame is based on a bounded and unbiased collection of designers in which every single individual is identified and can be selected; (3) the data type is a categorical random variable; (4) the sampling design is based on probability simple random sampling because of its cost-effectiveness and reasonable accuracy; (5) the target sample size is not less than 30 novel designers, which is the minimum statistically representative sample size [22, 23]. This method of assessment has been applied to the experimental observations and responses to the questionnaire.

3.4 Experimental Observations and Responses to the Questionnaire


The observations obtained while observing the mobile robot design activities in the experiment were further verified by including them in the questionnaire handed to the novel designers. The questionnaire was administered to the respondents in 2008. The responses to the questionnaire were coded as follows: Strongly disagree is ranked 1, Disagree is ranked 2, Agree is ranked 3, and Strongly agree is ranked 4. The average value in each column is used to fill in missing responses, a standard way of handling incomplete questionnaire data [24]. There were 29 design teams participating in this design experiment, and a sample size of 174 was realized, which exceeds the requirement of the minimum representative sample size of 30 novel designers. This satisfies the first of the two criteria for the sample being representative, namely sample size and sampling design. The second criterion, sampling design, was also satisfied, since the sampling design in this research was based on probability simple random sampling, which is suitable for limited generalization purposes and cost-effectively yields reasonably fair results. The pragmatic reader might now well ask: how have the results been analyzed? The next section will answer this question.
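As an illustration of this coding and imputation step, the minimal sketch below shows how it might be done in Python with pandas; the item names and response values here are hypothetical stand-ins, not the actual questionnaire items, and the study's own analysis was performed in SPSS.

```python
# Hypothetical sketch of the questionnaire coding described above (not the
# authors' SPSS workflow): map 4-point Likert labels to ranks 1..4 and fill
# missing answers with the column average.
import pandas as pd

likert = {"Strongly disagree": 1, "Disagree": 2, "Agree": 3, "Strongly agree": 4}

# Hypothetical raw responses for two questionnaire items (None = left blank).
raw = pd.DataFrame({
    "iterations_in_software": ["Agree", "Strongly agree", None, "Disagree"],
    "modular_design":         ["Strongly disagree", "Agree", "Agree", None],
})

coded = raw.apply(lambda col: col.map(likert))  # text labels -> ranks 1..4
coded = coded.fillna(coded.mean())              # column-average imputation [24]
print(coded)
```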

IV. STATISTICAL ANALYSIS OF THE RESULTS

The observations in this mobile robot design experiment have shown that some design strategies and activities were commonly adopted by all of the novel mobile robot designers involved in the experiment, while others were adopted only by some of them. The scores achieved by the design teams have shown that there were six design teams whose robots' performance was superior. A statistical analysis has been conducted for each design team in order to examine whether there is correlation between their mobile robot performance and the design activities and strategies they adopted. In the descriptive statistics, a frequency analysis of the data was conducted, including the mean and standard deviation. In the inferential analysis, a non-parametric statistical analysis was conducted using the Spearman correlation coefficient, which provides more rigorous results than a parametric statistical analysis [23]. This section will present firstly the results of the analysis of bi-variate correlation with robot performance. Secondly, a frequency analysis of the data is presented. Thirdly, the results of the non-parametric statistical analysis and of the dependency analysis are demonstrated. After that, the reliability analysis results are presented. Finally, the implication of the percentage of variation in robot performance due to a design variable (r²) is demonstrated.


4.1 Bi-variate Correlation Between Lean and Agile Design Activities and Strategies and Mobile Robot Performance
The ranges of statistical correlation adopted in the present research are: (1) no correlation when the correlation coefficient ranges from 0 to less than 0.1; (2) low correlation when it ranges from 0.1 to less than 0.3; (3) moderate correlation when it ranges from 0.3 to less than 0.6; (4) high correlation when it ranges from 0.6 to 1 [25]. This section presents the results related to the lean and agile mobile robot design activities and strategies. It has been found that 65% of the total mobile robot design activities and strategies are affiliated with the lean design paradigm. It has also been found, with 99% reliability, that 22% of these lean robot design activities and strategies are among the design activities and strategies most positively and significantly correlated with improving mobile robot performance; these particular design activities and strategies have proved to significantly improve mobile robot performance by more than 10%. These moderately positively correlated lean design activities and strategies are presented in Appendix A. It has also been found that 35% of the total mobile robot design activities and strategies are affiliated with the agile design paradigm, and, with 99% reliability, that 25% of these agile robot design activities and strategies are among the design activities and strategies most positively and significantly correlated with improving mobile robot performance; these too have proved to significantly improve mobile robot performance by more than 10%. These moderately positively correlated agile design activities and strategies are presented in Appendix B. The appendices list the lean/agile design activities and strategies most influential on mobile robot performance, sorted in descending order. In the appendices, the first field, entitled Observation/Hypothesis, presents a brief description of the design activity/strategy. The second field, entitled Design Phase, indicates to which design phase (i.e. scope-based, conceptual, preliminary, or detailed) the investigated design activity/strategy is related. The third field, entitled Design Strategy/Activity, shows whether the design variable under investigation is a design strategy or a design activity. The fourth field, entitled Observation/Hypothesis, elucidates whether the design variable under investigation was identified through experimental observation or through a hypothesis deduced from the literature review. The fifth field, entitled Reference in Literature to Hypothesis, refers to the relevant references in the literature for those design variables identified through the literature review. The last field, entitled Percentage of Variation in Mobile Robot Performance (r²), depicts the corresponding value of r² for each design variable under investigation. Having seen this, let us now investigate the frequency analysis of the data.
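For readers who wish to reproduce this kind of bi-variate analysis outside SPSS, the sketch below computes a Spearman correlation with scipy and applies the 0.1/0.3/0.6 thresholds above; the data arrays are synthetic placeholders, not the experiment's dataset.

```python
# Minimal sketch of the bi-variate analysis described above: Spearman's rho
# between a coded design-activity item and a normalized performance score,
# classified with the correlation ranges adopted in this paper [25].
from scipy.stats import spearmanr
import numpy as np

rng = np.random.default_rng(0)
activity = rng.integers(1, 5, size=174)              # Likert ranks 1..4
score = 0.3 * activity + rng.normal(0, 1, size=174)  # synthetic performance score

rho, p = spearmanr(activity, score)

def strength(r: float) -> str:
    """Map |r| to the correlation categories used in Section 4.1."""
    r = abs(r)
    if r < 0.1:
        return "no correlation"
    if r < 0.3:
        return "low"
    if r < 0.6:
        return "moderate"
    return "high"

print(f"rho={rho:.3f}  p={p:.4f}  category={strength(rho)}  r^2={rho**2:.3f}")
```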

4.2 Frequency Analysis of the Data


Frequency analysis provides an insight into the descriptive statistics of the collected data and of its categories. This section presents the frequency analysis of the design activity most positively correlated with mobile robot performance, which has been to have the largest number of design iterations, if any, occur within the software sub-system, i.e. agile design activity #1 in Appendix B. Table 1 shows that the total valid percentage of data has been 99.4%, which indicates that the data are valid. The largest percentage of responses to this design activity, i.e. 41.4% of valid responses, has been in the Agree category, as depicted in Figure 2.

Table 1. Frequencies and descriptive statistics of the design activity of having the largest number of iterations in the software subsystem

                             Frequency   Percent   Valid Percent   Cumulative Percent
Valid    Strongly Disagree          12       6.9             6.9                  6.9
         Disagree                   30      17.1            17.2                 24.1
         Do Not Know                24      13.7            13.8                 37.9
         Agree                      72      41.1            41.4                 79.3
         Strongly Agree             36      20.6            20.7                100.0
         Total                     174      99.4           100.0
Missing  System                      1       0.6
Total                              175     100.0

Figure 2. Percentage of responses in each category of response to having the largest number of iterations in software subsystem
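The pandas sketch below reproduces the structure of Table 1 (frequency, percent of all cases, valid percent, and cumulative percent) from a synthetic response vector with the same counts; it is illustrative only, not the study's analysis script.

```python
# Sketch of a frequency table in the shape of Table 1: one questionnaire item
# with 175 cases, one of them missing. The response vector is synthetic.
import pandas as pd

order = ["Strongly Disagree", "Disagree", "Do Not Know", "Agree", "Strongly Agree"]
item = pd.Series(["Agree"] * 72 + ["Strongly Agree"] * 36 + ["Disagree"] * 30 +
                 ["Do Not Know"] * 24 + ["Strongly Disagree"] * 12 + [None])

freq = item.value_counts().reindex(order)        # counts per category
table = pd.DataFrame({
    "Frequency": freq,
    "Percent": 100 * freq / len(item),           # share of all 175 cases
    "Valid Percent": 100 * freq / item.count(),  # share of the 174 valid cases
})
table["Cumulative Percent"] = table["Valid Percent"].cumsum()
print(table.round(1))
```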

This section paves the way to investigate whether or not there has been interdependency among the data.

4.3 Non-parametric Analysis and Dependency Analysis Results Using Bi-variate and Partial Correlation Analyses

In order to investigate how rigorous the bi-variate correlation analysis results are, a dependency analysis has been conducted. The dependency analysis explores whether or not an independent variable, which was proven to be correlated with a dependent variable, is in turn dependent on other variables. The dependency analysis in the present study is twofold. Firstly, a mutual dependency analysis based on the bi-variate correlation coefficient is conducted. Secondly, a partial correlation analysis is conducted in order to control for the effect of each of the two mutually dependent variables on each other in relation to other variables.

It has been found that there is only one pair of mobile robot design activities and strategies that exhibits mutual dependency. This pair comprises the lean mobile robot design strategy of adopting modular design and the lean mobile robot design strategy of striking a balance between functionality and design iterations. The bi-variate correlation coefficient between these two design strategies is 0.711 in magnitude, as shown in Table 3, which plainly implies strong potential for mutual dependency, since it is larger than 0.6 [25]. Hence, a partial correlation analysis between the lean design strategy of adopting modular design and mobile robot performance, controlling for the lean design strategy of striking a balance between functionality and design iterations, has been conducted, as shown in Table 4. The results of this partial correlation analysis have shown that the effect of the latter of these two design strategies on the relationship between the former and mobile robot performance is 0.049, as indicated in Table 4, i.e. negligible, since the amount of influence is less than 5% [25]. Hence, the correlation coefficient between the former and robot performance remains unchanged in the low correlation category. Also, a partial correlation analysis between the latter of these two design strategies and mobile robot performance, controlling for the former, has been conducted, as shown in Table 5.
Table 3. Result of the dependency analysis bi-variate correlation (Spearman's rho)

Striking a balance between Functionality & Design iterations vs. Adopting modular design:
    Correlation Coefficient    -0.711**
    Sig. (2-tailed)             0.000
    N                           174
**. Correlation is significant at the 0.01 level (2-tailed).

Table 4. Partial correlation between the lean mobile robot design strategy of adopting modular design and the normalized performance score, controlling for the lean mobile robot design strategy of striking a balance between functionality and design iterations

Control variable: Striking a balance between Functionality & Design iterations

                                                   Adopting modular design   SCORE-NORMAL
Adopting modular design   Correlation                      1.000                 .049
                          Significance (2-tailed)            .                   .427
                          df                                 0                   171
SCORE-NORMAL              Correlation                       .049                1.000
                          Significance (2-tailed)           .427                  .
                          df                                171                   0

Table 5. Partial correlation between the lean mobile robot design strategy of striking a balance between functionality and design iterations and the normalized performance score, controlling for the lean mobile robot design strategy of adopting modular design

Control variable: Adopting modular design

                                                   SCORE-NORMAL   Striking a balance between
                                                                  Functionality & Design iterations
SCORE-NORMAL              Correlation                  1.000                 .009
                          Significance (2-tailed)        .                   .890
                          df                             0                   171
Striking a balance        Correlation                   .009                1.000
between Functionality     Significance (2-tailed)       .890                  .
& Design iterations       df                            171                   0

The results of this partial correlation analysis have shown that the effect of the former of these two design strategies on the relationship between the latter and mobile robot performance is 0.009, as indicated in Table 5, i.e. negligible, since the amount of influence is less than 5% [25]. Thus, the correlation coefficient between the latter of these two design strategies and mobile robot performance remains unchanged in the no correlation category. This might raise the following question: how reliable are these results? This question will be addressed in the following section.
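For illustration, the sketch below computes a partial Spearman correlation in one standard way: rank-transform the variables, regress out the control variable, and correlate the residuals. The data are synthetic placeholders, and the SPSS partial-correlation procedure used in the study may differ in detail.

```python
# Hedged sketch of a partial Spearman correlation (one standard construction,
# not necessarily identical to the SPSS procedure used in this study).
import numpy as np
from scipy.stats import rankdata, pearsonr

rng = np.random.default_rng(1)
control = rng.integers(1, 5, size=174).astype(float)  # e.g. "striking a balance"
x = control + rng.normal(0, 1, 174)                   # e.g. "adopting modular design"
y = rng.normal(0, 1, 174)                             # normalized performance score

def residuals(v, c):
    """Residuals of v after least-squares regression on c (with intercept)."""
    A = np.column_stack([np.ones_like(c), c])
    beta, *_ = np.linalg.lstsq(A, v, rcond=None)
    return v - A @ beta

# Rank-transform, remove the control variable's effect, correlate what is left.
rx, ry, rc = rankdata(x), rankdata(y), rankdata(control)
r_partial, p = pearsonr(residuals(rx, rc), residuals(ry, rc))
print(f"partial rho={r_partial:.3f}  p={p:.3f}")
```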

4.4 Reliability Analysis Results


In reliability statistics, if the collected data reach the level of 0.7 or more on the Cronbach's Alpha scale, then the collected data have good internal consistency [24]. Since the Cronbach's Alpha internal reliability factor of the collected data is 0.705, as shown in Table 6, there is good internal consistency of the data, based on the average inter-item correlation.

Table 6. Reliability statistics
Cronbach's Alpha    Cronbach's Alpha Based on Standardized Items    N of Items
.678                .705                                            67

There are some assumptions based on which the reliability analysis has been conducted: (1) observations are independent; (2) errors are uncorrelated between the mobile robot design activities and strategies. The following section explores the implication of these results.
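As an illustration of the statistic reported in Table 6, the sketch below computes Cronbach's alpha from the standard variance formula on synthetic 67-item, 174-respondent data; the values in Table 6 remain the authoritative ones.

```python
# Cronbach's alpha from the standard formula:
#   alpha = k/(k-1) * (1 - sum of item variances / variance of total score)
# The 67-item Likert scores below are synthetic stand-ins, not the study data.
import numpy as np

rng = np.random.default_rng(2)
common = rng.normal(0, 1, (174, 1))  # shared trait induces internal consistency
items = np.clip(np.round(2.5 + common + rng.normal(0, 1, (174, 67))), 1, 4)

k = items.shape[1]
item_vars = items.var(axis=0, ddof=1).sum()   # sum of per-item variances
total_var = items.sum(axis=1).var(ddof=1)     # variance of the total score
alpha = k / (k - 1) * (1 - item_vars / total_var)
print(f"Cronbach's alpha = {alpha:.3f}")      # >= 0.7 suggests good consistency
```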

4.5 Implication of Percentage of Variation in Mobile Robot Performance due to a Design Variable (r²)
In a research project that includes several variables, it is often of interest to know how one variable is related to another. Correlational research attempts to determine whether, and to what degree, a relationship exists between two or more quantifiable variables, such as two design activities. Correlation implies prediction of the value of one variable if we know the value of the other correlated variable, but does not necessarily imply full causation. The reason why correlation does not necessarily imply full causality is that a third variable may be involved of which we are not aware. However, correlational research can imply partial correlation in terms of prediction of the percentage of variation in, for instance, variable B due to variable A [25]. The correlation coefficient (r) ranges in value from -1 to 1. A correlation coefficient of -1 indicates a perfect negative relation between the variables under examination. If the correlation coefficient has a value of 0, there is no relation between the variables under examination. A correlation coefficient of 1 is interpreted as a perfect positive relation between the variables under examination. The square of the correlation coefficient (r²) represents the percentage of variation in one of the two variables under investigation due to the other correlated variable, which implies a causal link between these two variables. Causality in this research is determined according to the percentage of variation in technical performance due to the variable (r²) at a moderate-to-high correlation coefficient (r). The design activities and strategies most correlated with mobile robot performance are determined according to the percentage of variation in technical performance explained by their correlation coefficients with mobile robot performance, and according to the resulting p-value. The aggregation of the percentages of variation in mobile robot performance due to the design variables collectively exceeds 100%, since the areas of mobile robot performance affected by the design variables overlap. This research has shown that the design activities and strategies indicated in Appendix A and Appendix B, i.e. the design variables, are independent, except for the 3% of them for which mutual dependency has been identified, as shown in the dependency analysis results in Section 4.3. This helps now in deducing the most efficient mobile robot design paradigm.
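To make this prioritization concrete, the short sketch below squares illustrative correlation coefficients to obtain r², keeps only variables significant at the 0.01 level, and ranks them in descending order, mirroring the ordering of Appendices A and B; the (r, p) pairs are invented for illustration and are not the study's results.

```python
# Ranking design variables by r^2 (variance explained), keeping only those
# significant at the 0.01 level. The (r, p) pairs below are illustrative.
variables = {
    "iterations in software subsystem (agile)": (0.50, 0.001),
    "considering reliability in design (lean)": (0.45, 0.002),
    "testable inter-deliverables (lean)":       (0.45, 0.003),
    "documenting design discussions (lean)":    (0.32, 0.004),
    "cheapest-design strategy":                 (0.08, 0.40),  # not significant
}

ranked = sorted(
    ((name, r * r) for name, (r, p) in variables.items() if p < 0.01),
    key=lambda kv: kv[1], reverse=True,
)
for name, r2 in ranked:
    print(f"r^2 = {r2:.2f}  {name}")
```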

V. HYBRID LEAN-AGILE MOBILE ROBOT DESIGN PROCESS

Based on the presented results, adopting both lean and agile robot design activities and strategies together in the mobile robot design process has proved to be practically valid. In addition, the design experiment has proved that both lean and agile mobile robot design activities and strategies are correlated with, and have a significant influence on, improving mobile robot performance. Besides, it has been found that there are mobile robot design activities which have attributes of both the lean and agile design paradigms; for instance, the design activity of evaluating design concepts exhibits attributes of both. This further supports the practical validity of adopting both lean and agile design activities and strategies together in the mobile robot design process. Therefore, the present research proposes a hybrid lean-agile mobile robot design paradigm in which both lean and agile mobile robot design activities and strategies are adopted in the design process, benefiting from the attributes of both paradigms. The proposed mobile robot hybrid lean-agile design pillars include:
(1) adopting the most effective lean design strategies, such as considering reliability of the mobile robot in the design process in terms of the ability of the mobile robot to perform its required functions under stated conditions for a specified robot service time;
(2) adopting the most effective agile design strategies, such as having designs that are less vulnerable to failure modes and are less exposed and less sensitive to the uncontrollable external factors, by shifting complexity to the software subsystem rather than to the mechanical subsystem;
(3) adopting the most effective lean design activities, such as adopting testable design inter-deliverables within and among system modules based on project milestones, in order to detect mistakes as early as possible and to minimize their impact on the successful completion of the design project;
(4) adopting the most effective agile design activities, such as having iterations in the software subsystem rather than in the mechanical subsystem, in order to end up with shorter development time;
(5) adopting a three-phase hybrid lean-agile risk management action plan that helps in integrating mobile robot design activities and strategies in order to minimize risk in the mobile robot design process. The first phase of this plan is before the beginning of the mobile robot design process, in which SWOT (Strengths, Weaknesses, Opportunities, and Threats) analysis is conducted. The second phase is during the mobile robot design process, in which the design team proves the value of the design concept to stakeholders at the end of each design phase, ensuring that the mobile robot satisfies stakeholders, is fit for its intended purpose, is of a quality to last its design lifetime, and can be made at an acceptable cost. The third and last phase is after the end of the mobile robot design process, in which Failure Modes and Effects Analysis (FMEA) is conducted and, ultimately, the models of mobile robots which fall short of the set target are killed off as soon as this shortfall appears. This approach to managing risk in the product design process is expected to help in realizing the sought harmonious integration between the product development activities and strategies;
(6) adopting a mobile robot design functional strategy in terms of the following items: standard components, modular design, communized architecture of mobile robot chassis and frame parts, and concurrent engineering in the design process.
The pragmatic reader is now invited to explore how valid this research is.

VI. RESULTS AND DISCUSSION

The questionnaire designed to collect data for the present study was designed with emphasis placed on maximizing the clarity of its wording and minimizing the influence of questionnaire problems such as bias. In order to maximize the clarity of the questions, clarification footnotes have been used. In addition, in order to minimize bias, a two-fold strategy has been adopted: firstly, in order to avoid researcher's bias, closed-ended questions have been used; secondly, in order to spot respondents' bias, repeatedly inverted questions have been used. The statistical sampling in this research is representative, and the experimental results are statistically significant with 99% reliability, without making any assumptions about the population of novel designers of mobile robots. In this study, the reliability statistics test results, based on the Spearman correlation coefficient and non-parametric statistical analysis, have proved the reliability of the data used in this research and thus have verified its results. In order to investigate the validity of the results of this study, an inferential statistical analysis was conducted on the resulting correlation coefficients. The p-value has been adopted as a measure of whether a result holds true for the population, with a cut-off p-value of 0.1 adopted in this research. In addition, the statistical sampling in this research is representative in terms of a sampling design that is suitable for limited generalization with cost-effectively fair statistical results, and a sample size that satisfies the minimum statistically representative sample size. Causality in this study is determined according to the percentage of variation in robot performance due to a variable (r²) at a moderate-to-high correlation coefficient (r). The research results are also valid in terms of the four validity types: firstly, in terms of statistical conclusion validity, since the resulting relationships are meaningful and reasonable; secondly, in terms of internal validity, since the results are causal rather than being just descriptive; thirdly, in terms of construct validity, since the results represent what is theoretically intended; fourthly, in terms of external validity, since the results can be generalized, in a limited way, to the population of novel designers of mobile robots. Hence, this research has proved the validity of the correlation between lean and agile mobile robot design activities and strategies and mobile robot performance. In addition, it has validated the practicality of having both lean and agile mobile robot design activities and strategies implemented in the mobile robot design process. The study has shown that 65% of typical mobile robot design activities and strategies are affiliated with the lean design paradigm, while the remaining 35% are affiliated with the agile design paradigm. In addition, it has been found with 99% reliability that 22% of the lean and 25% of the agile mobile robot design activities and strategies are among the design activities and strategies most positively and significantly correlated with improving mobile robot performance; these particular design activities and strategies have proved to significantly improve mobile robot performance by more than 10% and thus should receive the highest priority in being assigned design process resources. Thus, the study has shown that, with 99% reliability, more than 10% of the variation in mobile robot performance can be explained by adopting a hybrid lean-agile mobile robot design paradigm that adopts both lean and agile design activities and strategies together in the mobile robot design process. Hence, adopting a hybrid lean-agile mobile robot design paradigm is technically valid.

VII. CONCLUSION

This paper has determined how and when value is added in the mobile robot design process by empirically identifying the mobile robot design activities and strategies that most influence mobile robot performance. The paper first identified the key technical attributes of mobile robots. Secondly, the research methodology and statistical analysis helped in identifying the causal relationships between the lean and agile mobile robot design activities and strategies and mobile robot performance, as presented in Appendices A and B. Finally, the research methodology and statistical analysis also helped in proving that adopting both lean and agile mobile robot design activities and strategies in the mobile robot design process is practically valid. The proposed mobile robot hybrid lean-agile design pillars include: (1) adopting the most effective lean design strategies; (2) adopting the most effective agile design strategies; (3) adopting the most effective lean design activities; (4) adopting the most effective agile design activities; (5) adopting a three-phase hybrid lean-agile risk management action plan in order to minimize risk in the mobile robot design process, the first phase of which is before the beginning of the design process, in which SWOT (Strengths, Weaknesses, Opportunities, and Threats) analysis is conducted; the second phase is during the design process, in which the design team proves the value of the design concept to stakeholders at the end of each design phase, ensuring that the mobile robot satisfies stakeholders, is fit for its intended purpose, is of a quality to last its design lifetime, and can be made at an acceptable cost; and the third and last phase is after the end of the design process, in which Failure Modes and Effects Analysis (FMEA) is conducted and, ultimately, the models of mobile robots which fall short of the set target are killed off as soon as this shortfall appears, an approach to managing risk that is expected to help in realizing the sought harmonious integration between the product development activities and strategies; and (6) adopting a mobile robot design functional strategy in terms of standard components, modular design, communized architecture of mobile robot chassis and frame parts, and concurrent engineering in the design process. The study has shown that 65% of typical mobile robot design activities and strategies are affiliated with the lean design paradigm, while the remaining 35% are affiliated with the agile design paradigm. In addition, it has been found with 99% reliability that 22% of the lean and 25% of the agile mobile robot design activities and strategies are among the design activities and strategies most positively and significantly correlated with improving mobile robot performance; these particular design activities and strategies have proved to significantly improve mobile robot performance by more than 10% and thus should receive the highest priority in being assigned design process resources. Thus, the study has shown that, with 99% reliability, more than 10% of the variation in mobile robot performance can be explained by adopting a hybrid lean-agile mobile robot design paradigm that adopts both lean and agile design activities and strategies together in the mobile robot design process. Hence, adopting a hybrid lean-agile mobile robot design paradigm is technically valid.

VIII. FUTURE WORK

The present research invites further investigation of the validity of these results using a sample size large enough to represent the whole population of novel designers of mobile robots. In addition, it invites further investigation of the pillars and validity of the proposed hybrid lean-agile mobile robot design paradigm in an industrial setting. Moreover, the conclusions indicated herein can be used to guide a new experiment to see whether the results would be improved.

ACKNOWLEDGEMENT
The people of the Cambridge Engineering Design Centre, Cambridge University, are acknowledged for their help in accomplishing this work. Also, the support of this research provided by the EPSRC under IMRC grant number EP/E001777/1 and by Cambridge Overseas Trust is acknowledged.

REFERENCES
[1]. Baines, T., Lightfoot, H., Williams, G.M., Greenough, R., (2006) State-of-the-art in lean design engineering: a literature review on white collar lean, Proceedings of the Institution of Mechanical Engineers, Part B: Journal of Engineering Manufacture, Vol. 220, pp. 1539-1547.
[2]. Womack, J., Jones, D., Roos, D., (1990) The machine that changed the world, New York: Macmillan.
[3]. Womack, J.P., Jones, D.T., (1996) Lean thinking, New York: Simon and Schuster.
[4]. Yusuf, Y.Y., Sarhadi, M., Gunasekaran, A., (1999) Agile manufacturing: the drivers, concepts and attributes, International Journal of Production Economics, Vol. 62, pp. 33-43.
[5]. Clemson, B., Alasya, D., (1992) Implement TQM and CIM together, Proc. International Engineering Management Conference, IEEE Engineering Management Society, New York.
[6]. Maul, R., Tranfield, D., (1992) Methodological approaches to the regeneration of competitiveness in manufacturing, Proc. 3rd International Conference on Factory 2000, IEE, UK, pp. 12-17.
[7]. Baxter, M.R., (1995) Product design: practical methods for the systematic development of new products, Chapman and Hall.
[8]. Annappa, C.M., Panditrao, K.S., (2012) Application of value engineering for cost reduction: a case study of universal testing machine, International Journal of Advances in Engineering & Technology, Vol. 4, No. 1, pp. 618-629.
[9]. Chalupnik, M.J., Eckert, C.M., Clarkson, P.J., (2006) Modelling design processes to improve robustness, Proc. 6th Integrated Product Development Workshop, IPD 2006, Schonebeck/Bad Salzelmen b. Magdeburg, Germany.
[10]. Chalupnik, M.J., Wynn, D.C., Eckert, C.M., Clarkson, P.J., (2007) Understanding design process robustness: a modelling approach, Proc. 16th International Conference on Engineering Design (ICED'07), Paris, France, pp. 455-456.
[11]. Aravinth, P., Muthu Kumar, T., Dakshinamoorthy, A., Arun Kumar, N., (2012) A criticality study by design failure mode and effect analysis (FMEA) procedure in LINCOLN V350 PRO welding machine, International Journal of Advances in Engineering & Technology, Vol. 4, No. 1, pp. 611-617.
[12]. Browning, T.R., Deyst, J.J., Eppinger, S.D., (2002) Adding value in product development by creating information and reducing risk, IEEE Transactions on Engineering Management, Vol. 49, No. 4, pp. 443-458.
[13]. ABB Robotics Product Guide, (2008) http://www.abb.com/product/ap/seitp327/cc4949febe7dcfe9c12573fa0057007a.aspx
[14]. Christensen, H.I., Dillmann, R., Hagele, M., Kazi, A., Norefors, U., (July 2008) European robotics, European Robotics Forum.
[15]. Fanuc Robotics, M410 ib series, Product Guide, (2008) http://www.fanucrobotics.com/file_repository/fanucmain/m-410iB%20Series.pdf
[16]. HK Systems, Automated Guided Vehicles, HK30/F, Product Guide, (2008) http://www.hksystems.com/agv/forked-vehicles.cfm?m=2&s=3
[17]. Alipour, K., Moosavian, S.A.A., Bahramzadeh, Y., (2008) Dynamics of wheeled mobile robots with flexible suspension: analytical modelling and verification, International Journal of Robotics and Automation, Vol. 23, No. 4, pp. 242-250.
[18]. Moosavian, S.A.A., Alipour, K., (2007) On the dynamic tip-over stability of wheeled mobile manipulators, International Journal of Robotics and Automation, Vol. 22, No. 4, pp. 322-328.
[19]. Liu, D., Deters, R., Zhang, W.J., (August 2009) Architectural design for resilience, Enterprise Information Systems, pp. 1-16.
[20]. Otto, K.N., Antonsson, E.K., (1991) Trade-off strategies in engineering design, Research in Engineering Design, Vol. 3, No. 2, pp. 87-104.
[21]. Cambridge University's Engineering Department (CUED), (2007) Second Year Undergraduate Integrated Design Project, Cambridge University Press.
[22]. Alder, H.L., Roessler, E.B., (1962) Introduction to probability and statistics, W.H. Freeman and Company.
[23]. Wackerly, D.D., Mendenhall, W., Scheaffer, R.L., (1996) Mathematical statistics with applications, Duxbury Press.
[24]. Sekaran, U., (2003) Research methods for business, John Wiley & Sons, Inc.
[25]. Cohen, J., (1988) Statistical power analysis for the behavioural sciences, New York: Academic Press.
[26]. Prabhakar Murthy, D.N., Rausand, M., Osteras, T., (2008) Product reliability: specification and performance, Springer.
[27]. Coulibaly, A., Houssin, R., Mutel, B., (2008) Maintainability and safety indicators at design stage for mechanical products, Computers in Industry, Vol. 59, No. 5.
[28]. Yavuz, H., (2007) An integrated approach to the conceptual design and development of an intelligent autonomous mobile robot, Robotics and Autonomous Systems, Vol. 55, pp. 498-512.
[29]. Levardy, V., Browning, T.R., (2005) Adaptive test process: designing a project plan that adapts to the state of a project, INCOSE Publications.
[30]. Huang, C., (2000) Overview of modular product development, Proc. of National Science Council, ROC(A), Vol. 24, No. 3, pp. 149-165.
[31]. Clark, K.B., Baldwin, C.Y., (2000) Design rules. Vol. 1: The power of modularity, Cambridge, Massachusetts: MIT Press.
[32]. Amon, C.H., Finger, S., Siewiorek, D.P., Smailagic, A., (1995) Integration of design education, research and practice at Carnegie Mellon University: a multi-disciplinary course in wearable computer design, Proc. Frontiers in Education Conference, IEEE, Vol. 2, pp. 14-22.
[33]. Pahl, G., Beitz, W., (1998) Engineering design: a systematic approach, Springer-Verlag.
[34]. Court, A.W., (1998) Issues for integrating knowledge in new product development: reflections from an empirical study, Knowledge-Based Systems, Vol. 11, pp. 391-398.
[35]. Bernard, R., (1999) Early evaluation of product properties within the integrated product development, Shaker Verlag.
[36]. Browning, T.R., Fricke, E., Negele, H., (2005) Key concepts in modelling product development processes, Wiley InterScience, pp. 104-128.
[37]. Krishnan, V., (1998) Modeling ordered decision making in product development, European Journal of Operational Research, Vol. 111, No. 2, pp. 351-368.
[38]. Pearce, R.D., (1999) Decentralised R&D and strategic competitiveness: globalised approaches to generation and use of technology in multinational enterprises (MNEs), Research Policy, Vol. 28, pp. 157-178.

APPENDIX A
Lean Design Activities and Strategies Moderately Positively Correlated with Mobile Robot Performance
Each entry below lists: the design activity/strategy (Observation/Hypothesis description); the Design Phase; whether it is a design Strategy or Activity; whether it was identified by Observation or by a Hypothesis from the literature; the Reference in Literature to the Hypothesis, if any; and the Percentage of Variation in Mobile Robot Performance (r²).

1. Considering reliability of the mobile robot in the design process, in terms of the ability of the mobile robot to perform its required functions under stated conditions for a specified robot service time.
   Design phase: Conceptual & Preliminary; Type: Strategy; Origin: Hypothesis; Reference: [26, 27]; r² = 0.2**

2. Aiming at striking a balance between fast response on one hand and stability, accuracy, and payload fulfilment of the mobile robot final concept on the other hand.
   Design phase: Conceptual & Preliminary; Type: Strategy; Origin: Hypothesis; Reference: [28, 15]; r² = 0.11**

3. Adopting the simplest design that meets the design problem requirements specification using the minimum set and most effective combination of system components, rather than the cheapest design or the lightest-weight design.
   Design phase: Conceptual & Preliminary; Type: Strategy; Origin: Observation; Reference: N/A; r² = 0.1**

4. Adopting testable design inter-deliverables within and among system modules based on project milestones, in order to detect mistakes as early as possible and to minimize their impact on the successful completion of the design project.
   Design phase: Preliminary & Detailed; Type: Activity; Origin: Hypothesis; Reference: [29, 30]; r² = 0.2**

5. Adopting early verification and validation of the design concept, e.g. early testing in the design process, in order to avoid becoming trapped in incompetent design concepts.
   Design phase: Conceptual & Preliminary; Type: Activity; Origin: Observation; Reference: N/A; r² = 0.11**

6. Given the intertwined and overlapping nature of the subsystems of the mobile robot, e.g. the mechanical-electronics interconnection for robot speed, adopting modular deliverables and testing, i.e. testing the deliverables of each module, rather than sub-system deliverables and testing.
   Design phase: Preliminary & Detailed; Type: Activity; Origin: Hypothesis; Reference: [31]; r² = 0.11**

7. Having a multi-disciplinary team of novel designers, each of whom is aware of more than one relevant discipline (such as material science, design development approaches, and leadership skills) and of the overlap and intersections between them, in order to improve the opportunity of ending up with a better design and to minimize the risk of mistake occurrence.
   Design phase: Phase of Design Scope; Type: Activity; Origin: Hypothesis; Reference: [32]; r² = 0.1**

8. Setting an operational design strategy of modular testing, i.e. testing the deliverables between the system sub-modules as a way of verifying their conformance to the conceptual functional requirements, in order to detect mistakes as early as possible and to minimize their impact on the successful completion of the design project.
   Design phase: Conceptual & Preliminary; Type: Strategy; Origin: Hypothesis; Reference: [29]; r² = 0.1**

9. Documenting the outcome of design discussions and consequently analyzing that outcome.
   Design phase: Conceptual & Preliminary; Type: Activity; Origin: Hypothesis; Reference: [33]; r² = 0.1**

10. Meeting up collectively in the conceptual design phase more than in the detailed design phase.
    Design phase: Conceptual & Detailed; Type: Activity; Origin: Hypothesis; Reference: [34]; r² = 0.1**

11. Starting the design process as early as possible in the project timeframe and assigning sufficient time for presenting the outcome of the design process milestones.
    Design phase: Phase of Design Scope & Conceptual; Type: Activity; Origin: Observation; Reference: N/A; r² = 0.1**

12. Checking the accuracy of manufacturing and assembly of the final prototype, in order to avoid unexpected failure due to manufacturing defects and/or assembly mistakes.
    Design phase: Detailed; Type: Activity; Origin: Hypothesis; Reference: [31]; r² = 0.1**

13. Adopting quick testing of inter-deliverables between the modules of the software draft code, and extensive testing of the overall software draft code on a prototype PCB or an equivalent facility at the end of the project, in order to strike a balance between minimizing the cost of testing and detecting mistakes as early as possible.
    Design phase: Preliminary & Detailed; Type: Activity; Origin: Hypothesis; Reference: [29]; r² = 0.1**

**. Correlation is significant at the 0.01 level (2-tailed).

APPENDIX B
Moderate Positively Correlated Agile Activities and Strategies with Mobile Robot Performance
The entries below follow the same column layout as Appendix A.

1. Shifting complexity towards the software subsystem, rather than towards the mechanical subsystem, in order to have the largest number of design iterations, if any, occur within the software subsystem, followed accordingly by the electronics subsystem's number of iterations. - Preliminary & Detailed design phases; Activity; Observation; Ref. N/A; r2 = 0.25**
2. Having iterations in the software subsystem, rather than in the mechanical subsystem, in order to end up with shorter development time. - Preliminary & Detailed design phases; Activity; Observation; Ref. N/A; r2 = 0.25**
3. Having designs that are less vulnerable to failure modes (e.g. out-of-plane buckling, roll mode and pitch mode instability, and friction wear) (and consequently have better reliability) and are less exposed and less sensitive to the uncontrollable external factors (and consequently have better robustness), by shifting complexity to the software subsystem rather than to the mechanical subsystem. - Conceptual & Preliminary design phases; Strategy; Observation; Ref. N/A; r2 = 0.2**
4. Having fewer constraints on the design by shifting complexity to the software subsystem and consequently to the virtual space, e.g. data processing time, rather than to the mechanical subsystem and consequently to the physical space, e.g. suspension space. - Conceptual & Preliminary design phases; Strategy; Observation; Ref. N/A; r2 = 0.2**
5. Adopting top-down system structure decomposition in order to analyze the functional structure of the mobile robot and consequently to map this functional structure to the requirements specification and consequently to come up with the mobile robot functional requirements. - Phase of Design Scope; Activity; Hypothesis; Ref. [35]; r2 = 0.1**
6. Having resources provisions for unforeseen troubles, in order to minimize vulnerability of the design development process to the influence of external factors. - Phase of Design Scope; Strategy; Hypothesis; Ref. [36]; r2 = 0.1**
7. Adopting a decentralised decision making strategy, by empowering sub-teams to be authorised to make tactical decisions without need to refer them to the team leader, rather than strategic decisions that can degrade performance in this case. - Conceptual & Preliminary & Detailed design phases; Strategy; Hypothesis; Ref. [37, 38]; r2 = 0.1**
8. Making a prediction of the progressive failure of the design through drawing sketches or through conducting a finite element analysis, in order to avoid unexpected failure. - Conceptual & Preliminary design phases; Activity; Observation; Ref. N/A; r2 = 0.1**

**. Correlation is significant at the 0.01 level (2-tailed).

AUTHORS BIOGRAPHY
Salah A.M. Elmoselhy holds an MS in mechanical design and production engineering from Cairo University. He also holds an MBA in international manufacturing business from Maastricht School of Management (MSM). He has ten years of industrial experience in CAD/CAM and robotised manufacturing systems. He was recently a researcher at the Engineering Department and Fitzwilliam College of Cambridge University, from which he received a Diploma of postgraduate studies in engineering design. He is currently a PhD candidate in mechanical engineering working with the International Islamic University Malaysia (IIUM) and the Center for Sustainable Mobility at Virginia Polytechnic Institute and State University (Virginia Tech).


PRESSURE DROP OF CUO-BASE OIL NANOFLUID FLOW INSIDE AN INCLINED TUBE

Mahdi Pirhayati1, Mohammad Ali Akhavan-Behabadi2, Morteza Khayat1
1 Department of Mechanical and Aerospace Engineering, Science and Research Branch, Islamic Azad University, Tehran, Iran
2 School of Mechanical Engineering, College of Engineering, University of Tehran, Tehran, Iran

ABSTRACT
An empirical study was carried out to investigate the pressure drop of forced laminar nanofluid flow inside an inclined copper tube under constant heat flux at the outer wall. The CuO-base oil nanofluid, at different nanoparticle weight concentrations of 0.5%, 1% and 2%, was produced by means of an ultrasonic device using the two-step method. The effects of tube inclination and of nanofluid concentration are studied. Results show that using the nanofluid slightly increases the pressure drop. Also, increasing the tube inclination from zero (horizontal) to 30 degrees at constant nanofluid concentration decreases the pressure drop for Re < 170.
KEYWORDS: Nanofluid, Pressure drop, Inclination, Single phase, Constant heat flux, Experimental

I. INTRODUCTION

Thermal load removal is a great concern in many industries including power plants, chemical processes and electronics. In order to meet the ever-increasing need for heat removal, various heat transfer enhancement techniques have been developed. Most of these methods are based on structure variation, vibration of the heated surface, injection or suction of fluid, and applying electrical or magnetic fields, which are well documented in the literature [1,2]. However, applying these enhanced heat transfer techniques is no longer feasible for the cooling requirements of future generations of microelectronic systems, since they would result in undesirable cooling system size and low efficiency of heat exchangers. To obviate this problem, nanofluids with enhanced thermo-fluidic properties have been proposed over the past decade. A nanofluid is a uniform dispersion of nanometer-sized particles inside a liquid, first proposed by Choi [3]. Excellent characteristics of nanofluids such as enhanced thermal conductivity, long-time stability, and little penalty in pressure drop increase and tube wall abrasion have motivated many researchers to study the thermal and flow behavior of nanofluids. These studies are mainly focused on effective thermal conductivity, phase change behavior, tribological properties, and flow and convective heat transfer of nanofluids. A wide range of experimental and theoretical studies has been performed on the effect of different parameters such as particle concentration, particle size, mixture temperature and Brownian motion on the thermal conductivity of nanofluids. The results showed an increase in thermal conductivity of the nanofluid with increasing nanoparticle concentration and mixture temperature [4-7].

Wen and Ding [8] studied Al2O3/water nanofluid heat transfer in laminar flow under constant wall heat flux and reported an increase in the nanofluid heat transfer coefficient with increasing Reynolds number and nanoparticle concentration, particularly at the entrance region. In addition, a few works have studied the friction factor characteristics and pressure drop of nanofluid flow besides the convective heat transfer. Xuan and Li [9] experimentally investigated the flow and convective heat transfer characteristics of Cu/water nanofluids inside a straight tube with a constant heat flux at the wall. Results showed that nanofluids give substantial enhancement of heat transfer rate compared to pure water. They also claimed that the friction factor for the nanofluids at low volume fraction did not produce an extra penalty in pumping power. In laminar flow, Chandrasekar et al. [10] investigated the fully developed convective heat transfer and friction factor characteristics of Al2O3-water nanofluid flowing through a uniformly heated horizontal tube with and without wire coil inserts. They concluded that for the nanofluid with a volume concentration of 0.1%, the Nusselt number increased up to 12.24% compared to that of distilled water; however, the friction factors of the same nanofluid were almost equal to those of water at the same Reynolds numbers. Ben Mansour et al. [11] numerically investigated water-Al2O3 nanofluid inside an inclined tube. For laminar nanofluid flow in a heated tube, they found that using the nanofluid increases the buoyancy-induced secondary flow and decreases friction at the inner wall. Recently, the effects of adding nanodiamond particles at different concentrations to engine oil on the pressure drop inside a microfin tube, under constant heat flux at the outer wall and laminar flow conditions, were investigated by Akhavan-Behabadi et al. [12]. The results show an increase in pressure drop with increasing nanoparticle concentration. A review of the literature shows that only a few articles have considered the pressure drop of nanofluid flow inside an inclined tube rather than a horizontal tube. In the present work, the simultaneous effects of adding nanoparticles to the base fluid and of tube inclination on the flow pressure drop are studied. A new suspension, namely CuO-base oil nanofluid, is selected for this investigation. The main reason for choosing CuO-base oil nanofluid is that copper oxide nanoparticles are used as additives for industrial oils such as engine oil, heat transfer oil and lubricating oil in order to remove heat from high heat flux surfaces. These additives have also shown anti-wear and anti-friction characteristics, resulting in reduced pressure drop, due to their spherical shapes [13, 14]. Also, to study the behavior of CuO nanoparticles more effectively, a type of oil with no additives (SN-500) is used. This type of oil is the basic component of the industrial oils, so the effect of nanoparticles on the heat transfer performance of the specified oil can be generalized to the mentioned industrial oils for the sake of heat transfer enhancement. The results of this study could lead manufacturers to build smaller and more efficient non-horizontal heat exchangers with various industrial applications, such as vertical heat exchangers replacing radiators in vehicles and in modern thermal power plants, operating with lower pumping power of the nanofluid flow.
The next sections give useful information about the CuO nanoparticles, the method of producing CuO-base oil nanofluid with different concentrations, the experimental apparatus, the measurement tools and the test section. In addition, in the results section, the experimental data are validated against theory for the base oil at tube inclinations of zero and 30 degrees.

II. NANOFLUID PREPARATION

The solid particles used in this study were CuO, produced with an average particle size of 40 nm and a purity of 99% (by chemical analysis). The SEM (scanning electron microscope) micrograph of the CuO nanoparticles and the XRD (X-ray diffraction) pattern are shown in Figs. 1 and 2, respectively. The reflections in the XRD pattern can be attributed to CuO using JCPDS (Joint Committee on Powder Diffraction Standards) data. It can also be seen from the SEM image of the sample that the majority of nanoparticles are in the form of large agglomerates before dispersion.


Figure 1. SEM image of CuO nanoparticles

Figure 2. XRD analysis of CuO nanoparticles

Nanofluids with particle weight concentrations of 0.5%, 1% and 2% were prepared by dispersing a specified amount of CuO nanoparticles in the base oil using an ultrasonic processor (Hielscher Company, Germany) generating ultrasonic pulses of 400 W at 24 kHz.
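As a quick arithmetic aside, the nanoparticle mass needed for a given weight concentration follows from the definition w = m_np/(m_np + m_oil). The MATLAB sketch below computes the CuO mass required per batch; the batch size is an assumed value for illustration, not a quantity reported in the paper.

% Mass of CuO needed per batch for a target weight concentration.
% w = m_np / (m_np + m_oil)  =>  m_np = w * m_oil / (1 - w)
m_oil = 1.0;                  % kg of SN-500 base oil per batch (assumed)
w = [0.005 0.01 0.02];        % target weight fractions: 0.5%, 1%, 2%
m_np = w .* m_oil ./ (1 - w); % kg of CuO nanoparticles required
for k = 1:numel(w)
    fprintf('w = %.1f%% -> %.1f g CuO per kg of oil\n', 100*w(k), 1000*m_np(k));
end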

Figure 3. Schematic diagram of experimental apparatus

This device is used to break the large agglomerates of nanoparticles in the fluid and to make a stable suspension. No surfactant was used, as surfactants may influence the effective thermal conductivity of nanofluids. It was observed with the naked eye that the nanofluids remained uniformly dispersed for 24 h and that complete sedimentation occurred after a week.

III. EXPERIMENTAL APPARATUS

The schematic diagram of the experimental apparatus is shown in Fig. 3. The flow loop consists of a rotary test section, heat exchanger, reservoir, gear pump, flow meter and flow controlling system. Fluid leaving the test section enters the flow meter, cools partially in the reservoir, is then pumped through a heat exchanger in which water is used as the cooling fluid, and again enters the test section. The test section can be rotated from zero degrees (horizontal) to 90 degrees (vertical). In this study, nanofluids with different particle weight fractions of 0.5%, 1% and 2% are used as the working fluids; pure base oil is also used for the sake of comparison. The experimental apparatus is designed to measure the pressure drop characteristics of the working fluids over the length of the test section in the horizontal and inclined (+30 degree) tube states. A round copper tube of 12.7 mm outer diameter, 0.9 mm wall thickness and 1200 mm length is used. Fig. 4 shows the cross-sectional area of the applied rotary test section. The nanofluid flowing inside the test section is heated by an electrical heating coil wrapped around it to generate a constant heat flux. The flow measuring section consists of a 1-liter glass vessel with a valve at its bottom; the flow rate is measured directly from the time required to fill the glass vessel. To adjust the flow rate, the valve of the bypass line is used. The total pressure drop of the fluid flow along the test section is measured by a high-precision differential pressure transmitter (PMD-75). As shown in Fig. 3, this instrument measures the pressure difference between the inlet and outlet of the test section. Provisions are also made to measure all the other necessary parameters. The ranges of the operating parameters are defined in Table 1.

Figure 4. Cross section (10.9 mm inner diameter, 0.9 mm wall thickness) and inclination of the rotary test section

IV. DATA COLLECTION AND DATA REDUCTION

All the physical properties of the base oil and nanofluids are measured using accurate measuring instruments. To measure the density of the base oil and nanofluids with different weight fractions at different temperatures, an SVM3000 instrument (made in Austria) is used. The rheological behavior and viscosity of the CuO-base oil nanofluid were measured using a Brookfield viscometer (DV-II+Pro Programmable Viscometer) with a temperature-controlled bath, supplied by Brookfield Engineering Laboratories, USA.
Table 1. The range of operating parameters
Parameter: Range
Nanofluid: CuO-base oil
Nanoparticle weight concentration, %: 0-2
Heat flux, W/m2: 3200
Reynolds number: 10-170
Mass flow rate, kg/s: 0.008-0.048
Tube length, mm: 1200
Outer diameter, mm: 12.7

V. RESULTS AND DISCUSSION

5.1. Validation check

In order to verify the accuracy and reliability of the experimental system, the pressure drops are experimentally measured using base oil as the working fluid before obtaining those of the oil-based CuO nanofluids. The experiments are conducted at Reynolds numbers up to 170. Due to the low Reynolds number of the flow, hydrodynamically fully developed laminar flow is assumed for the theoretical calculations. Also, because the oil has a high Prandtl number, the flow is in the thermal entrance region (x/D < 0.05RePr). The experimentally measured pressure drop is compared with the pressure drop obtained from the theoretical relation for fully developed laminar flow:

ΔP = 128 μ L Q / (π D^4)    (1)

in which the viscosity μ is evaluated at the average of the inlet and outlet temperatures, L is the tube length, Q the volumetric flow rate and D the inner diameter. Fig. 6 shows the variation of the theoretical values of pressure drop along the test section versus the measured pressure drop.

Figure 6. Comparison between theoretical and experimental pressure drop of base oil flow inside the round tube at zero and 30 degrees of tube inclination.

As can be seen from Fig. 6, the deviation of the experimental data from the theoretical values is within -18% and +3%, showing good agreement between theory and experiment. The pressure drop of CuO-base oil nanofluids flowing inside the round tube in the horizontal and inclined states is then investigated experimentally for laminar flow under constant heat flux. Note that in the following results the pressure drop data are not obtained at exactly the same Reynolds numbers, because the viscosity of the oil-based nanofluid depends strongly on the fluid temperature and particle weight fraction.
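For readers wishing to reproduce the theoretical baseline, the MATLAB sketch below evaluates Eq. (1) at one assumed operating point; the oil properties are illustrative placeholders rather than the measured values used in the paper.

% Theoretical laminar pressure drop, Eq. (1), at an assumed operating point.
mu   = 0.045;      % Pa.s, oil viscosity at mean bulk temperature (assumed)
rho  = 870;        % kg/m^3, oil density (assumed)
L    = 1.2;        % m, test-section length (from the paper)
D    = 0.0109;     % m, inner diameter (12.7 mm OD minus 2 x 0.9 mm wall)
mdot = 0.02;       % kg/s, mass flow rate (within the reported range)
Q  = mdot / rho;                      % volumetric flow rate, m^3/s
Re = 4*mdot / (pi * D * mu);          % Reynolds number (laminar if << 2300)
dP = 128 * mu * L * Q / (pi * D^4);   % pressure drop, Pa
fprintf('Re = %.0f, theoretical dP = %.2f kPa\n', Re, dP/1000);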

5.2. PRESSURE DROP RESULTS

The measured pressure drop along the round tube for the flow of pure oil and CuO-base oil nanofluids with different weight fractions, as a function of Reynolds number in the horizontal and inclined states, is given in Figs. 7 and 8.

The results show that, with increasing nanoparticle concentration, the pressure drop increases in both the horizontal and the inclined tube.

Figure 7. Variation of pressure drop with Reynolds number for base oil and nanofluid flow inside the horizontal tube at constant heat flux.

Figure 8. Variation of pressure drop with Reynolds number for base oil and nanofluids flow inside the inclined tube at constant heat flux.

There is no noticeable increase in the pressure drop of the nanofluid with 0.5 wt.% particle concentration compared to that of the pure oil flow. The increasing trend continues for the nanofluids with higher weight fractions. In addition, the results show that for the round tube the maximum pressure drop increase is obtained when the nanofluid with 2 wt.% concentration is used instead of the base fluid, in both test section positions. The variation of pressure drop versus Reynolds number for the 0.5 wt.% nanofluid flow inside the round tube at constant heat flux, at zero and 30 degrees of tube inclination, is depicted in Fig. 9. The obtained results show that the tube inclination decreases the pressure drop remarkably compared to that of the horizontal tube at low Reynolds numbers. This can be attributed to a decrease in wall shear stress, which results in the pressure drop decline.


Figure 9. Comparison of the variation of pressure drop with Reynolds number for the horizontal tube and the inclined (30 degree) tube at constant nanofluid concentration (0.5 wt.%).

VI. CONCLUSION

In the present study, the pressure drop characteristics of pure base oil and CuO-base oil nanofluid flow inside horizontal and inclined tubes are investigated. 1. For a given tube and at the same flow conditions, there is an increase in the pressure drop of nanofluids compared to that of the base liquid. 2. At the same flow conditions and for a given nanofluid with constant particle concentration, tube inclination significantly decreases the pressure drop compared to that of the horizontal tube. In future studies, a wide range of work shall be performed on the heat transfer and pressure drop of various nanoparticles, such as MWCNTs, under different conditions, because of the complicated behavior of nanofluids. Furthermore, the effect of tube inclination for non-circular cross-sections, such as a flattened tube, can be studied.

ACKNOWLEDGEMENT
The authors would like to acknowledge the financial support of Hamyan Hayat Gostar Company (HHG) for this research. Also, the financial support of Iranian Nanotechnology Initiative Council (INIC) is appreciated.

REFERENCES
[1] A.E. Bergles, Recent development in convective heat transfer augmentation, Appl. Mech. Rev. 26 (1973) 675-682.
[2] J.R. Thome, Engineering Data Book III, Wolverine Tube Inc., 2006.
[3] S.U.S. Choi, Enhancing thermal conductivity of fluids with nanoparticles, ASME 231 (1995) 99-105.
[4] M. Chandrasekar, S. Suresh, A. Chandra Bose, Experimental investigations and theoretical determination of thermal conductivity and viscosity of Al2O3/water nanofluid, Exp. Therm. Fluid Sci. 34 (2010) 210-216.
[5] W. Yu, H. Xie, L. Chen, Y. Li, Enhancement of thermal conductivity of kerosene-based Fe3O4 nanofluids prepared via phase-transfer method, Colloids Surf. A 355 (2010) 109-113.
[6] H.A. Mintsa, G. Roy, C.T. Nguyen, D. Doucet, New temperature dependent thermal conductivity data for water-based nanofluids, Int. J. Therm. Sci. 48 (2009) 363-371.
[7] R.S. Vajjha, D.K. Das, Experimental determination of thermal conductivity of three nanofluids and development of new correlations, Int. J. Heat Mass Transfer 52 (2009) 4675-4682.
[8] D. Wen, Y. Ding, Experimental investigation into convective heat transfer of nanofluids at the entrance region under laminar flow conditions, Int. J. Heat Mass Transfer 47 (2004) 5181-5188.
[9] Y. Xuan, Q. Li, Investigation on convective heat transfer and flow features of nanofluids, J. Heat Transfer 125 (2003) 151-155.
[10] M. Chandrasekar, S. Suresh, A. Chandra Bose, Experimental studies on heat transfer and friction factor characteristics of Al2O3/water nanofluid in a circular pipe under laminar flow with wire coil inserts, Exp. Therm. Fluid Sci. 34 (2010) 122-130.
[11] R. Ben Mansour, N. Galanis, C.T. Nguyen, (2009), "Developing laminar mixed convection of nanofluids in an inclined tube with uniform wall heat flux", International Journal of Numerical Methods for Heat & Fluid Flow, Vol. 19, Iss. 2, pp. 146-164.
[12] M.A. Akhavan-Behabadi, M. Ghazvini, E. Rasouli, Experimental investigation on heat transfer and pressure drop of nano diamond-engine oil nanofluid in a microfin tube, IHTC14-22483, (2010).
[13] A.H. Battez, R. Gonzalez, J.L. Viesca, J.E. Fernandez, J.M. Diaz Fernandez, A. Machado, R. Chou, J. Riba, CuO, ZrO2 and ZnO nanoparticles as antiwear additive in oil lubricants, Wear 265 (2008) 422-428.
[14] Y.Y. Wu, W.C. Tsui, T.C. Liu, Experimental analysis of tribological properties of lubricating oils with nanoparticle additives, Wear 262 (2007) 819-825.

AUTHORS
Mahdi Pirhayati was born in Tehran, Iran. He received his Bachelor's degree in Mechanical Engineering from Shahid Chamran University, Ahvaz, Iran, in 2009 and is now a Master's student in Mechanical Engineering at the Science and Research Branch, Islamic Azad University, Tehran, Iran (2012). His research interests include hybrid vehicles, green vehicles, new energy generation and renewable energy. He is currently working on nanofluid flow under different conditions.

Mohammad A. Akhavan-Behabadi is a professor of mechanical engineering at University of Tehran, Iran. He received his Ph.D. from the Indian Institute of Technology at Roorkee, India, in 1993. He is the head of the School of Mechanical Engineering, College of Engineering, University of Tehran, Tehran, Iran. He has co-authored more than 70 journal and conference publications. His research interests include experimental two-phase and single-phase convective heat transfer. He is currently working on augmentation of heat transfer by different passive techniques in two-phase flow and also nanofluid single-phase flow.

Morteza Khayat is an assistant professor of mechanical engineering at the Science and Research Branch, Islamic Azad University, Tehran, Iran. He received his Ph.D. from the Sharif University of Technology, Tehran, Iran, in 2007. He is the head of the College of Mechanical and Aerospace Engineering, Islamic Azad University, Tehran, Iran. His research interests include multiphase flow and nanofluids. He is currently working on heat transfer in porous media.


A MAXIMUM POWER POINT TRACKING METHOD BASED ON ARTIFICIAL NEURAL NETWORK FOR A PV SYSTEM
Abdessamia Elgharbi1, Dhafer Mezghani1, Abdelkader Mami2
1 Laboratory of Analyse and Control of Systems, Department of Electric Engineering, National School of Engineering of Tunis, PB 37, Le Belvedere, Tunis 1002, Tunisia.
2 Department of Physics, Faculty of Sciences of Tunis, Electronic Laboratory, 2092 El Manar, Tunis, Tunisia.

ABSTRACT
Solar photovoltaic system characteristics depend on environmental factors; therefore, a maximum power point tracking (MPPT) technique is needed to keep the operating point of the system as close as possible to the MPP. In this paper we present a PV generator composed of four Kaneka GSA211 PV panels (60 W) placed in series, and a neural network model developed by the authors. This study focuses on the application of artificial neural networks to extract the maximum power point of a photovoltaic generator that feeds a motor-pump unit through a PWM inverter installed in the laboratory. The output of the ANN is the optimal voltage Vopt, which is compared to the PV generator voltage Vpv and then passed through an integrator to extract the stator frequency fs, which is given to the PWM control of the DC-AC inverter to find the sinusoidal reference voltage and the sampled wave. The ANN is trained with the Levenberg-Marquardt algorithm, and the whole technique is simulated and studied using MATLAB software [24].

KEYWORDS: MPPT, Artificial Neural Network, LM algorithm, PV system, MATLAB Simulink.

I. INTRODUCTION

The production of energy is a challenge of great importance for the coming years. Indeed, the energy needs of the population are rising, and developing countries will need more energy to complete their development. Today, much of the world's energy is supplied from fossil sources. Consumption of these sources leads to emissions of greenhouse gases and an increase in pollution. Moreover, excessive consumption depletes the reserves of this type of energy in a way that is dangerous for future generations. Unlike fossil fuels, renewable energies such as solar, wind, hydropower and biomass are unlimited and reduce the emission of greenhouse gases. Renewable energies comprise a number of technology clusters classified by the energy source exploited and the useful energy obtained. The studied photovoltaic structure is composed of a photovoltaic generator, a DC/AC inverter and a motor-pump unit connected to a storage tank. By applying the technique of maximum power point tracking, the efficiency of the system rises whatever the irradiation and the temperature of the environment. Several different MPPT techniques have been proposed in the literature [1-2]; several papers tackle the search for the optimal operating point using hill-climbing algorithms [1-3-4-5], fuzzy logic or digital signal processing. The use of neural networks in industrial electronics has increased and has broad prospects in the intelligent control area, as is evident from the publications in the literature. Considering

the immense potential of neural networks, their applications in the industrial electronics area are still in their infancy [6]. In the first part, after a brief modelling of the PV module, we present the model and the simulations of the I-V and P-V characteristics at different levels of illumination and temperature. In the second part, an artificial neural network is presented and then trained with the LM algorithm and the back propagation method to extract the optimal voltage Vopt of the same PV module. Simulations are carried out to verify the proposed ANN method in section IV. Finally, concluding remarks are given in section VI.

II. PHOTOVOLTAIC MODULE MODELLING

Photovoltaic conversion is produced by subjecting the solar cell to sunlight. The received energy causes chaotic movement of the electrons within the material; the current is collected by the metal contacts (electrodes). If these electrodes are connected to an external circuit, a direct current flows. In a PV generator, a number of solar cells are assembled to form a photovoltaic module; linking these modules in parallel raises the direct current value, while linking them in series raises the direct voltage value. Thus, the group of linked PV modules, chosen according to the desired values of both current and voltage, forms the PV generator. The PV array characteristics present three important points: the short-circuit current, the open-circuit voltage, and the optimum power delivered by the PV module to an optimum load when the PV modules operate at their maximum power point. Our model of the PV generator, consisting of 4 Kaneka GSA211 panels (60 W) in series, has been evaluated in the MATLAB environment. The PV generator behaves as a current source shunted by a junction diode, if we neglect physical phenomena of the PV cell such as contact resistance, the current lost at the photocell sides, and the ageing of the cells [7-8-9].

Figure 1. Electrical schematic of the PV module

The relationship between the output voltage Vpv and the load current Ipv can be expressed by the single-diode model:

Ipv = (Ec/Ecref)·[Icc + Kisc·(T - Tref)] - Is·[exp(q·Vpv/(ni·K·Tp)) - 1]    (1)

where:
Ec: solar illumination in W/m2
Ecref: the reference illumination (1000 W/m2)
T: the ambient temperature in °C
Tref: the reference ambient temperature (25 °C)
Tp: the surface temperature of the PV generator (°C or K)
Icc: the total short-circuit current at the reference state, in A
Kisc: the short-circuit temperature coefficient of current (Kisc = 0.0017 A/°C)
Is: the reverse saturation current of the PV generator, in A
ni: the ideality factor of the PV field
K: the Boltzmann constant (K = 1.38e-23 J/K)
q: the electron charge (q = 1.6e-19 C)
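A minimal numerical sketch of Eq. (1) in MATLAB is given below, locating the maximum power point by a simple sweep. The short-circuit data follow this paper's appendix, while Is and the lumped ideality factor ni (here absorbing the number of series-connected cells of the four-panel string) are assumed values chosen only for illustration.

% Single-diode model of the 4-panel Kaneka string, Eq. (1) form.
Ec = 1000; Ecref = 1000;    % W/m^2, illumination and reference illumination
T  = 25;   Tref  = 25;      % degC, ambient and reference temperature
Tp = T + 273.15;            % K, surface temperature (assumed equal to ambient)
Icc  = 1.19;                % A, short-circuit current (appendix of this paper)
Kisc = 0.0017;              % A/degC, short-circuit temperature coefficient
Is = 8.5e-7;                % A, saturation current (assumed)
ni = 1000;                  % lumped ideality factor absorbing series cells (assumed)
K = 1.38e-23; q = 1.6e-19;  % Boltzmann constant, electron charge
a = ni * K * Tp / q;                           % lumped thermal voltage, ~26 V
Vpv = linspace(0, 370, 2000);                  % string sweep (Voc ~ 4 x 92 V)
Iph = (Ec/Ecref) * (Icc + Kisc*(T - Tref));    % photocurrent
Ipv = max(Iph - Is*(exp(Vpv./a) - 1), 0);      % Eq. (1), clamped at zero
P = Vpv .* Ipv;
[Pmax, k] = max(P);                            % numerical maximum power point
fprintf('MPP: V = %.0f V, I = %.2f A, P = %.0f W\n', Vpv(k), Ipv(k), Pmax);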

The I-V and P-V characteristics of the PV model are shown first for different illumination levels (600, 800 and 1000 W/m2) at 30 °C in figures 2 and 3, and then for various temperatures (30, 35 and 40 °C) at 1000 W/m2 in figures 4 and 5, respectively.

Figure 2. I-V plot of PV module

Figure 3. P-V plot of PV module

Figure 4. I-V plot of PV at 800W/m2

Figure 5. P-V plot of PV module at 800W/m2

The P-V plot is overlaid on the I-V plot of the PV module, as shown in figures 2 and 3. It reveals that the amount of power produced by the PV module varies greatly depending on its operating conditions. It is important to operate the system at the MPP of the PV module in order to exploit the maximum power from the module. The aim of a maximum power point tracking system is to force the PV generator to operate at points located on this curve. The operating point of a PV generator should be continuously adjusted in order to compensate for variations of load, temperature and irradiance level.

III. ARTIFICIAL INTELLIGENCE AND ARTIFICIAL NEURAL NETWORK

3.1. Artificial neural networks

Artificial neural network (ANN) technology has been successfully applied to solve very complex problems, and its application in various fields is increasing rapidly [10-11]. The science of artificial neural networks is based on the neuron; in order to understand the structure of an artificial network, the basic element, the neuron, should be understood. A system with embedded computational intelligence is considered an intelligent system that has learning, self-organizing and generalisation capability. In fact, the neural network is generic in nature and tends to emulate the biological NN directly. For two decades, NN technology has captivated the attention of a large number of scientific communities; since then, the technology has been advancing rapidly and its applications are expanding in different areas [12]. The operation of artificial neurons is inspired by their natural counterparts. Each artificial neuron has several inputs and one single output, the axon. Each input is characterized by a certain weight indicating the influence of the corresponding signal on the neuron output. The neuron calculates an equivalent total input signal as the weighted sum of the individual input signals. The resulting quantity is then compared with a constant value named the threshold level, and the output signal is calculated as a function of their difference; this function is named the activation function. The input weights, the threshold level and the activation function are the parameters which completely describe an artificial neuron.

Over the last few years, more sophisticated types of neurons and activation functions assembled in algorithms have been introduced in order to solve different sorts of practical problems. In particular, the quasi-Newton Levenberg-Marquardt method has proven useful for many control system and system identification applications [13].

3.2. Artificial neural network architecture

Neural network architectures are suited to finding appropriate solutions for nonlinear and complex systems or those involving random variables. Among the various types, the back propagation (or feed-forward) network is the most widespread, important and useful [14]. The function and results of an ANN are determined by its architecture, of which there are different kinds; the simplest architecture contains three layers, as shown in figure 6. The input layer receives the external data; the second layer (hidden layer) contains several hidden neurons which receive data from the input layer and send them to the third layer (output layer), which provides the system response.

Figure 6. Architecture of Back Propagation Neural Network.

Unlimited neural network architectures can be constructed: the more hidden layers and neurons in each layer are added, the more complex the network becomes. The realization of the back propagation network is based on two main points: learning and knowledge. This research uses a sigmoid function as the activation function to calculate the hidden layer output, and a linear function to calculate the output [15]. The output of the sigmoid function varies continuously, but not linearly, as the input changes. Sigmoid units bear a greater resemblance to real neurons than do linear or threshold units, but all three must be considered rough approximations [16-17]. The result of the transfer function is usually the direct output of the processing element. An example of a sigmoid transfer function is shown in figure 7.

Figure 7. Sigmoid transfer function.

This sigmoid transfer function takes the value from the summation function and turns it into a value between zero and one; mathematically it is given by:

f(x) = 1 / (1 + e^(-x))    (2)

It takes as its argument the weighted sum x of the neuron inputs, given by [18]:

x_j = Σ_i w_ij y_i    (3)

Learning in multilayer networks can be performed with different learning algorithms; the best known is backpropagation, which has become so popular that it appears as a synonym for neural networks. We present the method of obtaining the gradient, which is based on the calculation of successive partial derivatives of composite functions [19]. The cost function used is the squared error:

E = (1/2) Σ_i (d_i - y_i)^2    (4)

where i is the index of the output neurons, d_i is the desired output of the output neurons and y_i is the measured output of the output neurons.

The weights of the network are modified according to the following rule:

w_ij(t+1) = w_ij(t) + Δw_ij    (5)

and

Δw_ij = -η ∂E/∂w_ij    (6)

where η is a positive constant called the gradient step. The calculation of the quantity ∂E/∂w_ij starts from the output layer and shifts towards the input layer; this propagation in the direction opposite to the activation of the neurons of the network justifies the name of the algorithm. The calculation is made as follows:

∂E/∂w_ij = (∂E/∂x_j)(∂x_j/∂w_ij)    (7)

By setting

δ_j = -∂E/∂x_j    (8)

we obtain

∂x_j/∂w_ij = y_i    (9)

and

∂E/∂w_ij = -δ_j y_i    (10)

Then

Δw_ij = η δ_j y_i    (11)

δ_j is called the contribution to the error of neuron j. When j is the index of an output neuron, we obtain:

δ_j = -(∂E/∂y_j)(∂y_j/∂x_j)    (12)

∂E/∂y_j = -(d_j - y_j)    (13)

Then

δ_j = (d_j - y_j) f'(x_j)    (14)

When j is the index of a hidden neuron, we set:

∂E/∂y_j = -Σ_k δ_k w_jk    (15)

where k is the index of all neurons to which neuron j sends connections. The calculation results in:

∂y_j/∂x_j = f'(x_j)    (16)

Then

δ_j = f'(x_j) Σ_k δ_k w_jk    (17)

The neural network used here has two inputs (illumination and temperature) and a hidden layer with 10 neurons. The hidden layer contains tan-sigmoid functions and the output layer (the optimal voltage) contains a linear function. This neural network is trained by the back propagation method with the Levenberg-Marquardt algorithm. The LM algorithm is a second-order method which implements an iterative approximation of the Hessian matrix (or its inverse) and consists in modifying the weights by the following formula:

w(t+1) = w(t) - [H(w(t)) + μ_i I]^(-1) ∇E(w(t))    (18)

∇E(w) = (∂E/∂w_1, ..., ∂E/∂w_n)^T    (19)

where H is the Hessian matrix with general term

H_kl = ∂²E/(∂w_k ∂w_l)    (20)

For large μ_i, the LM algorithm is equivalent to the application of the simple gradient rule with a step of 1/μ_i. The LM algorithm shows faster convergence and better accuracy than other algorithms. Although the LM algorithm needs a significant amount of memory in the training stage, this method is preferred here.
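The derivation above maps directly onto a few lines of code. The MATLAB sketch below trains the 2-10-1 network (tan-sigmoid hidden layer, linear output) with plain batch gradient descent, i.e. the update rules of Eqs. (5)-(17) rather than the full LM step of Eq. (18); the three (illumination, temperature) -> Vopt training pairs are made-up placeholders, not the paper's data. It assumes MATLAB R2016b or later for implicit expansion.

% Plain backpropagation for the 2-10-1 network (tanh hidden, linear output).
rng(0);
X = [600 800 1000; 30 35 40];            % inputs: illumination, temperature
d = [260 270 268];                       % assumed optimal voltages (~4 x 67 V)
Xn = (X - mean(X,2)) ./ std(X,0,2);      % normalise each input row
dn = (d - mean(d)) / std(d);             % normalise targets
W1 = 0.5*randn(10,2); b1 = zeros(10,1);  % hidden-layer weights and biases
W2 = 0.5*randn(1,10); b2 = 0;            % output-layer weights and bias
eta = 0.02;                              % gradient step, Eq. (6)
for epoch = 1:100
    A1 = tanh(W1*Xn + b1);               % hidden activations
    y  = W2*A1 + b2;                     % linear output
    e  = dn - y;                         % output error (d_i - y_i)
    d2 = e;                              % output delta, Eq. (14) with f' = 1
    d1 = (1 - A1.^2) .* (W2' * d2);      % hidden delta, Eq. (17): tanh' = 1 - tanh^2
    W2 = W2 + eta * (d2 * A1'); b2 = b2 + eta * sum(d2);     % Eq. (11)
    W1 = W1 + eta * (d1 * Xn'); b1 = b1 + eta * sum(d1,2);
end
fprintf('final MSE = %.4f\n', mean(e.^2));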

3.3. Artificial neural network training

Artificial neural networks have memory, which corresponds to the weights of the neurons. The weights and biases of the network are adjusted via the learning rate in order to move the network output closer to the targets. The 'newff' function allows a user to specify the number of layers, the number of neurons in the hidden layer and the activation functions used, as described below. After training, the network weights are set by the back-propagation learning rule. The number of epochs for this example is set to 100 and the learning rate to 0.02; during training, the input vector is passed through the neural network and the weights are adjusted 100 times [22-23]. The following MATLAB code creates and trains the feed-forward network:

net = newff(pr,tr,[10,1],{'tansig','purelin'},'trainlm','learngdm','msereg');
net.trainParam.epochs = 100;   % maximum number of training epochs
net.trainParam.lr = 0.02;      % learning rate
net = train(net,pr,tr);
gensim(net);

IV. ARTIFICIAL NEURAL NETWORK APPROACH OF MAXIMUM POWER POINT TRACKING

4.1. Neural network MPPT

Photovoltaic power generation requires a much larger initial cost compared to other power generation sources, so it is imperative to extract as much available solar energy as possible from the PV array. The maximum power output of the PV array changes when solar irradiation, temperature and/or load levels vary. Control is therefore needed for the PV generator to always keep track of the maximum power point. By controlling the switching scheme of the inverters connected to the PVs, the MPP of the PV array can always be tracked [20-21].

The nonlinear I-V characteristics of a PV module match very well to a neural network application. The block diagram of the proposed MPPT scheme is shown in figure 8. In this scheme the ANN is used to find the optimal voltage, which is compared with the PV generator voltage. The error is given to the integrator controller, whose output is the stator frequency fs, given to the PWM control of the DC-AC inverter to find the sinusoidal reference voltage and the sampled wave. A minimal numeric sketch of this loop follows figure 8.

Figure 8. Proposed MPPT scheme.
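The MATLAB sketch below illustrates the loop of figure 8 in discrete time. The ANN output Vopt is replaced by a fixed assumed value, and the inverter, motor-pump and PV generator are lumped into a toy first-order plant; the gain, time step and plant constants are all assumptions for illustration only, not the laboratory setup.

% Discrete integrator loop of the MPPT scheme in figure 8 (toy plant).
Vopt = 268;               % V, ANN-estimated optimal voltage (assumed)
Vpv  = 200;               % V, initial PV generator voltage
fs   = 30;                % Hz, initial stator frequency
Ki   = 0.05;              % integrator gain (assumed)
dt   = 0.01;              % s, control time step (assumed)
for k = 1:3000
    err = Vopt - Vpv;           % voltage error
    fs  = fs + Ki * err * dt;   % integrator: error -> stator frequency
    % toy first-order plant: higher fs loads the generator, raising Vpv
    Vpv = Vpv + 0.5 * (4*fs - Vpv) * dt;
end
fprintf('after 30 s: fs = %.1f Hz, Vpv = %.1f V\n', fs, Vpv);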

4.2. Training results

In order to simulate the system, the PV model described above, implemented in Matlab Simulink [24], is subjected to real conditions of irradiation and temperature, and the neural network controller is defined and designed using the Neural Network Toolbox, as shown in figure 9; the training results of the ANN are shown in figures 10, 11 and 12.

Figure 9. PV model and Neural Network scheme.

Simulation studies have been carried out to verify the proposed artificial neural network method.


Figure 10. PV model performance plot

The performance plot maps the mean squared error against the number of epochs, leading the training data to the best performance. The next figure presents the training state, which shows the gradient, mu and the validation checks at epoch 5, at which the network is completely trained.

Figure 11. PV model training state plot

Figure 12. PV model regression plot

Figure 12 plots the linear regression of the targets relative to the outputs. The near-linear fit indicates that the output data closely match the target data.

Figure 13. Voltage Vpv at constant temperature

Figure 14. Voltage Vpv at constant illumination

Figures 13 and 14 show the simulated PV module voltage and the optimal voltage for different values of illumination and temperature.

V. FUTURE WORK

The proposed work is an on-going project, hence there are different paths to explore: we will use the bond graph method for modelling and an artificial neural network for vector control of the motor-pump. Networks other than the back-propagation network can also be used to increase the system accuracy.

VI. CONCLUSIONS

Due to the importance of photovoltaic systems, this paper presents a study of maximum power point tracking using an artificial neural network. To extract the optimal voltage of the PV generator, we trained the network with the back propagation method and the Levenberg-Marquardt algorithm. Simulation studies have been carried out to verify the proposed artificial neural network method. The obtained results show that the proposed approach could furnish a new and interesting point of view for tracking the maximum power point of PV systems.

ACKNOWLEDGEMENTS
We are especially thankful to Prof. Abdelkader Mami and Dr. Dhafer Mezghani for the time and guidance given throughout this work; we also thank all members of the Analysis and Control of Systems group in the ACS laboratory.

REFERENCES
[1] C. Hua, J. Lin, C. Shen, (1998), "Implementation of a DSP-controlled PV system with peak power tracking", IEEE Trans. Ind. Electron., vol. 45, no. 1, pp. 11-24.
[2] Y. Chen, K. Smedley, F. Vacher, J. Brouwer, (2003), "A New Maximum Power Point Tracking Controller for PV Power Generation", in Proc. IEEE Applied Power Electron. Conf., Miami Beach, FL, USA, pp. 56-62.
[3] G. Petrone, G. Spagnuolo, R. Teodorescu, M. Veerachary, M. Vitelli, (2008), "Reliability Issues in Photovoltaic Power Processing Systems", IEEE Trans. Ind. Electron., vol. 55, pp. 2569-2580.
[4] J.H.R. Enslin, M.S. Wolf, D.B. Snyman, W. Swiegers, (1997), "Integrated photovoltaic maximum power point tracking converter", IEEE Trans. Energy Convers., vol. 44, pp. 769-773.
[5] M.A.S. Masoum, H. Dehbonci, E.F. Fuchs, "Theoretical and Experimental Analyses of PV Systems with Voltage and Current-Based Maximum Power Point Tracking", accepted for publication in IEEE Trans. Energy Conversion.
[6] B.K. Bose, (2007), "Neural Network Applications in Power Electronics and Motor Drives - An Introduction and Perspective", IEEE Trans. Ind. Electron., vol. 54, no. 1, pp. 14-33.
[7] D. Mezghani, (2009), "Study of a photovoltaic pumping by a bond graph approach", thesis prepared in the Laboratory of Analysis and Control of Systems (ACS), team for modeling and control of photovoltaic systems, Sciences University of Tunis.
[8] Y. Oueslati, (2007), "Study of performance of a photovoltaic generator coupled to the network draft", Master's thesis, High School of Sciences and Techniques of Tunis.
[9] A. Elgharbi, (2010), "Ameliorated control of a motor-pump coupled to a photovoltaic generator", Master's thesis, Sciences University of Tunis, December.
[10] B.K. Bose, (2001), "ANN Application in Power Electronics", IEEE Ind. Electron. Conf., Denver, CO, USA, pp. 1631-1638.
[11] S. Haykin, (1999), "Neural Networks - A Comprehensive Foundation", 2nd Edition, New York, Prentice Hall Inc.
[12] M. Hatti, (2007), "Neural Network Controller for PEM Fuel Cells", IEEE International Symposium on Industrial Electronics, pp. 341-346.
[13] L. Wenzhe, A. Keyhani, A. Fardoun, (2003), "Neural network based modeling and parameter identification of switched reluctance motors", IEEE Trans. Energy Conv., vol. 18, no. 2, pp. 284-290.
[14] M. Shamim Kaiser, Subrata Kumar Aditya, Rezaul Karim Mazumder, (2006), "Performance Evaluation of a Maximum Power Point Tracker (MPPT) for Solar Electric Vehicle using Artificial Neural Network", Daffodil International University Journal of Science and Technology, Vol. 1, Issue 1.
[15] Eduard Mujadi, (2000), "ANN Based Peak Power Tracking for PV Supplied DC Motors", Solar Energy, Vol. 69, No. 4, pp. 343-354.
[16] F. Laurene, (2008), "Fundamentals of Neural Networks: Architectures, Algorithms, and Applications".
[17] Yann Morere, (1996), "Identification par Reseaux de Neurones", DEA memoir in automatic control of industrial and human systems, University of Valenciennes and Hainaut-Cambresis.
[18] http://alp.developpez.com/tutoriels/intelligence-artificielle/reseaux-de-neurones/#LVII.
[19] E. Gauthier, (1999), "Use of the artificial neural network for autonomous vehicle control", PhD thesis, National Polytechnic Institute of Grenoble.
[20] M. Veerachary, T. Senjyu, K. Uezato, (2003), "Neural-network-based maximum-power-point tracking of coupled-inductor interleaved-boost-converter-supplied PV system using fuzzy controller", IEEE Trans. Ind. Electron., vol. 50, no. 4, pp. 749-758.
[21] E. Roman, R. Alonso, P. Ibanez, S. Elorduizapatarietxe, D. Goitia, (2006), "Intelligent PV module for grid-connected PV systems", IEEE Trans. Ind. Electron., vol. 53, no. 4, pp. 1066-1073.
[22] http://www.iau.dtu.dk/research/control/nnlib/manual.pdf.
[23] http://fr.w3support.net/index.php.
[24] http://www.mathworks.com/products/matlab/.

APPENDIX
Kaneka Panel: Nominal power = 60 W, Voc = 92 V, Isc = 1.19 A, voltage at MPP = 67 V, current at MPP = 0.9 A. Illumination values: 200 to 1000 W/m2. Temperature values: 5 to 45 °C. Artificial Neural Network: 2 input neurons, 10 hidden neurons, 1 output neuron, learngdm learning function, msereg network performance function, trainlm network training function, maximum number of epochs to train = 100, learning rate = 0.02.

AUTHORS
ABDESSAMIA ELGHARBI was born in Tunisia in December 1978. He received the master diploma in electronics (Numeric Analysis and Treatment of Electronic Systems) from the Sciences University of Tunis in 2010. Since 2010 he has worked as a temporary teacher of the VHDL language at the Sciences University of Tunis. He is now preparing his thesis at the same university.

DHAFER MEZGHANI was born in Tunisia. He received the Master's degree in Automatic Control from the High School of Science and Technology of Tunis (ESSTT) in 2002. Between 2002 and 2005 he occupied a contractual assistant position at the High School of Computing and Technology (ESTI). Between 2005 and 2008 he was an assistant at the National School of Computer Science (ENSI), and in April 2009 he obtained his PhD in electrical engineering at the National School of Engineers of Tunis (ENIT). Since September 2010 he has been an assistant master at the National School of Computer Science, working in the field of electronics and micro-electronics for embedded systems design (FPGA, microcontrollers). His research also concerns bond graph modelling, analysis and control of renewable power systems (photovoltaic and wind) at the Faculty of Sciences of Tunis and in the ACS laboratory at ENIT; this research is jointly supervised with specialty societies.

ABDELKADER MAMI was born in Tunisia. He is a Professor at the Faculty of Sciences of Tunis (FST). He received his dissertation H.D.R (Enabling to Direct Research) from the University of Lille (France) in 2003. He is president of the thesis committee of electronics at the Sciences Faculty of Tunis, and he is in charge of the research group of Analysis and Control of Systems in the ACS Laboratory at ENIT of Tunis and active in many other fields.


AN ELABORATION OF QUANTUM DOTS AND ITS APPLICATIONS

Sambeet Mishra1, Bhagabat Panda2, Suman Saurav Rout3
1,3 School of Electrical Engineering, KIIT University, Bhubaneswar, India
2 Asst. Professor, School of Electrical Engineering, KIIT University, Bhubaneswar, India

ABSTRACT
Being semiconductors of very small size, quantum dots cause the band of energies to change into discrete energy levels. Band gaps and their related energies depend on the relationship between the size of the crystal and the exciton radius. The spacing between energy levels varies inversely with the size of the quantum dot: the smaller the quantum dot, the higher the energy it possesses. Quantum dots are very small semiconductor crystals, with sizes ranging from the nanometer scale to a few microns; they are so small that they are considered quasi-zero-dimensional, and they can show markedly different chemical properties depending on how they are tuned. Quantum dots house electrons just as electrons would be present in an atom, by applying a voltage, and are therefore aptly named artificial atoms. This application of voltage may modify the effective chemical nature of the material whenever desired, although this capability is beyond reach in the present scenario. The applications of quantum dots are vast, e.g. light emitting diodes (LEDs, including white LEDs), photovoltaic devices (solar cells), memory elements, biology (biosensors, imaging), lasers, quantum computation, flat-panel displays, photodetectors, the life sciences, and so on. The nanometer-sized particles can display any chosen colour in the entire ultraviolet-visible spectrum through a small change in their size or composition.

KEYWORDS: Quantum dot, Artificial atoms, Colloidal Synthesis, Lithography, Epitaxy, Multiple Exciton Generation, Metamaterials.

I. INTRODUCTION

Quantum dots, otherwise known as artificial atoms, house electrons just as electrons would be present in an atom, by applying a voltage. This application of voltage may also modify the effective chemical nature of the material whenever desired, though this method is as yet beyond reach. Being a semiconductor of very small size, a quantum dot causes the band of energies to turn into discrete energy levels. Band gaps and the related energies depend on the relationship between the size of the crystal and the exciton radius. The spacing between energy levels varies inversely with the size of the quantum dot: the smaller the quantum dot, the higher the energy possessed by it. The nanometer-sized particles [3] can display any chosen colour in the entire ultraviolet-visible spectrum through a minor change of their size or composition.


Fig.1 Quantum dots arranged by size, emitting light of different colours

Smaller quantum dots possess higher energy, which results in a smaller emission wavelength [4]. The wavelength determines the colour of the dot: at longer wavelengths the colour is red, and as the wavelength decreases the colour gradually shifts towards blue. For a CdSe quantum dot, a dot about 5 nm in size appears red, whereas a 1.5 nm dot appears violet. Dots can also be made of materials capable of absorbing and emitting light at any wavelength their designer sets, and can serve as more efficient and better-tuned semiconductor lasers. Here we discuss the field of quantum dots through its fabrication processes, its applications, ways of boosting energy conversion using quantum dots, and some paths that may be covered in the near future to obtain new applications.
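This size-colour trend can be illustrated with the idealized effective-mass (Brus) model, in which the gap of a dot of radius R grows by a confinement term scaling as 1/R^2, minus a smaller Coulomb correction. The MATLAB sketch below uses textbook CdSe constants; the model is only qualitative and substantially overestimates the shift for the smallest dots, but it reproduces the red-to-violet trend described above.

% Idealized Brus model: emission energy vs CdSe quantum-dot diameter.
h_bar = 1.055e-34; q = 1.602e-19; m0 = 9.109e-31; eps0 = 8.854e-12;
Eg   = 1.74;        % eV, bulk CdSe band gap (textbook value)
me   = 0.13*m0;     % effective electron mass in CdSe
mh   = 0.45*m0;     % effective hole mass in CdSe
epsr = 10.6;        % relative permittivity of CdSe
for dnm = [3 5 8]                      % dot diameters, nm (model breaks down below ~3 nm)
    R = dnm*1e-9/2;                    % radius, m
    Econf = (h_bar^2*pi^2/(2*R^2)) * (1/me + 1/mh) / q;   % confinement term, eV
    Ecoul = 1.8*q / (4*pi*eps0*epsr*R);                   % Coulomb term, eV
    E = Eg + Econf - Ecoul;            % emission energy, eV
    fprintf('d = %.0f nm -> E = %.2f eV, lambda = %.0f nm\n', dnm, E, 1240/E);
end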

II. FABRICATION METHODS

2.1 LITHOGRAPHY:
Quantum dots can be grown in a semiconductor heterostructure, which refers to a plane of one semiconductor sandwiched between two other semiconductors. If this sandwiched layer is very thin, i.e. about 10 nanometers or less, the electrons cannot move vertically and are thus confined in one dimension [2]. This is called a quantum well. When a thin slice of this material is taken to create a narrow strip, the result is a quantum wire, as the electrons are trapped in two dimensions. Rotating this by 90 degrees and repeating the procedure confines the electrons in all three dimensions, which is called a quantum dot. According to quantum mechanics and Heisenberg's uncertainty principle [1], the more confined an electron is, the more uncertain its momentum; hence, the wider the range of momentum, the higher the energy possessed by the electron, which may become arbitrarily large if the electron is confined to an infinitely thin layer. Electrons confined in a quantum wire are free only in one dimension, those confined in a plane have no freedom in the third dimension, and those confined in a quantum dot are not free in any dimension.

2.2 COLLOIDAL SYNTHESIS:

Quantum dots can also be grown in a beaker; they may be made of nearly every semiconductor and of many metals, e.g. cobalt, gold, nickel, etc.

2.3 EPITAXY:
Self-assembled dots are grown by depositing a semiconductor with a larger lattice constant on a semiconductor with a smaller lattice constant, e.g. germanium on silicon. These self-assembled dots are used to make quantum dot lasers. Hence, quantum dots are actually formed when very thin semiconductor films buckle due to the stress of having a lattice structure slightly different in size from that on which the films are grown.

III. APPLICATION

Quantum dots provide a wide range of properties for electronic and optical applications [4]. Some of the application fields include: quantum computation; photovoltaic devices (solar cells); biology (biosensors, imaging); light emitting diodes (LEDs, e.g. white LEDs); flat-panel displays; memory elements; photodetectors; lasers; and the life sciences. Among these numerous applications, one of the major ones is in the field of solar cells. Most solar cells are made up of a sandwich of two crystal layers: one slightly negatively charged and one slightly positively charged. The negatively charged crystal layer has many extra electrons; when a photon with enough energy strikes the material, it promotes an electron to the positively charged layer, increasing its energy and leaving behind a "hole". The electron-hole pairing is known as an exciton. If the photon has insufficient energy, the electron stays put. If the photon has more than enough energy, the charge flows using only the energy it needs, and the remainder warms up the device. MEG, the abbreviated name for Multiple Exciton Generation, is one of the "third-generation" solar technologies. Using these advancements, solar panels can be thinner, lighter, cheaper, more flexible and fundamentally more efficient than current devices on the market. As a result, solar energy will become more cost-effective and will form a greater share of the world's energy supply. The small size of quantum dots allows them to confine charges and more efficiently convert light to electricity. When a photon having at least double the energy needed to move an electron strikes a lead selenide quantum dot, it can excite two or more electrons instead of letting the extra energy go to waste, generating more current than a conventional solar cell. In this way, energy that is lost in traditional solar cells can be saved in a quantum dot solar cell. Since quantum dots have a tunable band gap and can generate multiple excitons (electron-hole pairs) after collision with one photon, the generated electricity increases and the maximum theoretical efficiency can be raised to as high as 63.2%. A toy calculation of this exciton accounting is sketched below.
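As a toy illustration of this MEG accounting, the sketch below counts the ideal number of excitons per absorbed photon as floor(E_photon/E_gap), capped at the sevenfold yield reported for PbSe dots; the band gap value is an assumption, and this is idealized bookkeeping, not a device model.

% Idealized multiple-exciton-generation yield per absorbed photon.
Eg  = 0.8;                   % eV, assumed PbSe quantum-dot band gap
Eph = [1.0 2.0 4.0 6.4];     % eV, photon energies (6.4 eV = 8 x Eg)
n = min(floor(Eph./Eg), 7);  % ideal exciton count, capped at the
                             % sevenfold yield reported for PbSe dots
for k = 1:numel(Eph)
    fprintf('E_photon = %.1f eV -> up to %d exciton(s)\n', Eph(k), n(k));
end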

IV. WAYS OF BOOSTING THE EFFICIENCY

Besides cost, another limitation of the solar cell has always been its efficiency. The aim is to produce a material that can be optimized to generate electricity with maximum efficiency. This can be done using metamaterials, which show properties not found in nature and are capable of changing the properties of light dramatically.


Fig.2 Nano-structured metamaterial [12].

Metamaterials consist of layers of silver and titanium oxide and tiny components called quantum dots. As already discussed, metamaterials are capable of changing the properties of light dramatically. The light becomes "hyperbolic", which increases the output of light from the quantum dots. Such materials could markedly enhance the efficiency of solar cells. "Altering the topology of the surface by using metamaterials provides a fundamentally new route to manipulating light," said Evgenii Narimanov, a Purdue University associate professor of electrical and computer engineering. Researchers are working to perfect the metamaterials, which might be capable of ultra-efficient transmission of light, with potential applications including advanced solar cells and quantum computing.

V. BOONS PROVIDED BY QUANTUM DOTS

1) Quantum dots may yield a 7-fold increase in final output, according to experiments conducted since 2006. The experiments show that quantum dots of lead selenide can produce as many as seven excitons from one high-energy photon of sunlight (7.8 times the band-gap energy). This compares favourably with present photovoltaic cells, which can manage only one exciton per high-energy photon, with high-kinetic-energy carriers losing their energy as heat.
2) One-dimensional architectures are useful for designing next-generation solar cells.
3) Quantum dots may be able to increase the efficiency and reduce the cost of today's typical silicon photovoltaic cells.
4) Quantum dot photovoltaics would theoretically be cheaper to manufacture.

VI. FUTURE PROSPECTS

There are many possibilities for quantum dots in the future, e.g.:
i) Anti-counterfeiting capabilities: this is a unique ability [10] of the quantum dot, not possible using bulk semiconductors. Absorption and emission spectra can be controlled to produce unique validation signatures.
ii) Counter-espionage/defence applications: as a protection against friendly-fire events, quantum dots can be integrated with dust so that enemy movements can be tracked.


ACKNOWLEDGEMENTS
The authors would like to thank KIIT University for providing a learning atmosphere that met the needs of this paper, and also Professor Biswajit Ghosh, whose guidance was immense at every step of the preparation of the paper.

REFERENCES
[1]. Mark A. Reed, January 1993, Quantum Dots, Scientific American, Available from: www.eng.yale.edu/reedlab/publications/58%20QDotsSciAm1993.pdf
[2]. Holister, P., Vas, C. R., and Harper, T., October 2003, Quantum Dots, Technology White Papers nr. 13, Available from: http://images.iop.org/dl/nano/wp/quantum_dots_WP.pdf
[3]. Gunjan Mishra, Quantum Dots, Available from: http://wolfweb.unr.edu/homepage/bruch/Phys461/6.pdf
[4]. Brittany Webb, Quantum Dots, Available from: ion.chem.usu.edu/~tapaskar/BrittQuantum%20Dots.pdf
[5]. Available from: http://www.evidenttech.com/search
[6]. Available from: http://www.sciencedaily.com/releases/2012/05/120524143529.htm
[7]. Available from: http://www.scientificamerican.com/article.cfm?id=quantum-dots-and-more-use
[8]. Kamat, P., October 2008, Workshop on Nanoscience for Solar Energy Conversion, Quantum Dot Solar Cells: Semiconductor Nanocrystals as Light Harvesters, Available from: portal.ictp.it/energynet/material/Kamat.pdf
[9]. Quantum dot solar cell. (n.d.). In Wikipedia. Retrieved June 20, 2012, from http://en.wikipedia.org/wiki/Quantum_dot_solar_cell
[10]. McDaniel, James, Quantum Dots: Science and Applications, Available from: ion.chem.usu.edu/~tapaskar/James-Quantum%20Dots%20Seminar.pdf
[11]. Diamond, Joshua, Quantum Dots in the Undergraduate Physics Curriculum: Physics Laboratory Modules, Available from: http://www.siena.edu/physics/qdots/
[12]. Retrieved on 2 June 2012 from http://phys.org/news/2012-05-metamaterials-quantumdots-technologies.html

AUTHORS
Sambeet Mishra is pursuing his Masters in Power and Energy Systems, and has contributed research work in the areas of e-governance, power electronics, dye-sensitized solar cells, solar thermal generating systems, CdTe, etc. His research work has been published in many international venues, including IEEE Xplore, the American Institute of Physics (AIP) and other international journals. He has presented several papers in international journals and has taken an active role in workshops on solar energy. He is a senior member of the International Association of Engineers (IAENG) and a lifetime member of the Solar Energy Society of India (SESI). Presently his research is concentrated on sustainable energy with environmental impact and proficient energy auditing.

Bhagabat Panda is a professor in the School of Electrical Engineering, KIIT University. He completed his BE in Electrical Engineering and ME in Power System Engineering. He is contributing research work in the electrical field.

Suman Saurav Rout completed his graduation in Electrical Engineering, is currently pursuing his M.Tech at KIIT University, and is working in the field of renewable energy harvesting technologies.


ANALYSIS AND EVALUATION OF COGNITIVE BEHAVIOR IN SOFTWARE INTERFACES USING AN EXPERT SYSTEM
Saad Masood Butt & Wan Fatimah Wan Ahmad
Computer and Information Sciences Department, Universiti Teknologi PETRONAS, Tronoh, Perak, Malaysia

ABSTRACT
In most situations, usability evaluations of software interfaces are done by usability experts. Employing such professionals requires a business of a certain size, so in many small and medium-sized companies software developers are compelled to learn to manage usability factors themselves. This is not much simpler than training usability engineers to create a software application. As a remedy, an expert system called CASI has been designed for software developers. In this paper, the expert system for Cognitive Analysis of Software Interfaces (CASI) is outlined; it integrates cognitive modelling concepts and is considered a crucial process for the development of interactive software interfaces. The recommended expert system depends entirely on a complete analysis of the user actions and specifications that reveal the psychological strategy of particular users. Moreover, this system helps designers and software developers evaluate software prototypes in an intelligent way based on user perception and evaluation views. The paper presents a case study on the development of a rehabilitation database for a person with physical limitations. The results mentioned in this paper show that, with the help of the expert system CASI, more usability problems in software interfaces can be detected. Hence, enhancing the usability of software interfaces with an automated CASI system is feasible.

KEYWORDS: Software Engineering (SE), Human Computer Interaction (HCI), Cognitive Science, Software
Interface, Artificial Intelligence (AI), Expert System, Usability Evaluation, Usability Engineering (UE), User Interface, Cognitive Analysis of Software Interface (CASI).

I. INTRODUCTION

In designing a software interface, experts in SE and HCI need to understand the users' behavior, the users' familiarity with different features of a software interface, and the users' expertise from working with other software interfaces. HCI deals with social, cognitive and interaction phenomena, where the social layer is focused on how people interact with each other as well as with technology, depending on the surroundings. A software interface is an effective medium for transferring information and providing communication between a user and a computer. Designing a software interface that is easy to use, easy to learn and easy to memorize addresses the attributes of software usability evaluation [1]. Software usability evaluation is an important concept in the discipline of HCI. Within HCI, usability engineering plays an important role in achieving users' goals in an effective, efficient and satisfying way. It is a discipline that helps to achieve usability during the design of software interfaces. Usability engineering itself is a vast topic, but usability evaluation is the part that contains various techniques such as heuristic evaluation, guideline reviews and the cognitive walkthrough [2]. In this paper, an expert system, CASI, has been developed in order to produce a highly interactive software interface that achieves the users' goals. The paper is divided into five sections. Section 1 is the
Introduction; section 2 is the literature review; section 3 focuses on the expert system CASI; section 4 discusses the case study of the expert system CASI; finally, section 5 presents the results.

II. LITERATURE REVIEW


The paper [3] describes a design process that helps to link the SE and HCI processes. The scenarios presented in that paper serve as a bridge between the two disciplines. In the end, a tool called Scenic Vista is discussed, which works as a prototype for linking the design artifacts of SE and HCI. The methodology mentioned in [4] discusses the integration of the modern systems development life cycle (SDLC) with human-computer interaction (HCI) in an information system (IS). In traditional IS development lifecycles, the role of HCI is too small, appearing only at the design phase or at a later stage, which affects the overall development. Thus there is a gap between HCI and SE, and in order to remove this gap a human-centered IS development approach is introduced. According to [5], a software development team needs to focus on the functionality of the system as well as on increasing the usability of the software during the SDLC. One of the methods used in usability testing is the heuristic evaluation (HE). The HE is a good method for finding major and minor problems in a software interface. The HE's main goal is to find usability problems in the software interface so that they can be attended to as part of the software design process. As mentioned in [6], Nielsen developed 10 heuristics, but later 12 heuristics were developed against the original 10. Research shows that the modified heuristics are more efficient and capture more of the defects that were missed by the old heuristics. Despite these benefits, some research shows the pitfalls of the HE: it does not find as many defects as some other usability engineering methods. A single evaluator may be able to find only a small percentage of defects, so it is useful to involve more than one evaluator and later aggregate their results [7]. As mentioned in [8], automation is the use of control systems and information technologies to reduce the need for human labour in the production of goods and services. Today automation is required to perform daily routines and repetitive work. It is also important to automate those software processes that take a considerable amount of time and contain a cycle between various processes. As discussed in [9], HE evaluators find it difficult to make reports on paper, which is time-consuming and cumbersome. So there is a need for some kind of AI-based interface evaluator system, which is discussed in section 4. The HCI strategy concentrates on human-machine relationships and users. It describes what a program should do from a user's viewpoint, and it considers users' restrictions, whether physical, intellectual, affective or behavioural. HCI research distinguishes between the user's obligations and the system's obligations during the user's interactions with the system, and how users can socialize with systems. Zhang et al. [10] have recommended a strategy that addresses these HCI concerns with particular examples of assessed items. Table 1 presents the HCI concerns, which cover four significant areas, namely the physical, intellectual, affective and behavioural, along with example evaluated items. These HCI concerns emphasize the non-functional requirements side of software development. As defined by Lawson [12], user disappointment with software is the occurrence of an obstacle that prevents the satisfaction of a need.
The latest reports on user disappointment highlight the problems that take place behind the screen level [11] and the issues of using business sites [14]. These problems occur once the software has been developed and shipped to the customers [13]. Another study of user disappointment, by Bessiere et al. [14], outlines the disappointment with computer-based performance experienced by users during their everyday work. The outcome of their research reveals that one-third to one-half of the time is spent on problems in using the application, which causes the disappointment. Frustration considerably impacts the level of job fulfilment, office efficiency and public well-being.

III. EXPERT SYSTEM CASI


The expert system evaluates the interface of each prototype and works on the concept of inference [15]. In this expert system some Facts and Rules have been defined. The Facts are like
inferences, and on the basis of these Facts some Rules have been defined by the users and stored in the Inference Engine. The Rules are either self-defined or system-defined. The self-defined Rules are based on the user's interests, whereas the system-defined Rules contain a combination of heuristic evaluation and the cognitive walkthrough. These Rules help to evaluate the user prototypes and architectural prototypes. In this paper, the authors discuss a case study of the developed system and focus on user-defined Rules. The expert system CASI contains three phases:
a. Facts and Rules
b. Decision Tree
c. Results

a. Facts and Rules


For this system, five Rules are defined:
Rule A: Go back to the previous process, i.e., IUP. Symbol: RA
Rule 1: Easy to use - the prototype makes the task easy to use. Symbol: R1
Rule 2: Easy to learn - the task is easy to learn, and the next time the user performs the same task easily without thinking much. Symbol: R2
Rule 3: User perception - the interface was designed according to the user's perception. Symbol: R3
Rule 4: Easy mastery - the interface provides enough information that the user doesn't need to study the Help file. Symbol: R4
Rule 5: Provided functionality - all the functionalities that the user stated during the requirement gathering phase are available. Symbol: R5

b. Decision Tree of CASI

Figure 1: Decision tree of CASI

Rules R1, R2, R3 and R4 are stored in the Inference Engine. The expert system evaluates the output (which comes from the IUP phase) against R1. If R1 proves to be correct, the prototype moves to R2 for evaluation. If it fails at any Rule, the flow moves to RA. RA is a state in which the prototype is improved according to the self-defined or system-defined Rules.
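Read this way, the decision tree is a sequential check with a single repair state. The following minimal sketch captures that control flow; the rule predicates and the prototype representation are placeholders for illustration, not CASI's actual implementation:

def evaluate(prototype, rules):
    # Apply the Rules in order; the first failure sends the flow to RA.
    for name, rule in rules:
        if not rule(prototype):
            return f"RA: improve prototype (failed {name})"
    return "prototype accepted"

rules = [
    ("R1 easy to use",     lambda p: p.get("easy_to_use", False)),
    ("R2 easy to learn",   lambda p: p.get("easy_to_learn", False)),
    ("R3 user perception", lambda p: p.get("matches_perception", False)),
    ("R4 easy mastery",    lambda p: p.get("self_explanatory", False)),
]

print(evaluate({"easy_to_use": True, "easy_to_learn": False}, rules))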

c. CASI Process

Figure 2: CASI Process

CASI contains four elements: Process, Knowledge Base, Inference Engine and Database. Figure 2 depicts the flow of the process among these elements.

IV. EXPERIMENTAL MODEL


In this section, the authors discuss the case study, the development of a university online classroom booking system built on the UZAB Model. Each prototype was tested by the expert system CASI. Further improvement was noted where the expert system could not evaluate according to the user's perception.

Figure 3: Main Screen


Figure 4: Expert system CASI Evaluates Main Screen

Figure 4 shows the results of the expert system CASI while evaluating the Main Screen. Termination occurs when any Rule fails to achieve the user's goal. Similarly, Figure 6 shows the result for the visual limitation screen.

Figure 5: Visual Limitation Screen


Figure 6: Expert system CASI Evaluates Visual Limitation Screen

Figure 7: Datasheet View

Figure 8: General Medication Screen


Figure 9: Evaluation of General Medication Interface

V. ANALYSIS AND RESULTS


The paper has brought a solution to the field of interface evaluation for SE and HCI experts. On the one hand, it demonstrates how the expert system CASI is usable in HCI design; on the other hand, it shows the benefits of using the expert system CASI based on the case study. The feedback obtained from the evaluators was generally positive towards the acceptance of the expert system CASI. All of the evaluators liked the new method of evaluating the software interface but provided recommendations for future improvement of the expert system CASI. The results obtained from the expert system CASI are assessed in terms of quality, time and error detection, and it is found that the expert system CASI helps to improve the quality of the software interface and can detect more errors in software interfaces in less time.
a. Quality Improvement by CASI: The term quality in the field of interface evaluation means having zero defects and achieving maximum interface usability. CASI satisfies this definition of quality, helping SE and HCI experts to detect defects in order to achieve interface usability.
b. Time Saving: CASI provides rapid results in less time compared to the traditional software interface evaluation techniques.
c. Error Detection: CASI is designed on those FACTS and RULES that help SE and HCI experts to detect errors in software interfaces and to fix the errors as soon as they are detected.

VI. CONCLUSIONS
With the rapid growth of the field of Cognitive Science and of interactive technology innovation, the computer is widely used in our daily life. The described expert system CASI is a helpful and effective approach for evaluating software interfaces during their development phase. The expert system CASI will be challenging in the beginning, when it must be provided with the FACTS and RULES to evaluate every interface of the software. Nevertheless, it is a good approach for producing a usable system that can fulfil users' requirements and live up to users' perceptions. Successful testing of the expert system will contribute to the evaluation of software interfaces that truly reflects the user's cognition and, not least, will help to evaluate software and increase usability.
Furthermore, new ideas and techniques must be considered to enhance the features of the expert system CASI.

ACKNOWLEDGMENT
The authors of the paper would like to thank Universiti Teknologi PETRONAS, the software evaluators and other staff members for their valuable feedback during the intermediate phase of the methodology presented in this paper.

REFERENCES
[1] Yonglei Tao, Work in progress - introducing usability concepts in early phases of software development, 35th ASEE/IEEE Frontiers in Education Conference, 2009, pp. 702-706.
[2] Ritter, F. E., Baxter, G. D., Jones, G., and Young, R. M., 2000. User interface evaluation: How cognitive models can help.
[3] G. Mori, F. Paternò and C. Santoro. CTTE: Support for developing and analyzing task models for interactive system design. IEEE Trans. Software Eng., 28(8):797-813, 2002.
[4] A. Dix, J. E. Finlay, G. D. Abowd, and R. Beale. Human-Computer Interaction (3rd Edition). Prentice-Hall, Inc., Upper Saddle River, NJ, USA, 2003.
[5] A. Monk, P. Wright, J. Haber, and L. Davenport. Improving Your Human-Computer Interface: A Practical Approach. Prentice Hall International, Hemel Hempstead, 1993.
[6] M. Y. Ivory and M. A. Hearst. The state of the art in automating usability evaluation of user interfaces. ACM Comput. Surv., 33:470-516, December 2001.
[7] P. G. Polson, C. Lewis, J. Rieman, and C. Wharton. Cognitive walkthroughs: a method for theory-based evaluation of user interfaces. Int. J. Man-Mach. Stud., 36:741-773, May 1992.
[8] J. Nielsen and R. L. Mack. Usability inspection methods. Wiley, 1st edition, April 1994.
[9] http://en.wikipedia.org/wiki/Automation, last accessed 2-6-2012.
[10] Law, E. L.-C., Hvannberg, E. T., 2004. Analysis of strategies for improving and estimating the effectiveness of heuristic evaluation. In: NordiCHI 2004, Tampere, Finland, pp. 241-250.
[11] Ashok Sivaji, Azween Abdullah, Alan G. Downe, Usability Testing Methodology: Effectiveness of Heuristic Evaluation in E-Government Website Development, Proceedings of the 2011 Fifth Asia Modelling Symposium, AMS 2011, pp. 68-72. ISBN 978-0-7695-4412-4.
[12] http://www.usabilitybok.org/methods/p275?section=basic-description, last accessed 7-5-2011.
[13] R. Molich, A. D. Thomsen, B. Karyukina, L. Schmidt, M. Ede, W. van Oel, and M. Arcuri. Comparative evaluation of usability tests. In CHI 99 Extended Abstracts on Human Factors in Computing Systems, CHI 99, pages 83-84, New York, NY, USA, 1999. ACM.
[14] Lawson, R. Frustration: The development of a scientific concept. New York: MacMillan, 1965.
[15] Patrick, J. R. Future of the Internet. Keynote Speech, Americas Conference on Information Systems, 2003.
[16] Zhang, P., Carey, J., Teeni, D., and Tremaine, M. Integrating Human-Computer Interaction Development into the Systems Development Life Cycle: A Methodology. Communications of the Association for Information Systems, vol. 15, pp. 512-543, 2005.
[17] Bryant, M. Introduction to user involvement, The Sainsbury Center for Mental Health, 2001.
[18] Rosen, J. A Methodology of Evaluating Predictive Metrics, Software Metrics Symposium, IEEE Computer Society, 1998.
[19] Integrating Human-Computer Interaction Development into SDLC: A Methodology, Proceedings of the Americas Conference on Information Systems, New York, August 2004.
[20] http://www.useit.com/papers/heuristic/heuristic_list.html

AUTHORS

Saad Masood Butt received his BS (Software Engineering) degree from Bahria University Islamabad, Pakistan, in 2008. He completed his MS (Software Engineering) degree in 2010 from Bahria University Islamabad, Pakistan. He is a recognized Engineer of Pakistan, approved by the Higher Education Commission and the Pakistan Engineering Council (PEC). He has more than 4 years of experience and has been associated with various organizations in Pakistan. Currently, he is pursuing his PhD degree in the Department of Computer and Information Sciences at Universiti Teknologi PETRONAS, Malaysia.

Wan Fatimah obtained her Ph.D. from Universiti Kebangsaan Malaysia. She is currently an Associate Professor at Universiti Teknologi PETRONAS, Malaysia. Her research interests include multimedia, human-computer interaction, mathematics education and e-learning.


AN INNOVATIVE APPROACH AND DESIGN ISSUES FOR NEW INTELLIGENT E-LEARNING SYSTEM
Gopal Sakarkar1, Shrinivas Deshpande2, Vilas Thakare3
1MCA Department, RCOEM, Nagpur, Maharashtra, India
2Head, MCA Department, PGDCS&T, HVPM, Amravati, Maharashtra, India
3Professor, P.G. Department, S.G.B. Amravati University, Amravati, Maharashtra, India

ABSTRACT
E-Learning is a new trend in the 21st century. Learners and instructors alike depend heavily on online e-materials available on the Internet. But the major problem facing both the learners and the instructors is: which is the best e-learning content delivery system out of the number of available application tools? This paper presents the results of case studies of different e-learning applications used in real life around the world. Commonly, an e-learning system is considered a web-based application, but nowadays it is also available in other forms, such as desktop applications and mobile applications. After reviewing and studying various e-learning systems, different observations, and also some recommendations, are given for enhancing future effective e-learning systems. An intelligent e-learning approach is the main focus of this study.

KEYWORDS: E-learning, Semantic Web, Software Agent, Moodle, PeerWise.

I. INTRODUCTION

In recent years, the demand of modern education is to make learning material available in an electronic format, a trend termed e-Learning. E-Learning is basically computer-based learning or web-based learning, or learning through the use of mobile technologies; it includes virtual classrooms and digital collaboration. But the future of this trend will demand more than static information display systems. To accomplish this new demand of the 21st century, research is now focusing on I-Learning, which stands for Intelligent Learning sources, to be provided by the collaboration of Intelligent Software Agents and Semantic Web technology. The young teachers' community belongs to the NetGen or Google generation [1]. The use of modern ICT in education requires a new teaching methodology focusing on collaboration on the part of the teachers and students. The teacher's role in virtual space and beyond is important, since he/she has to teach the students to make them creative and innovative [2]. The current manuscript is organized as follows: various researchers' views about online e-learning are described in section II; the first case study, on the Moodle e-learning system, is explained in section III; the second case study, on PeerWise, an online questionnaire system developed at Auckland University, New Zealand, is demonstrated in section IV; the observations found during the study are presented in section V; the suggested recommendations are in section VI; and finally the authors conclude the study in section VII.


II. RELATED WORK

Janis Kapenieks [1] proposed a new methodology for an effective e-learning process. The author used action research in an e-learning environment that helps the students not only to create new knowledge but also to change their views and interests in a way that enhances their creativity, by developing their practical problem-solving skills and an analytic and reflective mind, which in turn enhances a person's intellectual potential. H. Fletcher et al. [3] proposed a conceptual framework using andragogy theory, especially based on the learner's need for self-directedness, and transformative learning theory. It supports the self-directed learner and the learning process. Yasir Eltigani Ali Mustafa and Sami Mohamed Sharif [4] provide details of an adaptive e-learning hypermedia approach. The proposed adaptation model in AEHS-LS specifies the way in which the learner's knowledge and learning style modify the presentation of the content. For experimental purposes the authors used the VARK questionnaire to determine the learning style of the participants. The limitation of this e-learning system is that it depends entirely on human interaction to upload the e-learning resources, such as the audio, text and visual versions. M. Grigoriadou et al. [5] performed an empirical study to evaluate the adaptation framework and assess learners' attitudes towards the proposed instructional design. The number of students involved in the experiment was 23, which is still relatively small. They used descriptive statistical analysis in the form of bar and line charts. Abdallah Gomah found that the majority of current web-based learning systems are closed learning environments, where courses and materials are fixed and the only dynamic aspect is the organization of the material, which can be adapted to allow a relatively individualized learning environment. The author suggests that web recommender systems are important to facilitate web usage and decrease the time and effort needed for a user to reach the required information among the huge number of web pages that seem similar [6]. The authors of [7] focus on various issues that are normally ignored by e-learning resource systems. They found that, in designing a good online examination system, some of the well-known problems are: lack of a well-structured online examination system, incompatible question types and non-availability of a question editor, question upload and format related issues, difficulty in question contribution, slow response of the examination system, question bank ageing, and finally security of the assessment platform [7]. E-learning has grown and is expanding at a very rapid pace, the benefits it offers increase the number of e-learning users, and its functionality continues to expand and relies more and more heavily on the Internet. But it faces many technological issues, such as preparing an efficient infrastructure, bandwidth and connectivity problems, and learning material, since a lack of quality content persists. Some of the basic requirements that must also be considered while designing an e-learning system are multimedia instruction, autonomous learning, instructor-led interaction, improvement of learning effectiveness, social presence, confidentiality, availability, integrity and security [8]. The authors of [9] present an innovative approach for enhancing e-learning systems by introducing a hinting e-learning system. After comparing computer-generated and human teachers' hints, they conclude neither that the e-learning system is better than the teachers nor that the teachers are better than the e-learning system. However, they do suggest that human teachers can be replaced by the hinting e-learning system without a significant loss of effectiveness, because the difference between the hinting tutor and human teachers would lie in the interval [-0.38, 0.70] with 95% probability [9]. Intelligent learning through a web services architecture helps in a distributed, service-oriented e-learning system. The author identifies the major problem in e-learning systems, namely that information is present in distributed form on the Internet, but with the help of an agent platform and a web-service-based LCMS, assistance is provided to the learner [10]. E. Kovatcheva and R. Nikolov identified a new approach for e-learning systems, the adaptive feedback approach. This model is based on Computer Adaptive Test (CAT) theory and the organization of the learning domains. They used learning objects (LO) and a test item ontology for resource structuring. It supports flexible adaptive
strategies for assessment and navigation through the content. The proposed system gives adaptive feedback to the students depending on the learner evaluation [11].

III. CASE STUDY-I

To find out the limitations of e-learning systems, a study was performed on two different e-learning systems. The first one is the worldwide-used LMS, Moodle. In Moodle, the instructor first needs to log in to the e-learning system by simply providing a user name and password, as shown in Figure 1.

Figure 1. Moodle Login Screen

In the next step, the instructor adds various study materials for a subject, for the learners, as shown in Figure 2.

Figure 2. Adding study materials by Instructor

After logging in, learners view the different e-resources uploaded by the instructors, as shown in Figure 3.


Figure 3. Learners e-Resources

Learners may simply download these e-materials and use them for study, as shown in Figure 4.

Figure 4. Learner Downloads the e-Materials

IV. CASE STUDY-II

In the second case study, the online questionnaire application PeerWise is studied, which was developed and is used at the Department of Computer Science, The University of Auckland, New Zealand. It provides the statistics of responses given by the students to different queries and also a graphical presentation of the trend of student responses to the questions.


Figure 5. PeerWise Login for Instructor

The instructor needs to log in to the system, as shown in Figure 5, and can upload a test with a number of questions for the students, as shown in Figure 6. After successfully logging in to the system, students can attempt to solve the questions given in the test, and their responses and results are recorded online, as shown in Figure 7.

Figure 6. List of questions with respective answer


Figure 7. Response percentage per question

V. OBSERVATIONS

After reviewing a number of e-learning systems and actually studying the two systems mentioned above, we found the following limitations and drawbacks in the e-learning resources provided by various LMSs:
- They lack flexibility.
- They pay little attention to learning style and its effects on learning.
- They are centralized and offer courses with fixed content formats.
- They do not provide personalization or intelligent help.
- They do not use the data gathered from the students during the e-learning process for further improvement.
- In both synchronous and asynchronous e-learning systems, a great deal of human interaction is required.
- A human teacher has limited means of knowing a student's subject knowledge, attitude toward learning and state of mind, and reliable tools for measuring these parameters are also lacking.
- A human teacher cannot properly analyse students according to their background knowledge.
- A human teacher may have limited information about the diverse e-resources available on the Internet.

In the present e-learning systems, the content delivery is neither automatic nor intelligent.

VI. RECOMMENDATIONS

- An e-learning system should provide visual demonstrations of topics.
- It should offer statistical analysis of the use of a particular online topic and also provide opinions about the topic.
- It should have the ability to collaborate and to innovate new topic information.
- It should use artificial intelligence with voice recognition to interact with the user.
- It should have a learning ability.
- It should provide answers drawn from an internal database, the Web, wikis and previous conversations.
- It should search for information in several online services, such as Google, Ask.com, Bing and others.


VII. CONCLUSIONS

The boom in e-learning systems continues, because various international universities allow distance education all over the world, which provides a guaranteed flood of information to students and gives a holistic learning experience. The main limitations of these e-learning systems are that they are static, are not personalized and do not provide intelligent help to learners. There is a strong need to design and develop versatile, flexible and intelligent e-learning systems in the distance learning environment and in LMSs, to achieve the objectives of the teaching and learning process. The next challenge in developing effective e-learning systems, whether online or offline, is that they should be intelligent and user-oriented. User-oriented means that the system will provide the e-materials according to the user's current requirement and level of knowledge. We expect that, for web-based e-learning systems, the combination of emerging Semantic Web technology and intelligent agents will assist in achieving this herculean task.

ACKNOWLEDGEMENTS
The authors would like to thank Moodle for the open source application available at www.moodle.com, which appears in this case study in a non-commercial context under the GNU General Public License (http://docs.moodle.org). The authors would also like to thank PeerWise; the application at http://peerwise.cs.auckland.ac.nz/at/?nysscer_in appears in this case study for non-commercial purposes.

REFERENCES
[1]. Janis Kapenieks (2011), Knowledge creation: action research in e-learning teams, IEEE Global Engineering Education Conference (EDUCON) "Learning Environments and Ecosystems in Engineering Education", pp. 859-864.
[2]. Ferrari A., Cachia R., Punie Y. (2009), "Innovation and Creativity in Education and Training in the EU Member States: Fostering Creative Learning and Supporting Innovative Teaching", Volume 55.
[3]. Fletcher H. Glancy, Susan K. Isenberg (2011), "A Conceptual E-Learning Framework", European, Mediterranean & Middle Eastern Conference on Information Systems 2011 (EMCIS2011), May 30-31 2011, Athens, Greece, pp. 637-650.
[4]. Yasir Eltigani Ali Mustafa and Sami Mohamed Sharif (2011), "An approach to Adaptive E-Learning Hypermedia System based on Learning Styles (AEHS-LS): Implementation and evaluation", International Journal of Library and Information Science, Vol. 3(1), pp. 15-28.
[5]. Grigoriadou M., Papanikolaou K., Kornilakis H., Magoulas G. (2001), "INSPIRE: An intelligent system for personalized instruction in a remote environment", Proceedings of the Eighth International Conference on User Modelling, Sonthofen, Germany.
[6]. Abdallah Gomah, Samir Abdel Rahman, Amr Badr, Ibrahim Farag (2011), "An Auto-Recommender Based Intelligent E-Learning System", IJCSNS International Journal of Computer Science and Network Security, Vol. 11, No. 1, pp. 67-70.
[7]. Sanjit Kumar Ghosh, Alok Tiwary, Amit Tiwary, and Yogesh Kumar Bhatt (2011), "Developing Effective Online Assessment System for an Enterprise - Multidimensional Strategies for Success", International Journal of e-Education, e-Business, e-Management and e-Learning, Vol. 1, No. 1, pp. 57-63.
[8]. Najwa Hayaati Mohd Alwi, Ip-Shing Fan (2010), "E-Learning and Information Security Management", International Journal of Digital Society (IJDS), Volume 1, Issue 2.
[9]. Pedro J. Muñoz-Merino, Carlos Delgado Kloos, and Mario Muñoz-Organero (2011), "Enhancement of Student Learning Through the Use of a Hinting Computer e-Learning System and Comparison With Human Teachers", IEEE Transactions on Education, Vol. 54, No. 1, pp. 164-167.
[10]. Ahmad Luthfi (2010), "Intelligent Learning Objects (LOs) Through Web Service Architecture", InternetWorking Indonesia Journal, Vol. 2, No. 1, pp. 17-22.
[11]. E. Kovatcheva, R. Nikolov (2008), "An adaptive feedback approach for e-learning systems", IEEE Education Society Students Activities Committee (EdSocSAC).

AUTHORS

Gopal Sakarkar holds a Master's Degree in Computer Applications and a Bachelor's Degree in Computer Science from S.G.B. Amravati University, India. He is currently working as Asst. Prof. in the Department of Master in Computer Applications. He has published 5 research papers in various IEEE International Conferences and a paper in an International Journal. He delivered a guest speech at Universidad Nacional Experimental Politécnica Antonio José de Sucre (UNEXPO), Caracas, Venezuela. He also served as a core committee member of the 2nd International Computer Science On-Line Conference 2012, organized in the Czech Republic. His research areas are Intelligent Agents, Mobile Agents, Semantic Web, Ontology, Face Recognition and e-learning systems.

S. P. Deshpande has been working as Associate Professor at the Post Graduate Department of Computer Science & Technology, MCA, Shree H.V.P. Mandal's, Amravati, for the last 15 years. He has published 25 papers in various national & international conferences & 5 papers in international journals. He has guided more than 100 students at the Post Graduate level. His research interests are database management, data mining, web-based technologies and AI.

V. M. Thakare is Professor and Head of the Department of Computer Science, S.G.B. Amravati University, Amravati, India. He received his M.E. (Advanced Electronics) from Amravati University, P.G. DCM from IICM, Ahmedabad, and Ph.D. in Computer Science. He has been invited as a keynote speaker, invited speaker, session chair and reviewer for more than 18 international & national conferences. He has been actively involved in research in the areas of Robotics and AI, Computer Architectures, ICT and Software Engineering.


LOW TRANSITION TEST PATTERN GENERATOR ARCHITECTURE FOR MIXED MODE BUILT-IN-SELF-TEST (BIST)
P. Sakthivel1, K. Nirmal Kumar2, T. Mayilsamy3
1Department of Electrical and Electronics Engg., Velalar College of Engineering and Technology, Erode, India
2Department of Electrical and Electronics Engg., Info Institute of Engineering, Coimbatore, India
3Department of Electrical and Electronics Engg., Vivekanandha College of Engineering for Women, Tiruchengode, India

ABSTRACT
In Built-In Self-Test (BIST), test patterns are generated and applied to the circuit under test (CUT) by on-chip hardware; minimizing hardware overhead is a major concern of BIST implementation. In pseudorandom BIST architectures, the test patterns are generated in random order by Linear Feedback Shift Registers (LFSRs). Conventional LFSRs normally require a large number of test patterns to test an architecture, which leads to long test times. Approach: This paper presents a novel test pattern generation technique called the Low-Transition Generalized Linear Feedback Shift Register (LT-GLFSR), which combines the bipartite (half-fixed) and bit-insertion (0 or 1) techniques and interchanges output bit positions by a swapping technique (bit swapping). This method introduces intermediate patterns between consecutive test vectors generated by a GLFSR, enabled by a non-overlapping clock scheme; the process is controlled by a finite state machine that generates a sequence of control signals. The LT-GLFSR patterns are applied to a circuit under test to reduce the average and peak power during transitions. LT-GLFSR patterns have a high degree of randomness and improved correlation between consecutive patterns. The LT-GLFSR does not depend on the circuit under test, so it can be used for both BIST and scan-based BIST architectures. Results and Discussion: Simulation results show that this technique reduces power consumption and achieves high fault coverage with a minimum number of test patterns. The results also show reduced power consumption during test for the ISCAS89 benchmark circuits. The GLFSR with the bipartite technique alone is referred to as LT-GLFSR (bipartite); the proposed technique is referred to as LT-GLFSR with BI and BS.

KEYWORDS: Low Transition Generalized Linear Feedback Shift Register (LT-GLFSR (Bipartite)), Bipartite
Technique, LT-GLFSR (BI and BS), Finite State Machine(FSM), Bit Swapping(BS),Bit Insertion(BI).

I. INTRODUCTION

The importance of testing integrated circuits lies in improving the quality of chip functionality, which applies to both commercially and privately produced products. The impact of testing affects areas of manufacturing as well as those involved in design. Given this range of design involvement, how best to achieve a high level of confidence in IC operation is a major concern. The desire to attain a high quality level must be tempered with the cost and time involved in this process. These two design considerations are at constant odds. It is with both goals in mind (effectiveness and cost/time) that Built-In Self-Test (BIST) has become a major design consideration in Design-For-
Testability (DFT) methods. BIST is beneficial in many ways. First, it can reduce dependency on external Automatic Test Equipment (ATE), which is large, vendor-specific, non-scalable and expensive. This impacts the cost/time constraint, because the ATE is utilized less by the current design. In addition, BIST provides high-speed, in-system testing of the circuit under test (CUT) [13], which is crucial to the quality component of testing. Stored-pattern BIST requires high hardware overhead [3], since memory devices are needed to store pre-computed test patterns; pseudorandom BIST, where test patterns are generated by pseudorandom pattern generators such as Linear Feedback Shift Registers (LFSRs) and cellular automata (CA), requires very little hardware overhead. However, achieving high fault coverage for CUTs that contain many random-pattern-resistant faults (RPRFs) using only (pseudo)random patterns generated by an LFSR or CA often requires unacceptably long test sequences, resulting in prohibitively long test times. In general, the power dissipation of a system in test mode is higher than in normal mode operation. Power increases during testing because of high switching activity [2], parallel testing of nodes, power due to the additional DFT load, and the decrease of correlation [4] among patterns. This extra power consumption due to switching transitions (average or peak) can cause problems such as instantaneous power surges that damage circuits (CUT), the formation of hot spots, and difficulty in verification. Solutions commonly applied to relieve the excessive power problem during test include reducing the frequency and test scheduling to avoid hot spots; the former disrupts the at-speed test philosophy and the latter may significantly increase the test time. The aim of BIST is to detect faulty components in a system by means of test logic incorporated in the chip. It has many advantages, such as at-speed testing and a reduced need for expensive external ATE. In BIST, an LFSR is used to generate pseudorandom test patterns, which form the primary inputs for a combinational circuit or the scan chain inputs for a sequential circuit [7]. BIST-based structures are very vulnerable to high power consumption during test. The main reason is that the random nature of the patterns generated by an LFSR significantly reduces the correlation not only among the patterns but also among adjacent bits within each pattern; hence the power dissipation is higher in test mode. The paper is organised as follows: Section I is this introduction; Section II elaborates the prior work carried out by researchers in the field of VLSI circuit testing; Section III describes the proposed work; the materials and methods of the proposed work and their implementation are discussed in Sections IV, V and VI respectively; finally, the results and their discussion are presented in Sections VII and VIII.
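Since both the average and the peak test power track the number of bit transitions between successive patterns, a small utility that counts those transitions is a convenient way to quantify the problem. The sketch below is illustrative only and is not part of the BIST hardware described in this paper:

def transitions(a, b):
    # Hamming distance: bit positions that toggle between two patterns.
    return sum(x != y for x, y in zip(a, b))

def peak_and_average(patterns):
    counts = [transitions(p, q) for p, q in zip(patterns, patterns[1:])]
    return max(counts), sum(counts) / len(counts)

print(peak_and_average(["1111", "0000", "0010", "1010"]))   # -> (4, 2.0)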

II. PRIOR WORK

The GLFSR [11] is a combination of an LFSR and cellular arrays, defined over a higher-order Galois field GF(2^δ), δ > 1. GLFSRs yield a new structure when the feedback polynomial is primitive, and when δ > 1 the structure is termed an MLFSR. A cellular automata algorithm for test pattern generation was applied [5] to combinational logic circuits; it maximizes the achievable fault coverage and minimizes the length of the test vector sequences, and it requires minimal hardware. A low power/energy BIST architecture based on a modified clock scheme test pattern generator was discussed in [12], [8], where an n-bit LFSR is divided into two LFSRs of length n/2. The fault coverage and test time were the same as those achieved in the conventional BIST scheme. A dual-speed LFSR [16] test pattern generator for BIST comprises a slow-speed and a normal-speed LFSR. The slow-speed LFSR was clocked by dual-clocked flip-flops, which increased the area overhead relative to the normal-speed LFSR. An effective pattern generator should generate [6] patterns with a high degree of randomness and should have an area-efficient implementation. The GLFSR provides a better random distribution of the patterns and potentially fewer dependencies at the outputs. The EGLFSR, known as an enhanced GLFSR, which
comprises a few more XOR gates in the test pattern generator than the LFSR, achieves better performance. Low-power test patterns were generated in [10] for BIST applications. That work exploited a low-transition LFSR, a combination of a conventional LFSR and the insertion of intermediate patterns (bipartite and random-insertion techniques) between the sequences of patterns generated by the LFSR, implemented by a modified clock scheme. Low-transition generalized LFSR-based test patterns [14] were generated for a BIST architecture; that LT-GLFSR consists of a GLFSR with the bipartite technique. In the bipartite (half-fixed) technique, a portion of the bits of a test pattern is changed while the remaining bits are left unchanged, in order to obtain new vectors between two consecutive patterns generated by the GLFSR. Multiplexer circuits are then used to select either the swapped output of the GLFSR (bipartite) or the output of the bit-insertion circuit [15]. In this method, the generated patterns have a high degree of randomness and improved correlation between consecutive patterns, but the sequence of generated patterns still has somewhat high transition counts. Since power consumption tracks the number of transitions between consecutive patterns, enable signals are introduced to activate the GLFSR selectively and thereby reduce the number of transitions. In the proposed method, the LT-GLFSR is activated by four non-overlapping enable signals; these enable signals activate only part of the test pattern generator while the remainder stays idle during test pattern generation.

III. PROPOSED WORK

This paper presents a new test pattern generator for low-power BIST (LT-GLFSR), which can be employed for both combinational and sequential architectures. The proposed design is composed of a GLFSR and an intermediate pattern insertion technique (bipartite, bit insertion and bit swapping), implemented by a modified clock scheme whose control signals (codes) are generated by a finite state machine (FSM). The FSM generates a sequence of codes (en1 en2 sel1 sel2), namely 1011, 0010, 0111 and 0001. The enable signals (en1, en2) are used to enable part of the GLFSR (bipartite), and the selector signals (sel1, sel2) are used to select either the GLFSR output (bipartite and swapped output) or the bit-insertion circuit output. The intermediate patterns are formed from the GLFSR output and the bit-insertion output; the swapped output is obtained by interchanging the output positions of adjacent cells of the GLFSR. The proposed technique improves correlation in two dimensions: 1) the vertical dimension, between consecutive test patterns (Hamming distance), and 2) the horizontal dimension, between adjacent bits of a pattern sent to a scan chain. This reduces the switching activity, which in turn reduces the average and peak power consumption [13]. The GLFSR [12] structure is modified in such a way that it automatically inserts three intermediate patterns between each original pair it generates. The intermediate patterns are carefully chosen using the bipartite and bit-insertion techniques [10] and impose minimal extra time to achieve the desired fault coverage. Insertion of the intermediate patterns is achieved using a non-overlapping clock scheme [12]. The Galois field structure of the GLFSR(3, 4) [17] is divided into two parts, enabled by two different clock schemes. The randomness of the patterns generated by the LT-GLFSR has been shown to be better than that of the LFSR and GLFSR. The favourable features of the LT-GLFSR in terms of performance, fault coverage and power consumption are verified using the ISCAS benchmark circuits.
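A minimal sketch of this control sequencing is given below, assuming the FSM simply cycles through the four codes (en1 en2 sel1 sel2) in the order stated; the decoding into named signals is an illustrative assumption, not the paper's RTL:

from itertools import cycle

CODES = ["1011", "0010", "0111", "0001"]   # en1 en2 sel1 sel2, in order

def control_signals():
    # Endless generator of decoded (en1, en2, sel1, sel2) tuples.
    for code in cycle(CODES):
        yield tuple(bit == "1" for bit in code)

gen = control_signals()
for _ in range(4):
    print(next(gen))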

IV. MATERIALS AND METHODS

GLFSR Framework: The structure of the GLFSR is illustrated in Fig. 1. The circuit under test (CUT) is assumed to have outputs which form the inputs to the GLFSR when it is used as the signature analyzer [11], [9]. The inputs and outputs are considered δ-bit binary numbers, interpreted as elements over GF(2^δ). The GLFSR, designed over GF(2^δ), has all its elements belonging to GF(2^δ). Multipliers, adders and storage elements are designed using conventional binary elements. The feedback polynomial is represented in equation (1) as

Φ(x) = x^m + Φ_(m-1) x^(m-1) + ... + Φ_1 x + Φ_0        (1)

The GLFSR has m stages, D0, D1, ..., Dm-1, and each stage has δ storage cells. Each shift moves δ bits from one stage to the next. The feedback from the Dm-1 stage consists of δ bits and is sent to all the stages. The coefficients Φ_i of the polynomial are over GF(2^δ) and define the feedback connections.

Fig. 1 The generalized GLFSR

The GLFSR, when used to generate patterns for a circuit under test of n inputs, has m stages, each element belonging to GF(2^δ), where m x δ is equal to n. A non-zero seed is loaded into the GLFSR, which is then clocked to generate the test patterns. In this paper a GLFSR with δ > 1 and m > 1 is used, where all 2^(mδ) possible test patterns are generated (excluding, as usual, the all-zero state). The feedback polynomial is a primitive polynomial of degree m over GF(2^δ). Following [17], it is constructed as in equation (2):

Φ(x) = (x + ω)(x + ω^(2^δ))(x + ω^(2^2δ)) ... (x + ω^(2^((m-1)δ)))        (2)

where ω is a primitive element of GF(2^(mδ)); the coefficients Φ_0, Φ_1, ..., Φ_(m-1) are thereby obtained as powers of ω. Let δ = 3 and m = 4, i.e. GLFSR(3, 4); the primitive polynomials associated with GF(2^12) and GF(2^3) are denoted Φ(x) and p(x) respectively. Equation (2) then becomes

Φ(x) = (x + ω)(x + ω^8)(x + ω^64)(x + ω^512)        (3)

The expanded form of the polynomial is given in equation (4):

Φ(x) = x^4 + ω^1755 x^3 + ω^2340 x^2 + ω^585        (4)

GF(2^3) is defined by the roots of the primitive polynomial p(x) in equation (5):

p(x) = x^3 + x + 1        (5)

In GF(2^12), ω^1755 corresponds to a primitive element α of GF(2^3) (so that ω^2340 = α^6 and ω^585 = α^5). Substituting the corresponding values, the feedback polynomial is as in equation (6):

Φ(x) = x^4 + α x^3 + α^6 x^2 + α^5        (6)

The elements α, α^5 and α^6 are represented as x, x^5 mod p(x) and x^6 mod p(x) respectively in polynomial form. The four storage elements of the GLFSR are represented as D_I = a2 x^2 + a1 x + a0, D_II = a5 x^2 + a4 x + a3, D_III = a8 x^2 + a7 x + a6 and D_IV = a11 x^2 + a10 x + a9 respectively. Each storage element has δ = 3 storage cells: D_I (D0, D1, D2), D_II (D3, D4, D5), D_III (D6, D7, D8) and D_IV (D9, D10, D11). At each cycle, the values fed back into the storage elements are given by the polynomials

Φ_0 (a11 x^2 + a10 x + a9)
Φ_1 (a11 x^2 + a10 x + a9) + a2 x^2 + a1 x + a0
Φ_2 (a11 x^2 + a10 x + a9) + a5 x^2 + a4 x + a3
Φ_3 (a11 x^2 + a10 x + a9) + a8 x^2 + a7 x + a6

With the above construction, the generalized GLFSR of Fig. 1 is instantiated as GLFSR(3, 4) defined over GF(2^3); its structure is given in Fig. 2.
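The construction above can be checked in software. The sketch below implements GF(2^3) arithmetic with p(x) = x^3 + x + 1 and steps a GLFSR(3, 4) using the feedback equations just derived; taking alpha = x and printing the cells in D_I..D_IV order are illustrative assumptions, so the printed sequence need not match the bit grouping used in Table 1:

P = 0b1011                     # p(x) = x^3 + x + 1 over GF(2)

def gf8_mul(a, b):
    # Carry-less multiplication modulo p(x) in GF(2^3).
    r = 0
    while b:
        if b & 1:
            r ^= a
        a <<= 1
        if a & 0b1000:         # degree reached 3: reduce by p(x)
            a ^= P
        b >>= 1
    return r

def alpha_pow(k):
    r = 1
    for _ in range(k):
        r = gf8_mul(r, 0b010)  # alpha = x, assumed root of p(x)
    return r

# Feedback coefficients of Phi(x) = x^4 + a*x^3 + a^6*x^2 + a^5 (eq. 6):
PHI = [alpha_pow(5), 0, alpha_pow(6), alpha_pow(1)]   # Phi_0 .. Phi_3

def glfsr_step(state):
    # state = (D_I, D_II, D_III, D_IV), each a 3-bit GF(2^3) element.
    d1, d2, d3, d4 = state
    return (gf8_mul(PHI[0], d4),
            gf8_mul(PHI[1], d4) ^ d1,
            gf8_mul(PHI[2], d4) ^ d2,
            gf8_mul(PHI[3], d4) ^ d3)

state = (0b111, 0b111, 0b111, 0b111)   # all-ones non-zero seed
for _ in range(5):
    print("".join(format(d, "03b") for d in state))
    state = glfsr_step(state)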


Fig. 2 Structure of GLFSR (3, 4)

Table 1 shows the first 15 states of the GLFSR(3, 4) with the initial seed 1111, 1111, 1111, alongside the GLFSR(1, 12), a 12-stage LFSR, for comparison.
Table 1. First 15 states of the GLFSR and LFSR

S.No.   GLFSR(3,4)        LFSR(n=12)
1       1111,1111,1111    1111,1111,1111
2       1101,1110,0010    0111,1111,1111
3       1011,1001,1101    0011,1111,1111
4       0111,0100,1111    0001,1111,1111
5       1100,1111,0100    1000,1111,1111
6       1111,1011,0100    0100,0111,1111
7       1111,1101,1100    0010,0011,1111
8       1111,1101,0001    1001,0001,1111
9       1001,1110,1100    0100,1000,1111
10      1111,0001,0111    1010,0100,0111
11      1101,1111,1111    0101,0010,0011
12      1101,1010,0010    1010,1001,0001
13      1011,1001,0101    0101,0100,1000
14      0111,0100,1110    1010,1010,0100
15      0100,1110,0010    0101,0101,0010
16      1010,1011,1101    1010,1010,1001
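To make the arithmetic concrete, the Python sketch below implements GF(2^3) multiplication under p(x) = x^3 + x + 1 and the stage update implied by the feedback polynomials above, with taps taken from Eq. (6). It is a behavioural sketch under assumptions: the mapping of stage contents to the 12-bit groupings printed in Table 1 is our guess (the table groups the bits in fours, this sketch in threes per stage), so the printed states are not guaranteed to match the table bit-for-bit.

```python
# GF(2^3) defined by the primitive polynomial p(x) = x^3 + x + 1 (Eq. 5)
P = 0b1011

def gf8_mul(a, b):
    """Carry-less multiply of two GF(2^3) elements, reduced modulo p(x)."""
    r = 0
    for i in range(3):
        if (b >> i) & 1:
            r ^= a << i
    for i in (4, 3):                    # reduce the degree-4 and degree-3 terms
        if (r >> i) & 1:
            r ^= P << (i - 3)
    return r & 0b111

ALPHA = 0b010                           # the primitive element alpha = x

def gf8_pow(e):
    r = 1
    for _ in range(e):
        r = gf8_mul(r, ALPHA)
    return r

# Feedback taps of Eq. (6): Phi(x) = x^4 + alpha*x^3 + alpha^6*x^2 + alpha^5
PHI = [gf8_pow(5), 0, gf8_pow(6), ALPHA]    # phi0..phi3 (x^1 coefficient is 0)

def glfsr34_step(state):
    """One clock of the Galois-configuration GLFSR(3,4): each stage receives
    the previous stage plus phi_j times the element leaving the last stage."""
    d1, d2, d3, d4 = state
    return [gf8_mul(PHI[0], d4),
            d1 ^ gf8_mul(PHI[1], d4),
            d2 ^ gf8_mul(PHI[2], d4),
            d3 ^ gf8_mul(PHI[3], d4)]

state = [0b111] * 4                     # all-ones seed, as in Table 1
for _ in range(4):
    print(",".join(format(s, "03b") for s in state))
    state = glfsr34_step(state)
```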

V. BIPARTITE (HALF-FIXED), BIT INSERTION AND BIT SWAPPING TECHNIQUE (INTERMEDIATE PATTERNS INSERTION TECHNIQUE)

The GLFSR is modified to improve design features such as test power. Such a modification, however, may change the order of the patterns or insert new patterns, which affects the overall randomness. Intermediate bit patterns between Ti and Ti+1 of the GLFSR are introduced by the bipartite and bit insertion [10] techniques. Two cells in each field of the GLFSR are considered adjacent when no XOR gate intervenes between them.

5.1. Bipartite (Half-Fixed) Technique

The maximum number of transitions is n, which occurs when Ti and Ti+1 are complements of each other. One strategy, used in [19] to reduce the number of transitions to at most n/2, is to insert a pattern Ti1, one half of which is identical to Ti and the other half to Ti+1. This bipartite (half-fixed) strategy is shown symbolically in Fig. 3a.


Fig. 3a Patterns Insertion based on Bipartite Strategy
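A minimal Python sketch of the half-fixed strategy, assuming test patterns are represented as bit strings:

```python
def bipartite_insert(ti, ti_next):
    """Half-fixed intermediate pattern: the first half is taken from Ti,
    the second half from Ti+1, so each step toggles at most n/2 bits."""
    n = len(ti)
    return ti[:n // 2] + ti_next[n // 2:]

ti, ti_next = "1010", "0101"            # worst case: complementary patterns
mid = bipartite_insert(ti, ti_next)     # -> "1001"
print(ti, mid, ti_next)
```

In this worst case of complementary patterns, each of the two steps Ti to Ti1 and Ti1 to Ti+1 now toggles only n/2 bits instead of n.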

5.2. Bit Insertion Technique (0 or 1)

The bit insertion technique randomly inserts a value (either 0 or 1) in the positions where

$$t^{i}_{j} \neq t^{i+1}_{j}. \qquad (7)$$

The technique is represented symbolically in Fig. 3b. The marked cells show those bit positions where t_j^i ≠ t_j^{i+1}. A random bit (shown as I in T^{i1}) is inserted whenever the corresponding bits in Ti and Ti+1 are not equal (0 and 1), as expressed in equation (7). Note that the inserted bits are uniformly distributed over the length of the test vector.

Fig. 3b Patterns insertion based on Bit insertion strategy
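The same bit-string representation gives a short sketch of the bit insertion strategy; the random seed is fixed here only to make the illustrative run repeatable.

```python
import random

def bit_insertion(ti, ti_next):
    """Where Ti and Ti+1 disagree, emit a random bit; elsewhere keep
    the (common) bit value, as in the strategy of Fig. 3b."""
    return "".join(a if a == b else random.choice("01")
                   for a, b in zip(ti, ti_next))

random.seed(0)                          # illustrative only
print(bit_insertion("1010", "0110"))    # bits 0 and 1 disagree -> randomised
```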

5.3. Bit Swapping Technique

The bit swapping technique interchanges the positions of bits of the test pattern. For example, the LT-GLFSR outputs of D0, D1 and D2 are interchanged with those of D3, D4 and D5. In the LT-GLFSR this is done by 2×1 multiplexers driven by the selector signals; each multiplexer selects either the bit-swapped GLFSR output or the bit insertion circuit output. With this modification [1], the output of each swapped pair of cells has its transition count reduced by Tsaved = 2^(n-2) transitions, i.e., the total number of transitions is reduced by 25% for each cell pair swapped.
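Bit swapping can likewise be sketched at the pattern level; the pairing of the D0-D2 group with the D3-D5 group follows the text above, while the sample word is an arbitrary assumption.

```python
def bit_swap(pattern, i, j):
    """Interchange the outputs of two GLFSR cells (one 2x1 MUX per cell
    pair in hardware); shown here on a string of bits."""
    p = list(pattern)
    p[i], p[j] = p[j], p[i]
    return "".join(p)

# Illustrative: swapping the D0..D2 group with the D3..D5 group of a 12-bit word
word = "110100101101"
for a, b in ((0, 3), (1, 4), (2, 5)):
    word = bit_swap(word, a, b)
print(word)
```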

VI. IMPLEMENTATION OF GLFSR WITH BIPARTITE, BIT INSERTION AND BIT SWAPPING TECHNIQUE (LT-GLFSR)

In the proposed method, the GLFSR is combined with the bipartite, bit insertion and bit swapping techniques for low-power BIST; the result is called the LT-GLFSR. The method generates three intermediate patterns (Ti1, Ti2 and Ti3) between two consecutive random patterns (Ti and Ti+1) generated by the GLFSR, which is enabled by non-overlapping clock schemes. The LT-GLFSR provides more power reduction than the LT-GLFSR (bipartite), the conventional GLFSR and LFSR techniques. The intermediate patterns inserted by this technique have high randomness with low transition counts and can perform as well as patterns generated by the GLFSR in terms of fault detection and fault coverage. In the bipartite technique, each half of Ti1 is filled from the corresponding half of Ti and Ti+1, as shown in equation (8).

$$T_{i1}[j] = \begin{cases} T_{i}[j], & 1 \le j \le n/2 \\ T_{i+1}[j], & n/2 < j \le n \end{cases} \qquad (8)$$

In the GLFSR with bipartite technique [14], the GLFSR is divided into two parts by applying two complementary (non-overlapping) enable signals (En1 and En2). The first part of the GLFSR comprises flip-flops D0, D1, D3, D4, D6, D7, D9 and D10; the second part comprises D2, D5, D8 and D11. In other words, one of the two parts of the GLFSR works while the other part is in idle mode. The GLFSR, including the flip-flops with the two different enable signals, is shown in Fig. 4a.

Fig. 4a Architecture of LT- GLFSR with Bipartite Technique

In the proposed method, the GLFSR with bipartite and bit insertion techniques has four different control signals, as shown in Fig. 4b: the non-overlapping signals En1, En2, Sel1 and Sel2. En1 and En2 activate the GLFSR with the bipartite technique, as shown in Fig. 4d, while Sel1 and Sel2 activate the GLFSR with the bit insertion technique, shown in Fig. 4e, through the bit insertion circuit of Fig. 4c. The sequence of control codes generated by the finite state machine is 1011, 0010, 0111 and 0001. En1 and En2 each enable one part of the GLFSR; Sel1 and Sel2 are the selector signals of the multiplexers and thus select the output of either the GLFSR or the bit insertion circuit, according to the enable and selector signals. The first part of the GLFSR works and the second part is idle when En1En2Sel1Sel2 = 1011; the second part works and the first part is idle when En1En2Sel1Sel2 = 0111. A part in idle mode presents its present state (stored value) at its output. When En1En2Sel1Sel2 = 0001 or 0010, the output of the test pattern generator consists of the idle part of the GLFSR together with the output of the bit insertion circuit for the remaining bits. Additional flip-flops (the shaded flip-flops (D)) are added to the LT-GLFSR architecture to store the nth, (n-1)th and (n-2)th bits of the GLFSR. The (n-1)th and (n-2)th bits of the GLFSR are stored when En1En2 = 10; when the second part becomes active (En1En2 = 01), the (n-2)th bit value is sent to the XOR gates of flip-flops D2 and D8, and the (n-1)th bit value to the XOR gates of D2 and D11. Similarly, the nth bit of the GLFSR is stored when En1En2 = 01, and its value is sent to the XOR gates of flip-flops D0, D7 and D10 when the first part becomes active (En1En2 = 10). In general, the output of the LT-GLFSR is determined by the enable and selector signals. Note that the new (shaded (D)) flip-flops do not change the characteristic function of the GLFSR. The GLFSR's operation is effectively split into two parts, enabled by the four different control signals as shown in Fig. 4f. This method is similar to the modified clock scheme LFSR of Girard et al. (2001), which used two n/2-length LFSRs with two different non-overlapping clock signals, increasing the area overhead. The intermediate patterns Ti1, Ti2 and Ti3 are inserted between two consecutive patterns Ti and Ti+1 generated by the GLFSR(3, 4).


Fig. 4b Architecture of LT- GLFSR with Bipartite, BI and BS Technique

Fig. 4c A BI circuit

Only one part of the LT-GLFSR flip-flops is clocked in each cycle, whereas in a conventional LFSR or GLFSR all flip-flops are clocked simultaneously in every clock cycle; the power consumption of the conventional generators is therefore much higher than that of the LT-GLFSR. The power consumed by the LFSR, GLFSR, LT-GLFSR (bipartite) and LT-GLFSR (bipartite and BI) on the ISCAS benchmark circuits is tabulated in Tables 3 and 4. The following steps insert the intermediate patterns between two consecutive patterns.

Step 1. en1en2 = 10, sel1sel2 = 11 (1011). The first part (D0, D1, D3, D4, D6, D7, D9 and D10) of the GLFSR is active and the second part (D2, D5, D8 and D11) is in idle mode. With sel1sel2 = 11, both parts of the GLFSR are sent to the outputs (O0 to O11): the first part is sent to outputs O0, O1, O3, O4, O6, O7, O9 and O10 as its next state, while the second part, with no bit change, is sent to outputs O2, O5, O8 and O11 as its present state (stored value); in addition, the output positions of D0, D1 and D2 are interchanged with those of D3, D4 and D5. In this case Ti is generated.

Step 2. en1en2 = 00, sel1sel2 = 10 (0010). Both parts of the GLFSR are in idle mode. The first part is sent to outputs O0, O1, O3, O4, O6, O7, O9 and O10 as its present state (stored value), while the bit insertion circuit inserts a bit (0 or 1) on outputs O2, O5, O8 and O11; the output positions of D0 and D1 are interchanged with those of D3 and D4. Ti1 is generated.

Step 3. en1en2 = 01, sel1sel2 = 11 (0111). The first part of the GLFSR is in idle mode and the second part is active. The first part (D0, D1, D3, D4, D6, D7, D9 and D10) is sent to outputs O0, O1, O3, O4, O6, O7, O9 and O10 as its present state, and the second part (D2, D5, D8 and D11) is sent to outputs O2, O5, O8 and O11 as its next state; the output positions of D0, D1 and D2 are again interchanged with those of D3, D4 and D5. Ti2 is generated.

Step 4. en1en2 = 00, sel1sel2 = 01 (0001). Both parts of the GLFSR are in idle mode. The second part is sent to outputs O2, O5, O8 and O11 as its present state, while the bit insertion circuit inserts a bit (0 or 1) on outputs O0, O1, O3, O4, O6, O7, O9 and O10; the output position of D2 is interchanged with that of D5. The pattern Ti3 is thus generated.

Step 5. The process continues by returning to Step 1 to generate Ti+1.
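Putting the pieces together, the following behavioural Python sketch mimics Steps 1-4 at the pattern level, emitting Ti1, Ti2 and Ti3 between a given Ti and Ti+1. It abstracts away the hardware (the shaded flip-flops, the XOR feedback and the swapping multiplexers), so it is an approximation of the scheme rather than a gate-level model; the index sets of the two parts follow the text above.

```python
import random

SEQ = ("1011", "0010", "0111", "0001")   # en1 en2 sel1 sel2 codes from the FSM

def intermediates(ti, ti_next, first, second, rng=random.Random(0)):
    """Pattern-level model of Steps 1-4: between Ti and Ti+1 the generator
    emits Ti1 (code 0010), Ti2 (code 0111) and Ti3 (code 0001)."""
    n = len(ti)

    def insert(held, positions):
        # bit insertion circuit: keep equal bits, randomise disagreeing ones
        out = list(held)
        for j in positions:
            if ti[j] != ti_next[j]:
                out[j] = rng.choice("01")
        return "".join(out)

    ti1 = insert(ti, second)             # code 0010: both parts idle
    ti2 = "".join(ti_next[j] if j in second else ti[j]
                  for j in range(n))     # code 0111: second part advances
    ti3 = insert(ti2, first)             # code 0001: both parts idle again
    return ti1, ti2, ti3

first = {0, 1, 3, 4, 6, 7, 9, 10}        # flip-flops clocked by En1 (per the text)
second = {2, 5, 8, 11}                    # flip-flops clocked by En2
ti, ti_next = "110100101101", "011011010010"   # arbitrary example patterns
for code, pat in zip(SEQ[1:], intermediates(ti, ti_next, first, second)):
    print(code, pat)
```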

Fig. 4d Bipartite technique in the LT-GLFSR

Fig. 4e Bit insertion in the LT-GLFSR

Fig. 4f Timing diagram of Enable signals

VII. RESULTS

The test patterns generated by the LFSR, GLFSR, LT-GLFSR (bipartite) and LT-GLFSR (BI and BS) are used to verify the ISCAS'89 benchmark circuits s298 and s526. Simulation and synthesis are done in Xilinx 13, and power analysis is done using a power analyzer. The results in Tables 3 and 4 give the test patterns for fault coverage and the reduction in the number of test patterns. Power analysis is carried out with the maximum, minimum and typical input test vectors for stuck-at faults and transition faults of the sequential circuits (CUT). Fig. 5a shows the distribution of the number of transitions in each bit of the patterns generated using the GLFSR, LT-GLFSR (BS) and LT-GLFSR (BI & BS) for 50 patterns. The number of transitions in each bit of the patterns generated by the LT-GLFSR (bipartite) varies between 5 and 10, comparatively fewer than for the patterns generated by the GLFSR. Fig. 5b shows the output of the LT-GLFSR (BI & BS). These test patterns reduce switching transitions in the test pattern generator as well as in the circuit under test.
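The per-bit transition counts plotted in Fig. 5a can be computed with a few lines of Python; the four 4-bit patterns below are arbitrary illustrative data, not the generator's output.

```python
def per_bit_transitions(patterns):
    """Count, for every bit position, how often consecutive patterns
    toggle that bit -- the quantity plotted in Fig. 5a."""
    n = len(patterns[0])
    counts = [0] * n
    for prev, cur in zip(patterns, patterns[1:]):
        for j in range(n):
            counts[j] += prev[j] != cur[j]
    return counts

print(per_bit_transitions(["1010", "1001", "0001", "0101"]))  # -> [1, 1, 1, 1]
```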

Fig.5c LT-GLFSR (Bipartite, BI and BS) Test pattern generator

VIII. DISCUSSIONS

Test patterns were generated by the LFSR, LT-GLFSR (bipartite) and LT-GLFSR (bipartite and bit insertion), and the randomness, or closeness, among the bit patterns was analysed. The analysis shows that the test patterns generated by the LT-GLFSR (bipartite and bit insertion) have a significantly greater degree of randomness, resulting in improved fault coverage compared with the standard LFSR and GLFSR. The GLFSR is modified by means of clocking such that during a clock pulse one part is in idle mode while the other part is active. This modification, known as the LT-GLFSR, reduces transitions in test pattern generation and increases the correlation between and within the patterns by inserting intermediate patterns. Of the three methods discussed, the LT-GLFSR requires the fewest test patterns for high fault coverage, with a high degree of closeness and randomness and low power consumption for the CUT.

Fig.5a Distribution of the number of transitions in each Bit of the pattern generated using GLFSR & LT-GLFSR (bipartite) for 50 patterns

Table 2. Test Patterns for first 20 states

Table 3. Transition faults detected in s298 (No. of faults: 25)

Pattern generator     No. of test patterns   Pattern reduction (%)   Power (mW)
LFSR                  53                     --                      45.56
GLFSR                 17                     32.09                   25.98
LT-GLFSR (BS)         12                     22.67                   21.23
LT-GLFSR (BI & BS)    13                     23.65                   22.25

Table 4. Transition faults detected in s526 (No. of faults: 20)

Pattern generator     No. of test patterns   Pattern reduction (%)   Power (mW)
LFSR                  567                    --                      58.9
GLFSR                 234                    41.26                   39.7
LT-GLFSR (BS)         197                    34.74                   31.6
LT-GLFSR (BI & BS)    180                    31.2                    29.12

IX. CONCLUSION AND FUTURE SCOPE

An effective low-power pseudorandom test pattern generator based on the LT-GLFSR (BI & BS) is proposed in this paper. The power consumption of the LT-GLFSR is reduced by the bipartite, bit insertion and bit swapping techniques: only half of the LT-GLFSR flip-flops are clocked in each cycle, and the bits are then swapped according to the selector signals. LT-GLFSRs provide greater randomness than the standard LFSR and GLFSR and have the potential to detect most stuck-at and transition faults in the CUT with a fraction of the patterns; this is significant for fault detection in the ISCAS circuits with a minimum number of input test patterns. The switching activity in the CUT and scan chains, and hence their power consumption, is reduced by increasing the correlation between patterns and also within each pattern. This is achieved with almost no increase in the test length needed to reach the target fault coverage. As future scope, the proposed work can be applied to complex sequential circuits, and the concepts of the GLFSR and cellular automata can be combined to obtain a better degree of randomness and to cover more faults with fewer patterns.

REFERENCES
[1]. A. S. Abu-Issa & S. F. Quigley, (2009) "Bit-Swapping LFSR and Scan-Chain Ordering: A Novel Technique for Peak- and Average-Power Reduction in Scan-Based BIST", IEEE Transactions on Computer-Aided Design of Integrated Circuits and Systems, Vol. 28, No. 5.
[2]. M. Chatterjee & D. K. Pradhan, (2003) "A BIST pattern generator design for near-perfect fault coverage", IEEE Transactions on Computers, Vol. 52, No. 12, pp. 1543-1556.
[3]. M. Chatterjee, (1998) "An integrated framework for synthesis for testability", Ph.D. dissertation, Dept. of Computer Science, Texas A&M University.
[4]. X. Chen & M. Hsiao, (2003) "Energy-Efficient Logic BIST Based on State Correlation Analysis", Proceedings of the VLSI Test Symposium, pp. 267-272.
[5]. F. Corno, M. Rebaudengo, M. Reorda, G. Squillero & M. Violante, (2000) "Low Power BIST via Non-Linear Hybrid Cellular Automata", Proceedings of the VLSI Test Symposium, pp. 29-34.
[6]. K. Dhiraj, P. C. Liu & K. Chakraborty, (2003) "EBIST: A novel test generator with built-in fault detection capability", Proceedings of the Design, Automation and Test in Europe Conference and Exhibition, pp. 1-6.
[7]. P. Girard, L. Guiller, C. Landrault, S. Pravossoudovitch & H. J. Wunderlich, (2001) "A Modified Clock Scheme for a Low Power BIST Test Pattern Generator", Proceedings of the VLSI Test Symposium, pp. 306-311.
[8]. D. Gizopoulos, N. Krantitis, A. Paschalis, M. Psarakis & Y. Zorian, (2000) "Low Power/Energy BIST Scheme for Datapaths", Proceedings of the VLSI Test Symposium, pp. 23-28.
[9]. T. K. Matsushima, T. Matsushima & S. Hirasawa, (1997) "A new architecture of signature analyzers for multiple-output circuits", IEEE Computational Cybernetics Simulation, pp. 3900-3905.
[10]. M. Nourani, M. Tehranipoor & N. Ahmed, (2008) "Low transition test pattern generation for BIST architecture", IEEE Transactions on Computers, Vol. 3, pp. 303-315.
[11]. D. K. Pradhan & M. Chatterjee, (1999) "GLFSR - A new test pattern generator for Built-in-Self-Test", IEEE Transactions on Computer-Aided Design of Integrated Circuits and Systems, Vol. 2, pp. 238-247.
[12]. D. K. Pradhan & S. K. Gupta, (1991) "A new framework for designing and analyzing BIST techniques and zero aliasing compression", IEEE Transactions on Computers, Vol. 40, pp. 743-763.
[13]. D. K. Pradhan, D. Kagaris & R. Gambhir, (2006) "A Hamming distance based test pattern generator with improved fault coverage", Proceedings of the 11th IEEE International On-Line Testing Symposium, pp. 221-226.
[14]. P. Sakthivel & A. N. Kumar, (2011) "LT-GLFSR Based Test Pattern Generator Architecture for Mixed Mode Built-in-Self-Test", European Journal of Scientific Research, Vol. 52, No. 1, pp. 6-15.
[15]. P. Sakthivel & A. N. Kumar, (2012) "Low Transition-Generalized Linear Feedback Shift Register Based Test Pattern Generator Architecture for Built-in-Self-Test", International Journal of Computer Science, Vol. 8, No. 6, pp. 815-821.
[16]. S. Wang & S. K. Gupta, (2002) "DS-LFSR: A BIST TPG for Low Switching Activity", IEEE Transactions on Computer-Aided Design of Integrated Circuits and Systems, Vol. 7, pp. 842-851.
[17]. Z. Wen-rong & W. Shu-Zong, (2009) "A novel test pattern generator with high fault coverage for BIST design", Proceedings of the 2nd International Conference on Information and Computer Science, pp. 59-62.
[18]. Y. Zorian, (1993) "A Distributed BIST Control Scheme for Complex VLSI Devices", Proceedings of the IEEE VLSI Test Symposium, pp. 4-9.
[19]. X. Zhang, K. Roy & S. Bhawmik, (1999) "POWERTEST: A Tool for Energy-Conscious Weighted Random Pattern Testing", Proceedings of the International Conference on VLSI Design, pp. 416-422.

AUTHORS
Sakthivel. P, corresponding author of the paper, received the B.E. degree in Electrical and Electronics Engineering from Coimbatore Institute of Technology, Coimbatore in 1998 and the M.E. degree in Applied Electronics from Coimbatore Institute of Technology, Coimbatore in 2001. He is pursuing his Ph.D. in testing of VLSI circuits at Anna University, Chennai. Currently, he is working as Assistant Professor in the Department of Electrical and Electronics Engineering at Velalar College of Engineering and Technology, Tamilnadu, India. He is a Life Member of ISTE. He received the Best Teaching Staff award for the academic years 2003 and 2010. His areas of interest include electrical engineering, VLSI design, low-power testing and soft computing techniques.

Nirmal Kumar. A received the Ph.D. degree from PSG College of Technology in 1992, the M.Sc. (Engg.) degree from Kerala University in 1975 and the B.Sc. (Engg.) degree from NSS College of Engineering, Palakkad in 1972. Currently, he is working as Professor and Head of the Department of Electrical and Electronics Engineering at Info Institute of Engineering, Coimbatore, Tamilnadu, India. His fields of interest are power quality, power drives and control, and system optimization.


CRYPTOGRAPHY SCHEME OF AN OPTICAL SWITCHING SYSTEM USING PICO/FEMTO SECOND SOLITON PULSE
I. S. Amiri1, M. Nikmaram2, A. Shahidinejad2, J. Ali1
1 Institute of Advanced Photonics Science, Nanotechnology Research Alliance, Universiti Teknologi Malaysia (UTM), 81310 Johor Bahru, Malaysia
2 Faculty of Computer Science & Information Systems (FCSIS), Universiti Teknologi Malaysia (UTM), 81300 Johor Bahru, Malaysia

ABSTRACT
We propose a system of microring resonators (MRRs) incorporated with an add/drop filter. An optical soliton can be simulated and used to generate entangled photons, applicable to single and multiple optical switching. Chaotic signals can be generated via the MRR system, so that continuous spatial and temporal signals spreading over the spectrum are generated. Polarized photons are formed by incorporating the polarization control unit into the MRRs, which allows entangled photons in different time slots to be formed randomly. Results show a single soliton pulse of 0.7 ps, while a multi-soliton pulse with FSR of 0.6 ns and FWHM of 20 ps is generated using the add/drop filter system; an ultra-short single soliton pulse with FWHM = 42 fs can also be simulated. These pulses provide the communication signals required to generate pairs of polarization-entangled photons among different time frames, where the polarization control unit and polarizing beam splitter (PBS) are connected to the ring resonator system.

KEYWORDS: Microring Resonator, Photon, Spatial and Temporal Soliton.

I. INTRODUCTION

Photon switching is a growing field of optical communication [1]; it employs quantum cryptography in a mobile telephone network, as described by Yupapin [2]. Quantum key distribution supplies perfect communication security [3-4]; hence quantum cryptography can be performed through an optical-wireless link [5]. Research has shown that techniques of continuous-variable quantum cryptography can be applied to microring resonators [6]. Entangled photon pairs are an important resource in quantum optics [7] and are essential for quantum information [8] applications such as quantum key distribution [9-10] and controlled quantum logic operations [11]. Furthermore, control over the pair generation time is essential for scaling many quantum information schemes beyond a few gates. A new quantum key distribution protocol indicates that data can be encoded on continuous variables of a single photon [12]. In order to give rise to light throughout a wide spectral range, an optical soliton signal is suggested as an improved laser pulse for producing chaotic filter characteristics when propagating inside MRRs [13-16]. The capacity of the system can be increased when chaotic packet switching is employed [17-20]. We propose a system that uses localized soliton pulses [21] to realise high-capacity, secure communication [22-25]; it is used to trap optical solitons to generate entangled photon pairs [26-27]. Furthermore, continuous-variable quantum codes can be generated using the polarizer and beam splitter systems [28]. This research is supported by the Institute of Advanced Photonics Science, Nanotechnology Research Alliance, Universiti Teknologi Malaysia (UTM), and UTM IDF financial support.

II. THEORY AND SYSTEM

The schematic diagram of the proposed system is shown in Fig. 1. A soliton pulse with 20 ns pulse width and 500 mW peak power is inserted into the system. The parameters of the system are fixed to λ0 = 1.55 μm [29], n0 = 3.34 (InGaAsP/InP) [30-32], and Aeff = 0.50, 0.25 and 0.12 μm² for the different microring resonator radii, respectively, with α = 0.5 dB mm⁻¹ and γ = 0.1 [33-36]. The coupling coefficient (kappa, κ) [37] of the microring resonators ranges from 0.50 to 0.975 [38-39].

Fig. 1: Schematic diagram of single and multiple narrow-pulse switching generation for continuous-variable quantum key distribution with different time slots, where PBS: polarizing beam splitter, Ds: detectors, Rs: ring radii and κs: coupling coefficients

Input optical field (Ein) of the bright soliton pulse can be expressed as [40-41]

$$E_{in} = A\,\mathrm{sech}\!\left[\frac{T}{T_{0}}\right]\exp\!\left[\frac{z}{2L_{D}} - i\omega_{0}t\right], \qquad (1)$$

where A and z are the optical field amplitude and propagation length, respectively [42-43]. T is the soliton pulse propagation time in a frame moving at the group velocity [44-45], T = t − β1z, where β1 and β2 [46] are the coefficients of the linear and second-order terms of the Taylor expansion of the propagation constant [13]. L_D = T0²/|β2| is the dispersion length of the soliton pulse [47], and the carrier frequency of the soliton is ω0 [48-50]. When the soliton peak intensity |β2/ΓT0²| is given, T0 is known [51-52]. The nonlinear length is given by L_NL = 1/(Γφ_NL), where Γ = n2k0 is the nonlinear length scale; thus L_NL = L_D should be satisfied [53].
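As a numerical illustration of these definitions, the short Python sketch below evaluates the sech envelope of Eq. (1) at z = 0 and the dispersion length; the values of T0 and β2 are illustrative assumptions, not parameters taken from the paper.

```python
import numpy as np

# Illustrative numbers only: T0 = 0.4 ps and |beta2| = 20 ps^2/km are assumed.
T0 = 0.4e-12                             # soliton width parameter (s)
beta2 = 20e-27                           # GVD coefficient (s^2/m)
LD = T0**2 / abs(beta2)                  # dispersion length, L_D = T0^2/|beta2|
print(f"L_D = {LD:.2f} m")

A = 1.0
t = np.linspace(-5 * T0, 5 * T0, 1001)
envelope = A / np.cosh(t / T0)           # sech envelope of Eq. (1) at z = 0
fwhm = 2 * np.arccosh(np.sqrt(2)) * T0   # FWHM of |E|^2, i.e. 1.763*T0
print(f"FWHM = {fwhm * 1e12:.3f} ps")
```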
The refractive index (n) of light within the medium is given by [54]

$$n = n_{0} + n_{2}I = n_{0} + \left(\frac{n_{2}}{A_{eff}}\right)P, \qquad (2)$$

where n0 and n2 are the linear and nonlinear refractive indices, respectively, and I and P are the optical intensity and optical power. Aeff, the effective mode core area of the device, ranges from 0.50 to 0.10 μm² [54]. As the resonant output is formed, the normalized output [55] of the light field, i.e., the ratio between the output field Eout(t) and the input field Ein(t) in each roundtrip, can be expressed as [56]

$$\left|\frac{E_{out}(t)}{E_{in}(t)}\right|^{2} = (1-\gamma)\left[1 - \frac{\left(1-(1-\gamma)x^{2}\right)\kappa}{\left(1-x\sqrt{1-\gamma}\sqrt{1-\kappa}\right)^{2} + 4x\sqrt{1-\gamma}\sqrt{1-\kappa}\,\sin^{2}\!\left(\frac{\phi}{2}\right)}\right]. \qquad (3)$$

Equation (3) describes the particular case of a Fabry-Perot cavity with an input and output mirror of field reflectivity (1−κ) and a fully reflecting mirror [57-58]. κ is the coupling coefficient; x = exp(−αL/2) represents a roundtrip loss coefficient; φ = φ0 + φNL, where φ0 = kLn0 and φNL = kLn2|Ein|² are the linear and nonlinear phase shifts; and k = 2π/λ is the wave propagation constant. L and α are the waveguide length and linear absorption coefficient, respectively [59]. The simulated results are based on the solution of the nonlinear Schrödinger equation (NLSE) for the case of ring resonators, using MATLAB programming.
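Although the authors use MATLAB, the throughput of Eq. (3) is easy to evaluate in a few lines of Python; all numerical parameters below are illustrative assumptions rather than the paper's exact simulation settings.

```python
import numpy as np

# Illustrative parameters (assumed, not the paper's exact values)
kappa, gamma = 0.5, 0.1                  # coupling coefficient and fractional loss
alpha = 0.5 * 230.26                     # 0.5 dB/mm expressed in 1/m (1 dB/mm = 230.26 1/m)
L = 2 * np.pi * 10e-6                    # circumference of a ring of radius 10 um
x = np.exp(-alpha * L / 2)               # one-roundtrip amplitude loss factor

phi = np.linspace(0, 4 * np.pi, 2000)    # total phase phi = phi0 + phiNL
root = x * np.sqrt(1 - gamma) * np.sqrt(1 - kappa)
T = (1 - gamma) * (1 - (1 - (1 - gamma) * x**2) * kappa /
                   ((1 - root)**2 + 4 * root * np.sin(phi / 2)**2))
print(T.min(), T.max())                  # resonance dips of the throughput port
```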

III. SIMULATION RESULTS AND DISCUSSION

A large bandwidth can be generated within the microring device, from which the required signals can form the fixed communication network. The nonlinear refractive index is n2 = 1.5 × 10⁻²⁰ m²/W, and the waveguide loss used is 0.5 dB mm⁻¹. As shown in Fig. 2, a large bandwidth is formed within the first and second ring devices, and the compressed bandwidth is obtained within rings R3 and R4. Attenuation of the optical power within the microring device is required in order to keep the output gain constant. The ring radii are R1 = 10 μm, R2 = 7 μm, R3 = 5 μm and R4 = 2 μm.

Fig. 2: Results obtained when temporal soliton is localized within a microring device with 20,000 roundtrips, where (a): Chaotic signals from R1, (b): Chaotic signals from R2, (c): Trapping of temporal soliton, (d): Localized temporal soliton with FWHM of 0.7 ps

Figure 3 shows the results when temporal and spatial optical soliton pulses are localized within the microring device and add/drop filter system over 20,000 roundtrips; a single soliton with FWHM = 42 fs is generated. A multi-soliton can also be generated, with FWHM and FSR of 20 ps and 0.6 ns, respectively. Here the add/drop ring radii and their coupling coefficients are the same, Rad = 200 μm with coupling coefficients κ1 = κ2 = 0.1.


Fig. 3: Results of temporal and spatial soliton generation, where (a): Chaotic signals from R1, (b): Chaotic signals from R2, (c): filtering signals, (d): Localized temporal soliton with FWHM of 42 fs, (e): Spatial soliton pulses, (f): Temporal soliton with FSR=0.6 ns and FWHM=20 ps

Each pair of polarization-entangled photons can thus be formed among different time frames by applying the polarization control unit and polarizing beam splitter (PBS) shown in Fig. 1; the pairs can be constituted by the two polarization orientation angles [0, 90] [60]. Here we introduce the technique used to generate the photons. A polarization device distinguishes the basic vertical and horizontal polarization states, corresponding to an optical switch between the short and the long pulses. We assume horizontally polarized pulses with a temporal separation of Δt, and a coherence time of the consecutive pulses larger than Δt. The following state is then produced at time t1, as in equation (4):

|Φ>_p = |1, H>_s |1, H>_i + |2, H>_s |2, H>_i    (4)

In the ket |k, H>, k is the number of the time slot (1 or 2) and the letter denotes the state of polarization [horizontal |H> or vertical |V>] [61]; the subscript identifies the signal (s) or the idler (i) state. The delay circuit comprises a coupler and the difference between the round-trip times of the micro-ring resonator, which is equal to Δt [62]. An |H> can be converted into a |V> at the delay circuit output; that is, the delay circuit performs the conversion

|k, H>  ->  r |k, H> + t² exp(iφ) |k+1, V> + r t² exp(i2φ) |k+2, H> + r² t² exp(i3φ) |k+3, V>    (5)

where t and r are the amplitude transmittances to the cross and bar ports of the coupler. Equation (4) is then converted into the polarized state by the delay circuit as

|Φ> = [|1, H>_s + exp(iφ_s) |2, V>_s] [|1, H>_i + exp(iφ_i) |2, V>_i] + [|2, H>_s + exp(iφ_s) |3, V>_s] [|2, H>_i + exp(iφ_i) |3, V>_i]
    = |1, H>_s |1, H>_i + exp(iφ_i) |1, H>_s |2, V>_i + exp(iφ_s) |2, V>_s |1, H>_i + exp[i(φ_s+φ_i)] |2, V>_s |2, V>_i
      + |2, H>_s |2, H>_i + exp(iφ_i) |2, H>_s |3, V>_i + exp(iφ_s) |3, V>_s |2, H>_i + exp[i(φ_s+φ_i)] |3, V>_s |3, V>_i    (6)

As an outcome, selecting the terms of the second time slot gives the polarization-entangled state

|Φ> = |2, H>_s |2, H>_i + exp[i(φ_s+φ_i)] |2, V>_s |2, V>_i    (7)
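The bookkeeping of Eq. (5) can be sketched by representing each ket |k, P> as a (time slot, polarization) key with a complex amplitude; r, t and φ below are arbitrary illustrative values, not quantities fitted to the system.

```python
from collections import defaultdict
import cmath

r, t, phi = 0.7, 0.714, 0.3              # illustrative amplitudes/phase (|r|^2+|t|^2 ~ 1)

def delay(ket):
    """Map |k,H> to the superposition of Eq. (5)."""
    k, _ = ket                           # input assumed horizontally polarised
    return {
        (k,     "H"): r,
        (k + 1, "V"): t**2 * cmath.exp(1j * phi),
        (k + 2, "H"): r * t**2 * cmath.exp(2j * phi),
        (k + 3, "V"): r**2 * t**2 * cmath.exp(3j * phi),
    }

state = defaultdict(complex)
for ket, amp in delay((1, "H")).items():
    state[ket] += amp                    # accumulate amplitudes per ket
for ket, amp in sorted(state.items()):
    print(ket, f"{amp:.3f}")
```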

Because of the Kerr nonlinearity of the optical device, the strong pulses acquire an intensity-dependent phase shift during propagation [63]. The polarization angle adjustment device is used to investigate the orientation and optical output intensity [64]. Therefore, soliton signals can be used to generate photons that remain secure and unknown during propagation within communication systems [65]. The generated secure photons can be transferred via the wireless access point and network communication system shown in Fig. 4.

Fig.4: System of optical photon transmission using a router and wireless access point

A wireless access system transmits data to different users via a wireless connection [66-67]. The information can be sent to the Internet using a physical, wired Ethernet connection [68]. The method also works in reverse: the router system receives information from the Internet, translates it into an analog signal and sends it to the computer's wireless adapter.

IV. FUTURE WORK

It is very difficult to construct a useful quantum key distribution system based on entangled photon generation and switching alone. To overcome this problem, a quantum memory can be used to store the quantum state of light. A ring resonator system can act as a quantum memory in which the quantum state of a photon is transferred to another quantum system. After storage for a very short time, of the order of nanoseconds or picoseconds, the quantum state can be converted back to a photon state at an arbitrary time. The probability of entanglement switching can thus be improved, and the decay of the entangled-photon generation rate is avoided. This technique of quantum entangled-photon switching is called a quantum repeater, which uses entangled states stored in quantum memories.

V. CONCLUSION

Photon switching can be performed using single and multiple temporal and spatial soliton pulses generated by an MRR system. We have shown that a large bandwidth of arbitrary soliton pulses can be generated and compressed within a micro-waveguide, and chaotic signal generation using a soliton pulse in nonlinear microring resonators has been presented. Localized soliton light supports secure, high-capacity optical communication, and the localized spatial and temporal soliton pulses are useful for generating entangled photon pairs, providing quantum keys applicable to wireless communication networks. In this study, ultrashort single optical solitons with FWHM of 0.7 ps and 42 fs, and a multi-soliton with FWHM and FSR of 20 ps and 0.6 ns, were generated and propagated along the entangled-photon generation system, which is connected to the drop port of the add/drop filter connected in turn to the series of microring resonators. We have thus analysed the entangled photons generated by chaotic signals in the series MRR devices, applicable to optical wireless communication systems.

REFERENCES
[1] I. S. Amiri, A. Afroozeh, J. Ali and P. Yupapin, "Generation of Quantum Codes Using Up and Down Link Optical Soliton", Jurnal Teknologi, 55, 97-106, (2012).
[2] I. S. Amiri, A. Nikoukar, A. Shahidinejad, M. Ranjbar, J. Ali and P. P. Yupapin, "Generation of Quantum Photon Information Using Extremely Narrow Optical Tweezers for Computer Network Communication", GSTF Journal on Computing (JoC), 2(1), (2012).
[3] M. Kouhnavard, A. Afroozeh, M. A. Jalil, I. S. Amiri, J. Ali and P. P. Yupapin, "Optical Bistability in a FORR", in Proc. International Conference on Experimental Mechanics (ICEM), Kuala Lumpur, Malaysia, 29 November-1 December 2010.
[4] I. S. Amiri, A. Afroozeh, M. Bahadoran, J. Ali and P. Yupapin, "Molecular Transporter System for Qubits Generation", Jurnal Teknologi, 55, 155-165, (2012).
[5] A. Nikoukar, I. S. Amiri, A. Shahidinejad, A. Shojaei, J. Ali and P. Yupapin, "MRR Quantum Dense Coding for Optical Wireless Communication System Using Decimal Convertor", IEEE Explore, 2012.
[6] M. Kouhnavard, I. S. Amiri, M. Jalil, A. Afroozeh, J. Ali and P. P. Yupapin, "QKD via a Quantum Wavelength Router Using Spatial Soliton", IEEE Explore, 2010.
[7] E. Y. Zhu, Z. Tang, L. Qian, L. G. Helt, M. Liscidini, J. Sipe, C. Corbari, A. Canagasabey, M. Ibsen and P. G. Kazansky, "Direct Generation of Polarization-Entangled Photon Pairs in a Poled Fiber", Physical Review Letters, 108(21), 213902, (2012).
[8] J. Leach, E. Bolduc, D. J. Gauthier and R. W. Boyd, "Secure Information Capacity of Photons Entangled in Many Dimensions", Physical Review A, 85(6), 060304, (2012).
[9] D. S. Simon, N. Lawrence, J. Trevino, L. D. Negro and A. V. Sergienko, "Quantum Key Distribution with Fibonacci Orbital Angular Momentum States", arXiv preprint arXiv:1206.3548, (2012).
[10] B. G. Christensen, K. T. McCusker, D. J. Gauthier and P. G. Kwiat, "High-Speed Quantum Key Distribution Using Hyper-Entangled Photons", Optical Society of America, 2012.
[11] A. Aspuru-Guzik and P. Walther, "Photonic Quantum Simulators", Nature Physics, 8(4), 285-291, (2012).
[12] I. S. Amiri, M. H. Khanmirzaei, M. Kouhnavard, P. P. Yupapin and J. Ali, Quantum Entanglement Using Multi Dark Soliton Correlation for Multivariable Quantum Router, Nova Publisher, New York, 2012.
[13] P. Yupapin, M. Jalil, I. S. Amiri, I. Naim and J. Ali, "New Communication Bands Generated by Using a Soliton Pulse within a Resonator System", Circuits and Systems, 1, (2010).
[14] M. A. Jalil, I. S. Amiri, C. Teeka, J. Ali and P. Yupapin, "All-Optical Logic XOR/XNOR Gate Operation Using Microring and Nanoring Resonators", Global Journal of Physics Express, 1(1), 15-22, (2011).
[15] M. Bahadoran, I. S. Amiri, A. Afroozeh, J. Ali and P. P. Yupapin, "Analytical Vernier Effect for Silicon Panda Ring Resonator", in Proc. National Science Postgraduate Conference (NSPC), Universiti Teknologi Malaysia, 15-17 November 2011.
[16] C. Tanaram, C. Teeka, R. Jomtarak, P. Yupapin, M. Jalil, I. S. Amiri and J. Ali, "ASK-to-PSK Generation Based on Nonlinear Microring Resonators Coupled to One MZI Arm", Procedia Engineering, 8, 432-435, (2011).
[17] I. S. Amiri, M. A. Jalil, A. Afroozeh, M. Kouhnavard, J. Ali and P. P. Yupapin, "Controlling Center Wavelength and Free Spectrum Range by MRR Radii", in Proc. Faculty of Science Postgraduate Conference (FSPGC), Universiti Teknologi Malaysia, 5-7 October 2010.
[18] A. Afroozeh, I. S. Amiri, M. Kouhnavard, M. Bahadoran, M. A. Jalil, J. Ali and P. P. Yupapin, "Dark and Bright Soliton Trapping Using NMRR", in Proc. ICEM, Kuala Lumpur, Malaysia, 29 November-1 December 2010.
[19] I. S. Amiri, A. Afroozeh, I. Nawi, M. Jalil, A. Mohamad, J. Ali and P. Yupapin, "Dark Soliton Array for Communication Security", Procedia Engineering, 8, 417-422, (2011).
[20] I. S. Amiri, G. Vahedi, A. Nikoukar, A. Shojaei, J. Ali and P. Yupapin, "Decimal Convertor Application for Optical Wireless Communication by Generating of Dark and Bright Signals of Soliton", International Journal of Engineering Research and Technology, 1(5), (2012).
[21] M. Kouhnavard, A. Afroozeh, I. S. Amiri, M. A. Jalil, J. Ali and P. P. Yupapin, "New System of Chaotic Signal Generation Using MRR", in Proc. ICEM, Kuala Lumpur, Malaysia, 29 November-1 December 2010.
[22] I. S. Amiri, A. Shahidinejad, A. Nikoukar, M. Ranjbar, J. Ali and P. P. Yupapin, "Digital Binary Codes Transmission via TDMA Networks Communication System Using Dark and Bright Optical Soliton", GSTF Journal on Computing (JoC), 2(1), (2012).
[23] I. S. Amiri, A. Nikoukar, G. Vahedi, A. Shojaei, J. Ali and P. Yupapin, "Frequency-Wavelength Trapping by Integrated Ring Resonators for Secured Network and Communication Systems", International Journal of Engineering Research & Technology (IJERT), 1(5), (2012).
[24] I. S. Amiri, A. Nikoukar, A. Shahidinejad, J. Ali and P. Yupapin, "Generation of Discrete Frequency and Wavelength for Secured Computer Networks System Using Integrated Ring Resonators", IEEE Explore, 2012.
[25] A. Shahidinejad, A. Nikoukar, I. S. Amiri, M. Ranjbar, A. Shojaei, J. Ali and P. Yupapin, "Network System Engineering by Controlling the Chaotic Signals Using Silicon Micro Ring Resonator", IEEE Explore, 2012.
[26] I. S. Amiri, A. Nikoukar and J. Ali, "Quantum Information Generation Using Optical Potential Well", in Proc. Network Technologies & Communications Conference, Singapore, 2010-2011.
[27] I. S. Amiri, G. Vahedi, A. Shojaei, A. Nikoukar, J. Ali and P. Yupapin, "Secured Transportation of Quantum Codes Using Integrated PANDA-Add/Drop and TDMA Systems", International Journal of Engineering Research & Technology (IJERT), 1(5), (2012).
[28] A. Nikoukar, I. S. Amiri and J. Ali, "Secured Binary Codes Generation for Computer Network Communication", in Proc. Network Technologies & Communications (NTC) Conference, Singapore, 2010-2011.
[29] N. J. Ridha, F. K. Mohamad, I. S. Amiri, Saktioto, J. Ali and P. P. Yupapin, "Controlling Center Wavelength and Free Spectrum Range by MRR Radii", in Proc. ICEM, Kuala Lumpur, Malaysia, 29 November-1 December 2010.
[30] M. Imran, R. A. Rahman and I. S. Amiri, "Fabrication of Diffractive Optical Element Using Direct Writing CO2 Laser Irradiation", in Proc. FSPGC, Universiti Teknologi Malaysia, 5-7 October 2010.
[31] S. Daud, M. A. Jalil, I. S. Amiri, Saktioto, R. A. Rahman, J. Ali and P. P. Yupapin, "FBG Sensing System for Outdoor Temperature Measurement", in Proc. ICEM, Kuala Lumpur, Malaysia, 29 November-1 December 2010.
[32] S. Daud, M. A. Jalil, I. S. Amiri, Saktioto, R. A. Rahman, J. Ali and P. P. Yupapin, "FBG Simulation and Experimental Temperature Measurement", in Proc. ICEM, Kuala Lumpur, Malaysia, 29 November-1 December 2010.
[33] A. Afroozeh, I. S. Amiri, J. Ali and P. Yupapin, "Determination of FWHM for Soliton Trapping", Jurnal Teknologi, 55, 77-83, (2012).
[34] A. A. Shojaei and I. S. Amiri, "DSA for Secured Optical Communication", in Proc. International Conference for Nanomaterials Synthesis and Characterization (INSC), Kuala Lumpur, Malaysia, 4-5 July 2011.
[35] A. Afroozeh, M. Kouhnavard, I. S. Amiri, M. A. Jalil, J. Ali and P. P. Yupapin, "Effect of Center Wavelength on MRR Performance", in Proc. FSPGC, Universiti Teknologi Malaysia, 5-7 October 2010.
[36] I. S. Amiri, J. Ali and P. Yupapin, "Enhancement of FSR and Finesse Using Add/Drop Filter and PANDA Ring Resonator Systems", International Journal of Modern Physics B, 26(04), (2012).
[37] N. J. Ridha, F. K. Mohamad, I. S. Amiri, Saktioto, J. Ali and P. P. Yupapin, "Soliton Signals and the Effect of Coupling Coefficient in MRR Systems", in Proc. ICEM, Kuala Lumpur, Malaysia, 29 November-1 December 2010.
[38] A. Afroozeh, M. Bahadoran, I. S. Amiri, A. R. Samavati, J. Ali and P. P. Yupapin, "Fast Light Generation Using GaAlAs/GaAs Waveguide", Jurnal Teknologi, 57, 7, (2012).
[39] M. A. Jalil, I. S. Amiri, M. Kouhnavard, A. Afroozeh, J. Ali and P. P. Yupapin, "Finesse Improvements of Light Pulses within MRR System", in Proc. FSPGC, Universiti Teknologi Malaysia, 5-7 October 2010.
[40] M. A. Jalil, I. S. Amiri, J. Ali and P. P. Yupapin, "Dark-Bright Solitons Conversion System via an Add/Drop Filter for Signal Security Application", in Proc. ICEM, Kuala Lumpur, Malaysia, 29 November-1 December 2010.
[41] A. Afroozeh, I. S. Amiri, M. A. Jalil, N. J. Ridha, J. Ali and P. P. Yupapin, "Dark and Bright Soliton Trapping Using NMRR", in Proc. ICEM, Kuala Lumpur, Malaysia, 29 November-1 December 2010.
[42] A. Afroozeh, I. S. Amiri, M. Kouhnavard, M. Jalil, J. Ali and P. Yupapin, "Optical Dark and Bright Soliton Generation and Amplification", IEEE Explore, 2010.
[43] I. S. Amiri, A. Shahidinejad, A. Nikoukar, J. Ali and P. Yupapin, "A Study of Dynamic Optical Tweezers Generation for Communication Networks", International Journal of Advances in Engineering & Technology (IJAET), 4(2), 38-45, (2012).
[44] I. S. Amiri, M. Ranjbar, A. Nikoukar, A. Shahidinejad, J. Ali and P. Yupapin, "Multi Optical Soliton Generated by PANDA Ring Resonator for Secure Network Communication", IEEE Explore, 2012.
[45] A. Afroozeh, I. S. Amiri, M. A. Jalil, M. Kouhnavard, J. Ali and P. P. Yupapin, "Multi Soliton Generation for Enhance Optical Communication", Applied Mechanics and Materials, 83, 136-140, (2011).
[46] M. A. Jalil, I. S. Amiri, J. Ali and P. P. Yupapin, "Fast and Slow Lights via an Add/Drop Device", in Proc. ICEM, Kuala Lumpur, Malaysia, 29 November-1 December 2010.
[47] A. Afroozeh, I. S. Amiri, M. Kouhnavard, M. Bahadoran, M. A. Jalil, J. Ali and P. P. Yupapin, "Optical Memory Time Using Multi Bright Soliton", in Proc. ICEM, Kuala Lumpur, Malaysia, 29 November-1 December 2010.
[48] A. Afroozeh, I. S. Amiri, M. Bahadoran, J. Ali and P. Yupapin, "Simulation of Soliton Amplification in Micro Ring Resonator for Optical Communication", Jurnal Teknologi, 55, 271-277, (2012).
[49] A. Afroozeh, I. S. Amiri, A. Samavati, J. Ali and P. Yupapin, "THz Frequency Generation Using MRRs for THz Imaging", IEEE Explore, 2012.
[50] I. S. Amiri, A. Afroozeh, M. Bahadoran, J. Ali and P. P. Yupapin, "Up and Down Link of Soliton for Network Communication", in Proc. NSPC, 15-17 November 2011.
[51] A. A. Shojaei and I. S. Amiri, "Soliton for Radio Wave Generation", in Proc. INSC, Kuala Lumpur, Malaysia, 4-5 July 2011.
[52] N. Suwanpayak, S. Songmuang, M. Jalil, I. S. Amiri, I. Naim, J. Ali and P. Yupapin, "Tunable and Storage Potential Wells Using Microring Resonator System for Bio-Cell Trapping and Delivery", IEEE Explore, 2010.
[53] A. Afroozeh, M. Bahadoran, I. S. Amiri, A. R. Samavati, J. Ali and P. P. Yupapin, "Fast Light Generation Using Microring Resonators for Optical Communication", in Proc. NSPC, Universiti Teknologi Malaysia, 15-17 November 2011.
[54] C. Teeka, S. Songmuang, R. Jomtarak, P. Yupapin, M. Jalil, I. S. Amiri and J. Ali, "ASK-to-PSK Generation Based on Nonlinear Microring Resonators Coupled to One MZI Arm", AIP Conference Proceedings, 2011.
[55] F. K. Mohamad, N. J. Ridha, I. S. Amiri, Saktioto, J. Ali and P. P. Yupapin, "Effect of Center Wavelength on MRR Performance", in Proc. ICEM, Kuala Lumpur, Malaysia, 29 November-1 December 2010.
[56] I. S. Amiri, K. Raman, A. Afroozeh, M. Jalil, I. Nawi, J. Ali and P. Yupapin, "Generation of DSA for Security Application", Procedia Engineering, 8, 360-365, (2011).
[57] I. S. Amiri, M. A. Jalil, F. K. Mohamad, N. J. Ridha, J. Ali and P. P. Yupapin, "Storage of Atom/Molecules/Photon Using Optical Potential Wells", in Proc. ICEM, Kuala Lumpur, Malaysia, 29 November-1 December 2010.
[58] M. Kouhnavard, A. Afroozeh, M. A. Jalil, I. S. Amiri, J. Ali and P. P. Yupapin, "Soliton Signals and the Effect of Coupling Coefficient in MRR Systems", in Proc. FSPGC, Universiti Teknologi Malaysia, 5-7 October 2010.
[59] I. S. Amiri, A. Afroozeh and M. Bahadoran, "Simulation and Analysis of Multisoliton Generation Using a PANDA Ring Resonator System", Chinese Physics Letters, 28, 104205, (2011).
[60] A. Shaham and H. Eisenberg, "Experimental Study of the Decoherence of Biphoton Qutrits", Optical Society of America, 2012.
[61] E. Megidish, T. Shacham, A. Halevy, L. Dovrat and H. Eisenberg, "Resource Efficient Source of Multiphoton Polarization Entanglement", Physical Review Letters, 109(8), 080504, (2012).
[62] D. Bonneau, E. Engin, K. Ohira, N. Suzuki, H. Yoshida, N. Iizuka, M. Ezaki, C. M. Natarajan, M. G. Tanner and R. H. Hadfield, "Quantum Interference and Manipulation of Entanglement in Silicon Wire Waveguide Quantum Circuits", New Journal of Physics, 14(4), 045003, (2012).
[63] M. Siomau, A. A. Kamli, S. A. Moiseev and B. C. Sanders, "Entanglement Creation with Negative Index Metamaterials", Physical Review A, 85(5), 050303, (2012).
[64] I. S. Amiri, M. A. Jalil, F. K. Mohamad, N. J. Ridha, J. Ali and P. P. Yupapin, "Storage of Optical Soliton Wavelengths Using NMRR", in Proc. ICEM, Kuala Lumpur, Malaysia, 29 November-1 December 2010.
[65] F. K. Mohamad, N. J. Ridha, I. S. Amiri, Saktioto, J. Ali and P. P. Yupapin, "Finesse Improvements of Light Pulses within MRR System", in Proc. ICEM, Kuala Lumpur, Malaysia, 29 November-1 December 2010.
[66] G. Murali, R. S. Prasad and K. V. B. Rao, "Effective User Authentication Using Quantum Key Distribution for Wireless Mesh Network", International Journal of Computer Applications, 42(4), 7-12, (2012).
[67] S. Imre and L. Gyongyosi, "Quantum-Assisted and Quantum-Based Solutions in Wireless Systems", arXiv preprint arXiv:1206.5996, (2012).
[68] A. Stute, B. Casabone, P. Schindler, T. Monz, P. Schmidt, B. Brandstätter, T. Northup and R. Blatt, "Tunable Ion-Photon Entanglement in an Optical Cavity", Nature, 485(7399), 482-485, (2012).


BIOGRAPHY OF AUTHORS
I. S. Amiri received his B. Sc (Hons, Applied Physics) from Public University of Oroumiyeh, Iran in 2001 and a gold medalist M. Sc. in Applied Physics from Universiti Teknologi Malaysia, in 2009. He is currently pursuing his Ph.D. in Nano Photonics at the Faculty of Science, Institute of Advanced Photonics Science, Nanotechnology Research Alliance, (UTM), Malaysia. He has authored/co-authored more than 65 technical papers published in journals/conferences and a book chapter. His research interests are in the field of optical soliton communication, signal processing, communication security, quantum cryptography, quantum chaos, optical tweezers and hybrid computing system.

M. Nikmaram received the B.S degree in computer software engineering from Jahad Daneshgahi University of Yazd, Iran in 2010. She is working toward the M.S degree in computer science (Information Security) at the university Technology Malaysia (UTM), Malaysia. Her research interests include cryptography and network security.

A. Shahidinejad received the B.S. degree in computer hardware engineering from Islamic Azad University of Kashan, Iran in 2008 and the M.S. degree in computer architecture from Islamic Azad University of Arak, Iran, in 2010. He is currently working toward the Ph.D. degree in Computer Science at the Universiti Teknologi Malaysia (UTM), Malaysia. His research interests include optical wireless communications and micro ring resonators.

J. Ali received his Ph.D. in plasma physics from Universiti Teknologi Malaysia (UTM) in 1990. At present, he is a professor of photonics at the Institute of Advanced Photonics Science, Nanotech Research Alliance and the Physics Department of UTM. He has authored/co-authored more than 200 technical papers published in international journal, three books and a number of book chapters. His areas of interests are in FBGs, optical solitons, fiber couplers, and nanowaveguides. He is currently the Head of Nanophotonics research group, Nanotech Research Alliance, UTM. Dr. Jalil Ali is a member of OSA, SPIE, and the Malaysian Institute of Physics.


COMPARISON OF DIFFERENT STRESS-STRAIN MODELS FOR CONFINED SELF COMPACTING CONCRETE (SCC) UNDER AXIAL COMPRESSION
P. Srilakshmi and M. V. Seshagirirao
Department of Civil Engineering, JNTUH College of Engineering, Hyderabad, India

ABSTRACT
Self Compacting Concrete offers many advantages over conventional concrete. It is mainly used where great fluidity is required (for example, in sections with a high percentage of reinforcement) and has excellent applicability for elements with complicated shapes and congested reinforcement. An experimental study was made on cylinders and on specimens with square sections confined by lateral reinforcement to investigate the effectiveness of transverse steel on M40 grade concrete under monotonically increasing axial compression. The behavior of SCC cylinders confined by circular hoops and of square prisms confined by rectilinear hoops, with different volumetric ratios and spacings, was compared under axial compression. The effects of the test variables (volumetric ratio, spacing and shape of cross-section) on the behavior of the SCC specimens are presented and discussed. The results reveal that the greater the volume of confinement steel, the greater the increase in peak stress and deformability. The test results of this study were compared with the existing confinement models of Saatcioglu and Razvi, Mendis, Legeron and Paultre, and Mander; the study indicates that Mander's model is closest to the test results.

KEYWORDS: Self Compacting Concrete, confinement, ductility.

I. INTRODUCTION

Casting concrete in sections with high-density reinforcement, such as beams and columns of moment-resisting frames in seismic areas, and placing concrete when making repairs to sections, is very difficult. Achieving proper consolidation may require external or internal vibration, which can be critical in sections with a high percentage of reinforcement. The main features of Self Compacting Concrete (SCC) concern the fresh-state condition (high flowability and rheological stability) [1]. SCC has become more popular in the past decade and has excellent applicability for elements with congested reinforcement and complicated shapes. The compactness of the SCC matrix, due to the higher amount of fine and extra-fine particles, may improve interface-zone properties. SCC consolidates under its own weight without any mechanical vibration (consolidation). In seismic designs requiring heavy reinforcement using this new category of material, engineers must have adequate knowledge of the performance of SCC under different loadings. Confinement effectiveness depends on variables related to the geometry of the section, the reinforcement and the stress in the confining steel [2]. The use of lateral reinforcement results in increased strength and ductility of the confined concrete; square specimens confined by square ties exhibit less strength and ductility than cylindrical specimens. Therefore, as part of a combined experimental and analytical study of the strength and ductility of M40 grade Self Compacting Concrete, this paper presents the results of concentrically loaded circular and square specimens. Most of the analytical models were developed for normally vibrated concrete; an attempt has been made here to evaluate the existing stress-strain models for Self Compacting Concrete by comparing the experimentally obtained curves with the ones estimated by the various models. This paper presents the experimental programme, which includes the materials used and the testing procedure adopted, a discussion of the test results, verification of the experimental results against existing models, and the conclusions drawn from the study.

II. EXPERIMENTAL PROGRAMME

An experimental investigation was carried out to study the stress-strain relationship of confined M40 grade Self Compacting Concrete. The mix was designed by the BIS method and modified as per the EFNARC [3] guidelines; Table 1 shows the details of the mix proportions used in the present study. A total of 27 specimens of M40 grade SCC were tested in axial compression, comprising 9 confined 150 mm × 300 mm square specimens and 12 hoop-confined 150 mm × 300 mm circular specimens. The remaining 6 were prepared as 3 circular and 3 square plain (unconfined) concrete specimens, to establish the properties of unconfined concrete and to compare the confinement effectiveness of the lateral reinforcement. The specimens were cast in nine sets, each consisting of 3 identical specimens, either square or circular. The specimens were studied to investigate the confinement parameters, including the volumetric ratio and spacing of the transverse reinforcement, in addition to the shape of the cross-section. A concrete cover of 10 mm was provided in all confined specimens. Fig. 1 shows the details of the transverse reinforcement provided in the specimens.
Table 1. Mix Proportions for M40 Grade Self Compacting Concrete

Cement (kg/m³): 339
Water (kg/m³): 191
Coarse aggregate (kg/m³): 800
Sand (kg/m³): 710
Fly ash (kg/m³): 165
Superplasticizer (l/m³): --
Viscosity modifying agent (l/m³): 0.2
28-day cube compressive strength (MPa): 49.15

Figure 1. Arrangement of circular hoops

2.1 Material properties

An M40 Self Compacting Concrete design mix was used in the study. The materials consisted of 53 grade Ordinary Portland Cement conforming to IS 12269-1987 (reaffirmed 1999); natural river sand belonging to zone II and crushed stone coarse aggregate of 20 mm maximum size, conforming to IS 383-1970; Type II fly ash obtained from Vijayawada thermal power station, conforming to IS 3812; potable water for mixing and curing; and superplasticizer (SP) and viscosity modifying agent (VMA) admixtures to satisfy the required fresh properties of SCC. To obtain the desired fresh properties and strength, several trial mixes were made, and the final mix proportion was determined after satisfying the fresh and hardened properties. The mix satisfied the EFNARC guidelines. Table 1 shows the mix proportions and the 28-day cube compressive strength. The properties of unconfined concrete were obtained by testing plain cylindrical and square specimens. Mild steel of grade 250 MPa and 6 mm diameter was used as lateral reinforcement.

2.2 Testing Procedure

The axial displacement of the specimens was recorded over a gauge length of 200 mm using two dial gauges attached on two opposite faces of the specimen; the experimental setup is shown in Fig. 2. The specimens were loaded in a 1000 kN capacity strain-controlled universal testing machine. Monotonic concentric compression was applied at a very slow strain rate, from zero load to failure. The time taken to complete each test ranged from 30 to 50 minutes, depending on the degree of confinement.

Figure 2. Experimental Set up

III. TEST RESULTS AND DISCUSSION

All specimens behaved in a similar manner initially and exhibited relatively linear load-deformation behavior in the ascending part. The plain SCC specimens failed in a brittle manner, and very few readings could be taken in the descending portion. The behavior of the confined specimens was comparatively ductile and complex, unlike that of the plain unconfined specimens. The confined specimens were characterized sequentially by the development of surface cracks, cover spalling and crushing of the core concrete. The specimens were analysed to obtain the stress-strain curves of confined concrete. Table 2 shows the summary of test results for the various specimens. Fig. 3 shows the appearance of the specimens after the test.

Figure 3. Appearance of circular specimen after failure.

Table 2: Test results (fcc = peak stress of confined concrete; fcc/fco = ratio of confined to unconfined peak stress; ecc = strain at peak stress; ecc/eco = ratio of strain at confined peak stress to strain at unconfined peak stress)

Designation  Shape     Transverse steel (%)  fcc (MPa)  fcc/fco  ecc (mm/mm)  ecc/eco
M400R        Circular  0.000                 36.16      -        0.00215      -
M403R4       Circular  0.912                 45.20      1.250    0.00260      1.209
M404R4       Circular  1.216                 47.62      1.317    0.00280      1.302
M405R4       Circular  1.520                 49.56      1.370    0.00301      1.400
M406R4       Circular  1.824                 51.33      1.419    0.00324      1.507
M400RP       Square    0.000                 34.69      -        0.00203      -
M404R4P      Square    1.216                 40.22      1.159    0.00254      1.251
M405R4P      Square    1.520                 42.56      1.227    0.00274      1.359
M406R4P      Square    1.824                 45.09      1.300    0.00296      1.458

3.1 Volumetric ratio and spacing of transverse steel


The transverse confinement pressure exerted on the concrete core is directly related to the quantity of transverse steel [4]. Fig. 4 shows the influence of the volumetric ratio of transverse steel on the behavior of confined concrete. The strength enhancement was 25%, 31%, 37% and 42% for volumetric ratios of transverse steel of 0.912%, 1.216%, 1.520% and 1.824%, respectively. There is also an increase in ductility: the strain at peak stress of confined concrete, an indicator of ductility, exceeded that of the corresponding unconfined concrete by 21% to 51%.
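For instance, the enhancement factors for the most heavily confined circular specimen (M406R4) follow directly from Table 2:

$$\frac{f_{cc}}{f_{co}} = \frac{51.33}{36.16} \approx 1.42, \qquad \frac{\varepsilon_{cc}}{\varepsilon_{co}} = \frac{0.00324}{0.00215} \approx 1.51,$$

i.e. a 42% gain in strength and a 51% gain in deformability over the unconfined companion specimen M400R.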

Figure 4. Stress strain curves for various volumetric ratios of confinement.

3.2 Section geometry


The specimens of each matched pair had similar concrete strength and volumetric ratios of lateral steel. It is clear from Fig. 5 that circular sections with circular hoops as confinement are more effective than square sections with rectilinear confinement, as evidenced by the increase in strength and ductility from square to circular sections [5]. The maximum increases in confined strength and ductility, 42% and 51%, were observed for circular sections, against 30% and 46%, respectively, for square sections at 1.824% confinement. Therefore, to achieve the same percentage increase in strength and ductility, square sections require a higher volumetric ratio of transverse steel than circular sections.

Figure 5. Effect of section geometry with various confinement ratios.

IV. VERIFICATION OF EXPERIMENTAL RESULTS WITH PREVIOUS MODELS

4.1 Mander (1988)


Mander [4] proposed the stress-strain model as a single curve. The model applies the concept of an effectively confined area to determine the effective lateral confinement pressure through the confinement effectiveness coefficient, defined as the ratio of the effectively confined area to the core area. The Mander model predicts stress in both the ascending and descending portions comparatively well.
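For reference, the commonly cited single-equation form of the Mander curve (quoted here from the literature, not reproduced from the original text of this paper) is

$$f_c = \frac{f'_{cc}\,x\,r}{r - 1 + x^r}, \qquad x = \frac{\varepsilon_c}{\varepsilon_{cc}}, \qquad r = \frac{E_c}{E_c - E_{sec}}, \qquad E_{sec} = \frac{f'_{cc}}{\varepsilon_{cc}},$$

where f'cc and ecc are the confined peak stress and the corresponding strain, and Ec is the tangent modulus of elasticity of the concrete.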

4.2 Saatcioglu-Razvi model (1999)


Saatcioglu and Razvi [5] proposed a model for confined normal and high strength concrete, using their own test results as well as those of other researchers. The parameters included in the model were the type, volumetric ratio, spacing, yield strength and arrangement of the transverse reinforcement, the strength of the concrete, and the section geometry. It is a two-part stress-strain model: parabolic in the ascending branch up to the peak, with a linear descending branch down to 20% of the peak stress.

4.3 Mendis (2000)
The model suggested by Mendis [6] has a parabolic ascending portion, a linear descending portion and a horizontal residual stress part. The proposed equation is of the form given by Kent and Park (1971). The model, referred to as the modified Scott model, is based on the Scott model (1982) suggested for NSC, adjusting the value of the softening slope parameter (Zm), the residual stress parameter (R) and the strain at maximum stress of unconfined concrete (ec). The level of confinement and the lower dilation of HSC are reflected through a parameter known as the confinement index (K).

4.4 Legeron and Paultre (2003)


Cusson and Paultre [7] proposed a stress-strain model wherein the stress in the confining steel is calculated by an iterative approach. It was later modified by Legeron and Paultre, who adopted a direct approach. Legeron and Paultre [8] proposed a stress-strain confinement model for normal and high strength concretes based on the test results of square, rectangular and circular columns tested by themselves and by other researchers. The model incorporates almost all the parameters of confinement. The stress-strain relationship is the same as that proposed by Cusson and Paultre, but the parameters of the model were recalibrated on the basis of a large number of test data.

To investigate the relative performance of the existing models described above for SCC, the stress-strain curves from the present tests were compared with those predicted by the various models. Fig. 6 to Fig. 11 illustrate the comparisons of the experimental and predicted stress-strain curves. Almost all the models estimate the ascending part of the stress-strain curve correctly, but there are wide variations in the descending portion. The comparisons for circular sections indicate that the Mander model slightly overestimates the peak strength, while the other three underestimate it. The Legeron and Paultre and Saatcioglu and Razvi models overestimate the strain at peak stress, and the Mendis model underestimates it. A review of the above models indicates that the Mendis, Legeron and Paultre, and Saatcioglu and Razvi models underestimate the test curves. Though the Mander model slightly overestimates the peak stress, it closely follows the experimental stress-strain curves for most of the specimens. Therefore, it can be concluded that the Mander model can be employed to predict the uniaxial response of M40 SCC circular specimens with a reasonable degree of accuracy.

Figure 6. Comparison of Confined axial stress strain curves for 0.912% confined circular specimen

Figure 7. Comparison of Confined axial stress strain curves for 1.216% confined circular specimen


Figure 8. Comparison of Confined axial stress strain curves for 1.520% confined circular specimen

Figure 9. Comparison of Confined axial stress strain curves for 1.824% confined circular specimen

Figure 10. Measured Peak stress to Predicted Peak Stress with Various models


Figure 11. Measured Strain at Peak stress to Predicted Strain at Peak Stress with Various models

V. CONCLUSIONS

This paper presents the results of confined circular and square M40 grade SCC specimens subjected to axial compression. A comparative study of existing stress-strain models for normal strength concrete is also reported. The following conclusions are drawn from the experimental results.
1. An increase in the volume of transverse reinforcement directly improves both the strength and the ductility of confined SCC. For circular sections, the increase in strength was found to be 25% at 0.912% volume of transverse steel and 42% at 1.824%.
2. The ductility, i.e. the ratio of the strain at peak stress of confined concrete to the strain at peak stress of the corresponding unconfined concrete, increased by 21% to 51% as the volumetric ratio rose from 0.912% to 1.824% for circular specimens.
3. Circular sections with circular hoops as confinement are more effective than square sections with rectilinear confinement. The increase in strength was 42% for circular sections and 30% for square sections at 1.824% volume of transverse steel.
4. The increase in ductility was 51% for circular sections and 46% for square sections at 1.824% volume of transverse steel. Hence, to achieve the same percentage increase in ductility, square sections require a higher volumetric ratio of transverse steel than circular sections.
5. From the studies on the applicability of the various existing models to predict the experimental behavior, it is concluded that almost all the models estimate the ascending part of the stress-strain curve correctly, but there are wide variations in the descending portion. The Mander model can be used to predict the uniaxial response of confined M40 SCC specimens with reasonable accuracy.

ACKNOWLEDGEMENT
The authors would like to thank the Department of Civil Engineering, JNTUH College of Engineering, for providing the laboratory facilities to carry out this work.

REFERENCES
[1]. Okamura, H., and Ozawa, K., (1995) "Mix design of self-compacting concrete", Concrete Library of JSCE, 25, pp. 107-120.
[2]. Han, B. S., Shin, S. W., and Bahn, B. Y., (2006) "Confinement effects of High-Strength Reinforced concrete tied Columns", International Journal of Concrete Structures and Materials, Vol. 18, No. 2E, pp. 133-142.
[3]. The European Guidelines (EFNARC) for Self Compacting Concrete, Specifications, Production and Use, May 2005.
[4]. Mander, J. B., Priestley, M. J. N., and Park, R., (1988) "Theoretical Stress Strain model for confined concrete", ASCE Journal of Structural Engineering, Vol. 114, No. 8, pp. 1804-1826.
[5]. Razvi, S., and Saatcioglu, M., (1994) "Strength and deformability of confined high strength columns", ACI Journal, Vol. 91, No. 6, pp. 678-687.
[6]. Mendis, P., Pendyala, R., and Setunge, S., (2000) "Stress strain model to predict full range moment curvature behavior of high strength concrete sections", Magazine of Concrete Research, 52(4), pp. 227-234.
[7]. Cusson, D., and Paultre, P., (1995) "Stress-strain model for confined high strength concrete", ASCE Journal of Structural Engineering, Vol. 121, No. 3, pp. 468-477.
[8]. Legeron, F., and Paultre, P., (2003) "Uniaxial confinement model for normal and high strength concrete columns", ASCE Journal of Structural Engineering, Vol. 129, No. 2, pp. 241-252.
[9]. Sharma, U. K., Bhargava, P., and Kaushik, S. K., (2009) "Stress-Strain model for spiral confined fibre reinforced high strength concrete columns", The Indian Concrete Journal, May 2009, pp. 45-55.
[10]. Chandrasekhar, M., Seshagirirao, M. V., and Janardhana, M., (2011) "Studies on stress-strain behavior of SFRSCC and GFRSCC under axial compression", International Journal of Earth Sciences and Engineering, ISSN 0974-5904, Vol. 4, No. 6 SPL, pp. 855-858.

AUTHORS
P. Srilakshmi specializes in Structural Engineering, with 13 years of teaching and 20 years of industrial experience. She is an Associate Professor in Civil Engineering at JNTUH College of Engineering, Hyderabad. She has experience in structural design, material testing, non-destructive testing and the evaluation of existing structures. Her research interests include finite element analysis, the analysis and design of special structures and bridges, SCC and special concretes.

M. V. Seshagiri Rao specializes in Structural Engineering and Software Engineering, with 34 years of teaching, research and consultancy experience. He is a Professor of Civil Engineering at JNTUH College of Engineering, Hyderabad. His fields of research interest are special concretes, High Performance Concrete, the use of rice husk ash and high volumes of fly ash in concrete, Self Compacting Concrete, Bacterial Concrete and Reactive Powder Concrete. He has guided 17 Ph.D. students and has published over 136 research papers in international and national journals and conferences.


A STUDY ON ELECTRONIC DOCUMENT MANAGEMENT SYSTEM INTEGRATION NEEDS IN THE PUBLIC SECTOR
Toms Leikums
Faculty of Information Technologies, Latvia University of Agriculture, Jelgava, Latvia

ABSTRACT
Like other organizations, public sector institutions use a multitude of various information systems. However, unlike in the private sector, the most critical are not the systems for resource planning, finances or supply chain management. Since the key business processes in governmental institutions are related to creating and processing documents, the main and most critical information system is the document management system. Laws and regulations are prepared in it, letters and answers to petitions by citizens are written in it, and it is often used for processing contracts and financial documents. Nowadays the number of different information systems in an institution is constantly increasing, and correspondingly the need for integration between them emerges. One choice is to acquire expensive Enterprise Resource Planning (ERP) and Enterprise Content Management (ECM) systems and embark on their adaptation and adoption. Another is to conduct a study in order to identify which systems need mutual interfaces for data exchange. This article deals with various types of information systems and their relations to the document management system. From the public sector perspective, the article analyses the necessity of system integration and the possible problem cases arising during the integration process.

KEYWORDS: document management system, information system integration, enterprise content management, electronic document workflow.

I. INTRODUCTION

Any organization in the public sector deals with large amounts of different documents on a daily basis: letters, petitions, applications, suggestions, contracts, acceptance certificates, invoices, staff documents, orders and regulations, instructions, rules, technical documentation, standards, legislation, laws, protocols and numerous others. Often unofficial documentation such as e-mail correspondence, meeting notes and similar texts is also considered part of the document circulation in an organization. Most governmental institutions use document management systems (DMS) in order to handle the multitude and amount of documents and to organize the document circulation by making the documents indexable and searchable. However, as can be seen from the vast range of document types, they refer not only to records management and its processes, but to several other areas of activity in an organization, for instance finances, human resources, audit etc. Therefore an institution sooner or later comes to a decision about the integration of the document management system with other information systems (IS).

In this article, integration is not to be understood as a complete merging or consolidation of systems. Integration is a process that makes systems able to cooperate and exchange data according to the business processes of the institution. In the context of electronic document management, integration is to be understood as the promotion of interoperability for different types of information systems, in order to ensure optimal work directed towards the processes of document circulation during their whole life cycle, for all types of electronic documentation [1].

The author of the article works at a governmental institution and is responsible for the circulation of electronic documents and for the document management system, including its administration, development planning and collaboration with the other information systems at the institution. During the last two years there has been extensive work on implementing a new document management system, including integration with other information systems: a customer relationship management (CRM) system, an accounting IS, a human resource IS, an electronic services IS and others. Therefore the main method used in this research is experience and observation during the integration of information systems. Conclusions from the practical research are complemented with the opinions of other specialists and with a survey of the market for document management systems, carried out in order to learn how many developers pay attention to such questions as integration with other systems.

The article deals with the diverse information systems used in the public sector: user directory, finance management system, human resource management system, workflow management system, enterprise resource planning system, office software, CRM system, web portal, and Enterprise Content Management (ECM) system. Section 2 looks at the substantial aspects of system integration and the main problem questions of this process. Section 3 inspects alternatives to integrating the DMS with different systems: the usage of ECM and ERP systems. Section 4 pays attention to the most popular information system and software types in the public sector (user directory, finance management system, web portal, human resource management system, workflow system, CRM system, office software) and the possibilities of their integration with a DMS. Section 5 consists of conclusions and the author's suggestions on a system typology for integration with a DMS. Section 6 indicates the main areas for further research.

II. SYSTEM INTEGRATION

When deciding to conduct a system integration of any kind, the organization has to be ready for extensive work both on the side of the institution and on the side of the system developers. "The real challenges arise when a company's information systems need integration. The advantages are, no doubt, numerous: from the reduction in the costs of maintenance of several information systems to simplification of work flow." [2] Among the challenges are both the integration costs and the time necessary for the project. One has to take into account that until the moment when the systems start working in their new shape, they still have to operate in their base mode. Data migration might also be necessary. As with new systems, it is to be anticipated that not all co-workers will be satisfied with the change of their responsibilities and of the business processes in the institution. Therefore in a governmental institution it is essential not only for the management to issue an order for implementing a new system or integration platform, but also to conduct explanatory campaigns, which improve both work quality and productivity. "The integration of the information systems for a company, today, it is necessary more than ever before, because in the companies, there are tens or hundreds of separate applications, which involves high costs and long time to matching the information. Therefore, the integrated information systems must inter-connect and inter-communicate as a complex, complete and coherent system and all systems parameters should interfere in order to assure compatibility and combined inter-operability." [2]

Many developers of information systems have recognized the importance of integration with other IS and emphasize that they are working towards this goal. For instance, FileHold pays special attention to integration with Active Directory services in order to improve the management of users, their access rights and their roles in the DMS. docassist focuses on business systems in particular and offers integration solutions with ERP systems, human resource management systems and CRM systems. There are also developers who address some seldom needed integration tasks; for example, docSTAR offers options for integrating a DMS with geographical IS.

When considering the integration of a DMS with other systems, the most often expressed considerations concern the influence of the integration on document management and circulation. However, it is just as important to consider its influence on the other systems as well. The integration process has to involve both the specialists for document management and the experts of the systems and spheres to be integrated. "When you want to connect a business application with a document management system, companies are forced to change both in the DMS and in the business applications, so as to prepare both ends of the channel to communicate." [3]

With regard to the integration of information systems, there are numerous sceptical opinions. For example, S. J. Bakulin, specialist at the Moscow Engineering Physics Institute (State University), claims that only a small share of integration projects succeed: "It is known that 80% of integration projects are failures. The reasons for that are mainly typical: poor risk management, unsuccessful setup of specifications and other." [4] Bakulin emphasizes that the integration process is gradually becoming more complicated to perform because of increasing system complexity: "When the complexity level of a project is rising, it also involves increased complexity of data schemes. The conditions of data exchange constantly become more complicated. Simple data conversion turns into an elaborate business process involving data not only in several different systems, but also requiring user involvement in decision making in order to transform the data into useful information, i.e. the data would acquire context and meaning." [4]

However, without the integration of systems it is often not possible to digitalize and improve the business processes in an organization; therefore the integration option has to be chosen. Ideally, DMS integration options with other IS should be considered and applied already in the development phase of the document management system. Olga Skiba, specialist at the IT company InterTrust, recommends not postponing the issue of potential integration and dealing with it during the planning phase of DMS implementation: "When starting to plan strategies for further development (integration with other systems, ensuring the document circulation internally and inter-branch etc.) it is necessary to assess the approximate scope of the project and evaluate the options." [5] During this phase it is essential to consider the potential information systems that the organization plans to start working with. The goal of this article, however, is to deal with integration options for systems that are already implemented and working.

III. ECM AND ERP SYSTEMS

Since the beginning of the 2000s, a main task of IS developers has been the integration of different systems and the development of a single product that would include all the functionality needed in an organization. This has been emphasized also by specialists at the Moscow State University of Economics, Statistics and Informatics, A. V. Boychenko and V. K. Kondratyev: "Main directions in the development of information technologies in every circumstances are nowadays: development of integrated corporate information systems, ensuring that these corporate information systems are able to cooperate between themselves and with other information resources, creation of unified informational environment." [6]

Two major types of information systems have gained great popularity during the last few years: Enterprise Content Management systems and Enterprise Resource Planning systems. Many large organizations tend to use this software to replace their specific information systems and support systems, for instance the DMS. An Enterprise Content Management (ECM) system is the best alternative to integrating different systems, for it comprises the functionality of a document management system together with the functions needed for document circulation inside the organization. However, ECM systems are not a universal means of replacing all the IS in an organization: they cannot fully process and analyse the structured data of the organization (for instance, financial data). The Association for Information and Image Management clearly and accurately defined the term Enterprise Content Management as the technology that captures, stores, preserves, manages and delivers collected data and relevant information that directly impact the processes of the organization. Further, ECM includes tools and techniques that potentially allow the company to manage unstructured information in the organization, wherever these pieces of unstructured information emanate. Moreover, it has been said that the technological components that Enterprise Content Management possesses to date come practically from the software products that Electronic Document Management Systems (EDMS) used to have [7]. As one can see, ECM systems are able to ensure not only the circulation of documents and non-documents, but the complete circulation of information in an organization. Nevertheless, they lack some functions that are sphere-specific and essential in the public sector, in particular finance or human resource management, client data storage and management, business intelligence tools etc.

In order to manage the resources needed and used by an organization, several different Enterprise Resource Planning (ERP) solutions are available on the IT system market; among the most popular are Epicor, Infor, Microsoft Dynamics and SAP. Unlike ECM systems, ERP systems are meant precisely for the processing of structured data. "ERP is an integrated information system built on a centralized database and having a common computing platform that helps in effective usage of enterprise's resources and facilitates the flow of information between all business functions of the enterprise (and with external stakeholders)." [8] Expert opinion on ERP is also not unambiguous. Its wide functionality is generally appreciated, but the high costs and the inability to replace all the IS in an organization count against it. "Replacing the entire information systems portfolio of an enterprise with ERPs is neither possible nor economical due to the following reasons. Firstly, there is no ERP solution that will provide all the functionality an organization requires and therefore some custom applications will be present among the candidate applications for integration. Even if it were possible through a combination of different ERP packages to complement the inadequacy of coverage, not every enterprise would be willing to depend on such solutions. Secondly, there is always a tendency to maximize the returns of the past investments on information systems. Throwing away some expensive items for some fancy reasons is not an attractive action in business, which is usually aimed to reduce its spending unless the spending promises profitable outcome." [9]

ECM in combination with ERP can ensure the creation, processing and storage of all the necessary data. However, in this case the main drawback is the high cost. Both ECM and ERP solutions are expensive and massive, and their hosting and maintenance require many resources. Public sector institutions, especially in developing countries, usually do not have such resources; hence several smaller systems are used for document and resource management. In recent years the global financial crisis has also negatively influenced the solvency of the public sector, therefore one can assume that governmental institutions will henceforward choose to maintain several smaller systems (often the ones they already possess) and gradually upgrade them in order to ensure optimal interoperability. Obviously, the ideal solution would be just one information system that would enable storing corporate information and managing the access rights of users from different levels and areas. Such an information system would also have to ensure the circulation and management of all the documented information in the institution. Yet the documentation in an organization is currently so diverse that this is not realistic. Nevertheless, many developers are working on this task [1].

IV. DMS INTEGRATION WITH DIFFERENT TYPES OF SYSTEMS

4.1. Integration with AD
Almost all organizations with more than 20 employees use a mechanism for structuring, authenticating and authorizing network users. One of the most popular mechanisms is Microsoft Active Directory, designed for Windows domain networks; nevertheless, such alternatives as Fedora Directory Server, OpenDS, Apache Directory Server, Oracle Internet Directory, Novell eDirectory and many others are also used. Many developers of document management systems choose to base the system's user maintenance model upon one of the abovementioned directory services, and the basic functions of a DMS usually enable integration with Active Directory. The institution has to decide whether to maintain its own list of users and their access rights with all the necessary metadata in the document management system, or to acquire the data from the directory. Most often it is a combination of both options: user authentication is based on the directory, whereas access rights are assigned, and further actions registered, in the user base of the system itself.

The main benefit of DMS integration with a user directory is the employees' satisfaction with the authorization process. If the document management system is one of the IT systems in the institution that use directory-based authentication, then it is not necessary to input a separate user name and password (or they are identical with the basic user credentials). Several systems also offer managing DMS user rights from the user directory through the security group mechanism: when users have been divided into groups, one only has to define analogous groups in the document management system and synchronise them with the user directory, and every user thereby acquires the user rights package assigned to the particular group. This model ensures a more convenient way of managing users and their rights, as it can be done in one place.

The main drawbacks of DMS integration with a user directory are recoverability in case of a disaster, unstable connections and safety. In case of a disaster, if the user directory is destroyed and cannot be restored, the document management system practically has no users, and all tasks and user history are lost. An unstable connection between the user directory and the DMS becomes a problem when users cannot authenticate into the system. Finally, there is the question of safety: in the case of unified authentication for all systems, the user may enter the password only once, when turning on the computer. Accordingly, all the IS of the institution are then available from this computer, and the physical safety of the computer itself can become an issue. However, this is not a problem of system integration, but rather of work organization.
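A minimal sketch of the directory-based authentication and group synchronisation described above, assuming an Active Directory reachable over LDAP; the server address, base DN and the mapping from security groups to DMS rights packages are illustrative, not taken from any particular product:

```python
# Sketch: authenticate a DMS user against Active Directory and map
# directory security groups to DMS rights packages. Uses the ldap3
# library; server, base DN and group names are hypothetical.
from ldap3 import Server, Connection, ALL, NTLM

DMS_ROLES = {                       # hypothetical group-to-role mapping
    "DMS-RecordsManagers": "records_manager",
    "DMS-Users": "standard_user",
}

def authenticate(username: str, password: str):
    server = Server("ldap.example.gov", get_info=ALL)
    conn = Connection(server, user=f"EXAMPLE\\{username}",
                      password=password, authentication=NTLM)
    if not conn.bind():             # wrong password or unreachable directory
        return None
    # Read the user's security group memberships from the directory.
    conn.search("dc=example,dc=gov",
                f"(sAMAccountName={username})",
                attributes=["memberOf"])
    groups = conn.entries[0].memberOf.values if conn.entries else []
    conn.unbind()
    # Grant the DMS rights package of every matching security group.
    names = {g.split(",")[0][3:] for g in groups}   # "CN=DMS-Users,..." -> "DMS-Users"
    roles = {DMS_ROLES[n] for n in names if n in DMS_ROLES}
    return roles or {"no_access"}
```

Note that if the directory itself is lost, this design leaves the DMS with no usable user base, which is exactly the recoverability drawback mentioned above.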

4.2. Integration with finance management system


Although finance management systems are not as vital for governmental institutions as for business companies, a certain amount of public sector activity relies on their support. This is particularly true at the end of the year, when the planning of the next year's state budget is at its peak and every institution has to provide financial calculations about the previous year and data about the resources needed for the next year; during the year, contract checking and the organization of purchases are also necessary. "Financial Management Systems: Information systems that support financial managers in the financing of a business and the allocation and control of financial resources. Includes cash and securities management, capital budgeting, financial forecasting, and financial planning." [10] Though at first glance it seems that finance management systems do not have many fields and features in common with the records management process and document management in an institution, the author holds the view that integration with the DMS is in most cases necessary and can significantly increase work quality and productivity in financial departments.

When integrating document management and financial systems, an essential question is how to separate the types of documents that are common to both systems, and in which system to store them. A good example is contracts. Like submissions, letters, regulations etc., contracts are stored in accordance with the requirements of records management: they are placed into numbered folders in compliance with the nomenclature of the institution. If the document management system has a workflow module, contracts are then passed on for further processing or reconciliation directly in the document system. One has to take into account that, after a while, contracts are mostly transferred to the archive together with their respective nomenclature folders. However, this is the only relation contracts have to the document management system. The actual work with contracts is carried out in the finance management system: contract registering, filling in the metadata, implementation control, changes to contracts, acceptance reports etc. Thus, on the whole, contracts as documents live two separate lives: in the document management system, where they have originally been registered and the first reconciliation and processing are done, and in the finance management system, where contracts are used by their actual users, the accountants. In this light it might seem easier to process contracts as a document type only in the finance management system. However, let us not forget that contracts, invoices and acceptance reports are above all documents, and thus they have to comply with particular document storage requirements and be placed in a common document storage location.

After surveying governmental institutions in Latvia about their methods of processing financial documents, one has to conclude that most of them do not follow the suggestions of good practice and simply register a duplicate copy of the document in each of the systems. Though in this case the document in question can be found both in the document management and in the financial system, its integrity cannot be guaranteed: if any changes occur to the document in the financial system, the copy (or the original) found in the document management system is already incorrect.
The only way to prevent this problem is the integration of the document management and finance systems, resulting in automatic synchronisation of modified items in one or both directions. However, this option has several underlying problems as well, for instance the different sets of mandatory metadata in the two systems. For the document management system, metadata such as author, creation date, number, nomenclature folder and version are vitally important, whereas the metadata in the finance system are completely different (sum, currency, payment time, contract partner etc.). These metadata are mostly defined as mandatory already on the database level. Therefore, when exchanging documents between the systems, errors may occur if any of the fields is incorrectly filled or left blank. It must be pointed out that the records manager, who is usually responsible for the initial input of a document into the document management system, has no competency regarding the accounting fields (if such exist in the DMS).

An essential question for any system integration is the frequency of data synchronisation. In the case of document management and finance systems, it might initially seem that data should be synchronised after every change. However, such a setup would be very demanding on the software, the computer network and the routing devices. Since governmental institutions deal with financial documents relatively less often than companies in the private sector or banking, data discrepancies between the two systems are acceptable for a short period of time. Most important is the correctness of the data in the system where the calculations are done, that is, in the finance management system. The document management system, in its turn, can receive the refreshed data once or twice a day or, in case of special requirements, once an hour. As can be seen, when planning the integration of document management and finance systems, many factors, both technical and organizational, have to be taken into consideration.
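A minimal sketch of the periodic one-way synchronisation just described, under the assumption that the finance system exposes changed contract records through a database table or view and that the DMS accepts metadata updates over a REST endpoint; the database, endpoint and field names are hypothetical:

```python
# Sketch: scheduled one-way sync of contract metadata from the finance
# system (the data master) to the DMS. Table, endpoint and field names
# are hypothetical; real integrations must map mandatory metadata carefully.
import sqlite3
import requests
from datetime import datetime, timedelta

DMS_API = "https://dms.example.gov/api/documents"   # illustrative endpoint

def sync_changed_contracts(db_path: str, since: datetime) -> int:
    conn = sqlite3.connect(db_path)
    rows = conn.execute(
        "SELECT contract_no, amount, currency, partner, modified "
        "FROM contracts WHERE modified > ?", (since.isoformat(),))
    updated = 0
    for contract_no, amount, currency, partner, modified in rows:
        # Push only the fields the DMS actually stores; calculations
        # stay in the finance system.
        resp = requests.patch(f"{DMS_API}/{contract_no}",
                              json={"amount": amount, "currency": currency,
                                    "partner": partner, "modified": modified},
                              timeout=30)
        resp.raise_for_status()     # surface mapping/validation errors
        updated += 1
    conn.close()
    return updated

# Run from a scheduler (e.g. cron) once or twice a day, as suggested above.
if __name__ == "__main__":
    print(sync_changed_contracts("finance.db",
                                 datetime.now() - timedelta(hours=24)))
```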

4.3. Integration with web portal


These days almost every governmental institution, including in developing countries, has its own web portal used for publishing news about the industry or the institution, conducting surveys of citizens, summarizing opinions, and publishing the most current regulations, press releases etc. Usually one or two administrators are responsible for the web portal and perform all the necessary work of inserting, deleting and exchanging the data. The data to be published are often received via the administrator's e-mail or, at best, through file shares. It is obvious how much can be gained (or rather how many resources could be spared) by integrating the web portal with the document management system. A well-developed DMS already has all the metadata fields necessary for publishing a document: title, author, date, comment (or abstract). Only one feature would have to be added, a "publish on portal" flag, which would trigger the integration script delivering the document in question to the web portal; subsequently, the administrator would only have to place it into the appropriate section. Unlike with many of the other systems discussed here, there is no need for reverse data synchronization, as only the final versions of documents are published on the web portal and they are not changed anymore.

Looking into a further perspective, it is possible to turn the DMS into a tool through which public service specialists communicate with society. This would certainly require extensive additions to functionality, for example giving citizens an opportunity to ask questions on the website of the organization, which would then be handed over to the responsible executors as tasks in the document management system. Sergei Bushmelev, system analyst at DIRECTUM, one of the largest software development companies in Russia, emphasizes that it is exactly socialization and data exchange (not only internal) that are the most important development tendencies for DMS: "It is the person, the user of the system, who has gradually become the central element of the DMS instead of the content. It is most important to ensure cooperation with colleagues, work and project group members, remote and field office employees, but moreover with external partners, clients and citizens. From a document archive, the DMS turns into a system for collaboration between the organization and people. It is expected that this tendency will likely carry on developing. The scope of document exchange will henceforth also become wider and different potential socialization mechanisms in DMS will be used; various systems will be integrated as well. All of it will improve the effectiveness of employee interaction, work organization and exchange of information in electronic format." [11]

It is worth mentioning that until recently it was widely claimed that the DMS itself can be used as the integration platform between different systems in need of interoperability. For instance, in 2005 Valery Lvov, representative of the document management system developer Optima Integration, suggested utilizing the DMS exactly for this purpose: "Not every company is disposed to giving up all the implemented and already customized systems. Therefore one particular system becomes the system for integration and usually it is the DMS." [12] However, one has to take into account that formerly the DMS was only a document repository and a registration tool, whereas now, at least in the public sector, it has turned into a huge support system, and often the DMS itself requires an integration platform or an in-between system in order to cooperate with the other systems at the institution.
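A minimal sketch of the one-way "publish on portal" hand-over described at the start of this section; the portal endpoint and payload format are hypothetical, and only final document versions are pushed, so no reverse synchronisation is needed:

```python
# Sketch: push a final document version flagged "publish on portal" to
# the institution's web portal. The portal URL and payload fields are
# hypothetical; the metadata mirror what a typical DMS already stores.
import requests

PORTAL_API = "https://www.example.gov/api/publications"  # illustrative

def publish_to_portal(doc: dict, pdf_path: str) -> str:
    if not doc.get("publish_on_portal"):
        raise ValueError("document is not flagged for publication")
    with open(pdf_path, "rb") as f:
        resp = requests.post(
            PORTAL_API,
            data={"title": doc["title"], "author": doc["author"],
                  "date": doc["date"], "abstract": doc["comment"]},
            files={"file": f},      # the end version only, never drafts
            timeout=60)
    resp.raise_for_status()
    return resp.json()["url"]       # public URL for the administrator
```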

4.4. Integration with human resource management system


Integrating document management and human resource management systems is one of the most complicated tasks when developing system interfaces. Employees, their status (position) and their potential absence have a huge and direct impact on document circulation in an organization; therefore the systems have to work rapidly and without errors. "Human Resource Management Information Systems: Because the personnel function relates to all other areas in the business, the Human Resources Management Information Systems play a valuable role in ensuring organisational success. Activities performed by the Human Resources Management Information Systems include work-force analysis and planning, hiring, training and job assignments." [13]

As with the other systems, it is possible to duplicate all the data and maintain a separate list of staff and positions in the document management system. However, maintaining staff data, especially in large organizations, is a huge task, and the double work wastes human resources unnecessarily. Moreover, this approach may involve time shifts, causing errors in the document circulation process. The human resource management system is always the first place where staff-related information is entered: vacations, business trips, changes in positions, departments etc. Without integration, the administrator of the document management system always has to monitor these changes and adapt them in the DMS manually and immediately (not later than within one working day).

The part of the DMS that most needs up-to-date data from the HR management system is the workflow module. All up-to-date document management systems contain a more or less advanced workflow mechanism. Assigning tasks and documents to other employees is directly related to the position hierarchy at the institution: tasks such as "Execute", "Review" and "Prepare reply" can only be given to subordinates, whereas tasks such as "Reconcile", "Visa" and "Sign" are mainly directed towards superiors. In order to avoid chaos in the document circulation of an institution, it is essential to take over the most current staff position data from the human resource management information system correctly. It is also vitally important for the workflow module to know the current status of an employee: whether he or she is working or absent (business trip, vacation, illness etc.). If a document or a task is being assigned to an absent person, the document management system has to alert the assigning employee or even automatically forward the task to the employee who is replacing the absent one. Different details have to be taken into account as well, for instance changes in the status of an employee during the period of an active task: if the task deadline is set to 10 days but the assigned employee is going on vacation in 5 days, the assigning person should receive a notification about this issue. All these data should ideally be acquired from the human resource management system.

Additionally, with regard to the integration of the DMS and the human resource management system, one can mention the availability of staff data within the institution. During the last years it has become popular to include an internal staff portal in the document management system, where it is possible to search for documents, contacts and employees within the institution. The portal thus needs an accurate list of employees and their metadata (names, surnames, phone numbers, positions, departments, executive managers). The source of these data should be the human resource management system.
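A minimal sketch of the absence-aware task assignment described above; hr_client stands in for a call to the human resource management system, and its method names are hypothetical:

```python
# Sketch: assign a DMS task only after consulting HR absence data.
# hr_client represents the HR system interface; is_absent(),
# absent_between() and substitute_for() are hypothetical method names.
from datetime import date, timedelta
from dataclasses import dataclass

@dataclass
class Task:
    title: str
    assignee: str
    deadline: date
    note: str = ""

def assign_task(hr_client, title: str, assignee: str, days: int) -> Task:
    deadline = date.today() + timedelta(days=days)
    if hr_client.is_absent(assignee, date.today()):
        # Absent today: forward automatically to the registered substitute.
        substitute = hr_client.substitute_for(assignee)
        return Task(title, substitute, deadline,
                    note="rerouted: original assignee is absent")
    if hr_client.absent_between(assignee, date.today(), deadline):
        # Present now, but will leave before the deadline: warn the assigner.
        return Task(title, assignee, deadline,
                    note="warning: assignee will be absent before the deadline")
    return Task(title, assignee, deadline)
```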

4.5. Workflow system
All up-to-date document management systems contain functions for creating workflows. However, these are not always advanced and customizable enough for institutions with a complicated structure or with many different types of tasks. The bureaucratic apparatus of state government is known for its hierarchical structure, its complicated assignment of tasks and responsibilities and its numerous types of assignments. Therefore, when acquiring a document management system, the out-of-the-box workflow system does not always correspond to the requirements of the institution. The main reason for this is the need to create sophisticated automatic workflows with restrictions on their initiation. For instance, when the status of a document changes from "Prepared" to "Reconciled", the workflow would have to create x tasks for x different employees to "Visa" the document, and then tasks for further sign-off of the document with the restriction that all the "Visa" requirements have been successfully fulfilled (see the sketch after this section). For such cases, separate workflow management systems are usually acquired, which then have to be integrated with the active document management system.

In the public sector, almost all tasks arise from an initiating document. The integration principles are basically the same as for the human resource management system discussed above: the workflow system has to receive up-to-date human resource data either from the document management system or from the human resource management system, if the institution has decided to create such an interface. In this case, however, the reason for the task, the document itself, is equally important. Even if the workflow management is a separate software product, the task view has to be able to display the document with all its metadata, history and content from the document management system. Moreover, in this case most of the employees will work with the workflow system, and the document management system will remain for the records managers and IT administrators, and for document search. This means that the workflow system must offer options for opening a document, editing it and even creating new, related files; every change also has to be immediately reflected in the document management system. As can be seen, of all the aforementioned systems, it is the integration with the workflow system (if external software is used for this task) that is vitally important for the existence of the document management system.
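A minimal sketch of the restricted automatic workflow described above: the status change from "Prepared" to "Reconciled" fans out visa tasks, and the sign-off task is created only once every visa has been completed. The class and function names are illustrative, not taken from any particular workflow engine:

```python
# Sketch: status-triggered workflow with a completion gate. The DMS is
# assumed to call on_status_change() when a document's status changes;
# the minimal Task/Document classes and names are illustrative only.
from dataclasses import dataclass, field

@dataclass
class Task:
    kind: str           # e.g. "Visa" or "Sign"
    assignee: str

@dataclass
class Document:
    title: str
    open_visas: dict = field(default_factory=dict)
    signer: str = ""

def create_task(doc: Document, kind: str, assignee: str) -> Task:
    print(f"task '{kind}' for {assignee} on '{doc.title}'")
    return Task(kind, assignee)

def on_status_change(doc, old, new, visa_holders, signer):
    if old == "Prepared" and new == "Reconciled":
        # Fan out one "Visa" task per responsible employee.
        doc.open_visas = {e: create_task(doc, "Visa", e) for e in visa_holders}
        doc.signer = signer

def on_task_completed(doc, task):
    if task.kind == "Visa":
        doc.open_visas.pop(task.assignee, None)
        if not doc.open_visas:      # gate: all visa requirements fulfilled
            create_task(doc, "Sign", doc.signer)
```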

4.6. Integration with CRM system


During the last few years, Customer Relationship Management systems have become very popular in different business branches, and almost every middle-sized or large company maintains its own client database. In this matter public sector institutions are somewhat behind, for they basically have no contacts that would fully fall within the category of "client"; on the whole, every resident of the country is the client of a governmental institution. Ideally, governmental institutions would work with the data from the Population Registry and the Register of Enterprises: these are also state-wide systems and contain data about all residents and companies. However, in developing countries the integration between different systems is still in progress, and every governmental institution uses its own IS and data, seldom sharing them with other public sector institutions.

"In many cases, CRM systems are software applications that integrate sales, marketing, and customer service functions. The main objective of CRM system is to give to all customer interacting persons and departments access to shared customer data in real time." [14] Even though public sector institutions do not have clients in the commercial sense, in relation to document management systems it is important to store data about correspondents, authors of submissions, other governmental institutions etc. Every document contains metadata with some information about its author (in the case of a received submission: the author's name, surname, personal ID number, address, e-mail address, phone number). Thus, after a while, the system has accumulated information about the correspondence partners. Considering that both resident and enterprise data are then maintained in one system, the institution should consider implementing a CRM system and integrating it with the document management system. While the document management system only maintains the basic data about a contact person (or a company), the CRM system is able to store much more, for example the number of calls, applications for appointments, dates of appointments and similar; moreover, all these data can be used for reports and analysis, making the public sector closer and more easily accessible for the citizens. As with the other systems discussed, the document management system can either be integrated with the CRM system, or the unhandy alternative can be chosen: maintaining two different client registers in the two systems.

One of the biggest problems within organizations, and not only in the public sector, is the overuse of a system after it has been acquired. By "overused" one has to understand that the relatively main information system in an organization is artificially adjusted to all the business processes, and the functionality of the system is used beyond its sole purposes. In the context of CRM, if an institution has an active CRM system, it has to gradually become the only source of external contacts, just as the human resource management system has to be the only authentic source of data about the employees of the organization. Though the document management system also contains modules enabling the maintenance of contact and employee lists, in a big organization with specific systems the document management system should be used for its sole purposes: creating, processing, circulating and archiving documents. If the institution already has an external web portal for informing the citizens, there is no need to use the web portal creation function built into many DMS. If the DMS has a built-in "Notepad"-type text editor, the employees should not necessarily use it, for they have the far more comfortable option of creating documents with the office software and then transferring them into the system.

Returning to the question of integrating the DMS with the CRM system, we can conclude that the integration is highly preferable, as the data from the CRM system can then, in case of need, also be used by other business systems that need the organization's client or contact data base. "CRM-type of systems have been and continue to be a success, contributing significantly to the increasing performance of a business, is a well-known fact, but what is less known and publicized is the effort behind integration of these systems with other systems of a company." [2]
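A minimal sketch of keeping the CRM as the single source of external contacts during document registration, assuming a hypothetical CRM REST lookup by personal ID; endpoint and fields are illustrative:

```python
# Sketch: when registering a received document, resolve the correspondent
# through the CRM so the DMS stores only a contact reference while the
# CRM remains the single source of contact data. Endpoint is hypothetical.
import requests

CRM_API = "https://crm.example.gov/api/contacts"    # illustrative

def resolve_correspondent(personal_id: str) -> dict:
    resp = requests.get(CRM_API, params={"personal_id": personal_id},
                        timeout=30)
    resp.raise_for_status()
    hits = resp.json()
    if hits:
        return hits[0]              # reuse the existing CRM contact
    # Unknown correspondent: create the record in the CRM, not in the DMS.
    resp = requests.post(CRM_API, json={"personal_id": personal_id},
                         timeout=30)
    resp.raise_for_status()
    return resp.json()
```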

4.7. Integration with office software


Comfort is particularly important when creating and processing documents. Employees in the public sector are familiar with the office software, know it well and achieve their best work results when using the most accustomed office tools. Therefore one of the main tasks of document management system developers is to ensure extensive integration with the office software.

First of all, one has to take into account that one of the most important work tools in the public sector is now the e-mail. Employees are used to opening it in the morning and closing it only in the evening, and to checking it from home, on business trips etc. when needed. Assigning tasks and exchanging documents through e-mail has been common practice for years, and it is not certain that implementing a document management system would change the habits of the employees. At least in the beginning, some employees will not remember to check the document management system regularly as the first place where newly created tasks and documents to view are found. Therefore integrating the DMS with e-mail is a primary need. The minimum is to send a notification via e-mail to the person receiving a new task. Ideally, however, the document management system should notify by e-mail both the assigned person (receipt of a new task, warning about a nearing deadline, delay notification) and the person assigning the task (task received/read confirmation, task completed, task delayed). The main drawback of such integration is that employees do not get accustomed to working in the DMS itself; they connect to it only when an e-mail notification has been received, and therefore many employees are not even aware of all the useful functions and the user friendliness of the system.

Many document management system developers also try to integrate DMS functions into office software such as text editors (e.g. MS Word) or spreadsheet editors (e.g. MS Excel). However, the result is not always as good as shown in the advertising materials. The experience of the author indicates that such integration can cause problems in usability, administration and stability. Firstly, standard users are accustomed to seeing their office software the way it is, and any additional fields or toolbars, in their opinion, only decrease the quality of their work. Secondly, problems arise from administering different versions of office software. If everyone at the institution is using one particular software package from one developer (for example, MS Office 2010), then everything is more or less all right; but if there are different versions even from the same developer, or different products by several developers, the integration with the document management system does not always run as planned. Lastly, there is the question of stability. Even Microsoft representatives have admitted that the instability of their software can largely be caused by third-party add-ons: "Windows Error Reporting data has shown that add-ons are a major cause of stability issues in Internet Explorer. These add-ons significantly affect the reliability of Internet Explorer. These add-ons can also pose a security risk, because they might contain malicious and unknown code." [15] The stability of Microsoft Office likewise depends on various third-party add-ons, and the author's experience with integrating different document management systems with office software shows that DMS add-ons often slow down the office software significantly or even cause it to crash.

To conclude, integration with the office software is presented by many DMS developers as a huge advantage, promising more comfort and benefits at work. However, the user of the document management system has to assess carefully whether full integration is really necessary. While e-mail notifications are necessary for ensuring the usage of the system, additional functions in the office software are not always beneficial.

V.

CONCLUSION

Fewer and fewer information systems now work fully independently, and gradually all organizations choose to start integration. Processes meant to be improved by digitalization often overlap, and therefore information systems need interfaces enabling collaboration. The document management system in the public sector is an IS of central importance, and almost all business processes are more or less related to document management. There are thus two ways to follow: either to implement one large system that is capable of including, optimizing and digitalising all the processes in the organization, or to integrate the existing systems. The first option is significantly more expensive; besides, there is no system that could contain all the functions a governmental institution needs. Whereas a huge business corporation can implement an ERP, that is not enough for governmental institutions, as they need a high-level document management system. ECM systems, in turn, are not suitable for finance operations, calculations and analysis, which are daily and frequent activities in the public sector. Therefore integration is the option chosen most often, after carefully analysing and choosing which IS should collaborate. After long practical and theoretical research and a review of different types of governmental institutions and their needs, the author of the article has come to the conclusion that the systems whose integration with the DMS is possible can be divided into three groups: 1) systems for which integration is mandatory or very necessary; 2) systems for which integration is advisable and would improve their administration and usage and enhance processes within the institution; 3) systems for which integration is necessary only in particular cases or for particular institutions. Integration is mandatory for: the finance management system; the human resource management system; the workflow system. Integration is advisable for: AD or its analogue; CRM. Integration is optional for: the geographic information system; office software; ERP; the web portal. Regardless of which type of integration is chosen and how many systems obtain new interfaces, the institution has to be aware that changes of this scope have to receive full support from upper-level management and cannot be only an initiative of the IT department. The integration of information systems always results in significant changes to business processes and might even cause some staff changes; moreover, one must not forget that employees require instructions, training and improvement of professional skills. Any system integration can be carried out as a project with all the main phases: assessment, planning, development, testing and implementation. All phases require active collaboration not only from external system developers, but also from specialists of the governmental institution itself, as only they fully comprehend the processes in the particular institution.

VI.

FUTURE RESEARCH

This article inspects the potential integration of document management systems with various information systems used in governmental institutions. However, the article deals in detail only with the most widely used and popular systems. Governmental institutions, depending on their work specifics, often use a great number of other information systems, for instance: payment systems related to the banking sector; geographical and Global Positioning Systems, if the employees of the institution work out of office on different objects of state-level importance; data repositories; and medical and bioinformatics information systems. If preferred by the organization, the document management system can be integrated with almost any other IS used in the organization. However, specific and organization-dependent IS require a special approach and analysis for the integration process. Future studies could carry out an in-depth review of DMS integration possibilities with other types of information management systems that are not closely related to document circulation and record keeping. Future research can also attend to different case studies of document management system integration with other IS, both in the public and the private sector. In addition to the research fields already mentioned, a much wider scope could be approached: a study on the compatibility, integration and customization of Enterprise Resource Planning and Enterprise Content Management systems.

REFERENCES
[1]. Nazarenko A. "DMS Integration as Means of Increasing Company Effectiveness". Current Technologies of Record Keeping and Document Circulation, vol. 5, pp. 35-43, May 2012.
[2]. Litan D., Velicanu M., Copcea L., Teohari M., Mocanu A.M., Surugiu I., Daduta O. "Business New Requirement: Information Systems Integration Methods and Technologies". International Journal of Computers and Communications, vol. 5, pp. 133-145, 2011.
[3]. "Business Apps Integration With Document Management". Internet: http://blog.yerbabuenasoftware.com/2012/02/business-apps-integration-with-document.html, Feb. 6, 2012 [Sep. 20, 2012].
[4]. Bakulin S. J. "An Approach to Application Integration Based on Data Coordination". Proceedings of the MEPHI (Moscow Engineering Physics Institution) 2007 Compilation of Scientific Works, vol. 2, pp. 15-16, 2007.
[5]. Skiba O. "Implementation of Electronic Document Management System: Risks and Risk Treatments". Internet: http://www.intertrust.ru/press_center/articles/view/751-vnedrenie-sistemy-elektronnogodokumentooborota-riski.htm, [Oct. 10, 2012].
[6]. Boychenko A. V., Kondratyev V. K. "Models of Profiled Choices for the Integration of Open Information Systems". Proceedings of the MEPHI (Moscow Engineering Physics Institution) 2004 Compilation of Scientific Works, vol. 2, pp. 140-141, 2004.
[7]. Allen D. Enterprise Content Management Best Practices: ECM Strategy, 100 Most Asked Questions. Emereo Publishing, 2008, pp. 188.
[8]. Ray R. Enterprise Resource Planning. New Delhi: Tata McGraw-Hill Education Private Limited, 2011, pp. 602.
[9]. Wangler B., Paheerathan S.J. "Horizontal and Vertical Integration of Organizational IT Systems". In Information Systems Engineering: State of the Art and Research Themes, 2000, pp. 6.
[10]. Gupta A., Malik A. Management Information Systems. New Delhi: Firewall Media, 2005, pp. 400.
[11]. Kolyesov A. "Russian Market of DMS: What It Was and What It Will Be". PC Week/RE, vol. 2, pp. 15-16, Feb. 2012.
[12]. Voskanyan M. "Electronic Document Circulation: Collateral Advantages". Internet: http://www.iemag.ru/analitics/detail.php?ID=16054, Aug. 9, 2005 [Oct. 10, 2012].
[13]. Ray R. Corporate Planning and Strategic Human Resources Management. Maharashtra: Nirali Prakashan, 2007, pp. 7.84.
[14]. Havaldar K.K. Business Marketing: Text & Cases, 3E. New Delhi: Tata McGraw-Hill Education Private Limited, 2010, pp. 572.
[15]. "Internet Explorer Add-on Management and Crash Detection". Internet: http://technet.microsoft.com/en-us/library/cc737458%28v=ws.10%29.aspx?ppud=4%29, [Sep. 22, 2012].

AUTHOR
Toms Leikums was born in Latvia in 1984. He received a professional Bachelor's degree in Programming from the Latvia University of Agriculture in 2006, and a professional Master's degree in International Project Management from the Latvia University of Agriculture in 2008. He is currently a 3rd year PhD student at the Latvia University of Agriculture, researching electronic document management.


SPECIAL DYNAMICAL SOLUTIONS OF THE WHEELS OF NEW DESIGNED CAR BODY


Khashayar Teimoori*,1,a and Muhammad Hassani2,b

1,2 Department of Mechanical and Aerospace Engineering, Science and Research Branch, Islamic Azad University, Tehran, Iran
a President, Head of the Non-linear Sciences Department, Backstretch*
b Team Consultant and Head of the Designing Department, Backstretch
Kh.teimoori@backstretch-Team.com, M.Hassani@backstretch-Team.com

ABSTRACT
This article presents a new design of a BMW car based on the BMW 6 Series 2012 coupe combined with the BMW I8 Concept. The proportion is a typical BMW proportion, the surfaces flow into a muscular form, and their interplay gives the car a strong stance on its wheels. The design includes new LED (light emitting diode) headlights, grill, greenhouse, character line, reflections and so on. At the end of the article the new BMW design is presented, and views from the front perspective as well as from the sides are illustrated. For the mechanical part of this study, the simplest vibration model representing the system is a linear, lumped-parameter, discrete system model, which requires considerable analytical and computational effort for systems with more than two degrees of freedom. Whereas the equations of motion for such design problems (e.g., the design of the facelift of the car or of the tires) are usually derived with Newtonian mechanics, this study introduces the reader to the analytical approach of Lagrange's equations.

KEYWORDS: Render, Side, Facelift, Reflection, Overhang, Lagrange's Equations

I.

INTRODUCTION

Considering the different perspectives of mechanical engineers and designers in the world of science, one of the major problems in connecting them is the limitation of engineering tools for modelling the extraordinary ideas produced by the human brain, which is called design. Any design that emanates from the brain of the designer has to be seen and considered in all aspects, and the details must be clarified so that they are conceivable to the experts. [1] The scientific utilization of technical apparatus is not wholly practical for a complete design, due to the limits of local human capabilities. This study introduces a newly designed car based on the frame of the BMW I8 concept combined with the 6 Series 2012 design, together with an analysis of the infrastructure of the main frame and a consideration of the vibrations described by the governing equations of motion of the crankshaft and axles, which end at the wheels. These outcomes are supported by precise calculations with Lagrange's equations. [2] The most conspicuous item within our survey is the DESIGN, which later led to precise calculations of the vibration equations and, consequently, to the coefficients C of the vibration equations, which were derived from the system itself, although at the beginning they could not have been technically and precisely expected. The hierarchy chart below illustrates the actual steps taken, in sequence.


[Chart boxes: Idea Survey; Design; Finalizing Survey; Searching and Seeking the Differences; Analysis; Analytical Vibrational Calculations on the System; Contrast to the Original Design; Rectifying; Analytical Accurate Research on the Latest Look from the Design]

Chart 1. Hierarchy of the sequential steps in promoting the latest BMW outfit

II.

PRIMARY DESIGN

This project is based on the BMW 6 Series 2012 coupe, designed by Mr. Nader Faghih Zadeh, and is inspired by the BMW I8. According to the new BMW designs, such as the 3 Series 2012 saloon and the I8, there is an empty space between the headlight and the grill (it is noticeable that this feature is observed in this article). The continuous distance between the headlights and grills is observed in this proposal, too. The character lines are taken from the I8 and the 6 Series. The main purpose of the design is a 6 Series facelift, in order to redesign it like the I8; therefore the use of conceptual forms (i.e. glass doors, glass bonnet) can be ignored.

Figure 1. BMW main lines (Designed by: BMW.Co.)

Respecting the BMW boundary conditions and considerations, such as the lines shown in Figure 1, the form of the grills and lights, the shark form of BMW cars and the proportions, this work presents a new design of a BMW. Seen from the side, the front of the car resembles the I8, and the back of the car resembles the 6 Series concept. To satisfy aerodynamic considerations, the main lines of the designed car are basically inspired by the 6 Series. This can be followed in the next view of the car in Figure 2.


Figure 2. Considering BMW main lines in project


Designed by: M.Hassani, Backstretch, Department of Designing and Analyzing, 2012

Moreover, the ratio of the car's length to its width is similar to the 6 Series and is set to 0.58. In addition to the facts mentioned above, the size of the air intake valve is equivalent to that of the 6 Series, which means that this car was designed for eight- and ten-cylinder engines. The primary approach to the new car body design is illustrated in Figure 3. Furthermore, the structure is designed in such a way that compartment intrusion, rarely seen in modern cars, is avoided.

Figure 3. Primary approach to the design
Designed by: M.Hassani, Backstretch, Department of Designing and Analyzing, 2012

Figure 3 illustrates the primary approach of the design process, included to make the process clear to the reader. From here, appropriate actions have been taken in the design, and a merger between the I8 and the 6 Series BMW designs is proposed. This is followed by Figure 4.

Figure 4. Start of designing with BMW 6 series 2012 coupe blueprints.


Designed by: M.Hassani, Backstretch, Department of Designing and Analyzing, 2012

In the primary design, the grills are wider and the distance between them is longer than usual. To create a slightly more shark-shaped form, the significant lines of the I8 are conservatively lapped together, as shown in Figure 5.

Figure 5. Shark shape form


Designed by: M.Hassani, Backstretch, Department of Designing and Analyzing, 2012

Applying this methodology, engineers can define the frame of a new car body while respecting the company standards. The principal benefit is the reduction of design development time, as the modification process is optimized. Furthermore, the design history is recorded, and the same parametric models can be re-used for several vehicle projects. The front end has cleaner lines, which converge towards the prominent front grille. The methodology was based not only on the primary design, but also on the concepts of archetype and modular platform. The vibration of the wheels of the designed car body is analysed, and Lagrange's equations are derived for it. Aerodynamic concepts also cannot be ignored. Car body design is a very critical and time-consuming activity because it is deeply linked to body style. The most frequent design modifications during the car development process, in fact, depend on style changes. For this purpose the BMW design approach is very attractive for designers, because it speeds up the updating of geometric features according to new style changes. In this way, designers do not waste time on boring and repetitive activities that add no value. The project goal was to develop a new BMW design methodology so as to: 1) reduce modification time during the development of a new car body; 2) reuse the same parametric models for future platforms. The archetype is a set of logical and parametric features of an object or system. With reference to these archetypes, it is technically possible to build the model of a single component or a whole system.

III.

FINAL ANALYSIS OF THE DESIGNED BODY

The design process has now reached the stage shown in Figure 6. As shown below, the final design is presented to illustrate the final perspective.

Figure 6. Complete view of the designed car with the concepts.


Designed by: M.Hassani, Backstretch, Department of Designing and Analyzing, 2012

All the specifications considered from the 6 Series and the I8 appear in the next figure. [3] With reference to BMW's models, all the aerodynamic lines at the sides, back and front, as well as the dimensions, were accurately calculated. The next views clarify all the differences between the designed car and the BMW design, as shown in Figure 7 below:

Figure 7. a) BMW Design and b) New design


Figure 8. Illustration of the main line related to the aerodynamic considerations.

The vibration analysis of the designed system becomes particularly noticeable at the next stage; the coefficients of Lagrange's equations were calculated and computed with a MATLAB program. All the programmed codes are collected in the Appendix. The mechanical concepts, with comparisons, are also completed in this research paper.

III.A

VIBRATION CONCEPTS IN MECHANICAL ENGINEERING SCIENCE

The modular platform is a platform that can be used for several types of vehicles with specific modifications. A car body, in fact, is usually made for only one vehicle with its specific platform, while a modular platform can be used for several cars (usually in the same car segments) and can be adapted to the current style. As mentioned in the last section, the base of the project is the 6 Series 2012 coupe. The wheelbase distance, height, length of the car overhangs and A-line are quite similar. [4]

Figure 9. Illustration of the side Meshing


Designed by: M.Hassani, Backstretch, Department of Designing and Analyzing, 2012

Figure 9 clearly illustrates the formation of the body side. In the final design, the meshing of the body side gives the best view of the final design process. [5]


Figure 10. Illustration of the body meshing Designed by: M.Hassani, Backstretch, Department of Designing and Analyzing, 2012

The formation of the whole body meshing is illustrated in Figure 10, and the distance between the wheels' axles, including the sketch, is illustrated in Figure 11. A great deal of work has been done in developing dynamic and vibration models of vehicle systems and comparing their simulated data between two cases. [6] When the rear tire goes over the bump, the vibrations produced by the front tire's passage over the bump (if they have not died out by the time the rear tire reaches the bump) are used as initial conditions, and the same SIMULINK model is used to obtain the new oscillation.

Figure 11. a) Wheels' axle distances; b) Oscillatory vibrations of the wheels

Neglecting the mass of the tires and the rolling motion of the vehicle, and combining the stiffness and damping effects of the tires and the suspension system into an equivalent damping and stiffness system, a preliminary model based on the bounce and pitch motions of the vehicle is considered.

III.B FORMULATION
The governing system of differential equations describing the bounce and pitch motions of the system shown in Figure 11 is obtained using Lagrange's equations. The generalized coordinates x(t) and θ(t) are used to describe the bounce and pitch motions of the auto body. The kinetic energy is described in Equation 1 as:

$T = \frac{1}{2} m \dot{x}^2 + \frac{1}{2} J \dot{\theta}^2$   (1)

The potential energy is described in Equation 2 as:

$U = \frac{1}{2} k_1 (x - l_1\theta - y_1)^2 + \frac{1}{2} k_2 (x + l_2\theta - y_2)^2$   (2)

Rayleigh's dissipation function describing viscous dissipation in the dampers is:

$Q = \frac{1}{2} c_1 (\dot{x} - l_1\dot{\theta} - \dot{y}_1)^2 + \frac{1}{2} c_2 (\dot{x} + l_2\dot{\theta} - \dot{y}_2)^2$   (3)

The Lagrangian L = T − U is evaluated from (1) and (2) and, together with (3), is substituted into (4) and (5) to obtain the equations of motion:

$\frac{d}{dt}\left(\frac{\partial L}{\partial \dot{x}}\right) - \frac{\partial L}{\partial x} + \frac{\partial Q}{\partial \dot{x}} = 0$   (4)

$\frac{d}{dt}\left(\frac{\partial L}{\partial \dot{\theta}}\right) - \frac{\partial L}{\partial \theta} + \frac{\partial Q}{\partial \dot{\theta}} = 0$   (5)

The application of Equations 4 and 5 yields:

$m\ddot{x} + (c_1+c_2)\dot{x} + (c_2 l_2 - c_1 l_1)\dot{\theta} + (k_1+k_2)x + (k_2 l_2 - k_1 l_1)\theta = k_1 y_1 + k_2 y_2 + c_1\dot{y}_1 + c_2\dot{y}_2$   (6)

$J\ddot{\theta} + (c_2 l_2 - c_1 l_1)\dot{x} + (c_1 l_1^2 + c_2 l_2^2)\dot{\theta} + (k_2 l_2 - k_1 l_1)x + (k_1 l_1^2 + k_2 l_2^2)\theta = k_2 l_2 y_2 - k_1 l_1 y_1 + c_2 l_2\dot{y}_2 - c_1 l_1\dot{y}_1$   (7)

The equations of motion can also be written in matrix form as:

$\begin{bmatrix} m & 0 \\ 0 & J \end{bmatrix}\begin{bmatrix} \ddot{x} \\ \ddot{\theta} \end{bmatrix} + \begin{bmatrix} c_1+c_2 & c_2 l_2 - c_1 l_1 \\ c_2 l_2 - c_1 l_1 & c_1 l_1^2 + c_2 l_2^2 \end{bmatrix}\begin{bmatrix} \dot{x} \\ \dot{\theta} \end{bmatrix} + \begin{bmatrix} k_1+k_2 & k_2 l_2 - k_1 l_1 \\ k_2 l_2 - k_1 l_1 & k_1 l_1^2 + k_2 l_2^2 \end{bmatrix}\begin{bmatrix} x \\ \theta \end{bmatrix} = \begin{bmatrix} k_1 & k_2 \\ -k_1 l_1 & k_2 l_2 \end{bmatrix}\begin{bmatrix} y_1 \\ y_2 \end{bmatrix} + \begin{bmatrix} c_1 & c_2 \\ -c_1 l_1 & c_2 l_2 \end{bmatrix}\begin{bmatrix} \dot{y}_1 \\ \dot{y}_2 \end{bmatrix}$   (8)


IV.

SOLUTIONS

The first task is to find the damped natural frequencies and the mode shapes of the damped system. To this end, the right-hand sides of equations (6) and (7) are set to zero [7], and a harmonic response is assumed, as shown in Figure 11.

Figure 11. Damping effects of tire and suspension system

The characteristic equation for the system is found by setting the determinant of the characteristic matrix to zero.

$\det\begin{bmatrix} m s^2 + (c_1+c_2)s + k_1 + k_2 & (c_2 l_2 - c_1 l_1)s + k_2 l_2 - k_1 l_1 \\ (c_2 l_2 - c_1 l_1)s + k_2 l_2 - k_1 l_1 & J s^2 + (c_1 l_1^2 + c_2 l_2^2)s + k_1 l_1^2 + k_2 l_2^2 \end{bmatrix} = 0$

From here on, MATLAB can be used to do the algebra and find the characteristic roots; please refer to the Appendix, where the C coefficients were obtained with the MATLAB program. [8]
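For completeness, the following MATLAB sketch (an illustration added here, not part of the original appendix) assembles the matrices of Equation (8) and obtains the damped characteristic roots from the equivalent state-space form; the parameter values are those listed in Section V:

% Damped characteristic roots of the 2-DOF bounce-pitch model (sketch).
m = 2500; J = 3500;                  % body mass [kg] and pitch inertia [kg*m^2]
k1 = 40000; k2 = 40000;              % equivalent spring rates [N/m]
c1 = 3000;  c2 = 3000;               % equivalent damping coefficients [N*s/m]
l1 = 1; l2 = 3;                      % axle distances from the centre of mass [m]
M = [m 0; 0 J];
C = [c1+c2, c2*l2-c1*l1; c2*l2-c1*l1, c1*l1^2+c2*l2^2];
K = [k1+k2, k2*l2-k1*l1; k2*l2-k1*l1, k1*l1^2+k2*l2^2];
A = [zeros(2), eye(2); -(M\K), -(M\C)];  % state matrix for z = [x; theta; x'; theta']
s = eig(A)                           % complex roots, s = -zeta*wn +/- j*wd
wd = abs(imag(s))                    % damped natural frequencies [rad/s]

The roots of the determinant above are exactly the eigenvalues of this state matrix, which avoids expanding the fourth-order characteristic polynomial by hand.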

V.

ANALYZING THE SPECIAL SOLUTIONS BY MATLAB SOFTWARE

In this analysis it is assumed that the rolling motion is negligible compared to the two other types of oscillatory motion. Neglecting the rolling motion and the mass of the tires, and combining the stiffness and damping effects of the tires and suspension system into an equivalent damping and stiffness system, a preliminary model for the automobile's suspension system is presented in Figure 11. Initial values for the respective inertias, damping coefficients and spring rates are as follows:

$m = 2500\ \mathrm{kg}$, $J = 3500\ \mathrm{kg\,m^2}$, $k_1 = k_2 = 40000\ \mathrm{N/m}$, $c_1 = c_2 = 3000\ \mathrm{N\,s/m}$, $l_1 = 1\ \mathrm{m}$ and $l_2 = 3\ \mathrm{m}$,

where m is the auto body's mass and J is its moment of inertia about the center of mass. [9] The car is assumed to be travelling at 50 km/h, and the road is sinusoidal in cross section with an amplitude of 10 mm and a wavelength of 7 m. Having run the program in MATLAB, the results clearly showed the best design of the tires and the distances between them. All these parameters affect the design of the car, and they are as important as the analytical solutions in describing the mechanical behaviour.
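A minimal MATLAB sketch of this simulation is given below. It is an illustration under the stated assumptions (in particular, the rear-wheel input is taken as the front input delayed by (l1 + l2)/V), not the authors' original program:

% Forced response of the bounce-pitch model to the sinusoidal road profile.
m = 2500; J = 3500; k1 = 40000; k2 = 40000; c1 = 3000; c2 = 3000; l1 = 1; l2 = 3;
M = [m 0; 0 J];
C = [c1+c2, c2*l2-c1*l1; c2*l2-c1*l1, c1*l1^2+c2*l2^2];
K = [k1+k2, k2*l2-k1*l1; k2*l2-k1*l1, k1*l1^2+k2*l2^2];
V  = 50/3.6;                    % vehicle speed [m/s]
w  = 2*pi*V/7;                  % excitation frequency for a 7 m wavelength [rad/s]
td = (l1+l2)/V;                 % delay between front and rear wheel inputs [s]
y1  = @(t) 0.010*sin(w*t);      y2  = @(t) 0.010*sin(w*(t-td));   % road inputs [m]
y1d = @(t) 0.010*w*cos(w*t);    y2d = @(t) 0.010*w*cos(w*(t-td)); % their rates
F = @(t) [k1*y1(t)+k2*y2(t)+c1*y1d(t)+c2*y2d(t);
          k2*l2*y2(t)-k1*l1*y1(t)+c2*l2*y2d(t)-c1*l1*y1d(t)];     % RHS of Eq. (8)
f = @(t,z) [z(3:4); M\(F(t) - C*z(3:4) - K*z(1:2))];
[t,z] = ode45(f, [0 10], zeros(4,1));
plot(t, z(:,1)), xlabel('t [s]'), ylabel('bounce x(t) [m]')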

VI.

CONCLUSIONS

Unfortunately there is a considerable distance between designers and analysts; however, the main idea of the design is exclusively inspired by, and then innovated from, the characteristics of the mechanical behaviour of the concepts. The designers, in fact, are the people who seek the best possible shape as well as the most creative model of design, whereas the analysts encounter many limitations that derive from converting the model into the real world. One case among the thousands of pursued research topics is the car and the behaviour of its structure. The vibration of the main body of the car is a very important issue in analysing the design; a relationship between the I8 design form and the 6 Series 2012 coupe is considered and designed, and the result is proposed as the next-generation model of the BMW 6 Series as an aid to manufacturing.

VII.

RECOMMENDATIONS AND FUTURE SCOPE

Owing to the changes connected with the events that have taken place over the last ten years, the main feature that distinguishes this article from others is the technically assumed relation between design and the science of mechanics. We believe that the major problematic issue for mechanical engineers and analysts is the lack of sufficient tools: an excess of workable ideas contrasts with a shortage of the necessary applications. Rectifying these tool imperfections is globally believed to be a time-consuming process, which will nevertheless be solved in due course.

APPENDIX
A.
MATLAB CODE

MATLAB code can now be used to obtain the damped natural frequencies and the mode shapes:

% Calculating eigenvalues and eigenvectors
m = 2000; J = 3500;
k1 = 30000; k2 = 30000;
l1 = 1; l2 = 3;
% Establishing the mass matrix and the stiffness matrix
M = [m 0; 0 J];
K = [k1+k2, k2*l2-k1*l1; k2*l2-k1*l1, k2*l2^2+k1*l1^2];
% Calling function "eig" to obtain natural frequencies and mode shapes
[u, lamda] = eig(K, M);
fprintf('\n')
disp('Natural frequencies are:')
w = sqrt(lamda)     % print the natural frequencies
fprintf('\n')
disp('Mode shapes are:')
fprintf('\n')
disp('u=')
fprintf('\n')
disp(u)

B. NOTATIONS AND EXCESS FORMULAS

The first time span is the period during which the automobile's front tire is in contact with the bump:

$0 \le t \le t_1$

where $t_1$ is set by the bump width and the speed V of the automobile. The other time span is when the rear tire is in contact with the speed bump:

$t_2 \le t \le t_2 + t_1$

where $t_2 = (l_1 + l_2)/V$ is the time it takes for the rear tire to reach the trough of the bump. The non-dimensionalized times, defined as the ratio of the time for each tire to pass the bump over the natural periods of the vibrations, are:

$\tau_1 = \frac{t_1}{T_1} = \frac{\omega_{n1}\, t_1}{2\pi}$   and   $\tau_2 = \frac{t_1}{T_2} = \frac{\omega_{n2}\, t_1}{2\pi}$

C. MATLAB SIMULINK MODEL

First, the speed bump is modeled as the algebraic sum of two sine waves, one starting $t_1$ seconds later. The frequency of the sine waves is $\pi/t_1$. That is:

$y(t) = 0.08\sin\!\left(\frac{\pi t}{t_1}\right) u(t) + 0.08\sin\!\left(\frac{\pi (t - t_1)}{t_1}\right) u(t - t_1)$

where u(t) is the step function at t = 0. The SIMULINK implementation of this bump signal is shown above.
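The same bump signal can also be generated directly in MATLAB without SIMULINK. The sketch below is illustrative only; the value of t1 is assumed, since it depends on the bump width and the vehicle speed:

% Speed bump as the algebraic sum of two half-sine waves (illustrative sketch).
t1 = 0.5;                               % time the tire is on the bump [s] (assumed)
t  = 0:0.001:2;
u  = @(t) double(t >= 0);               % unit step function
y  = 0.08*sin(pi*t/t1).*u(t) + 0.08*sin(pi*(t-t1)/t1).*u(t-t1);
plot(t, y), xlabel('t [s]'), ylabel('y(t) [m]')   % bump of height 0.08 m, zero after t1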

ACKNOWLEDGEMENT
We appreciate Mr Ali Sahebi for his contributions, both in revising the project and in consulting as the chief executive of international affairs at Backstretch. He has been very supportive in sharing thoughts on editing as well as workable ideas on the project.


REFERENCES
[1]. SFE-Concept. http://sfe1.extern.tu-berlin.de/concept/concept.html, SFE GmbH, Voltastrasse 5, D-13355 Berlin, Germany.
[2]. Inman, Daniel J., Engineering Vibrations, 2/E, Prentice Hall, 2001.
[3]. Ullman, D.G., The Mechanical Design Process, 2nd ed., McGraw-Hill, 1997.
[4]. L. Morello, F. D'Aprile, "Associative CAD in Vehicle Development through Simultaneous Engineering", ATA, Firenze, February 1997.
[5]. Bennett, Jeff (14 October 2010). "BMW to Expand Plant in South Carolina". The Wall Street Journal: p. B5.
[6]. J. Hoschek, D. Lasser, Computer Aided Geometric Design, AK Peters, Wellesley, Massachusetts, 1993.
[7]. Tongue, Benson, Principles of Vibrations, 2/E, Oxford, 2002.
[8]. Haraguchi et al., "Technique for Estimating Road Surface Input by ADAMS Simulation", Nissan Technical Review, No. 28, pp. 62-68 (1990).
[9]. A. Mohammadzadeh, S. Heidar, 2006-940: "Analysis and Design of Vehicle Suspension System using Matlab and Simulink", Grand Valley State University.

AUTHORS
Khashayar Teimoori was born in Tehran, Iran, in 1992. He is currently pursuing the final year of a B.Sc. degree in mechanical engineering at IAU, Science and Research Branch, Tehran, Iran. He is a member of technical societies such as ASME, AMS, ISME and IMS, and he is the president of Backstretch* (Academy of Sciences). His special interests are analytical and computational methods in chaos and fluid behaviour, and solving problems in nonlinear equations and complicated structures.

Muhammad Hassani was born in Behshahr, Iran, in 1990. He is currently pursuing a B.Tech. degree in mechanical engineering at the Department of Mechanical and Aerospace Engineering, IAU, Science and Research Branch. He currently works as the head of the designing department in the Backstretch team. His special interests are animation, photography and car design.


AN ASSESSMENT OF DISTRIBUTED GENERATION ISLANDING DETECTION METHODS


Chandra Shekhar Chandrakar, Bharti Dewani, Deepali Chandrakar
Department of Electrical and Electronics Engineering Chhattisgarh Swami Vivekananda University Raipur (C.G.), India

ABSTRACT
Advancements in new technologies like fuel cells and wind turbines, together with customer demands for better power quality and reliability, are forcing the power industry to shift towards distributed generation. Hence distributed generation (DG) has recently gained a lot of momentum in the power industry due to market deregulation and environmental concerns. Islanding occurs when a portion of the distribution system becomes electrically isolated from the remainder of the power system yet continues to be energized by distributed generators. An important requirement for interconnecting a DG to the power distribution system is the capability of the DG to detect islanding. Failure to trip islanded generators can lead to a number of problems for the generators and the connected loads. Typically, a distributed generator should be disconnected within 100 to 300 ms after loss of the main supply. To achieve such a goal, each distributed generator must be equipped with an islanding detection device, also called an anti-islanding device. This paper discusses the relevant issues and aims regarding the existing techniques used for islanding detection.

KEYWORDS: Islanding detection, distributed generation, remote techniques, interconnected system, non-detection zone, etc.

I.

INTRODUCTION

These days, electric power utilities are looking to distributed generators, including photovoltaic, wind farm, fuel cell, micro-turbine and internal combustion engine generators, as good alternatives for solving environmental problems and coping with rising energy prices and power plant construction costs. Distributed generation (DG) may contribute to improving the quality of power, minimizing peak loads and eliminating the need for reserve margin [1], [2]. Most DGs may be connected in parallel and supply power into power grids as well as local loads. Therefore, DG must be operated in such an inherently safe manner that it supplies the generated power to the network loads only if the utility power supply is present. If a DG feeds power into the network without the utility supply, it produces several negative impacts on the utility power system and on the DG itself, such as safety hazards to utility personnel and the public, quality problems in the electric service to utility customers, and serious damage to the DG if utility power is wrongly restored [2], [3]. Therefore, during interruptions of utility power, the connected DG must detect the loss of utility power and disconnect itself from the power grid as soon as possible. This paper deals with a particular problem that occurs at the interface between a distributed generation plant and the rest of the power system. The problem can be described as islanding detection in power systems, and it has been investigated and discussed extensively in the last few years.

Recent interest in installing distributed generators on low-voltage busses near electrical consumers has created new challenges for protection engineers that differ from traditional radially based protection methodologies. This paper includes a detailed study of the different existing techniques used for islanding detection of distributed generation, which are broadly classified into remote detection techniques and local detection techniques.

II.

ISLANDING

Islanding is the situation in which a distribution system becomes electrically isolated from the remainder of the power system yet continues to be energized by the DG connected to it, as shown in Figure 1. Traditionally, a distribution system does not have any active power generating source in it and does not receive power in case of a fault in the upstream transmission line, but with DG this presumption is no longer valid. Current practice is that almost all utilities require DG to be disconnected from the grid as soon as possible in case of islanding. The IEEE 929-1988 standard [3] requires the disconnection of DG once it is islanded. Islanding can be intentional or non-intentional. During maintenance service on the utility grid, the shutdown of the grid may cause islanding of generators; as the loss of the grid is voluntary, the islanding is known. Non-intentional islanding, caused by an accidental shutdown of the grid, is of more interest, as various issues arise with unintentional islanding. The IEEE 1547-2003 standard [4] stipulates a maximum delay of 2 seconds for the detection of an unintentional island, with all DGs ceasing to energize the distribution system.

Figure 1. Scenario of islanding operation

2.1 Issues with Islanding


Although there are some benefits of islanding operation, there are some drawbacks as well. Some of them are as follows:
- Line worker safety can be threatened by DG sources feeding a system after primary sources have been opened and tagged out.
- The voltage and frequency may not be maintained within a standard permissible level.
- The islanded system may be inadequately grounded by the DG interconnection.
- Instantaneous reclosing could result in out-of-phase reclosing of the DG. As a result, large mechanical torques and currents are created that can damage the generators or prime movers [5]. Transients are also created, which are potentially damaging to utility and other customer equipment.
- Out-of-phase reclosing, if it occurs at a voltage peak, will generate a very severe capacitive switching transient, and in a lightly damped system the crest over-voltage can approach three times the rated voltage [6].
- Further risks include the degradation of the electric components as a consequence of voltage and frequency drifts.
Due to these reasons, it is very important to detect islanding quickly and accurately.

III.

REVIEW OF ISLANDING DETECTION TECHNIQUES

The main philosophy of detecting an islanding situation is to monitor the DG output parameters and/or system parameters and to decide from changes in these parameters whether or not an islanding situation has occurred. Islanding detection techniques can be divided into remote and local techniques, and local techniques can further be divided into passive, active and hybrid techniques, as shown in Figure 2.

Figure 2. Islanding detection techniques

3.1 Remote Islanding Detection Techniques


Remote islanding detection techniques are based on communication between utilities and DGs. Although these techniques may have better reliability than local techniques, they are expensive to implement and hence uneconomical. Some of the remote islanding detection techniques are as follows:

3.1.1 Power Line Signaling Scheme
These methods use the power line as a carrier to transmit islanded or non-islanded information on the power lines. The apparatus includes a signal generator at the substation (25+ kV) that is coupled into the network, where it continually broadcasts a signal, as shown in Figure 3. Due to the low-pass filter nature of a power system, the signal needs to be transmitted near or below the fundamental frequency and must not interfere with other carrier technologies such as automatic meter reading. Each DG is then equipped with a signal detector to receive this transmitted signal. Under normal operating conditions, the signal is received by the DG and the system remains connected. However, if an island state occurs, the transmitted signal is cut off because the substation breaker opens, and the signal cannot be received by the DG, hence indicating an island condition.

Figure 3. Distributed Generation power line Signaling Islanding Detection
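At the DG end, the detection logic of this scheme reduces to a carrier watchdog. The following MATLAB sketch is an illustration only (the carrier trace and the timeout value are invented for the example), not a cited implementation:

% Power line signaling, DG side: declare an island when the substation
% carrier has been absent for longer than a timeout.
dt = 0.01; timeout = 0.2;                         % sample step and allowed gap [s]
t  = 0:dt:2;
carrier = [true(1,120), false(1,numel(t)-120)];   % example: carrier lost at t = 1.2 s
lastSeen = 0;
for k = 1:numel(t)
    if carrier(k), lastSeen = t(k); end
    if t(k) - lastSeen > timeout
        fprintf('Island declared at t = %.2f s\n', t(k));
        break
    end
end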

This method has the advantage of simplicity of control and reliability. In a radial system only one transmitting generator is needed, and it can continuously relay a message to many DGs in the network. The only times the message is not received are when the interconnecting breaker has been opened, or when a line fault corrupts the transmitted signal. There are also several significant disadvantages to this method, the first being the practical implementation. To connect the device to a substation, a high-voltage to low-voltage coupling transformer is required. A transformer of this voltage capacity can carry prohibitive costs that may be especially undesirable for the first DG system installed in the local network. Another disadvantage arises if the signaling method is applied in a non-radial system, resulting in the use of multiple signal generators.

This scenario can be seen in Figure 4, where three feeder busses connect to one island bus. The implementation of this system, as opposed to a simple radial system, will cost up to three times as much.

Figure 4. Distributed Generation Multi Power Line Signaling Islanding Detection Issue

Another problem for power line communication is the complexity of the network and of the affected networks. A perfectly radial network with one connecting breaker is a simple example of island signaling; however, in more complex systems with multiple utility feeders, differentiation between upstream breakers may be difficult.

3.1.2 Transfer Trip Scheme
The basic idea of the transfer trip scheme is to monitor the status of all the circuit breakers and reclosers that could island a distribution system. Supervisory Control and Data Acquisition (SCADA) systems can be used for this. When a disconnection is detected at the substation, the transfer trip system determines which areas are islanded and sends the appropriate signal to the DGs to either remain in operation or discontinue operation. Transfer trip has the distinct advantage, similar to the power line carrier signal, of being a very simple concept. With a radial topology that has few DG sources and a limited number of breakers, the system state can be sent to the DG directly from each monitoring point. This is one of the most common schemes used for islanding detection [7], and it can be seen in Figure 5.

Figure 5. Distributed Generation Transfer Trip Islanding Detection

The weaknesses of the transfer trip system relate to cost and control as system complexity grows. As a system grows in complexity, the transfer trip scheme may become obsolete and need relocation or updating. Reconfiguration of this device in the planning stages of the DG network is necessary in order to consider whether the network is expected to grow or whether many DG installations are planned. The other weakness of this system is control: as the substation gains control of the DG, the DG may lose control over its power producing capability, and special agreements may be necessary with the utility. If the transfer trip method is implemented correctly in a simple network, there are no non-detection zones of operation.

3.2 Local Detection Techniques


Local techniques are based on the measurement of system parameters at the DG site, such as voltage, frequency, etc. They are further classified as follows:

3.2.1 Passive Detection Techniques
Passive methods work by measuring system parameters such as variations in voltage, frequency, harmonic distortion, etc. These parameters vary greatly when the system is islanded. Differentiation between an islanded and a grid-connected condition is based upon the thresholds set for these parameters. Special care should be taken when setting the threshold values so as to differentiate islanding from other disturbances in the system. Passive techniques are fast and do not introduce disturbances into the system, but they have a large non-detection zone (NDZ) in which they fail to detect the islanding condition. There are various passive islanding detection techniques, some of which are as follows:
(a) Rate of change of output power: The rate of change of output power, dP/dt, at the DG side, once it is islanded, will be much greater than the rate of change of output power before the DG is islanded, for the same rate of load change [8]. It has been found that this method is much more effective when the distribution system with DG has unbalanced rather than balanced load. [9]
(b) Rate of change of frequency: The rate of change of frequency, df/dt, will be very high when the DG is islanded. The rate of change of frequency (ROCOF) can be given by equation (1) [10]:

$\frac{df}{dt} = \frac{\Delta P \, f}{2HG}$   (1)

where ΔP is the power mismatch at the DG side, H is the moment of inertia of the DG/system, and G is the rated generation capacity of the DG/system. Large systems have large H and G, whereas small systems have small H and G, giving a larger value of df/dt. A ROCOF relay monitors the voltage waveform and will operate if the ROCOF is higher than the setting for a certain duration of time (a sketch of such relay logic follows this list). The setting has to be chosen in such a way that the relay will trigger for an island condition but not for load changes. This method is highly reliable when there is a large mismatch in power, but it fails to operate if the DG's capacity matches its local loads. However, an advantage of this method, along with the rate of change of power algorithm, is that even when they fail to operate because load matches the DG's generation, any subsequent local load change would generally lead to islanding being detected as a result of the load and generation mismatch in the islanded system.
(c) Rate of change of frequency over power: df/dP in a small generation system is larger than that of a power system with larger capacity. The rate of change of frequency over power uses this concept to determine the islanding condition. Furthermore, test results have shown that for a small power mismatch between the DG and local loads, the rate of change of frequency over power is much more sensitive than the rate of change of frequency over time [11].
(d) Voltage unbalance: Once islanding occurs, the DG has to take charge of the loads in the island. If the change in loading is large, then islanding conditions are easily detected by monitoring several parameters: voltage magnitude, phase displacement and frequency change. However, these methods may not be effective if the changes are small. As distribution networks generally include single-phase loads, it is highly possible that islanding will change the load balance of the DG. Furthermore, even though the change in DG loads may be small, voltage unbalance will occur due to the change in network condition. [12-13]
(e) Harmonic distortion: Changes in the amount and configuration of load might result in different harmonic currents in the network, especially when the system has inverter-based DGs. One approach to detecting islanding is to monitor the change in the total harmonic distortion (THD) of the terminal voltage at the DG before and after the island is formed [14]. The change in the third harmonic of the DG's voltage also gives a good picture of when the DG is islanded.
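As an illustration of the ROCOF relay logic in item (b), the following MATLAB sketch (a generic example with a synthetic frequency trace, not a particular commercial relay) trips when |df/dt| exceeds a threshold for a sustained duration:

% ROCOF relay sketch: trip on sustained rate of change of frequency.
fs = 1000;                          % sampling rate of the frequency estimate [Hz]
t  = 0:1/fs:1;
f  = 50 - 2*(t-0.4).*(t > 0.4);     % example trace: frequency falls after t = 0.4 s
rocof = gradient(f, 1/fs);          % df/dt [Hz/s]
thr = 0.5; thold = 0.1;             % threshold [Hz/s] and required duration [s]
N = round(thold*fs);
above = abs(rocof) > thr;
k = find(conv(double(above), ones(1,N), 'valid') >= N, 1);   % first sustained window
if ~isempty(k), fprintf('ROCOF trip at t = %.3f s\n', t(k+N-1)); end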
3.2.2 Active Detection Techniques
With active methods, islanding can be detected even under a perfect match of generation and load, which is not possible with the passive detection schemes. Active methods directly interact with the power system operation by introducing perturbations. The idea of an active detection method is that this small perturbation will result in a significant change in system parameters when the DG is islanded, whereas the change will be negligible when the DG is connected to the grid.

(a) Reactive power export error detection: In this scheme, the DG generates a level of reactive power flow at the point of common coupling (PCC) between the DG site and the grid [15], or at the point where the Reed relay is connected [16]. This power flow can only be maintained when the grid is connected. Islanding is detected if the level of reactive power flow is not maintained at the set value. For a synchronous-generator-based DG, islanding can be detected by increasing the internal induced voltage of the DG by a small amount from time to time and monitoring the change in voltage and reactive power at the terminal where the DG is connected to the distribution system. A large change in the terminal voltage, with the reactive power remaining almost unchanged, indicates islanding. [17] The major drawbacks of this method are that it is slow and that it cannot be used in systems where the DG has to generate power at unity power factor.
(b) Phase (or frequency) shift methods: Measurement of the relative phase shift can give a good idea of when an inverter-based DG is islanded. A small perturbation is introduced in the form of a phase shift. When the DG is grid-connected, the frequency will be stabilized. When the system is islanded, the perturbation will result in a significant change in frequency. The Slip-Mode Frequency Shift algorithm (SMS) [18] uses positive feedback, which changes the phase angle of the inverter current with respect to the deviation of the frequency at the PCC. The SMS curve is given by equation (2):

$\theta_{SMS} = \theta_m \sin\!\left(\frac{\pi}{2}\cdot\frac{f - f_n}{f_m - f_n}\right)$   (2)

where θm is the maximum phase shift, which occurs at the frequency fm; fn is the nominal frequency, and f is the frequency in the previous cycle. The SMS curve is designed in such a way that its slope is greater than that of the phase of the load in the unstable region. An SMS curve with θm = 10° and fm = 53 Hz is shown in Figure 6. When the utility is disconnected, operation will move through the unstable region towards a stable operating point (denoted by black dots in Figure 6). Islanding is detected when the inverter frequency exceeds the setting.

Figure 6. Phase response of DG and local load

This detection scheme can be used in a system with more than one inverter-based DG. The drawback of this method is that islanding can go undetected if the slope of the phase of the load is higher than that of the SMS line, as there can then be stable operating points within the unstable zone [19].
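The shape of this curve is easy to reproduce. The short MATLAB sketch below plots Equation (2) with the settings quoted above; the nominal frequency fn = 50 Hz is an assumption here, as the text does not state it explicitly:

% SMS curve of Eq. (2): inverter current phase shift versus frequency.
fn = 50; fm = 53; thm = 10;              % assumed fn; fm and theta_m as in the text
f  = 46:0.01:54;                         % frequency at the PCC [Hz]
theta = thm*sind(90*(f - fn)/(fm - fn)); % phase shift [deg]
plot(f, theta), xlabel('f [Hz]'), ylabel('SMS phase shift [deg]')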

3.3 Hybrid Detection Schemes


Hybrid methods employ both active and passive detection techniques. The active technique is implemented only when islanding is suspected by the passive technique. Some of the hybrid techniques are discussed as follows:
(a) Technique based on positive feedback (PF) and voltage imbalance (VU): This islanding detection technique uses PF (an active technique) and VU (a passive technique). The main idea is to monitor the three-phase voltages continuously to determine the VU [20], which is given by equation (3):

$VU = \frac{V_{Sq}^{-}}{V_{Sq}^{+}} \times 100$   (3)

where $V_{Sq}^{+}$ and $V_{Sq}^{-}$ are the positive- and negative-sequence voltages, respectively (a minimal sketch of this VU computation is given at the end of this subsection). Voltage spikes will be observed for load changes, islanding, switching actions, etc. Whenever a VU spike is above the set value, the frequency set point of the DG is changed. The system frequency will change if the system is islanded.

(b) Technique based on voltage and reactive power shift: In this technique the voltage variation over time is measured to obtain a covariance value (passive), which is used to initiate an active islanding detection technique, the adaptive reactive power shift (ARPS) algorithm [21], given by equation (4):

$\mathrm{Cov}(T_v, T_{av}') = E\big[(T_v - U_v)(T_{av}' - U_{av})\big]$   (4)

where Tav' is the average of the previous four voltage periods, Uav is the mean of Tav', Tv is the voltage period, and Uv is the mean of Tv. The ARPS uses the same mechanism as ALPS, except that it uses a d-axis current shift instead of a current phase shift. The d-axis current shift, or reactive power shift, is given by equation (5).

Kd is chosen such that the d-axis current variation is less than 1 percent of the q-axis current in the inverter's normal operation. The additional d-axis current, applied after islanding is suspected, accelerates the phase shift action, which leads to a fast frequency shift when the DG is islanded. There is no single islanding detection technique that works satisfactorily for all systems under all situations. The choice of islanding detection technique largely depends on the type of DG and on the system characteristics. Recently, hybrid detection techniques have been proposed, and the hybrid approach appears to be the way forward: the passive technique detects islanding when the change in a system parameter is large, and it initiates the active technique when the change in the system parameter is not large enough for the passive technique to provide absolute discrimination.
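The sequence voltages in Equation (3) follow from the symmetrical-component (Fortescue) transformation. A minimal MATLAB sketch, using invented example phasors, is:

% Voltage unbalance of Eq. (3) from three-phase voltage phasors.
a  = exp(1j*2*pi/3);                                          % Fortescue operator
Va = 230; Vb = 224*exp(-1j*2*pi/3); Vc = 238*exp(1j*2*pi/3);  % example phasors [V]
Vpos = (Va + a*Vb + a^2*Vc)/3;      % positive-sequence voltage
Vneg = (Va + a^2*Vb + a*Vc)/3;      % negative-sequence voltage
VU = abs(Vneg)/abs(Vpos)*100        % voltage unbalance [%]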

IV.

COMPARISONS OF ISLANDING DETECTION TECHNIQUES


Table 1. Comparisons of Islanding Detection Techniques.

Remote techniques:
- Advantages: Highly reliable.
- Disadvantages: Expensive to implement, especially for small systems.
- Examples: Transfer trip scheme; power line signaling scheme.

Local passive techniques:
- Advantages: Short detection time; do not perturb the system; accurate when there is a large mismatch between generation and demand in the islanded system.
- Disadvantages: Difficult to detect islanding when the load and generation in the islanded system closely match; special care has to be taken while setting the thresholds; if the setting is too aggressive, it could result in nuisance tripping.
- Examples: Rate of change of output power scheme; rate of change of frequency scheme; rate of change of frequency over power scheme; change of impedance scheme; voltage unbalance scheme; harmonic distortion scheme.

Local active techniques:
- Advantages: Can detect islanding even with a perfect match between generation and demand in the islanded system (small NDZ).
- Disadvantages: Introduce a perturbation into the system; detection time is slow as a result of the extra time needed to observe the system response to the perturbation; the perturbation often degrades power quality and, if significant enough, may degrade system stability even when connected to the grid.
- Examples: Reactive power export error detection scheme; impedance measurement scheme; phase (or frequency) shift schemes (such as SMS, AFD, AFDPF and ALPS).

Hybrid techniques:
- Advantages: Small NDZ; perturbation is introduced only when islanding is suspected.
- Disadvantages: Islanding detection time is prolonged, as both passive and active techniques are implemented.
- Examples: Technique based on positive feedback and voltage imbalance; technique based on voltage and reactive power shift.

V.

CONCLUSION

This paper describes and compares different islanding detection techniques. Fast and accurate detection of islanding is one of the major challenges in today's power system, with many distribution systems already having significant penetration of DG, as there are still a few issues to be resolved with islanding. Islanding detection is also important because islanding operation of a distribution system is seen as a viable option in the future to improve the reliability and quality of supply.

ACKNOWLEDGEMENT
Heartfelt thanks to my guide, Prof. Bharti Dewani, who not only helped me in my research but also enhanced my knowledge in the field of power systems and encouraged me to prepare and publish this paper. I would also like to thank all my colleagues and friends for their kind support and cooperation.

REFERENCES
[1] P. P. Barker and R.W. de Mello, Determining the Impact of Distributed Generation on Power Systems: Part1-Radial Distribution Systems,IEEE, 0780364201/00, 2000. [2] T. Ackermann, G. Andersson, and L. Soder, Electricity Market Regulations and their Impact on Distributed Generatiron, IEEE, 078035902-X/00, 2000. [3] O. Usta and M. A. Refern, Protection of Dispersed Storage and Generation Units Against Islanding, IEEE, 0780317726/94, 1994 [4] Recommended Practice for Utility Interconnected Photovoltaic (PV) Systems, IEEE Standard 929-2000, 2000. [5] IEEE Standard for Interconnecting Distributed Resources into Electric Power Systems, IEEE Standard 1547TM, June 2003. [6] R. A. Walling, and N. W. Miller, Distributed generation islanding implications on power system dynamic performance, IEEE Power Engineering Society Summer Meeting, vol.1, pp. 92-96, 2002. [7] A. Greenwood, Electrical Transients in Power Systems, New York: Wiley, 1971, pp. 83. [8] Ward Bower and Michael Ropp. Evaluation of islanding detection methods for photovoltaic utilityinteractive power systems. Report IEA PVPS Task 5 IEA PVPS T5-09: 2002, Sandia National Laboratories Photovoltaic Systems Research and Development, March 2002. [9] M. A. Redfern, J. I. Barren, and O. Usta, A new microprocessor based islanding protection algorithm for dispersed storage and generation, units, IEEE Trans. Power Delivery, vol. 10, no. 3, pp. 1249-1254, July 1995. [10] J. Warin, and W. H. Allen, Loss of mains protection, in Proc. 1990 ERA Conference on Circuit Protection for industrial and Commercial Installation, London, UK, pp. 4.3.1-12. [11] F. Pai, and S. Huang, A detection algorithm for islanding-prevention of dispersed consumer-owned storage and generating units, IEEE Trans. Energy Conversion, vol. 16, no. 4, pp. 346-351, 2001. [12] S. I. Jang, and K. H. Kim, A new islanding detection algorithm for distributed generations interconnected with utility networks, in Proc.IEEE International Conference on Developments in Power System Protection, vol.2, pp. 571-574, April 2004. [13] S. I. Jang, and K. H. Kim, An islanding detection method for distributed generations using voltage unbalance and total harmonic distortion of current, IEEE Tran. Power Delivery, vol. 19, no. 2, pp. 745-752, April 2004. [14] S. Jang, and K. Kim, Development of a logical rule-based islanding detection method for distributed resources, in Proc. IEEE Power Engineering Society Winter Meeting, vol. 2, pp. 800-806, 2002. [15] J. Warin, and W. H. Allen, Loss of mains protection, in Proc. 1990 ERA Conference on Circuit Protection for industrial and Commercial Installation, London, UK, pp. 4.3.1-12. [16] P. D. Hopewell, N. Jenkins, and A. D. Cross, Loss of mains detection for small generators, IEEE Proc. Electric Power Applications, vol. 143, no. 3, pp. 225-230, May 1996. [17] J. E. Kim, and J. S. Hwang, Islanding detection method of distributed generation units connected to power distribution system, in Proc. 2000 IEEE Power System Technology Conference, pp. 643-647.

225

Vol. 5, Issue 1, pp. 218-226

International Journal of Advances in Engineering & Technology, Nov. 2012. IJAET ISSN: 2231-1963
[18] G. A. Smith, P. A. Onions, and D. G. Infield, Predicting islanding operation of grid connected PV inverters, IEE Proc. Electric Power Applications, vol. 147, pp. 1-6, Jan. 2000. [19] M. E. Ropp, M. Begovic, A. Rohatgi, G. Kern, and R. Bonn, Determining the relative effectiveness of islanding detection methods using phase criteria and non-detection zones, IEEE Transaction on Energy Conversion, vol. 15, no. 3, pp. 290-296, Sept. 2000. [20] V. Menon, and M. H. Nehrir, A hybrid islanding detection technique using voltage unbalance and frequency set point, IEEE Tran. Power Systems, vol. 22, no. 1, pp. 442-448, Feb. 2007. [21] J. Yin, L. Chang, and C. Diduch, A new hybrid anti-islanding algorithm in grid connected three-phase inverter system, 2006 IEEE Power Electronics Specialists Conference, pp. 1-7. [22] El-Arroudi, K. Intelligent-Based Approach to Islanding Detection in Distributed Generation, Power Delivery, IEEE Transactions on, Volume: 22 , Issue: 2 , April 2007 [23] Jun Yin ,Liuchen Chang ; Diduch, C. Recent developments in islanding detection for distributed power generation, Power Engineering, 2004. LESCOPE-04. 2004 Large Engineering systems Conference on 28-30 July 200424]Mahat, P. , Review of islanding detection methods for distributed generationElectric Utility Deregulation and Restructuring and Power Technologies, 2008. DRPT 2008. Third International Conference on 6-9 April 2008, [25] Cheng-Tao Hsieh ,Jeu-Min Lin ,Shyh-Jier Huang, Enhancement of islanding-detection of distributedgeneration systems via wavelet transform-based approaches, International Journal of Electrical Power & Energy Systems,Volume 30, Issue 10, December 2008\ [26] Xiaolin Ding , Peter.A.Crossley , D.John.Morrow , Islanding Detection for Distributed Generation, Journal of Electrical Engineering & Technology Vol.2 No.1 , 2007.3, 19-28

AUTHORS
Chandra Shekhar Chandrakar was born in Raipur, Chhattisgarh on 6 October 1987. He received his B.E. in Electrical and Electronics Engineering from DIMAT Raipur, Chhattisgarh, India in 2009, and he is currently an M.Tech. student at Disha Institute of Management and Technology, Raipur, Chhattisgarh. His special field of interest is power systems.

Bharti Dewani received her B.E. (Electrical) degree from NIT, Raipur, India in 2007 and her M.E. (Power System Engg.) from SSCET, Bhilai in 2010. She has been working as a Senior Lecturer in the Department of Electrical & Electronics Engineering (DIMAT, Raipur) since 2007. She is currently pursuing a Ph.D. from Dr. C.V. Raman University. Her fields of interest are power system restructuring and power system optimization.

Deepali Chandrakar was born in Raipur, Chhattisgarh on 28 October 1988. She received her B.E. in Electrical and Electronics Engineering from Government Engineering College Raipur, Chhattisgarh, India in 2010, and she is currently an M.Tech. student at Disha Institute of Management and Technology, Raipur, Chhattisgarh. Her special fields of interest include control systems and power electronics.


STUDY OF WIDELY USED TREATMENT TECHNOLOGIES FOR HOSPITAL WASTEWATER AND THEIR COMPARATIVE ANALYSIS
Jafrudeen¹ and Naved Ahsan²
¹Ph.D. Scholar, Department of Civil Engineering, Jamia Millia Islamia, New Delhi, India; Director - Technical, Rivulet Engineering Consultancy Private Limited, New Delhi, India; Technical Associate (PHE, FPS, WWTS), ABL Hospitech Private Limited, New Delhi, India
²Associate Professor, Department of Civil Engineering, Jamia Millia Islamia, New Delhi, India

ABSTRACT
Hospital wastewater may contain various potentially hazardous materials, and indeed it may have an adverse impact on the environment and human health. Therefore, the selection of a suitable treatment technology and proper treatment of hospital wastewater are essential. Various studies and research works reveal that the quality of hospital wastewater is similar to the medium-strength values of domestic wastewater. The discharge standards for hospital wastewater should conform to EPA 1986 (Source: GSR 7 dated Dec. 22, 1998), and the tolerance limit for sewage effluent discharged into surface water sources is as per BIS standard IS: 4764:1973. According to WHO guidelines, treated wastewater should contain no more than one helminth egg per litre and no more than 1000 faecal coliforms per 100 ml if it is to be used for irrigation. A study of various treatment technologies has been carried out, along with their advantages and disadvantages. The comparison of widely used treatment technologies will help designers, engineers, architects and economists in the selection of treatment technologies in terms of their efficiency, energy, operation, performance, land requirement, cost, etc. Since effluent discharge or re-use after suitable treatment protects the environment and public health, governments will have to adopt an integrated wastewater management approach, monitor and enforce the existing standards and, if required, generate new guidelines, policies or standards.

KEYWORDS: Hospital Wastewater, BOD5, SBR, SAFF, Treatment Technologies, Wastewater.

I. INTRODUCTION

Wastewater composition refers to the actual amounts of physical, chemical and biological constituents present in wastewater. Depending upon the concentration of these constituents, domestic wastewater is classified as strong, medium or weak. Various studies and research works reveal that monitoring of pH, BOD, COD, TSS and total coliforms indicates that the quality of hospital wastewater is similar to the medium-strength values of domestic wastewater. Hospital wastewater may contain various potentially hazardous materials, including microbiological pathogens, radioactive isotopes, disinfectants, drugs, chemical compounds and pharmaceuticals. Indeed, hospital wastewater may have an adverse impact on the environment and human health. Therefore, the selection of a suitable treatment technology and proper treatment of hospital wastewater is essential. There is a need to develop a comparison of widely used treatment technologies for hospital wastewater with respect to their design, land requirement, efficiency, operation & maintenance (O&M), fixed and variable costs, advantages & disadvantages, etc.

On-site treatment of hospital wastewater will produce a sludge that contains high concentrations of helminths and other pathogens. According to the relevant WHO guidelines, treated wastewater should contain no more than one helminth egg per litre and no more than 1000 faecal coliforms per 100 ml if it is to be used for irrigation. Under the Environment (Protection) Act 1986, the effluent limits are applicable to those hospitals which are either connected to a sewer without a terminal sewage treatment plant or not connected to a public sewer. The discharge standards for hospital wastewater should conform to EPA 1986 (Source: GSR 7 dated Dec. 22, 1998). The tolerance limit for sewage effluent discharged in India is as per BIS standard IS: 4764:1973. Most countries have their own standards for sewage disposal and for the reuse of reclaimed wastewater after suitable treatment. Environmental engineers, technologists and economists should be encouraged to develop and analyse different treatment technologies in terms of efficiency, design aspects, operational aspects, financial aspects and the overall risks associated with them.

Various treatment technologies have been developed so far for the treatment of hospital wastewater. Some of these include the Activated Sludge Process (ASP), Extended Aeration (E.A.), Sequential Batch Reactor (SBR), Fluidized Bed Reactor (FBR), Submerged Aeration Fixed Film (SAFF) Reactor and Membrane Bio-Reactor (MBR). ASP is a very old technology, and the development of more user-friendly technologies of a similar kind has made the activated sludge process obsolete for the treatment of sewage. E.A. is essentially the same kind of treatment technology as ASP, except that a longer hydraulic retention time is provided to give extended aeration for the complete digestion of organic matter. SBR is a treatment technology similar to the E.A. system but uses one tank only, in which biodegradation, settling of solids and removal of sludge are all carried out; it is also known as a draw-and-fill activated sludge treatment system. FBR is the latest advance in combined attached and suspended growth aerobic biological treatment technology; the influent is passed through a bed of small ring pac media at a velocity sufficient to cause fluidization in the reactor. SAFF is also a recent advance in the attached growth process and has been implemented in recent years by adding fixed film media to activated sludge reactors to improve the performance of sewage treatment plants. MBR is also used to treat wastewater and works on the principle of filtration of activated sludge using flat-sheet or hollow-fibre submerged membrane modules in bioreactors. A brief description of each technology, its advantages and disadvantages, and a comparative analysis are given below.
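Before turning to the individual technologies, here is a minimal Python sketch of how an effluent sample can be screened against the reuse and discharge limits quoted above (the limit values are those cited in this paper; the sample figures are assumed for illustration only):

# Limits as quoted above: WHO irrigation reuse and typical discharge limits.
LIMITS = {"helminth_eggs_per_l": 1, "faecal_coliforms_per_100ml": 1000,
          "bod5_mg_l": 30, "tss_mg_l": 100}

def violations(sample):
    """Return the parameters of a treated-effluent sample that exceed limits."""
    return [(k, v, LIMITS[k]) for k, v in sample.items() if v > LIMITS[k]]

# Assumed example sample of treated effluent.
sample = {"helminth_eggs_per_l": 0, "faecal_coliforms_per_100ml": 800,
          "bod5_mg_l": 24, "tss_mg_l": 40}
for name, value, limit in violations(sample):
    print(f"FAIL: {name} = {value} (limit {limit})")
if not violations(sample):
    print("Sample meets the quoted reuse/discharge limits.")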

1.1. Activated Sludge Process


This is a conventional process to treat hospital wastewater. In this process the wastewater is treated in an open tank called an aeration tank, and air is supplied either through fixed/floating surface aerators or through air blowers to provide oxygen for the aerobic microbes. A hydraulic retention time of around 12-15 hours is provided for the treatment of the wastewater. The microorganisms utilize the oxygen in the air, convert the organic matter into stabilized, low-energy compounds such as NO3, SO4 and CO3, and synthesize new bacterial cells. The overflow carried to the adjacent clarification system contains active microbes and is required to be recycled back to the aeration tank to maintain critical performance parameters such as MLSS and the F/M ratio. A number of different modifications or variants of the activated sludge process have been developed since the original experiments of Ardern and Lockett in 1914. These variants have, to a large extent, been developed out of necessity or to suit particular circumstances that have arisen. Generally, two types of mixing regime are of major interest in the activated sludge process: plug flow and complete mixing. In the first, the regime is characterized by orderly flow of mixed liquor through the aeration tank, with no element of mixed liquor overtaking or intermixing with another along the path of flow. In complete mixing, the contents of the aeration tank are well stirred and uniform throughout; thus, at steady state, the effluent from the aeration tank has the same composition as the aeration tank contents.
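The sludge recycle requirement mentioned above follows from a steady-state solids balance around the clarifier. A minimal sketch using the standard textbook relation R = X / (Xr - X) follows (this relation is not stated in the paper itself, and the concentrations are assumed example values):

def recycle_ratio(mlss_mg_l, return_sludge_mg_l):
    """Sludge recycle ratio R = Qr/Q needed to hold the aeration-tank MLSS,
    from a solids balance X (Q + Qr) = Xr Qr, i.e. R = X / (Xr - X)."""
    return mlss_mg_l / (return_sludge_mg_l - mlss_mg_l)

# Example: hold MLSS at 3000 mg/l with return sludge at 10000 mg/l.
r = recycle_ratio(3000, 10000)
print(f"Required recycle ratio Qr/Q = {r:.2f}")  # ~0.43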

The biological component of the activated sludge system is comprised of microorganisms, whose composition is 70 to 90 percent organic matter and 10 to 30 percent inorganic matter. Bacteria, fungi, protozoa, metazoa and rotifers constitute the biological mass of activated sludge. However, the constant agitation in the aeration tanks and the sludge recirculation are deterrents to the growth of higher organisms. The species of microorganism that dominates a system depends on environmental conditions, process design, the mode of plant operation and the characteristics of the secondary influent wastewater. The microorganisms of greatest numerical importance in activated sludge are bacteria. Some bacteria are strict aerobes (they can only live in the presence of oxygen), whereas others are anaerobes (active only in the absence of oxygen). The preponderance of bacteria living in activated sludge are facultative, i.e. able to live in either the presence or absence of oxygen, an important factor in the survival of activated sludge when dissolved oxygen concentrations are low or approaching depletion. While both heterotrophic and autotrophic bacteria reside in activated sludge, the former predominate. Heterotrophic bacteria obtain energy from the carbonaceous organic matter in the influent wastewater for the synthesis of new cells; at the same time, they release energy via the conversion of organic matter into compounds such as carbon dioxide and water. Important genera of heterotrophic bacteria include Alcaligenes, Arthrobacter, Citromonas, Flavobacterium, Pseudomonas and Zoogloea. ASP is a very old technology and has been used extensively for the treatment of wastewater because of the earlier non-availability of any other technology. The development of more user-friendly technologies of a similar kind has made the activated sludge process obsolete for the treatment of sewage.

1.1.1. Advantages
• BOD5 removal efficiency > 90%.
• User friendly.
• Requires less skilled labour for operation and maintenance (O&M).
• Oxidation and nitrification achieved without chemicals.
• Removal of suspended solids of up to 97%.
• The most widely used wastewater treatment process, originally because of the non-availability of any other technology.
• Moderate land area.
• Ability to handle peak loads and dilute toxic substances.

1.1.2. Disadvantages
• More sludge volume, without the sludge settling well.
• Inefficient in colour removal from wastewater; may even increase the colour through the formation of highly coloured intermediates during oxidation.
• Poor effluent quality with odour problems.
• More sensitive to shock loading and temperature.
• Inefficient in nutrient removal, so tertiary treatment is required for further polishing.
• Larger volume and high aeration costs.
• Not much operational flexibility.
• Biomass instabilities such as sludge bulking.
• High effluent TSS and chlorine demand.
• High energy use.
• Considered an obsolete technology because of the advancement of other, more user-friendly treatment processes.

1.2. Extended Aeration (E.A.) Process


It is essentially the same kind of treatment technology as the activated sludge process, except that a longer hydraulic retention time is provided to give extended aeration for the complete digestion of organic matter. Normally a retention time of 18-24 hours is provided in the aeration tank for complete aerobic biodegradation. The main objective behind increasing the hydraulic retention time is to reduce the odour problem caused by semi-digested sludge and to reduce the percentage of sludge recycled to the aeration tank. This technology is ideally suited to large installations where space is not a constraint. Extended aeration is a reaction-defined mode rather than a hydraulically defined mode, and can be nominally plug flow or complete mix. The process can be sensitive to sudden increases in flow, due to the resultant high MLSS loading on the final clarifier, but is relatively insensitive to shock loadings in concentration because of the buffering effect of the large biomass volume. In this process, at a low organic loading, long aeration time, high MLSS concentration and low F/M, the BOD removal efficiency is high; because of the long detention in the aeration tank, the mixed liquor solids undergo considerable endogenous respiration and become well stabilized. The excess sludge does not require separate digestion and can either be dried directly on sand beds or treated through centrifuges/filter presses.

1.2.1. Design Considerations
The main considerations in the design of the extended aeration process are the design parameters/operating characteristics, the aeration tank capacity and dimensions, the aeration facilities, etc.

1.2.1.1. Design Parameters / Operating Characteristics


The design parameters and operating characteristics include the BOD5 removal efficiency, F/M ratio, SRT, detention time, O2 requirements, MLSS, waste sludge, etc.

Table-1: Common Design Parameters and Operating Characteristics of Extended Aeration Technology

S.No. | Design Parameter / Operating Characteristic | Range
1. | F/M (lb BOD5/lb MLSS.d) | < 0.05
2. | SRT (days) | > 30
3. | lb BOD5/1000 cu ft.d | 10-15
4. | BOD5 Removal (%) | 90+
5. | Aerator Detention Time (h) | 16-24
6. | Nitrification Occurs | Yes
7. | O2 Requirements (lb/lb BOD5 Removed)** | 1.4-1.6*
8. | Re-circulated Underflow Solids Rate (% Q) | 100-300
9. | MLSS (mg/l) | 2000-6000
10. | O2 Uptake (mg/g.h MLSS) | 3-8
11. | Waste Sludge (lb/lb BOD5 Removed) | 0.15-0.3

Note: * Additional oxygen must be added if nitrification takes place. ** Density of O2 at 0°C and 760 mm Hg = 0.089 lb/cu ft (1.429 g/l). Conversions: MLVSS = MLSS × 0.8; lb/1000 sq ft × 4.883 = g/m2.

1.2.1.2. Aeration Tank Capacity and Dimensions
The volume of the aeration tank can be calculated from the following equation:

F/M = [Q (So − S)] / [X V]

Where,
Q = volumetric flow rate (m3/day)
X = MLSS concentration (mg/l)
V = volume of the tank (m3)
F/M = food-to-mass ratio
So, S = soluble food concentration in the influent and in the reactor, respectively (mg/l).

The dimensions of the aeration tank depend on the type of aeration equipment employed. The depth controls the aeration efficiency and is usually kept between 3 and 10 m. The width-to-depth ratio controls the mixing and is usually kept between 1.2 and 2.2. The length should not be less than 30 m nor ordinarily longer than 100 m. The horizontal velocity should be around 1.5 m/s. The tank freeboard is generally kept between 0.3 and 0.5 m. The inlet should be designed to maintain a minimum velocity of 0.2 m/s to avoid the deposition of solids.
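As an illustration, a minimal Python sketch of this sizing calculation follows (the flow, substrate and MLSS figures are assumed example values consistent with Table-1 and the typical influent characteristics given later in Table-3; they are not data from the paper):

def aeration_tank_volume(q, so, s, x, fm):
    """Solve F/M = Q (So - S) / (X V) for V.
    q: flow (m3/day); so, s: influent / reactor BOD5 (mg/l);
    x: MLSS (mg/l); fm: food-to-mass ratio (per day)."""
    return q * (so - s) / (x * fm)

# Assumed example: 500 m3/d of medium-strength hospital wastewater
# (BOD5 250 mg/l), effluent BOD5 10 mg/l, MLSS 5000 mg/l, F/M 0.05.
v = aeration_tank_volume(500, 250, 10, 5000, 0.05)
print(f"Aeration tank volume: {v:.0f} m3")                # 480 m3
print(f"Hydraulic retention time: {v / 500 * 24:.1f} h")  # 23.0 h, within 16-24 h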

1.2.1.3. Oxygen Requirement and Aeration Facilities
Oxygen is required in the extended aeration process for the oxidation of a part of the influent organic matter and also for the endogenous respiration of the micro-organisms in the system. The total oxygen requirement of the process may be formulated as follows:

O2 = [Q (So − Se) / f] − [1.42 Qw Xr] ------ (i)

Where,
f = ratio of BOD5 to ultimate BOD
Q = volumetric flow rate (m3/day)
So, Se = soluble food concentration in the influent and effluent, respectively (mg/l)
Qw = waste activated sludge rate (m3/day)
Xr = MLSS concentration in the return sludge (mg/l)
1.42 = oxygen demand of biomass (g/g).

The above equation (i) may be expressed as

O2 = [Q (So − Se) / f] − [1.42 (V X / θc)] ------ (ii)

Where,
X = MLSS concentration in the reactor (mg/l)
V = volume of the tank (m3)
θc = SRT, or mean cell residence time (days).

Note: the formula does not allow for nitrification; it allows only for carbonaceous BOD removal. (A worked sketch of this calculation is given at the end of this section.)

The aeration facilities are designed to provide the calculated oxygen demand of the wastewater against a specified level of dissolved oxygen in the wastewater. Air diffusing devices are classified as either fine bubble or coarse bubble, with fine bubble diffusers being more efficient in transferring oxygen. A diffused-air system consists of diffusers that are submerged in the wastewater, headers, pipes, air mains, and the blowers and appurtenances through which the air passes. It consists of a tank with perforated pipes, tubes or diffuser plates fixed at the bottom to release fine air bubbles from the compressor unit. Aeration has the following advantages:
• Aerators are rated based on the amount of oxygen they can transfer to the water under standard conditions of 20°C, 760 mm Hg barometric pressure and zero D.O.
• Aeration removes odour and taste due to volatile gases such as hydrogen sulphide and due to algae and related organisms.
• Aeration also oxidizes Fe and Mn, increases the dissolved oxygen content of the water, removes CO2 (thereby reducing corrosion), and removes methane and other flammable gases.

1.2.2. Advantages
• Low sludge yields.
• Operation is rendered simple due to the elimination of primary settling and separate sludge digestion.
• Easy to install.
• Requires less skilled labour for operation and maintenance (O&M).
• Efficiency and effluent quality are better than in the conventional ASP process.
• Odour free.

1.2.3. Disadvantages
• The oxygen requirement of the process is higher, and the running cost is therefore also high.
• Surface aerators in open tanks with long detention periods are not advisable for severe climates.
• Loss of pinpoint floc and a tendency to lose solids following low loadings.
• Unable to achieve de-nitrification or phosphorus removal.
• Limited flexibility in response to changing effluent requirements.
• The long aeration time combined with a long sedimentation period may also result in rising sludge in the sedimentation tank.
• Sensitive to low temperatures if heat loss is not controlled.
• Larger footprint/land area required.
• Large energy requirements.
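To make equation (ii) concrete, here is a minimal Python sketch (the flow, concentration and SRT figures are assumed example values continuing the sizing sketch above; f, the BOD5-to-ultimate-BOD ratio, is taken at a commonly assumed value of about 0.68, which is not a figure from this paper):

def oxygen_requirement(q, so, se, f, v, x, theta_c):
    """Carbonaceous oxygen demand from equation (ii), in kg/d:
    O2 = Q (So - Se) / f - 1.42 (V X / theta_c)
    q in m3/d; so, se, x in mg/l (= g/m3); v in m3; theta_c in days."""
    substrate = q * (so - se) / f              # g/d of ultimate BOD removed
    biomass_credit = 1.42 * v * x / theta_c    # g/d O2 equivalent of wasted biomass
    return (substrate - biomass_credit) / 1000.0

# Assumed example values: Q = 500 m3/d, So = 250 mg/l, Se = 10 mg/l,
# V = 480 m3, X = 5000 mg/l, SRT = 30 days.
o2 = oxygen_requirement(q=500, so=250, se=10, f=0.68, v=480, x=5000, theta_c=30)
print(f"Oxygen requirement: {o2:.0f} kg/d")  # ~63 kg/d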
1.3. Sequential Batch Reactor (SBR)
The first notable, but short-lived, resurgence of interest in batch biological treatment occurred in the early 1950s, when Porges (1955) and his co-workers first studied batch operation of ASP systems for treating wastewaters. The second resurgence occurred in the 1970s with the efforts of Irvine and his co-workers investigating the suitability of batch biological processes (Dennis et al., 1979; Irvine et al., 1977; Irvine and Richter, 1976). Around the same period, interest in batch-operated biological treatment systems also surfaced in Australia (Goronszy, 1979). The SBR is the same treatment technology as the Extended Aeration (E.A.) system but uses one tank only, in which both biodegradation and settling of solids take place, and from which the sludge is removed. It is also known as a draw-and-fill activated sludge treatment system. In multi-tank configurations the wastewater flows from one tank to another on a continuous basis, and virtually all tanks have a predetermined, periodic operating strategy. Therefore, the SBR is considered a time-oriented, batch process system. The essential difference between the SBR and the conventional continuous-flow activated sludge system is that the SBR carries out functions such as equalization, aeration and sedimentation in a time sequence rather than in a space sequence. It consists of a single reactor tank or multiple reactor tanks operating in parallel. Each operating cycle of an SBR comprises five distinct phases, referred to as the FILL, REACT, SETTLE, DRAW and IDLE phases. The overall control of the system is accomplished with level sensors and a timing device or microprocessor (a timing sketch of such a cycle is given at the end of this section). One advantage of this orientation is flexibility of operation. The total time in the SBR is used to establish the size of the system and can be related to the total volume of a conventional continuous-flow facility. As a result, the fraction of time devoted to a specific function in the SBR is equivalent to the corresponding tank volume in a space-oriented system. Therefore, the relative tank volumes dedicated to, say, aeration and sedimentation in the SBR can be redistributed easily by adjusting the mechanism which controls the time (and, therefore, the share of the total volume) planned for either function. In a conventional ASP, the relative tank volume is fixed and cannot be shared or redistributed as easily as in an SBR. Because of the flexibility associated with working in time rather than in space, the SBR can be operated either as a labour-intensive, low-energy, high-sludge-yield system or as a highly automated one, and these operating costs can be traded off against the initial capital costs. The operational flexibility also allows designers to use the SBR to meet many different treatment objectives, including one objective at the time of construction (e.g. BOD and suspended solids reduction) and another at a later time (e.g. nitrification/de-nitrification in addition to BOD and suspended solids removal).

1.3.1. Advantages
• Single tank for reaction and settling.
• True batch mode of operation; can be operated as a time-based control system allowing continuous inflow of wastewater during all phases of the cycle.
• Responds to flow and load variations.
• Quiescent settling and no sludge storage.
• Ability to achieve biological oxidation, nitrification, de-nitrification, phosphorus removal and solid/liquid separation.
• Large operational flexibility; automation possible.
• Minimal sludge bulking.
• Computer interface technologies and advanced monitoring instrumentation capability, and the ability to be operated remotely.
• Eliminates primary and secondary clarifiers and return sludge pumps.
• Small footprint required.
• Less labour required when operated automatically and computer controlled.
• Odour-free technology.

1.3.2. Disadvantages
• Higher energy consumption.
• Difficulty in adjusting the cycle time.
• Frequent sludge disposal.
• Special decanting and aeration equipment (cannot use diffusers in the reaction tank).
• Need to recycle early decant if there are solids in the weir trough.
• Setting the system sequences can be complex, especially if anoxic de-nitrification is required.
• Use of an anaerobic chamber, which is a potential odour source and an area where corrosion may occur, even in a concrete tank.
• Higher cost because of the automation involved.
• Skilled labour is required.
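To make the time-oriented control concept concrete, here is a minimal sketch of a timer-driven SBR cycle sequencer in Python (the phase durations are illustrative assumptions, not figures from the paper; a real plant would use level sensors and a PLC/microprocessor as described above):

from dataclasses import dataclass

@dataclass
class Phase:
    name: str
    hours: float

# One illustrative 8-hour SBR operating cycle (assumed split).
CYCLE = [Phase("FILL", 2.0), Phase("REACT", 3.0), Phase("SETTLE", 1.5),
         Phase("DRAW", 1.0), Phase("IDLE", 0.5)]

def run_cycle(cycle):
    """Step through one batch cycle, reporting the fraction of total cycle
    time devoted to each function (the SBR analogue of the tank-volume
    shares of a continuous-flow plant)."""
    total = sum(p.hours for p in cycle)
    t = 0.0
    for p in cycle:
        print(f"t={t:4.1f} h  start {p.name:6s} ({p.hours} h, "
              f"{100 * p.hours / total:.0f}% of cycle)")
        t += p.hours

run_cycle(CYCLE)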
1.4. Fluidized Bed Reactor (FBR)
The FBR process is the latest advance in attached growth aerobic biological treatment technology. The FBR employs ring pac media, neutrally buoyant biofilm carrier elements, to achieve outstanding BOD/COD removal productivity from a compact bioreactor. In fluidized bed reactors, the liquid to be treated is pumped through a bed of small media at a velocity sufficient to cause fluidization. In the fluidized state the media provide a large specific surface for attached biological growth and allow biomass concentrations in the range of 10-40 kg/m3 to develop (Cooper and Sutton, 1983). For aerobic treatment processes the reactor is aerated. This is done by recirculating the liquid from the reactor to an oxygenator, where air, or possibly oxygen, is bubbled through (Cooper, 1981). To overcome problems related to the high re-circulation rates needed when there is a high oxygen demand in the reactor, the reactor may be aerated directly. The basis for the use of fluidized bed systems is the immobilization of bacteria on solid surfaces; many species of bacteria (and also other microorganisms) have the ability to adhere to supporting matrices. In this process, a volume of ring pac media is immersed in the water and is fluidized (kept in constant motion) by the movement of gas and liquid in the treatment reactor. As the media support a biomass concentration several times that achievable in activated sludge systems, treatment is significantly more productive. Refer to Figure-1 for the ring pac bio-media.

Figure-1: Ring Pac Bio-Media for FBR reactors

The neutrally buoyant plastic media within each aeration tank provide a stable base for the growth of a diverse community of microorganisms. The PVC media have a very high surface-to-volume ratio, allowing a high concentration of biological growth to thrive within the protected areas of the media. The FBR process enables self-sustaining biological treatment; the need to periodically waste sludge and the requirement to supply a dilute return activated sludge to maintain an appropriate food-to-microorganism (F/M) ratio are eliminated. In addition, the excess biomass is automatically sloughed off in the process, maintaining a highly active biomass.

1.4.1. Advantages
• The FBR requires a much lower hydraulic retention time (HRT) than an extended aeration or activated sludge process to perform the same BOD reduction duty.
• The high resident biomass concentration, intense mass transfer conditions and aggressive biomass-sloughing action enable the process to respond rapidly to variations in process load.
• The mechanical simplicity, the flow-through nature of the process and the absence of sludge problems all result in an almost operator-free process.
• The FAB reactor is a hybrid reactor in which attached growth and suspended growth activity take place simultaneously.
• The BOD removal rate continues to increase with loading rate, even at loading rates in excess of the design loading.
• Less operation and maintenance cost during plant operations.
• Less footprint area required for installation.
• Efficient and reliable technology.

1.4.2. Disadvantages
• Less effective during large variations in the influent wastewater.
• Constant monitoring of MLSS is required.
• More chance of septic conditions arising due to power failure.
• Moderate power consumption.

1.5. Submerged Aeration Fixed Film (SAFF) Reactor
An innovation that has been implemented in recent years is the introduction of fixed film media into activated sludge reactors to improve performance and, in some cases, to minimize the expansion of existing facilities. In plants where nitrification and de-nitrification are practiced, nitrification is usually the rate-limiting step, and the media is placed in the aerobic zone to enhance nitrification at low temperatures. The SAFF reactor is based on the aerobic attached growth process and is used in the secondary treatment stage of wastewater treatment plants. In this process, raw sewage is introduced into the SAFF reactor, which contains polymer-based bio-media on which the attached growth process takes place. The aerobic environment in the SAFF is achieved by the use of fine bubble diffused aeration equipment, which also serves to maintain the mixed liquor in a completely mixed regime. The mixture of new and old cells overflows into a secondary sedimentation tank, where the cells are separated from the treated wastewater. A portion of the settled cells is recycled using horizontal, non-clog, flooded pumps to maintain the desired concentration of organisms in the SAFF reactor, and the remaining portion is wasted to an aerobic sludge digester-cum-thickener tank for further sludge treatment. Refer to Figure-2 for a general sketch of the SAFF reactor system.

Figure-2: General sketch for SAFF reactor

SAFF technology is designed for optimum performance and dependability. Reliable, cost-effective and energy-efficient blowers are used for aeration, with an integral flow management system; the wastewater enters the biological treatment stage, where it is aerated with fine bubble membrane diffusers. The continuous supply of oxygen, together with the incoming food source, encourages microorganisms to grow on the surface of the submerged media, converting the wastewater into CO2 and water in the process. The SAFF media provide a large surface area for the microorganisms to grow on. Excess micro-organisms (known as humus solids) that flow out of the biological treatment stage are separated from the final effluent in another settlement stage.

1.5.1. Advantages
• No constant monitoring of MLSS is required, making it user friendly.
• No chance of septic conditions occurring due to power failure, as it sustains microbial growth under irregular power supply conditions.
• Reduced overall volume due to multiple stages.
• Reduced civil construction.
• Less maintenance, as there are no moving parts.
• Low power consumption due to high oxygen transfer.
• Better oxygen transfer.
• Less sludge generation, hence a reduced sludge disposal problem.
• Low operating costs due to the absence of sludge recycling.
• The fixed-film process continually sloughs off the outer layer(s) of dead biofilm and continues to produce new microorganisms to meet the organic load.

1.5.2. Disadvantages
• Slightly higher footprint compared to FBR technology.
• Excess sludge in the SAFF reactor can clog the bio-media; therefore, continuous monitoring of MLSS is required.

1.6. Membrane Bio-Reactor (MBR)
MBR is the latest technology and has been used very widely to treat domestic wastewater. In this process the treatment is achieved by synthetic membranes, i.e. by a diffusion process through a membrane. The Membrane Bio-Reactor (MBR) system works on the principle of filtration of activated sludge, using flat-sheet type or hollow-fibre type submerged membrane modules in bioreactors. The membranes in an MBR system are made from polymeric organics (PVDF, PE or PES) and assembled into units (modules, cassettes, stacks) with a high packing density. Raw wastewater pre-treatment is important to sustain stable MBR performance, and fine screening is an essential operation. The use of Membrane Bio-Reactors (MBRs) in municipal wastewater treatment has grown widely in the past decades. The MBR technology combines conventional activated sludge treatment with low-pressure membrane filtration, thus eliminating the need for a clarifier or polishing filters. The membrane separation process provides a physical barrier to contain microorganisms and assures consistently high-quality reuse water. The ability to treat raw wastewater for reuse provides a new, reliable, drought-proof supply of water that can be of benefit to communities.

Figure-3: General sketch for MBR reactor

1.6.1. Advantages
• MBR is capable of meeting the most stringent effluent water quality standards.
• Membrane modules are back-flushable.
• Requires cleaning only once in 3 to 6 months.
• Yields 60-80% less sludge than a conventional system.
• Compact design - footprint 75% smaller than a conventional plant.
• Possibility of direct and indirect water reuse.
• Highly space efficient.
• High quality effluent in a greatly simplified process: no secondary clarifier, virtually no effluent suspended solids, no RAS recycling.
• Maintains a high MLSS.
• Easily automated and instrumented to measure performance; allows systems to be remotely operated and monitored, thus significantly reducing operator attendance.
• Sludge can be wasted directly from the aeration tank.

1.6.2. Disadvantages
• Limited tolerance for abrasive and stringy materials, such as grit, hair and fibrous material.
• Accumulation of solids and sludge between membrane fibres and plates can clog or damage the membrane tube openings.
• Membrane fouling.
• Higher energy consumption to overcome the trans-membrane resistance and to prevent fouling using aeration, etc.
• Very high aeration requirements; a dual aeration system is needed for mixing and to prevent fouling.
• Time-consuming membrane cleaning procedure.
• High capital costs for the membrane system.
• Extra power requirements for the vacuum on the micro-filter.
• Waste activated sludge is not thickened, so a larger volume goes to solids processing.
• Broken membranes result in low effluent quality.
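Since MBR capacity is governed by the installed membrane area rather than by clarifier area, a rough sizing sketch follows (the A = Q/J relation is a generic flux-area balance, and the design net flux figure is an assumed typical value, not data from this paper):

def membrane_area_m2(flow_m3_per_day, design_flux_lmh):
    """Required membrane area from A = Q / J.
    flow in m3/d; design (net) flux J in l/m2.h (LMH)."""
    flow_l_per_h = flow_m3_per_day * 1000.0 / 24.0
    return flow_l_per_h / design_flux_lmh

# Example: 500 m3/d at an assumed net flux of 18 LMH for submerged modules.
area = membrane_area_m2(500, 18)
print(f"Required membrane area: {area:.0f} m2")  # ~1157 m2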

II. COMPARISON OF TREATMENT TECHNOLOGIES

Wastewater treatment technologies can be classified under three categories based on performance parameters, land requirements and energy demand:
2.1. Category-I: Good performance, low energy requirement, low resource requirement and associated costs, high land requirement (BOD < 30, TSS < 30).
2.2. Category-II: Good performance, high energy requirement, high resource requirement and associated costs, moderately low land requirement (BOD < 30, TSS < 30).
2.3. Category-II (Improved Version): Very good performance, very high energy requirement, very high resource requirement and associated costs, low land requirement (BOD < 20, TSS < 20).
2.4. Category-II (Further Improved Version): Very good performance, very high energy requirement, very high resource requirement and associated costs, low land requirement (BOD < 10, TSS < 10).
2.5. Category-III: Moderate performance, moderate energy requirement, moderate resource requirement and associated costs, moderately low land requirement (BOD < 30, TSS < 30).
The comparison of widely used treatment technologies for hospital wastewater is summarized below:
Table-3: Comparison of Widely Used Treatment Technologies for Hospital Wastewater

S.No. | Item Description | ASP | E.A. | SBR | FBR/FAB | SAFF | MBR
1. | Type of process | Suspended growth | Suspended growth | Suspended growth | Suspended and attached growth | Attached growth | Suspended growth with solid-liquid separation
2. | Typical influent characteristics for hospital wastewater | pH: 6.5-8.5; BOD5: 150-350 mg/l; COD: 250-800 mg/l; TSS: 150-400 mg/l; E-Coli: 10^6-10^10 MPN/100 ml (common to all technologies)
3. | Discharge standards for hospital wastewater (Source: EPA 1998) | pH: 6.5-9.0; BOD5 < 30 mg/l; COD < 250 mg/l; TSS < 100 mg/l; E-Coli < 10^3 MPN/100 ml (common to all technologies)
4. | Discharge standards for hospital wastewater (MoEF in India) | pH: 6.5-8.5; BOD5 < 10 mg/l; COD < 100 mg/l; TSS < 10 mg/l; E-Coli < 10^3 MPN/100 ml (common to all technologies)
5. | Requirement of bio-media / diffusion membrane & their types | No | No | No | Yes, floating type | Yes, fixed type | Yes, membrane module
6. | Treatment for laundry and laboratory effluent | Yes | Yes | Yes | Yes | Yes | Yes
7. | Treatment for oil and grease from kitchen/cafeteria | Yes | Yes | Yes | Yes | Yes | Yes
8. | Pre-treatment and primary treatment for influent wastewater | Yes | Yes | Yes | Yes | Yes | Yes
9. | Secondary clarifier / tube settler tank | Yes | Yes | Clarifier / tube settler can be eliminated | Yes | Yes | Clarifier / tube settler can be eliminated
10. | Requirement of equalisation tank | Yes | Yes | Can be avoided | Yes | Yes | Yes
11. | Tertiary treatment system for further polishing treated wastewater | Yes | Yes | Yes | Yes | Yes | No
12. | Expected quality of treated wastewater after tertiary treatment | Fair | Good | Better | Much better | Much better | Excellent
13. | BOD5 removal efficiency | 90% | 95% | 95-97% | 95-98% | 95-98% | 99%
14. | Remote monitoring of plant performance | No | No | Yes | No | No | Yes
15. | Sludge digestion | Less | Less | High | High | High | High
16. | Required power | Medium | Very high | Medium | Low | Very low | Very high
17. | Required operator | A few staff with medium skill level | A few staff with high skill | A few staff with very high skill | A few staff with medium skill | A few staff with medium skill | A few staff with very high skill
18. | Ease of operation and maintenance problems | Easy | Easy | Difficult to control | Medium | Easy | Difficult to control
19. | Effects of climates | High | Medium | Small | Small | Small | Very small
20. | Required chemicals | Few or none | Few or none | Essential | Essential | Few or none | Essential
21. | Need for lab control | Every month | Every day | Every hour | Every day | Every day | Every hour
22. | Facing shock loads | No problem | Affected highly | Affected highly | Some problem | No problem | No problem
23. | Electromechanical cost (Lac./m3/d) | 0.10-0.11 | 0.12-0.13 | 0.16-0.18 | 0.13-0.15 | 0.13-0.14 | 0.25-0.30
24. | Power cost (kWh/ML treated) | 150-200 | 180-225 | 200-250 | 170-200 | 175-225 | 225-275
25. | O&M cost (Rs. million/year/mld) | 0.2-0.4 | 0.3-0.5 | 1.0-1.75 | 0.6-0.75 | 0.75-1.14 | 1.5-2.0
26. | Land requirement (m2/KLD) | 1.5-2.5 | 2-3.5 | 0.5-0.6 | 0.6-0.7 | 0.6-0.7 | 0.5
27. | Application for re-use of treated sewage water | Irrigation/horticulture | Irrigation/horticulture | Irrigation/horticulture, flushing water, cooling tower water make-up, etc. | Irrigation/horticulture, flushing water, cooling tower water make-up, etc. | Irrigation/horticulture, flushing water, cooling tower water make-up, etc. | Irrigation/horticulture, flushing water, cooling tower water make-up, etc.
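As one way to use Table-3 in practice, here is a small weighted-scoring sketch in Python over three of its quantitative rows, 24 (power cost), 25 (O&M cost) and 26 (land requirement). The midpoint values are taken from the table above, while the weights are illustrative assumptions that a designer would set from project priorities:

# Midpoints of Table-3 rows 24 (kWh/ML), 25 (Rs. million/yr/mld) and 26 (m2/KLD);
# lower is better for all three criteria.
data = {
    "ASP":  {"power": 175, "om": 0.30, "land": 2.00},
    "E.A.": {"power": 202, "om": 0.40, "land": 2.75},
    "SBR":  {"power": 225, "om": 1.38, "land": 0.55},
    "FBR":  {"power": 185, "om": 0.68, "land": 0.65},
    "SAFF": {"power": 200, "om": 0.95, "land": 0.65},
    "MBR":  {"power": 250, "om": 1.75, "land": 0.50},
}
weights = {"power": 0.4, "om": 0.4, "land": 0.2}  # assumed priorities

def score(tech):
    """Normalise each criterion by the best (lowest) value, then weight;
    a score of 1.0 would mean best on every criterion."""
    total = 0.0
    for crit, w in weights.items():
        best = min(d[crit] for d in data.values())
        total += w * best / data[tech][crit]
    return total

for tech in sorted(data, key=score, reverse=True):
    print(f"{tech:5s} score = {score(tech):.2f}")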

III. OBJECTIVES OF FUTURE RESEARCH

• To assess the sources of wastewater in hospitals, the influent characteristics, and the current practices adopted for the treatment of wastewater.
• To compare different treatment techniques and technologies and to identify the most suitable options for the treatment of hospital wastewater along with its recycling and reuse.
• To define an objective economic index derived from cost functions, including both investment and variable operating costs over the life of the treatment and of the recycling and reuse scheme.
• Finally, to develop a generalized framework for the recycling and reuse of wastewater in hospitals through minimization of the cost of treatment and maximum reclamation of treated sewage.

IV. CONCLUSIONS

There is a large number of techniques and technologies available for the treatment of wastewater. Hospital wastewater may contain various potentially hazardous materials that require special attention; hence, proper treatment and disposal are essential. It has been observed from field visits to various hospitals that the commonly used treatment technologies include ASP, E.A., SBR, FBR, SAFF and MBR. Since effluent discharge or re-use after suitable treatment protects the environment and public health, governments will have to adopt an integrated wastewater management approach, generate new guidelines, policies or standards (if required), and monitor and enforce the existing standards. Each of these techniques/technologies has its own advantages and disadvantages. An attempt has been made at the selection of a suitable treatment technology from among the technologies widely used for domestic wastewater, including hospital wastewater. The comparison of widely used treatment technologies will help designers, engineers, architects and economists in the selection of treatment technologies in terms of their efficiency, energy, operation, performance, land requirement, cost, etc.


ACKNOWLEDGEMENT
The authors would sincerely like to thank Jamia Millia Islamia University, New Delhi, India, for its support as and when required.

REFERENCES
[1]. American Membrane Technology Association (AMTA), (2007). FS-13.
[2]. Caro Estrada, R. et al., Comparison between MBR and Activated Sludge Technologies for Wastewater Reclamation and Reuse.
[3]. Chudoba, J., Ottova, V. and Madera, V. (1973). Control of activated sludge filamentous bulking: effect of hydraulic regime or degree of mixing in aeration tank. Water Research, 8:1163. (This paper details the effect of hydraulic mixing in SBR systems.)
[4]. Cooper, P.F. (1981). The use of biological fluidised beds for the treatment of domestic and industrial wastewaters. Chem. Eng. 371, 373-376.
[5]. Cooper, P.F. and Sutton, D.M. (1983). Treatment of wastewaters using biological fluidised beds. Chem. Eng. 392, 392.
[6]. Deegan, A.M. et al. (2011). Treatment options for wastewater effluent from pharmaceutical companies. Int. J. Environ. Sci. Tech., 8(3), 649-666.
[7]. Dennis, R.W. and Irvine, R.L. (1979). Effect of fill:react ratio on sequencing batch biological reactors. Journal Water Pollution Control Federation, 51(2), 255-263. (This paper describes the performance of SBR in various operating modes.)
[8]. Eckenfelder, W. Wesley, Industrial Water Pollution Control, Third Edition, Environmental Engineering Series, McGraw-Hill International Editions.
[9]. El Nadi, Dr. Mohamed El Hosseiny (2005). Wastewater Treatment Design Report, Report No. 8, pp. 12, 14.
[10]. Goronszy, M.C. (1979). Intermittent operation of the extended aeration process for small systems. Journal Water Pollution Control Federation, 51(2), 274-287. (This paper describes the use of SBR in Australia.)
[11]. Irvine, R.L. and Richter, R.O. (1976). Computer simulation and design of sequencing batch reactors. Proceedings of the 31st Industrial Waste Conference, Purdue University, West Lafayette, Indiana, USA, p. 182. (This paper describes the development and operation of SBR.)
[12]. Irvine, R.L., Fox, T.P. and Richter, R.O. (1977). Investigations of fill and batch periods of sequencing batch reactors. Water Research, 11, 713-717. (This paper describes the development and operation of SBR.)
[13]. Jaldhara Technologies Private Limited, Technical Seminar (2012). Introduction to Next Generation Technology for Sewage, Wastewater & Effluent Treatment.
[14]. Jenkins, D., Richard, M.G. and Daigger, G.T. (1993). Manual on the Causes and Control of Activated Sludge Bulking and Foaming, 2nd ed. Boca Raton: Lewis Publishers.
[15]. Jordening, Hans-Joachim and Buchholz, Klaus. Fixed Film Stationary Bed and Fluidized Bed Reactors (Chapter 24).
[16]. Leitao, R.C. et al. (2006). The effects of operational and environmental variations on anaerobic wastewater treatment systems: A review. Bioresource Technology 97, pp. 1105-1118.
[17]. Mahvi, A.H. (2008). Sequencing Batch Reactor: A promising technology in wastewater treatment. Iran. J. Environ. Health. Sci. Eng., Vol. 5, No. 2, pp. 79-90.
[18]. Melin, T. et al. (2005). Membrane bioreactor technology for wastewater treatment and reuse. Desalination 187, pp. 271-282.
[19]. Mesdaghinia, A.R., Naddafi, K., Nabizadeh, R., Saeedi, R. and Zamanzadeh, M. (2009). Wastewater Characteristics and Appropriate Method for Wastewater Management in the Hospitals. Vol. 38, No. 1, pp. 34-40.
[20]. Miron, F. Anton et al. (1997). Wastewater Treatment Plant Design, by a Joint Committee of the Water Pollution Control Federation and the American Society of Civil Engineers.
[21]. Nicolella, C., van Loosdrecht, M.C.M. and Heijnen, J.J. (2000). Wastewater treatment with particulate biofilm reactors. Journal of Biotechnology 80, pp. 1-33.
[22]. Pauwels, B. and Verstraete, W. (2006). The treatment of hospital wastewater: an appraisal. Journal of Water and Health, 04.4: 406-413.
[23]. Porges, N. (1955). Waste treatment by optimal aeration - theory and practice in dairy waste disposal.
[24]. Rezaee, A., Ansari, M., Khavanin, A., Sabzali, A. and Aryan, M.M. (2005). Hospital Wastewater Treatment Using an Integrated Anaerobic-Aerobic Fixed Film Bioreactor. American Journal of Environmental Sciences 1(4): 259-263.
[25]. Schroeder, E.D. (1982). Design of sequencing batch reactor activated sludge processes. In Civil Engineering for Practicing and Design Engineers 2: 33-34. (This chapter gives design guidance for SBR.)
[26]. Shah, Shwetal (2011). Initial Proposal - Technical Solution for Sewage Waste Treatment and Reuse.
[27]. Sperling, Marcos von and Chernicharo, Carlos Augusto de Lemos. I-078 - A comparison between wastewater treatment processes in terms of compliance with effluent quality standards.
[28]. Tchobanoglous, George, Burton, Franklin L. and Stensel, H.D. (2007). Wastewater Engineering: Treatment, Disposal, Reuse. Metcalf and Eddy, Inc., 4th Ed. McGraw-Hill, New York.
[29]. Topare, Niraj S., Attar, S.J. and Manfe, Mosleh M. (2011). Sewage/Wastewater Treatment Technologies: A Review. Sci. Revs. Chem. Commun. 1(1), pp. 18-24.
[30]. Vigneswaran, S., Sundaravadivel, M. and Chaudhary, D.S., Sequencing Batch Reactors: Principles, Design/Operations and Case Studies.
[31]. Water Environment Association (1987). Activated Sludge, Manual of Practice No. 9.
[32]. Web addresses: (i) www.amtaorg.org (ii) http://trade.indiamart.com/details.mp?offre=1777676912 (iii) http://www.who.int/water_sanitation_health/resourcesquality/wpcchap3.pdf
[33]. Waste-Water Treatment Technologies: A General Review. United Nations, New York, 2003.
[34]. Zhou, H. and Smith, D.W. (2002). Advanced technologies in water and wastewater treatment. J. Environ. Eng. Sci. 1: pp. 247-264.

AUTHORS
Jafrudeen is currently working as Director - Technical, M/s. Rivulet Engineering Consultancy Private Limited, Delhi, India, and as Technical Associate (PHE, FPS, WWTS) at M/s. ABL Hospitech Pvt. Ltd., Delhi, India. He is also pursuing a Ph.D. (Environmental Science and Engineering) at Jamia Millia Islamia, New Delhi. Prior to this, he worked in the field of design, execution and operation of water and wastewater treatment systems with the reputed firm M/s. Enhanced WAPP Systems Pvt. Ltd. He holds a Bachelor's degree in Chemical Engineering from C.R.S.C.E. (now D.C.R.U.S.T.), Murthal, Haryana, India, and a Master's degree in Environmental Science and Engineering (Gold Medallist) from Jamia Millia Islamia, New Delhi, India. He has been working in the field of water and wastewater since 2003.


IMPLEMENTATION OF THE HYBRID LEAN-AGILE MANUFACTURING SYSTEM STRATEGIC FACET IN AUTOMOTIVE SECTOR
Salah A.M. Elmoselhy
MBA Alumnus, Maastricht School of Management, Maastricht, The Netherlands

ABSTRACT
Recently, the hybrid lean-agile manufacturing system has been proposed in order to meet the current automotive market's order-winning criterion of a blend of cost and availability. This study shows how a hybrid lean-agile manufacturing system can be implemented strategically. It shows statistically that almost one third of the variation in successfully dealing with the sources of competitive advantage in the automotive sector can be explained by adopting the strategic facet of the hybrid lean-agile manufacturing system. The cost demanded by the implementation of the hybrid lean-agile manufacturing system can be moderated by the gained benefits of reduced operational cost and reduced time to market.

KEYWORDS: Lean Manufacturing; Agile Manufacturing; Manufacturing Strategy; Value Chain

I. INTRODUCTION

Getting the right product, at the right price, at the right time, in the right place to the consumer is not only the way to achieve competitive advantage, but is also the key to sustainable success in the manufacturing sector. According to Womack [1,2], significant interest has been shown in recent years in the idea of lean manufacturing and the broader concept of the lean enterprise. Yet the demand in the current automotive market is volatile and customers' requirements for variety are high, which together demand a much higher level of agility. Hence it is not surprising that becoming competitive in terms of cost, a lean attribute, can cause the value chain to become threatened in terms of availability, an agile attribute. A manufacturing approach newer than lean manufacturing that deals with the change in the manufacturing business environment is agile manufacturing. The concept of agility comprises two main factors: (a) responding to change in proper ways and in good time, and (b) exploiting changes and taking advantage of them as opportunities [3,4]. Yet this agile manufacturing approach has exhibited a cost challenge. This research stems from the changes in the manufacturing business environment in the automotive sector that have led to customers' requirements of both competitive cost and availability without compromising quality [5]. The research method adopted in the present research starts with a literature review of the manufacturing systems which traditionally exist in the automotive industry. From this literature review, research questions are derived. Answers to some of these research questions are proposed in the form of a research hypothesis; the rest of the research questions are answered from the literature review. In an endeavour to validate the research hypothesis, both interviews with executives of, and reviews of the annual reports of, ground vehicle manufacturing companies and Original Equipment Manufacturers (OEMs) have been conducted. Since case studies are useful in developing solutions to current manufacturing business problems, the General Motors Production System, as a leader in the automotive sector that has a global corporate strategy, is examined in light of the proposed HLAMS, in order to verify the relevance of the proposed manufacturing system to the real world of the automotive business. This research paper investigates the implementation of the strategic aspects of the proposed HLAMS. The paper starts with a literature review. Research questions are then derived from the literature review. Risk management in product design and manufacturing is investigated after that, and the present research identifies how the HLAMS addresses this aspect. This is followed by presenting the proposed implementation method of the strategic aspects of the HLAMS. Finally, the verification, validation and limitations of the proposed manufacturing system are presented.

II. LITERATURE REVIEW

In order to balance the automotive product portfolio, engineering resources have to be utilized across global vehicle platforms by shifting product development and manufacturing programs to low-cost manufacturing bases, such as China, India, Mexico, Brazil and South Africa [6]. The challenge lies in building a global engineering network to support vehicle product development and manufacturing in such multiple regions [7-10]. Therefore, the concept of the hybrid lean-agile manufacturing system (HLAMS) has been proposed recently [11]. The rules for competing and surviving in the automotive industry are changing rapidly. Time and knowledge are the essence of winning in the contemporary marketplace [12]. Thus, success in the global automotive market is increasingly linked to the enterprise's ability to rapidly turn information into knowledge. The winners will be extended enterprises with the capability to integrate, optimize and collaborate across their entire value chain faster, better and more profitably than anyone else. The winning value chains will be those that strike a balance between cost and availability of products and related services in terms of low costs, short product development and distribution cycles, and smart investments in value-chain business and technology practices. In other words, a blend of leanness and agility is expected to be a necessity in meeting such contemporary success criteria [11]. Value engineering can be implemented in the development of any product, such as a car, to optimize its value [13]. Some scientists have called for what was termed 'leagility' or 'agilean', but what they proposed is to adopt leanness in the upstream of the value chain before the decoupling point and agility in the downstream of the value chain after the decoupling point [14]. What the HLAMS proposes is a manufacturing system that hybridizes both leanness and agility together in one manufacturing framework to be implemented throughout the value chain [11]. The proposed HLAMS hybridizes the strategic attributes of both the lean and agile manufacturing systems in order to realize flexibility of production equipment, of chaining plants, and of execution of a production order, along with responsiveness to varying customer needs. In the automotive sector, planners have a difficult balancing act. On the one hand, there are benefits from using common vehicle parts; on the other hand, there are more niche demands in the global market. The challenge that faces the entire automotive industry is to balance these two extremes cost-effectively and without compromising quality. This challenge is evident in light of the recent and frequent safety recalls of millions of vehicles in the automotive sector, even from the lean manufacturing pioneer, Toyota [15]. The current research aims to meet this balancing act by proposing a method to implement the strategic facet of the HLAMS. The implementation of the strategic facet of the HLAMS aims at striking a balance between the main six competitive dimensions of manufacturing in the automotive sector - quality, delivery reliability, response time, low cost, customization and product life cycle - in addition to revenue [16]. Research questions can now be derived from this literature review.

III. RESEARCH QUESTIONS AND HYPOTHESIS

The research questions have been derived from the literature review. Answers to some of these research questions are proposed in the form of a research hypothesis.

3.1. Research Questions


Based on the research problem definition and the research objective, the research questions in this research paper are as follows:
1. Does the implementation of the hybrid lean-agile manufacturing system necessitate change in the enterprise organization? If so, how?
2. Can the implementation of the strategic facet of the hybrid lean-agile manufacturing system be validated?

3.2. Research Hypothesis


The research hypothesis that is derived from some of the research questions in this research paper is as follows:
Ho: The implementation of the strategic facet of the hybrid lean-agile manufacturing system is not correlated with the manufacturing enterprise's business success in the automotive sector.
Ha: The implementation of the strategic facet of the hybrid lean-agile manufacturing system is correlated with the manufacturing enterprise's business success in the automotive sector.
In an endeavor to answer these research questions, let us first take a closer look at risk management in product design and manufacturing.

IV. RISK MANAGEMENT IN PRODUCT DESIGN AND MANUFACTURING

Automotive is a huge sector and it is yet expected to become bigger. The World Trade Organization 2006 annual report predicted that the world trade in automotive products from 2005 to 2015 will increase annually by 7 percent, corrected to inflation [17]. Managing risk in the design and manufacturing processes concerns manufacturing business managers, particularly in the automotive sector. A successful manufacturing enterprise must meet the aggregated value chain metrics that should be met throughout the value chain which are lead time, quality, costs, and associated service with the product [14]. Quality and the associated services with the product have become prerequisites to compete in automotive sector. Cost, a lean metric, and lead-time, an agile metric, are the metrics that manufacturing enterprises compete on with each other in automotive sector. While endeavoring to meet these metrics the manufacturing enterprise may face some uncertainties. Risk exists only when uncertainties exist. There can be some risks associated with realizing the manufacturing competitive dimensions through implementing the strategic facet of the proposed HLAMS. The lean dimension in the hybrid lean-agile risk management addresses and cures risk through the elimination of avoidable risk through eliminating the sources of uncertainty and eliminating the impact of their uncertainty. The agile dimension in the hybrid lean-agile risk management addresses and cures risk through the reduction of unavoidable risk through reducing the impact of the unavoidable sources of uncertainty. Risk can be dealt with through dealing with the level of uncertainty behind that risk and through dealing with the impact of that particular uncertainty. Souder and Moenart [18] found that there are four sources of uncertainty which are consumers, competitors, technology, and resources. Maull and Tranfield [19] found that the competitive pressures that the manufacturing companies, especially Small and Medium Enterprises (SMEs), are often faced with are (1) rapidly decreasing lead time, (2) increasing choices offered by competitors, (3) pricing, (4) new entries to markets, especially from the New Industrialized Countries (NICs). Tatikonda and Montoya-Weiss [20] proved that technological uncertainty moderates the relationship between organizational process factors and operational outcomes, and market and environmental uncertainty moderates the relationship between operational outcomes and market success. The most significant risk because of the technological uncertainty is the risk of failed products [21]. This is an avoidable risk that can be avoided by taking the following measures: (1) killing-off products as soon as they fall short of the set target and seem unsuccessful based on marketing and sales early signals; (2) factoring the costs per unit of stock-outs or market-downs into the production planning process. By taking these measures the high technical uncertainty behind this risk will be mitigated through little financial commitment, and consequently little influence of market uncertainty. There are a couple of most significant risks because of market and environmental uncertainty. The first of these couple of risks is the uncertainty in demand predictions and this uncertainty is related directly to the prediction period so that forecast accuracy degrades to 20% for 2 months future prediction, to 50% for 3 months future prediction, and to 100% for 4 months future prediction [14]. 
This too is an avoidable risk, and it can be avoided by taking the following measures: (1) not making demand predictions for more than three months ahead; (2) factoring multiple demand scenarios into production planning; (3) postponing decisions about the items with the most unpredictable demand until market signals such as early sales data become available; (4) for seasonal products, making the items of predictable demand in advance, so as to reserve greater manufacturing capacity for the items of unpredictable demand, and shifting the production of the items whose demand is relatively unpredictable closer to the selling season [22]. The second of these risks due to market and environmental uncertainty is unsatisfied customers. This is likewise an avoidable risk that can be avoided by taking the following measures: (1) conducting customer satisfaction surveys; (2) conducting Failure Modes and Effects Analysis (FMEA) [23]; (3) acting on the findings to eliminate the causes of dissatisfaction.

The unavoidable risks arise from the four unavoidable uncertainties: consumers, competitors, technology, and resources. Consumer uncertainty is a sort of market uncertainty that results in the risk of the ever-increasing demand for short lead times. To reduce both the level of this uncertainty and its impact, the following measures should be taken: (1) manufacturing should be carried out in the countries in which the cost of manufacturing per unit sold is lowest and which are geographically closest to the marketplaces where the product is sold; (2) machine capacity and the type of vehicles to be manufactured, e.g. car, bus, or truck, should be determined from a global aggregate demand forecast based on the expected population growth in the countries where the product is sold. Competitor uncertainty is another sort of market uncertainty; it results in the risks of demand for variety of choices, low prices, and new entrants. The following measures should be taken to reduce both the level of this uncertainty and its impact: (1) implement the agile dimension of the HLAMS to deal with the risks of demand for variety of choices and new entrants; (2) implement the lean dimension of the HLAMS to deal with the risk of demand for low prices. The third unavoidable uncertainty is technology uncertainty, a sort of technological uncertainty; it can result in the risk of obsolescence and lack of efficiency, and it can be dealt with by adopting scalable and upgradeable technology. The fourth unavoidable uncertainty is resources uncertainty, also a sort of technological uncertainty; it can result in the risk of incomplete tasks and consequently long lead times. The following measures should be taken to reduce both the level of this uncertainty and its impact: (1) sharing resources throughout the entire value chain so that loading gets leveled; (2) having resources that are flexible both in their operating method and in their construction/architecture so that bottlenecks get resolved.

The present research proposes a risk management action plan to minimize risk in the product design and manufacturing processes. The proposed plan consists of three phases: (1) before the beginning of the product design process, (2) during the product design process, and (3) before the beginning of and during the manufacturing process.
1. Before the beginning of the product design process phase:
   1.1. Conducting a Strengths, Weaknesses, Opportunities, and Threats (SWOT) analysis for the manufacturing enterprise;
   1.2. Establishing strategic partnerships with key suppliers, technology providers, and retailers.
2. During the product design process phase:
   2.1. Proving the value of the design concept to customers at the end of each design phase through market research and close contact with key customers.
3. Before the beginning of and during the manufacturing process phase:
   3.1. Conducting FMEA (a brief prioritization sketch follows below).
Having investigated this, let us now explore the proposed implementation method of the strategic aspects of the proposed HLAMS.
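As referenced in item 3.1 above, FMEA prioritizes failure modes by a risk priority number, RPN = severity × occurrence × detection, with each factor rated 1-10. The following minimal Python sketch illustrates that standard calculation; the failure modes and all ratings are hypothetical and not taken from this study:

# Standard FMEA prioritization: RPN = severity * occurrence * detection.
# All failure modes and ratings below are hypothetical examples.
failure_modes = [
    # (name, severity, occurrence, detection), each rated 1-10
    ("weld porosity",       8, 4, 6),
    ("paint run",           3, 5, 2),
    ("misaligned fastener", 6, 3, 7),
]

# Sort by RPN, highest first, so corrective effort targets the worst risks.
for name, sev, occ, det in sorted(failure_modes,
                                  key=lambda m: m[1] * m[2] * m[3],
                                  reverse=True):
    print(f"{name:20s} RPN = {sev * occ * det}")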

V. IMPLEMENTATION OF THE PROPOSED HYBRID LEAN-AGILE MANUFACTURING SYSTEM

The implementation of the strategic facet of the HLAMS consists of a short-term phase and a long-term phase. In the short term, the current state of the manufacturing system is assessed with respect to the HLAMS, a change plan towards the HLAMS is set, and the Five-S method is applied throughout the entire value chain. In the long term, the change plan towards the HLAMS is carried out and the HLAMS should be fully implemented. A proposed implementation plan of the HLAMS is illustrated in Table 1; it covers both the case of a firm that has already established its manufacturing business and that of a firm that is going to establish one.

For the already established manufacturing firm, the change program is four-fold. At the system engineering level, requirements are reviewed with the marketing team and key customers in order to eliminate those requirements that are unnecessary and costly. In addition, a product design review checklist is developed and reviewed, since most costs are committed when a product is designed, and design engineers often specify what they are familiar with rather than what is most efficient [24]. Since a manufacturing strategy refers to an approach that starts with the corporate, business, and marketing strategies and then establishes a design of the manufacturing system to support them [25-26], the following change program is proposed for the manufacturing strategy [27]: (1) gain top management commitment in both time and resources; (2) evaluate the strategic position of the company; (3) review in a discussion group the existing primary and secondary requirements of the manufacturing strategy in light of the corporate strategy; (4) brainstorm issues surrounding these requirements; (5) categorize these issues in terms of people, machine, process, and plan; (6) carry out a cause-and-effect analysis; (7) prioritize the identified causes; (8) set initiatives to address the prioritized issues; (9) form teams of inspired people and implement these initiatives; (10) measure the new processes and compare the results against the expected results to spot and make up for any differences; (11) analyze the differences to determine their causes; (12) re-evaluate the strategic position of the company.

For manufacturing activities improvement, the following change program is proposed [28]: (1) for each manufacturing activity, ask: 1.1. What is value added? 1.2. What activities can be joined? 1.3. What activities can be discarded? 1.4. What activities can be done in parallel? (2) remove the non-value-added manufacturing activities; (3) join the activities that can be joined; (4) run in parallel the activities that can be done in parallel, if the available resources permit.
Table 1. Proposed implementation plan of the hybrid lean-agile manufacturing system

A firm that has already established its manufacturing business:
1. Assess the enterprise's lean capabilities against the lean capabilities mentioned in the leanness assessment table, Appendix A [29, 30]; the lean capabilities are assessed in terms of eleven capabilities: inventory, team approach, processes, automation, maintenance, layout & handling, suppliers, set-ups, quality, retailers, and scheduling & control;
2. Assess the enterprise's agile capabilities against the agile capabilities mentioned in the agility assessment section, Appendix B [31]; value chain agility is assessed in terms of goals, design, and managerial measurements with respect to organization, process, technology, and people jobs;
3. Address the emerging points of drawback and bridge the gap, if any, by setting a change management plan towards the hybrid lean-agile manufacturing attributes mentioned in section 8.3, based on incremental change resting on the following pillars:
   a. Revising and changing incrementally the business values and business objectives of the firm to address the points of drawback;
   b. Prioritizing the process improvement initiatives based on their effectiveness according to the (80/20) Pareto rule (see the sketch after this table);
   c. Using posters and signs as a way of engaging employees and maintaining standards [32];
   d. Empowering the enthusiastic workforce for this change;
   e. Motivating the neutral workforce for this change.
4. Assess the enterprise's performance;
5. Amend the enterprise's objectives and strategies based on the feedback of actual results.

A firm that is going to establish its manufacturing business:
1. Set business values and business objectives to realize the hybrid lean-agile manufacturing attributes;
2. Establish and build the lean capabilities and agile capabilities mentioned in the tables of the lean assessment and agile assessment;
3. Establish and build the hybrid lean-agile manufacturing attributes;
4. Prioritize the process implementation initiatives based on their effectiveness according to the (80/20) Pareto rule;
5. Use posters and signs as a way of engaging employees and maintaining standards [32];
6. Empower the enthusiastic workforce for this implementation;
7. Motivate the neutral workforce for this implementation;
8. Assess the enterprise's performance;
9. Amend the enterprise's objectives and strategies based on the feedback of actual results.
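The Pareto-rule prioritization step that appears in both columns of Table 1 can be sketched minimally as follows; the initiatives and their estimated effectiveness scores are hypothetical illustrations, not data from this study:

import math

# Hypothetical improvement initiatives with estimated effectiveness scores.
initiatives = {
    "reduce setup times":       34.0,
    "supplier quality program": 21.0,
    "cell layout redesign":     13.0,
    "5S campaign":               8.0,
    "kanban pilot":              5.0,
}

# Rank by effectiveness and keep roughly the top 20% (the 80/20 rule).
ranked = sorted(initiatives, key=initiatives.get, reverse=True)
top_n = max(1, math.ceil(0.2 * len(ranked)))
print("Implement first:", ranked[:top_n])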

Manufacturing team leaders should be the focus of training efforts, since they are the change agents who lead improvements in performance: the implementation of the proposed HLAMS should be a push implementation by the manufacturing team leaders rather than a pull implementation by the teams themselves. The implementation of the HLAMS necessitates building culture, structure, and systems.

Building culture requires the following: (1) culture development requires leadership with a continuous passion for perfection, to create attitudes in all employees such that their behavior positively influences product and service quality; (2) culture development also requires the empowerment of all employees in the pursuit of quality; (3) team work implies that there is an organized, engaged, and self-disciplined team; (4) it is instilled in all staff that poor quality is a major waste and must be improved to "near perfect" by continuous improvement, with employees who are enabled to solve problems using tools such as the Five Whys.

Building organizational structure comprises the following: (1) low- and high-level ownership of quality; (2) technical and management support to resolve problems; (3) removal of indirect workers, adopting narrow job classifications, and adopting cross-training; (4) short feedback loops based on a flat organization structure; (5) mechanisms for continuous improvement, with routine daily stand-up team meetings to flush out problems; (6) managers who act as facilitators and provide mentoring.

There are two types of quality systems: problem-preventive and problem-corrective. While the problem-preventive quality system prevents problems from happening in the first place, the problem-corrective quality system deals with problems only when they arise. The HLAMS adopts a hybridization of these two quality systems in terms of: (1) instilling flexibility into the design and manufacturing processes for embracing change; (2) adopting robust product design, using Quality Function Deployment (QFD) to satisfy customers and stakeholders and using Design for Manufacture to provide the manufacturing and transportation processes with what they need; (3) adopting robust process design using Five-S and Poka Yoke; (4) adopting systematic procedures using ISO and QS standards; (5) detecting problems as early as possible using Statistical Process Control, Management By Walking Around, customer satisfaction surveys, staff surveys, quality standards audits, Kaizen continuous improvement events, product strip-down, and inspection and testing; (6) analyzing the root causes of those problems and removing them using Pareto analysis, Ishikawa/fishbone diagrams, Five Whys, value stream mapping, and FMEA. The pragmatic reader might now well ask: how valid is the strategic facet of the proposed HLAMS? The next section will answer this question.

VI. VERIFICATION AND VALIDATION OF THE IMPLEMENTATION OF THE STRATEGIC FACET OF THE PROPOSED HYBRID LEAN-AGILE MANUFACTURING SYSTEM

The implementation of the strategic facet of the proposed HLAMS is verified in this section by testing the research hypothesis. In an endeavor to verify and validate this implementation, product and service managers of three ground vehicle manufacturing companies and OEMs were interviewed, and their annual reports were reviewed with regard to the strategic facet of the proposed HLAMS. In addition, the annual reports of a further twenty-seven ground vehicle manufacturing companies and OEMs were reviewed in this regard. Historically, Taiichi Ohno and Shigeo Shingo developed the Toyota Production System, from which the lean manufacturing principles were derived over a period of 20-30 years [33]. Thus, in an endeavor to double-check the relevance of the implementation of the strategic facet of the proposed HLAMS to the real world of the automotive business, the General Motors Production System, that of an automotive sector leader, is reviewed in this section as a typical case study on the proposed HLAMS.

6.1. Verification of the Implementation of the Strategic Facet of the Proposed Hybrid Lean-Agile Manufacturing System
The sources of competitive advantage in the automotive sector are: (1) market position; (2) competitive resources, in terms of brand equity, systems, skills, market share, and patents; (3) learning organization [34-35, 16]. Assuming equal weight for each of these sources of competitive advantage, the proposed manufacturing system can improve on 39% of them, since it improves on systems (6% of the sources of competitive advantage) as well as on learning organization (33% of the sources of competitive advantage). Therefore, almost one third of the variation in successfully dealing with the sources of competitive advantage in the automotive industry can be explained by adopting the strategic facet of the HLAMS. The alternate hypothesis is therefore true, and the implementation of the strategic facet of the HLAMS is positively correlated with a firm's manufacturing business success in the automotive sector.

Causality in this study is determined according to the percentage of variation in manufacturing business success in the automotive sector due to a variable (r²) of a correlation coefficient (r). Since correlation does not necessarily mean causality, the following measures were taken: (1) the percentage of variation in manufacturing business success due to the variable (r²) was calculated; (2) reliability (Cronbach's alpha) analysis of the data collected was performed, with a result that satisfies the minimum acceptable value of Cronbach's alpha, which is 0.7; (3) both the sampling design and the sample size are important to establish the representativeness of the sample for limited generalizability; in the sample design, therefore, a probability sampling design of simple random sampling was used for its cost-effective and fair statistical results, with more than 50% of the ground vehicle manufacturing companies included in the statistical sample; in addition, the included automotive OEMs and manufacturing companies collectively hold more than 50% of the global market share in the automotive market sector; (4) to further establish the representativeness of the sample for limited generalizability, a sample size of 30 was adopted, since it is the minimum statistically representative sample size [36, 37]. This leads us to elaborate on the case studies investigated in this research.
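As a side note on the statistics invoked above, the two calculations, the share of variation explained (r²) and the Cronbach's alpha reliability check, can be sketched as follows. The correlation value and the survey scores are hypothetical, chosen only so that r² lands near the "almost one third" figure discussed in the text:

import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for an (n_respondents x k_items) score matrix."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)      # per-item sample variances
    total_var = items.sum(axis=1).var(ddof=1)  # variance of the summed scale
    return (k / (k - 1)) * (1.0 - item_vars.sum() / total_var)

r = 0.62  # hypothetical correlation coefficient
print(f"variation explained: r^2 = {r**2:.2f}")  # about 0.39

# Hypothetical interview scores: 5 respondents x 4 questionnaire items.
scores = np.array([[2, 1, 2, 2],
                   [1, 1, 0, 1],
                   [2, 2, 2, 1],
                   [0, 1, 1, 0],
                   [2, 2, 1, 2]])
print(f"Cronbach's alpha = {cronbach_alpha(scores):.2f} (0.7 is the usual floor)")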

6.2. Validation of the Implementation of the Strategic Facet of the Proposed Hybrid Lean-Agile Manufacturing System and Case Studies
The validity of the research results has been tested in terms of four key validity types: firstly, statistical conclusion validity, since the resulting relationships are meaningful and reasonable; secondly, internal validity, since the results are causal rather than merely descriptive; thirdly, construct validity, since the results represent what is theoretically intended; and fourthly, external validity, since the results can be generalized, within limits, to the population of automotive manufacturers because the statistical sample was representative.

An important and study-worthy practical example of agility combined with leanness in the global arena is China. China's biggest threat to world manufacturing is not only low cost but also being quick to market. Almost every plant in China boasts that it can design, develop, and manufacture products faster than it could overseas. Sometimes this occurs because the intense competition for the growing internal Chinese consumer market forces companies to be more nimble and innovative than their competitors. For instance, due to the booming Chinese auto industry, almost one third of all growth in annual global auto sales has been taking place in China [38]. This is why all of the major automakers in the world have established a manufacturing presence in China, mostly partnered with one of the state-owned Chinese automotive enterprises that have had strong-enough management to survive the transition to a market economy.

A hybrid lean-agile production system should be designed to flow, and automation should be selected after deciding how best to improve and boost flow [39-40]. In order to compete in the Chinese market, which has almost doubled recently, many global automakers, such as the General Motors Corporation, have established their own China-based design studios and tooling facilities in order to speed up their development of auto products customized to Chinese consumers' tastes. Some of these ideas and products can also be used worldwide. This can foster a recent perception that in China nothing is done differently, but that it happens cost-effectively and five times faster. Such increased speed to market could be because cheap technical labor in China enables enterprises to put more minds to work on a design problem than they could economically justify in dozens of other countries. For instance, in China an automaker is able to employ an army of highly skilled sculptors who can quickly design and hand-make prototypes for consumer testing, and a legion of highly skilled machinists who can turn the best designs into injection-mold dies within a few months. As a result, such an automaker can go from concept to production in roughly nine months, a period short enough to give it a comparative advantage in the automotive sector. Consequently, strategically integrating comparative advantages can be a sustainable competitive advantage for the enterprise in the marketplace. Apparently, being lean only can harm the availability attributes of products and related services to customers, while being agile only can prevent performance on the reasonable-cost attribute of products and related services. The Chinese manufacturing approach in this context lends extra credibility to the validity of the proposed HLAMS in striking such a balance effectively.

The General Motors Production System, that of an automotive sector leader, is reviewed as a typical case study of the proposed HLAMS, in order to double-check the relevance of the proposed system to the real world of the automotive business. Reviewing the General Motors Production System in light of the leanness assessment table and the agility assessment method (Appendix A and Appendix B, respectively) reveals that it had two shortcomings. The first, related to leanness, was its deficiency in managing the change towards lean manufacturing. The other, related to agility, was the lack of strong relationships with suppliers in the General Motors Corporation supply chain. The Corporation has recently recognized these shortcomings and acted to resolve them. In an endeavor to resolve the deficiency in managing change towards lean manufacturing, General Motors Corporation established its first-ever lean and flexible plant in 2003, situated in Michigan, USA. The Corporation has also recently revisited and re-established its entire value chain, with great emphasis placed on strong relationships with its suppliers and dealers. For instance, General Motors Corporation acquired 10% of the share equity of Mansour's Automotive Company, the exclusive distributor of General Motors vehicles in Egypt, in 2001, as a means of advancing vertical integration in the General Motors value chain. These findings uphold the proposed hybridization of leanness and agility as a way towards sustainable competitiveness in the automotive manufacturing business.

VII. DISCUSSION AND CONCLUSION

The challenge facing automakers is to strike a balance between the current order-winning criteria of cost and availability of products and related services, without compromising quality. This research has aimed to help automakers overcome this challenge by proposing a method to implement a manufacturing system that hybridizes the strategic attributes of the lean and agile manufacturing systems in one manufacturing framework that meets the three levels of flexibility and responsiveness in the automotive sector. The study has identified the sources of uncertainty in product design and manufacturing, which are the root causes of risk in product design and manufacturing, and has presented a method to deal with them. In addition, the study has proposed a risk management action plan that consists of three phases: (1) before the beginning of the product design process, (2) during the product design process, and (3) before the beginning of and during the manufacturing process.

The implementation of the strategic facet of the HLAMS is divided into short-term and long-term strategies. In the short term, the current state of the manufacturing system is assessed with respect to the HLAMS, a change plan towards the HLAMS is set, and the Five-S method is applied throughout the entire value chain. In the long term, the change plan towards the HLAMS is carried out and the HLAMS should be fully implemented. To facilitate the implementation of the strategic facet of the proposed HLAMS, the study has proposed an implementation plan of the proposed manufacturing system both for enterprises that have already established their manufacturing business and for enterprises that are going to establish one. The study has suggested assessing the lean capabilities of a manufacturing enterprise in terms of eleven capabilities: inventory, team approach, processes, automation, maintenance, layout & handling, suppliers, set-ups, quality, retailers, and scheduling & control. The agile aspect of the strategic facet of the proposed manufacturing system consists of delivery reliability and agility assessment. The enterprise's agile capabilities are suggested to be assessed against four capabilities: organization, process, technology, and people. Each of these four capabilities is assessed on three aspects: goals, design, and managerial measures.

There have been some limitations to this research: (1) due to the fierce competition in the automotive market, some information is considered confidential and hence was unavailable; (2) the interviews were conducted using open-ended questions; these were structured, face-to-face interviews, since the interviewees preferred to be interviewed with open-ended questions; (3) the scope of the study covers only the automotive sector. The study has shown that implementing the hybridization of the lean and agile manufacturing systems can be strategically and industrially valid. It has presented that the implementation of the strategic facet of the HLAMS is correlated with the manufacturing enterprise's business success in the automotive sector, and it has been found that almost one third of the variation in manufacturing business success can be explained by adopting the HLAMS. The cost demanded by the implementation of the HLAMS can be moderated by the following benefits: (1) reduced operational cost; (2) reduced time to market.

VIII. FUTURE RESEARCH

The HLAMS presented in this study invites further research. The future research proposed in the present study includes: (1) conducting industrial experiments to further validate the implementation of the strategic facet of the HLAMS; (2) reviewing further relevant industrial case studies for additional validation.

ACKNOWLEDGEMENT
Professor Arthur Sybrandy from Maastricht School of Management, The Netherlands, is acknowledged for his insightful contribution to this research work. The people of Maastricht School of Management, The Netherlands, and the people of The Regional IT Institute, Egypt, are acknowledged for their support for accomplishing this research.

REFERENCES
[1] Womack, J., Jones, D., Roos, D., (1990) The machine that changed the world, Macmillan, New York.
[2] Womack, J.P., Jones, D.T., (1996) Lean thinking, Simon and Schuster, NY.
[3] Davis, E., (April 1995) What is on American minds? Management Review, pp. 14-20.
[4] Davis, T., (July 1993) Effective supply chain management, Sloan Management Review.
[5] Harber, J.E., (2005) Building an auto company on common, Manufacturing Engineering, Society of Manufacturing Engineers, vol. 135, no. 3.
[6] Czinkota, M.R., Ronkainen, I.A., Moffett, M.H., (2004) Fundamentals of international business, South-Western: a division of Thomson Learning, Inc.
[7] Brooke, L., (May 2008) Creating a global footprint, Automotive Engineering, pp. 48-49.
[8] Daniels, J.D., Radebaugh, L.H., Sullivan, D.P., (2004) International business environments and operations, Pearson Prentice Hall.
[9] Byrd, J.B., (2008) Manufacturing's next great leap, Manufacturing Engineering, Society of Manufacturing Engineers, vol. 141, no. 2.
[10] Hansen, R.C., (2005) Overall equipment effectiveness (OEE), Industrial Press.
[11] Elmoselhy, S.A.M., (September 2007) Hybrid lean-agile manufacturing business model in the automotive sector, MBA thesis, Maastricht School of Management, The Netherlands.
[12] Deming, W.E., (2000) The new economics for industry, government, education, The MIT Press.
[13] Annappa, C.M., Panditrao, K.S., (2012) Application of value engineering for cost reduction: a case study of universal testing machine, International Journal of Advances in Engineering & Technology, vol. 4, no. 1, pp. 618-629.
[14] Naylor, J.B., Naim, M.M., Berry, D., (1999) Leagility: integrating the lean and agile manufacturing paradigms in the total supply chain, International Journal of Production Economics, vol. 62, pp. 107-118.
[15] Toyota Motor Corporation Head Office, (2006) Toyota corporation annual report 2005, Toyota Motor Corporation Head Office, Aichi, Japan.
[16] Kotha, S., (1995) Mass customization: implementing the emerging paradigm for competitive advantage, Strategic Management Journal, vol. 16, pp. 21-42.
[17] World Trade Organization, (2007) International trade statistics annual report 2006, World Trade Organization.
[18] Souder, W.E., Moenart, K.R., (1992) Integrating marketing and R&D projects: an information uncertainty model, Journal of Management Studies, vol. 29, no. 4, pp. 485-512.
[19] Maull, R., Tranfield, D., (1992) Methodological approaches to the regeneration of competitiveness in manufacturing, 3rd International Conference on Factory 2000, IEE, UK, pp. 12-17.
[20] Tatikonda, M.V., Montoya-Weiss, M.M., (2001) Integrating operations and marketing perspectives of product innovation: the influence of organizational process factors and capabilities on development performance, Management Science, vol. 47, no. 1, pp. 151-172.
[21] Baxter, M.R., (1995) Product design: practical methods for the systematic development of new products, Chapman and Hall.
[22] Fisher, M.L., Hammond, J.H., (May-June 1994) Making supply meet demand, Harvard Business Review.
[23] Aravinth, P., Muthu Kumar, T., Dakshinamoorthy, A., Arun Kumar, N., (2012) A criticality study by design failure mode and effect analysis (FMEA) procedure in LINCOLN V350 PRO welding machine, International Journal of Advances in Engineering & Technology, vol. 4, no. 1, pp. 611-617.
[24] Taguchi, G., Chowdhury, S., Taguchi, S., (2000) Robust engineering, McGraw-Hill Professional.
[25] Wheelen, T.L., Hunger, J.D., (2002) Strategic management and business policy, Pearson Education International.
[26] Stevenson, W.J., (2002) Operations management, McGraw-Hill.
[27] Brassard, M., (1989) The memory jogger plus+ featuring the seven management and planning tools, Goal/QPC, Methuen, MA.
[28] Hayes, R.H., Pisano, G.P., (January 1994) Beyond world-class: the new manufacturing strategy, Harvard Business Review.
[29] Wrennall, W., Lee, Q., (1994) Handbook of commercial and industrial facilities management, McGraw-Hill.
[30] Epely, T., Lee, Q., (2007) The Strategos guide to value stream and process mapping, Enna Inc.
[31] Bolstorff, P., Rosenbaum, R., (2003) Supply chain excellence: a handbook for dramatic improvement using the SCOR model, Amacom.
[32] Imai, M., (1997) Gemba kaizen: a commonsense low-cost approach to management, McGraw-Hill Professional, New York.
[33] Dennis, P., Shook, J., (2002) Lean production simplified: a plain-language guide to the world's most powerful production system, Journal of Manufacturing Systems, vol. 21, no. 4.
[34] Sharifi, H., Zhang, Z., (1999) A methodology for achieving agility in manufacturing organizations: an introduction, International Journal of Production Economics, vol. 62, pp. 7-22.
[35] Skinner, W., (1978) Manufacturing in the corporate strategy, John Wiley & Sons, New York.
[36] Alder, H.L., Roessler, E.B., (1962) Introduction to probability and statistics, W.H. Freeman and Company.
[37] Wackerly, D.D., Mendenhall, W., Scheaffer, R.L., (1996) Mathematical statistics with applications, Duxbury Press.
[38] Cooney, S., (March 2006) China's impact on the U.S. automotive industry, Federal Reserve Bank of Chicago.
[39] Harris, R., Harris, C., (2008) Can automation be a lean tool? Manufacturing Engineering, Society of Manufacturing Engineers, vol. 141, no. 2.
[40] Morey, B., (2008) Automating lean tools, Manufacturing Engineering, Society of Manufacturing Engineers, vol. 141, no. 1.

APPENDIX A
Leanness Assessment Tool

1.0 Inventory
1.1 For the categories of Finished Goods, Work-In-Process (WIP), and Purchased/Raw Materials, what portion of middle and upper managers can state from memory the current turnover and the purpose of each type?
Responses: 0%-20% | 21%-40% | 41%-60% | 61%-80% | 81%-100%
1.2 What is the overall inventory turnover, including Finished Goods, WIP, and Purchased/Raw Material?
Responses: 0-3 | 4-7 | 8-12 | 13-24 | 25+
1.3 What is the ratio of inventory turnover to the industry average?
Responses: <=1.0 | 1.1-2.0 | 2.1-4.0 | 4.1-8.0 | 8.1+

2.0 Team Approach
2.1 What is the organization type?
Responses: Exploitive | Bureaucratic | Consultative | Participative | Highly Participative
2.2 How are workers on the factory floor compensated?
Responses: Individual Incentive | Hourly Wage | Group Incentive | Salary | Salary + Annual Bonus
2.3 To what extent do people have job security?
Responses: Layoffs Every Year | Transfers & Retraining Reduce Layoffs | Layoffs Are Rare
2.4 What is the annual personnel turnover?
Responses: 31%+ | 14%-30% | 7%-11% | 3%-6% | 0%-2%
2.5 What percentage of personnel has received at least eight hours of team-building training?
Responses: <5% | 6%-10% | 11%-30% | 31%-90% | 91%-100%
2.6 What percentage of personnel are active members of formal work teams, quality teams, or problem-solving teams?
Responses: <5% | 6%-10% | 11%-30% | 31%-90% | 91%-100%

3.0 Processes
3.1 How many large-scale machines or single-process areas are in the plant through which 50% or more of different products must pass?
Responses: 9+ | 7-8 | 5-6 | 3-4 | 0-2
3.2 How would you rate the overall scale of the plant's processes?
Responses: Large Scale | Medium/Mixed | Small Scale
3.3 How easy is it to shift output when the product mix changes?
Responses: Very Difficult | Moderately Difficult | Easy
3.4 How easy is it to alter the total production rate by +/-15%?
Responses: Very Difficult | Moderately Difficult | Easy
3.5 What is management's target operating capacity for individual departments or machines?
Responses: 96%-100% | 91%-95% | 86%-90% | 76%-85% | 50%-75%
3.6 How would you rate the overall technology level of the plant's processes?
Responses: Complex Technologies | Moderate/Mixed | Simple Technologies

4.0 Automation
4.1 What must be automated to meet customer demand? (e.g. Load, Cycle, Unload, Transfer)
Responses: Nothing | Cycle | Cycle and Unload | Load, Cycle, and Unload | Load, Cycle, Unload, and Transfer
4.2 How many functions does the machine have to perform?
Responses: 1-2 | 3-4 | 5-6 | 7-8 | 9-10 | 10+
4.3 Does the automation have to be in one machine or can it be spread over multiple machines?
Responses: The automation has to be in one machine | The automation can be spread over multiple machines

5.0 Maintenance
5.1 Describe equipment records and data, including records of uptime, repair history, and spare parts, as well as repair and parts manuals.
Responses: Non-Existent | Substantially Complete | Complete & Accurate
5.2 Excluding new installations and construction projects, what percentage of maintenance hours is unplanned, unexpected, or emergency?
Responses: 71%-90% | 51%-70% | 26%-50% | 11%-25% | 0%-10%
5.3 Does maintenance have and follow a defined preventive schedule?
Responses: No Preventive Maintenance | 1%-10% Coverage | 11%-30% Coverage | 31%-90% Coverage | 91%+ Coverage
5.4 Do equipment breakdowns limit or interrupt production?
Responses: Often | Occasionally | Frequently | Unknown
5.5 What is the overall average availability of plant equipment?
Responses: 0%-75% | 76%-90% | 91%-95% | 96%-100%

6.0 Layout & Handling
6.1 What portion of total space is used for storage and material handling?
Responses: 71%-100% | 46%-70% | 30%-45% | 16%-30% | 0%-15%
6.2 What portion of the plant space is organized by function or process layout?
Responses: 71%-100% | 46%-70% | 30%-45% | 16%-30% | 0%-15%
6.3 How would you characterize material movement?
Responses: Pallet-size (or larger) loads, long distances (>100'), complex flow patterns, confusion, & lost material | Moderate loads, bus-route transport, & intermediate distances | Small loads, short distances (<25'), simple & direct flow pattern
6.4 How would you rate overall housekeeping and appearance of the plant?
Responses: Messy, Filthy, Confused | Some Dirt, Occasional Mess | Spotless, Neat, & Tidy
6.5 How well could a stranger walking through your plant identify the processes and their sequence?
Responses: Impossible to see any logic or flow sequence | Most processes are apparent with some study; most sequences are visible | Processes and their sequences are immediately visible

7.0 Suppliers
7.1 What is the average number of suppliers for each raw material or purchased item?
Responses: 5.1+ | 4.1-5.0 | 3.1-4.0 | 2.1-3.0 | 1.0-2.0
7.2 On average, how often are items put up for resourcing?
Responses: 1-5 | 6-10 | 11-15 | 16-20 | 21+
7.3 What portion of raw material & purchased parts comes from qualified suppliers?
Responses: 0%-20% | 21%-40% | 41%-60% | 61%-80% | 81%-100%
7.4 What portion of raw material and purchased items is delivered directly to the point of use without incoming inspection or storage?
Responses: 0%-20% | 21%-40% | 41%-60% | 61%-80% | 81%-100%
7.5 What portion of raw materials and purchased parts is delivered more than once per week?
Responses: 0%-20% | 21%-40% | 41%-60% | 61%-80% | 81%-100%

8.0 Setups
8.1 What is the average overall setup time (in minutes) for major equipment?
Responses: 61+ | 29-60 | 16-30 | 10-15 | 0-9
8.2 What portion of machine operators have had formal training in Rapid Setup techniques?
Responses: 0% | 1%-15% | 16%-30% | 31%-45% | 46%-100%
8.3 To what extent are workers measured and judged on setup performance?
Responses: Not at All | Informal Tracking & Review | Setup Performance Tracked

9.0 Quality
9.1 What portion of total employees has had basic Statistical Process Control (SPC) training?
Responses: 0% | 1%-10% | 11%-30% | 31%-70% | 71%-100%
9.2 What portion of operations is controlled by SPC?
Responses: 0% | 1%-10% | 11%-30% | 31%-70% | 71%-100%
9.3 What portion of the SPC that is done is accomplished by operators rather than Quality or Engineering specialists?
Responses: 0% | 1%-10% | 11%-30% | 31%-70% | 71%-100%
9.4 What is the overall defect rate?
Responses: 0% | 1%-10% | 11%-30% | 31%-70% | 71%-100%

10.0 Retailers
10.1 What is the average number of retailers for each product category?
Responses: 1-5 | 6-10 | 11-15 | 16-20 | 21+
10.2 What is the number of product categories?
Responses: 1-5 | 6-10 | 11-15 | 16-20 | 21+
10.3 What is the percentage of qualified retailers?
Responses: 0%-20% | 21%-40% | 41%-60% | 61%-80% | 81%-100%
10.4 How strong is the relationship with retailers?
Responses: Fragile | Moderate | Above-Moderate | Strong | Very Strong

11.0 Scheduling & Control
11.1 What portion of work-in-process flows directly from one operation to the next without intermediate storage?
Responses: 0% | 1%-10% | 11%-30% | 31%-70% | 71%-100%
11.2 What portion of work-in-process is under Pull/Kanban control?
Responses: 0% | 1%-10% | 11%-30% | 31%-70% | 71%-100%
11.3 What is the on-time delivery performance?
Responses: 0%-50% | 51%-70% | 71%-80% | 81%-95% | 95%-100%
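The assessment above lends itself to being captured as data so that responses are recorded consistently. The following minimal Python sketch assumes, purely for illustration, that each question's options run from least to most lean, so the chosen option's ordinal position can serve as a score; the appendix itself defines no such scoring scheme, and the responses shown are hypothetical:

from statistics import mean

# Two questions from section 1.0 (Inventory); options are listed in the
# order given in the appendix, assumed here to run from least to most lean.
OPTIONS = {
    "1.1": ["0%-20%", "21%-40%", "41%-60%", "61%-80%", "81%-100%"],
    "1.2": ["0-3", "4-7", "8-12", "13-24", "25+"],
}

def question_score(qid: str, answer: str) -> float:
    """Ordinal position of the chosen option, normalized to (0, 1]."""
    opts = OPTIONS[qid]
    return (opts.index(answer) + 1) / len(opts)

answers = {"1.1": "41%-60%", "1.2": "13-24"}  # hypothetical responses
print(f"Inventory score: {mean(question_score(q, a) for q, a in answers.items()):.2f}")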

APPENDIX B
Value Chain Agility Assessment Tool

For all the questions in the following four assessment sections (organization, process, technology, and people), the following rubric should be used: 2 = Yes; 1 = Partially; 0 = Unsure or No. The higher the score your enterprise gets, the better it is on the value chain agility scale (a small scoring sketch follows below).
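A minimal sketch of applying this rubric, in Python, could read as follows; the question identifiers and answers are hypothetical:

# Rubric from the appendix: 2 = Yes, 1 = Partially, 0 = Unsure or No.
RUBRIC = {"yes": 2, "partially": 1, "unsure": 0, "no": 0}

def agility_score(answers: dict) -> int:
    """Sum the rubric values over all answered assessment questions."""
    return sum(RUBRIC[answer.lower()] for answer in answers.values())

# Hypothetical answers to the Organization Goals questions 1.1.1-1.1.3.
answers = {"1.1.1": "Yes", "1.1.2": "Partially", "1.1.3": "No"}
print(agility_score(answers), "out of a possible", 2 * len(answers))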

1. Organization
1.1. Organization Goals:


The organization goals are the facet of value chain strategy that prioritizes the organizational performance requirements of delivery reliability, responsiveness, and flexibility alongside the internal needs of cost reduction, profitability, and asset utilization.
1.1.1. Have you defined your value chains in terms of products and customers?
1.1.2. Are your senior managers measured and remunerated on a set of value chain measures?
1.1.3. Do you know where your value chain performance rates against the competition?

1.1.4. Have you prioritized your competitive requirements in light of the comparison with the competing value chains?
1.1.5. Are your performance goals aligned with your suppliers', retailers', and customers' contracts?
1.1.6. Are your performance goals aligned with your suppliers' and retailers' goals?

1.2. Organization Design:


The organization design is the facet of value chain strategy that has to do with mapping the most efficient and effective value stream. It attempts to balance centralization versus decentralization, globalization versus regionalization, and process versus functional focus.
1.2.1. Does your organization structure address the centralization, globalization, and functional aspects?
1.2.2. Are all relevant functions in place?
1.2.3. Are all the functions necessary?
1.2.4. Is the current flow of inputs and outputs between functions the optimum flow?
1.2.5. Does your organization structure support your suppliers' and retailers' organization structures?

1.3. Organization Managerial Measures:


The organization managerial measures are the facet of value chain strategy that defines your overall value chain metric scheme, including definition, data collection, data segmentation, reporting, and defect analysis.
1.3.1. Are you regularly measuring and managing metrics for delivery reliability, responsiveness, and flexibility?
1.3.2. Are you regularly measuring and managing metrics for value chain cost reduction and asset utilization?
1.3.3. Are you regularly measuring and managing shareholder metrics for profitability and return?
1.3.4. Do you have the data analytics capability to support analyzing value chain performance data?
1.3.5. Are your scorecard and metric definitions aligned with your suppliers' and customers' metrics and contractual requirements?
1.3.6. How responsive is the enterprise to changes in its business environment?
1.3.7. How able is the enterprise to make use of unpredicted opportunities in the marketplace?

2. Process
2.1. Process Goals:


The process goals are the facet of value chain strategy that cascades organization goals to your value chain network and processes. The value chain network refers to the physical movement of goods from your suppliers' suppliers to your company and ultimately to your customers' customer. The value chain process refers to the plan, outsource, make, and deliver processes. Factors considered in setting network goals include service level, order fulfillment cycle time, flexibility, Cost of Goods Sold (COGS), and inventory turnover. Factors considered in setting process goals include transactional productivity for sales orders, purchase orders, work orders, and forecasts.
2.1.1. Do your organizational goals cascade to network goals for service level, order fulfillment cycle time, flexibility, COGS, and inventory turnover?
2.1.2. Do your organizational goals cascade to transactional productivity goals for sales orders, purchase orders, work orders, and forecasts?
2.1.3. Have you segmented your network and transactional cost-to-serve for each of your suppliers?
2.1.4. Are your middle managers measured and remunerated on network and transactional productivity measures?
2.1.5. Are your network and transactional productivity goals aligned with your suppliers' and retailers' goals and contractual obligations?

2.2. Process Design:

The process design is the facet of value chain strategy that has to do with defining your material flow, work flow, and information flow using the assemble-to-order strategy. Process design factors include the geographic location of each supplier, industry best-practice assessment, and transactional analysis.
2.2.1. Do you have integrated plan, outsource, make, and deliver processes?
2.2.2. Have you designed or reviewed your material flow network in the past three years?
2.2.3. Does each of your business units adopt the assemble-to-order strategy?
2.2.4. Have your supply chain processes incorporated the industry best practices?
2.2.5. Are your processes aligned with customer requirements and supplier capability?

2.3. Process Managerial Measures:


The process managerial measures are the facet of value chain strategy that defines your site, functional area, and process metric scheme, including definition, data collection, data segmentation, reporting, and defect analysis. It cascades from the organization measures.
2.3.1. Are you regularly measuring and managing site and function metrics for delivery reliability, planned lead time, and flexibility?
2.3.2. Are you regularly measuring and managing site or function metrics for supply chain cost, i.e. order management cost, raw material and goods delivery cost, inventory carrying cost, information technology cost, and planning cost?
2.3.3. Are you regularly measuring and managing transactional productivity, i.e. process efficiency and transactional yield, for purchase orders, work orders, and sales orders?
2.3.4. Do you have site or functional area data analytics capability to support analyzing value chain performance data?
2.3.5. Do your organization scorecard and metric definitions cascade to your sites and functional areas?
2.3.6. Are your site and functional area metrics aligned with your suppliers' and retailers' goals and contractual requirements?

3. Technology
3.1. Technology Goals:


The technology goals are the facet of value chain strategy that defines the value chain system requirements to enable planning and execution of your value chain processes. The factors involved in defining technology requirements include process flows and definitions, transactional productivity targets, data warehouse and archiving needs, master data requirements, and system architecture constraints.
3.1.1. Do you have appropriate technology, i.e. functionality, that supports how you plan, outsource, make, and deliver?
3.1.2. Did you define your To-Be processes by striking a balance between system functionality and industry best practice?
3.1.3. Do you have goals set for master data integrity?
3.1.4. Are your technology managers measured and promoted on transactional productivity measures?
3.1.5. Do you have a collaboration technology plan with suppliers and retailers?

3.2. Technology Design:


The technology design is the facet of value chain strategy that has to do with defining your technological architecture and requirements. It also has to do with setting specific configurations for your business based on the process flows defined above.
3.2.1. Did you configure your system based on a To-Be process blueprint?
3.2.2. Are you using all of the functionality that you bought?
3.2.3. Have you realized all the technological benefits that were aimed to be realized?
3.2.4. Do you have appropriate data warehouse and analytical tools to support value chain analysis?
3.2.5. Did you implement your system with fewer than 10 software code customizations?


3.3. Technology Managerial Measures:


The technology managerial measures are the facet of value chain strategy that defines your technology performance metric scheme, including definition, data collection, data segmentation, reporting, and defect analysis. It cascades from the process measures.
3.3.1. Have appropriate technology sub-goals been set?
3.3.2. Is your technology performance assessed?
3.3.3. Are sufficient resources allocated to support effective use of technology?
3.3.4. Are the interfaces between technologies being managed?
3.3.5. Are your technology performance metrics aligned with your suppliers' and retailers' performance metrics and contractual requirements, i.e. outward-facing Enterprise Resource Planning (ERP) based on Private Trading Exchange (PTX) performance?

4. People
4.1. People Job Goals:


The job goals are the facet of value chain strategy that defines the type of job requirements and goals necessary to execute value chain processes and to manage value chain technology.
4.1.1. Have appropriate job sub-goals been set, linked to the plan, outsource, make, and deliver processes?
4.1.2. Are job goals cascaded from the organization and process levels?

4.2. Job Design and People:


The job design is the facet of value chain strategy that defines the type of job requirements and goals necessary to execute value chain processes and to manage technology.
4.2.1. Are sufficient resources allocated to support effective use of technology?
4.2.2. Are the interfaces between technologies being managed?
4.2.3. Are the plan, outsource, make, and deliver process requirements reflected in the relevant jobs?
4.2.4. Are job steps in a logical sequence?
4.2.5. Have supportive policies and procedures been developed?
4.2.6. Is the job environment enabling?

4.3. People Job Managerial Measures:


The job managerial measures are the facet of value chain strategy that defines metrics to measure whether people performance and job requirements and goals meet the goals of executing the value chain processes and of managing technology.
4.3.1. Do the performers understand the job goals and standards they are expected to meet?
4.3.2. Do the performers have sufficient resources, clear signals and priorities, and a logical job design?
4.3.3. Are the performers rewarded for achieving job goals?
4.3.4. Do the performers know whether they are meeting job goals?
4.3.5. Do the performers have the necessary knowledge, skill, and physical capability to achieve the job goals?

AUTHOR'S BIOGRAPHY
Salah A.M. Elmoselhy holds an MS in mechanical design and production engineering from Cairo University, as well as an MBA in international manufacturing business from Maastricht School of Management (MSM). He has ten years of industrial experience in CAD/CAM and robotised manufacturing systems. He has recently been a researcher at the Engineering Department and Fitzwilliam College of Cambridge University, from which he received a Diploma of postgraduate studies in engineering design. He is currently a PhD candidate in mechanical engineering working with the International Islamic University Malaysia (IIUM) and the Center for Sustainable Mobility at Virginia Polytechnic Institute and State University (Virginia Tech).


MECHANICAL EVALUATION OF JOINING METHODOLOGIES IN MULTI MATERIAL CAR BODY


Irfan Dost1, Shoukat Alim Khan2, Majid Aziz3
1 Sarhad University, Peshawar, Pakistan
2&3 Politecnico di Torino, Turin, Italy

ABSTRACT
The economical use of energy and other limited resources and the protection of the environment will be among the main cornerstones of tomorrow's mobility. Intensive efforts in the automotive industry focus on further reduction of CO2 emissions and higher energy efficiency in all phases of vehicle life cycles. Consequently, the development of lightweight design plays a major role in further reducing fuel consumption. Innovative and sustainable lightweight structural design can be developed only in an integrated approach, through global consideration of intelligent design concepts and material technologies together with applicable manufacturing methods. Innovative approaches have to be assessed against the conflicting targets of weight reduction needs and economic justification in order to identify the most suitable solutions within the respective requirements. Innovative hybrid materials and intelligent multi-material design show high application potential for future car body lightweight strategies.

KEYWORDS: Multi-Material, Car Body, Joining, Methodologies, Mechanical, Evaluation

I. INTRODUCTION

As time progresses, there is a dire need to reduce CO2 emissions, as the standards are becoming stricter than before. Lower fuel consumption is also a main goal of every manufacturer. There has been much research on engines and CO2 emissions, and technological advancements such as EFI (Electronic Fuel Injection) were introduced to increase fuel efficiency and lower emissions. Numerous attempts are now being made to modify the body-in-white structure to achieve the same goals, since a lighter vehicle delivers the desired result in this field. For that purpose, we will illustrate methods of weight reduction using multi-materials in the car body.

Car bodies are typically made of steel or aluminum, which leaves little flexibility in weight reduction and design. In contrast to single-material car bodies, multi-material technology allows the best material to be selected for every part of the car, for superior product performance and reduced cost. By using multi-materials we can optimize the weight of the car, using a specific material for a specific purpose. This approach gives us many choices and also makes the design more efficient.

The main driver for developing new automotive construction is the reduction of emissions, which have a harmful effect on the climate, as well as the reduction of fuel consumption. As a result, we have to consider reducing automotive weight and, consequently, different approaches to lightweight manufacturing. Steel construction still has capability for weight reduction in the future, but using multi-material design this capability can be greatly extended. Nowadays research activities are mainly focused on the multi-material concept, with the target of introducing the material with the best properties for the given requirements in the right position. Based on various methods of lightweight construction, techniques, and tools, it is possible to find an optimum between lightweight design and cost. These activities will be illustrated by several research examples.

II. MATERIALS

Before presenting different solutions and approaches, let us look into certain properties of the materials being employed in car design in place of steel alone. There are mainly two such materials: aluminum and magnesium.

2.1 Aluminum
The European automotive industry has more than doubled the average amount of aluminum used in passenger cars during the last decade and will do even more so in the coming years. The following types of aluminum are used in the car body:
i. Aluminum sheet
ii. Aluminum die cast
iii. Aluminum extrusion
One of the main advantages of aluminum is its availability in a large variety of semi-finished forms, such as shape castings, extrusions, and sheet. Such semis are very suitable for mass production and for innovative solutions in the form of compact and highly integrated components. Aluminum can be up to two-and-a-half times stronger than steel and can absorb twice as much crash energy. Vehicles made lighter with aluminum can have improved acceleration, braking, and handling, and better fuel economy. Finally, aluminum is easily repaired, though it takes special techniques.

2.2 Magnesium
Pure magnesium is about one-third lighter than aluminum and two-thirds lighter than steel. Lighter weight translates into greater fuel efficiency, making magnesium alloy parts very attractive to the auto industry. These lighter parts come with good ductility and elongation properties, giving the material good dent and impact resistance, as well as fatigue resistance. The alloys also display good high-speed machinability and good thermal and electrical conductivity. Although magnesium alloys can be easily machined into various parts, they really stand out when die cast. They can be formed into complex shapes in one casting, often reducing cost by eliminating several steel stampings and the associated assembly. The magnesium used in our multi-material car body is:
i. Magnesium sheet
ii. Magnesium die cast
If you were to look at a cross-section of a die-cast part, you would see a very thin (and coatable) skin covering a fine interior microstructure. Once decried as magnesium's greatest weakness, this microstructure is now recognized as one of magnesium alloy's greatest strengths. It allows the material to be cast with very thin walls, optimizing design and decreasing the component's weight. The microstructure also gives the alloys good sound and vibration dampening qualities; in fact, many luxury cars use magnesium alloys for valve covers and other under-the-hood parts, keeping the ride nice and quiet. Engineers like die casting with magnesium alloys because they can design to specific yield strength, fatigue, and creep criteria. A note of caution, however: there is essentially no creep in magnesium alloys at room temperature, but if higher temperatures are anticipated in the application, the design will need to accommodate the resulting creep factors.

III. MATERIAL SELECTION METHODOLOGY

The choice of suitable materials can be very difficult for an engineer. For this purpose, the requirements for every part have to be identified and rated. The criteria are:
i. Energy absorption
ii. Structural integrity
iii. Stiffness
iv. Formability
v. Surface quality
The same criteria are used to rate the material properties. A comparison of the requirement ratings with the material ratings gives the engineer an idea of the possible material choices. The next step is to involve additional criteria such as:
i. Cost
ii. Life cycle analysis
iii. Simulation
iv. Corrosion
v. Joining
vi. Producibility
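The rating-and-comparison step described above can be made concrete as a weighted decision matrix. The following Python fragment is a minimal sketch of that idea; the weights and the 1-5 ratings are invented placeholders for illustration, not values from this paper.

CRITERIA = {  # criterion -> weight (assumed; must sum to 1.0)
    "energy_absorption": 0.25,
    "structural_integrity": 0.25,
    "stiffness": 0.20,
    "formability": 0.15,
    "surface_quality": 0.15,
}

SCORES = {  # material -> rating per criterion, 1 (poor) to 5 (good); illustrative only
    "steel":     {"energy_absorption": 4, "structural_integrity": 5,
                  "stiffness": 5, "formability": 4, "surface_quality": 4},
    "aluminum":  {"energy_absorption": 5, "structural_integrity": 4,
                  "stiffness": 3, "formability": 4, "surface_quality": 4},
    "magnesium": {"energy_absorption": 3, "structural_integrity": 3,
                  "stiffness": 2, "formability": 3, "surface_quality": 3},
}

def weighted_score(material):
    # Combine the per-criterion ratings with the requirement weights.
    return sum(w * SCORES[material][c] for c, w in CRITERIA.items())

for m in sorted(SCORES, key=weighted_score, reverse=True):
    print(f"{m:10s} {weighted_score(m):.2f}")

In practice the additional criteria (cost, corrosion, joining, producibility) would simply extend the weight and score tables in the same way.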

3.1 Simulation
After the choice of materials, the design proposal is verified by simulation.

IV. JOINING TECHNOLOGIES

One of the most important and difficult steps in realizing a multi-material car body is the selection and definition of a suitable joining method between two different materials. Various joining techniques can be used within the multi-material concept in car design, owing to the diverse joining and geometrical configurations. These techniques are highlighted below.

4.1 Self-Tapping Screws
Self-tapping screws, already used in models such as the TT, are also suitable for joining aluminum and CFRP parts, for example in the area of the longitudinal member.

4.2 Friction Stud Welding


Another high-end method, friction stud welding, is used to join steel and aluminum.

4.3 Rivet
A steel element, a kind of rivet, penetrates an aluminum panel while rotating at high speed and under great pressure, creating a friction-welded joint with the steel sheet below.

4.4 Resistance Spot Welding


RSW is the most commonly used technique for hot-pressed steel owing to its low cost and process robustness. There are also many promising developments in rivet technology and in aluminum resistance spot welding.

4.5 Roller-Type Hemming


Another innovative joining technique is roller-type hemming. Here, rollers secured to a robot arm bend the outer panel over the inner panel and create a strong connection through the application of a hem-bonding adhesive. The add-on components of the new A8 (doors, bonnet and tailgate) and the connection of the wheel arch with the side-panel frame are processed in this way.

4.6 Inductive Gelling


In this process, the hem-bonding zones on the add-on components are heated by targeted induction (an electric field), which cures the hem-bonding adhesive. The component is thus stabilized, and any slipping of the outer panel relative to the inner panel is avoided.

4.7 Laser Beam-MIG-Hybrid Welding
This is made possible by, among other things, innovative manufacturing processes. For example, the conventional laser and arc welding processes are combined in the laser beam-MIG-hybrid welding process (MIG stands for Metal Inert Gas), which is completely new in vehicle construction with aluminum.

4.8 Adhesive Gluing


Continuous joining is important owing to the static and dynamic requirements of a lightweight structure, i.e. a structure of lower-modulus materials. Adhesive gluing eliminates the risk of corrosion posed by the carbon-reinforced materials while simultaneously sealing the joint.

4.9 Friction Stir Spot Welding


This technique is mainly used for spot joining aluminum instead of riveting. Attention must be paid to the tool life in order to keep the cost within the allocated budget.

4.10 Semi Hot Welding


This technique is very promising for edge or angle joints with single-sided access. It includes MIG, cold metal transfer (CMT) and laser weld-brazing techniques. Semi-hot welding enables the use of hollow sections in a space frame, thus reducing the overall weight of the car body.

4.11 Cold Joining (Self Piercing Riveting)


It is a widely used technique for keeping the same geometrical configuration and assembly placement. Self-piercing riveting is an important solution for joining non-weldable material pairs such as magnesium with aluminum and magnesium with steel sheet.
Table 1. Processes vs. Material Combination (material combinations: Steel to Steel, Steel to Al, Steel to Mg, Al to Al, Al to Mg, Mg to Mg, CFRP to Steel, CFRP to Al; processes: Resistance Spot Welding, Laser and Arc Welding, Self-Piercing Riveting, Friction Stir Welding)
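Table 1 can be read as a lookup from a material pair to the candidate joining processes. The Python sketch below encodes such a mapping; since the table's applicability marks were lost in this copy, the entries are inferred from the surrounding sections (e.g. self-piercing riveting for the non-weldable Mg-Al and Mg-steel pairs, adhesive gluing and self-tapping screws for CFRP joints) and should be treated as assumptions.

CANDIDATE_PROCESSES = {
    # material pair (order-independent) -> processes discussed in this paper
    frozenset(["steel"]):         ["resistance spot welding", "laser and arc welding"],
    frozenset(["steel", "Al"]):   ["friction stir spot welding", "self-piercing riveting",
                                   "arc/laser weld brazing"],
    frozenset(["steel", "Mg"]):   ["self-piercing riveting", "clinching"],
    frozenset(["Al"]):            ["friction stir spot welding", "laser and arc welding"],
    frozenset(["Al", "Mg"]):      ["self-piercing riveting"],
    frozenset(["Mg"]):            ["friction stir welding"],
    frozenset(["CFRP", "steel"]): ["adhesive gluing", "self-tapping screws"],
    frozenset(["CFRP", "Al"]):    ["adhesive gluing", "self-tapping screws"],
}

def candidates(a, b):
    # frozenset makes the pair symmetric, and collapses (x, x) to {x}
    return CANDIDATE_PROCESSES.get(frozenset([a, b]), [])

print(candidates("Al", "Mg"))     # ['self-piercing riveting']
print(candidates("steel", "Al"))  # options for the mixed aluminum-steel pair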

V. MATERIAL COMBINATION

5.1 Steel to Steel Joining


The material choice strategy for the future is not only to decrease weight or make the car safer, but to use anything that will help build a successful vehicle. If a material succeeds in meeting very particular performance goals (noise, vibration, airflow, weight and, of course, cost), then it will be used.

5.1.1 Resistance Spot Welding
Resistance spot welding is the most widely adopted steel-to-steel joining technique due to its low cost, easy automation and robustness. The cycle time in the current assembly layout is 3 s; each robot produces about 20 spots per minute. A first characteristic is that the spot weld shearing strength increases with assembly thickness over the 1 mm to 4 mm range currently used in car bodies; much better results can be obtained by keeping the spot diameter at least 5 times the assembly thickness (for good quality: spot diameter >= 5 x assembly thickness). Another characteristic concerns shearing strength versus pulling strength: as one increases the other decreases, and the ratio between shear strength and pulling strength decreases from 0.5 to 0.3. This effect is due to the reduction in ductility caused by the increasing hardenability of the steel. The results are not greatly affected in practice, however, because the unibody structure is mainly subjected to shearing stresses rather than pulling stresses.
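The sizing rule quoted above (spot diameter at least five times the assembly thickness) is straightforward to encode. The Python sketch below does so; the thickness values are illustrative, not taken from the paper.

def spot_diameter_ok(spot_diameter_mm, assembly_thickness_mm):
    # True if the weld nugget satisfies the rule of thumb d >= 5 * t.
    return spot_diameter_mm >= 5.0 * assembly_thickness_mm

for t in (1.0, 2.0, 4.0):  # thickness range quoted for car bodies, in mm
    print(f"t = {t} mm -> minimum spot diameter {5.0 * t} mm")

print(spot_diameter_ok(12.0, 2.0))  # True: 12 mm >= 10 mm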

Fig. 1 Tensile Shear Stress vs. Steel Strength

Resistance spot welding of hot-pressed steel has been improved by Arcelor. This improvement is obtained by increasing the induced energy, producing high-strength spot welds for increasing assembly sheet thicknesses, and by enlarging the welded nugget diameter.

Fig. 2 Spot Welding Strength vs. Arcelor

A third characteristic is that resistance spot weld strength is superior to self-piercing riveting and clinching as assembly thickness increases, as shown in Fig. 3.


Fig. 3 RSW Characteristic vs. Assembly Thickness

Resistance spot welding thus gives better results with respect to strength than clinching and self-piercing riveting.
5.1.2 Laser and Arc Welding
Resistance spot welding is limited to an average joining speed of about 1 m/min. This speed can be increased by laser welding, typically up to an equivalent speed of 5 m/min, or 0.5 s per equivalent weld spot.

Fig. 4 Assembly Productivity vs. Laser Welding Systems

Laser welding was first applied to continuous overlap joining using CO2 and later yttrium aluminum garnet (YAG) lasers. Nowadays many car manufacturers use it for roof/body-side joining. This usage was not widely adopted until the T-angle joining configuration made it efficient by suppressing the roof moulding. To obtain good integration at a high cadence, a 3 kW laser should be used; the T weld-brazed joining speed is then > 2.5 m/min. New laser sources using SLAB CO2, disk or fiber technology improve beam quality. This improvement enables longer focal lengths to perform "laser remote welding", as well as the separation of the laser head from the clamping tools. This approach is efficient for assembling "flat" parts, with the main advantage of improved passenger visibility thanks to smaller flange widths. Arc welding of aluminum induces brittleness at the grain boundaries; consequently, weld brazing, which leaves the aluminum coating intact, is better suited for lap edge joining. Cold metal transfer or pulsed MIG has been successfully used with a cupro-silicon filler wire for lap edge joining of hot-pressed parts, with a high joint strength >= 30 daN/mm. In bending and torsion, both material and shape are important parameters for the efficiency of the component in carrying the applied load [5].

5.2 Aluminum to Steel Joining


The main approach to saving weight is the progressive introduction of aluminum; consequently, aluminum-to-steel joining is a key issue. The most common current solution is cold joining, often adhesive gluing with riveting, the rivets being needed to clamp the parts before the glue cures. Hybrid joining is more costly, so new, simpler processes have been worked out. Aluminum-steel pairs are not weldable by processes inducing

a liquid phase, because of the formation of brittle intermetallics. A monitored resistance spot welding process that proceeds by solid-state diffusion between aluminum and steel has been developed in Japan by Kobelco.
5.2.1 Resistance Spot Welding
Resistance spot welding of aluminum to steel uses current control versus impedance to promote just the fusion needed to limit intermetallics. This makes it possible to increase the "welded" spot diameter and the spot shearing strength with welding current.

Fig. 5 Spot "Welded" Diameter and Spot Shearing Strength vs. Welding Current

Friction stir spot welding is another solid-state process that is applied much like resistance spot welding. The process parameters are as easy to manage as on a machine tool, and welding guns produced by Kawasaki are similar to resistance spot welding guns. Spot weld strength is produced by the penetration of the tool pin into the assembly. Work carried out in the PhD thesis of Bozzi demonstrates that spot strength is promoted by the dimensions of the steel hanging zone in the aluminum. The typical spot strength produced, about 350 daN for a 2 mm assembly thickness, is equivalent to self-piercing riveting.

Fig. 6 Spot Weld Strength vs. Distance of Weld

The industrialization benefit depends on tool durability, which has recently been improved from 450 spots with a W-Re 25% alloy to 2000 spots with a coated CW material from Boehlert.
5.2.2 Arc and Arc-Laser Welding (Lap Edge Joining)
Fusion welding processes produce intermetallics at the Al-steel interface. Their thickness increases with overheating and may decrease with cooling rate; it should be kept below 20 μm to avoid brittleness. AC and DC MIG processes produce satisfactory weld-brazed lap edge joints with control of the intermetallic thickness. The Central Electricity Authority (CEA) developed a MIG flat-wire process with the possibility of enlarging gap tolerances up to 2 mm. The tensile shear strength of overlap specimens meets the

basic aluminum strength through the production of a thick diffusion layer (≈ 40 μm) quasi without intermetallics (≈ 1.5 μm). Progress on the tooling coil is still needed to achieve robustness.

Fig. 7 Diffusion Layer

To summarize, the work results can be represented in a weld-brazing strength-speed diagram: arc MIG and hybrid arc-laser weld-brazing joint strength decreases as welding speed increases. The geometrical criteria for high lap edge joint strength include a large interface length, greater than 3 times the upper sheet thickness, for all arc weld-brazing processes.
5.2.3 Self-Piercing Riveting
Clinching and self-piercing riveting can be used for joining magnesium alloy die castings to steel and aluminum alloy sheet. Ordinarily, these processes cannot be used for joining magnesium die castings due to their low ductility; nevertheless, the clinching process for joining magnesium die castings to steel and aluminum sheet has been used very successfully. Self-piercing riveting also offers the ability to set a threaded or shaped head into magnesium, so attachments for fixing trim in place or for screw fittings can also be attached to magnesium die castings.
5.2.4 Friction Stir Welding
Magnesium alloy components present joining difficulties when the die castings are integrated into the rest of the structure, particularly where joints between magnesium and aluminum or steel are required. The only method currently available for completing such joints is bolting, but this adds weight and cost, requires accurate alignment and can create problems of fretting and wear between the bolt and the softer magnesium alloy. Alternative methods of joining magnesium to dissimilar metals are essential if the cost of using lightweight magnesium components is to be reduced and the weight savings maximized. Seam welds are possible using friction stir welding.

VI. EFFECT OF COST

To summarize the multi-joining process data, a techno-economic synthesis may be attempted. Steel-steel joining is the least costly. Aluminum-steel joining is possible by several processes depending on the priority: joint strength versus joining speed must be traded off. Effort is still needed to improve the performance of multi-material joining in order to reduce the cost per unit of weight saved.
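The "lightweight ratio" referred to in the conclusion (cost per unit of reduced weight) makes this compromise explicit. The Python sketch below computes it for a few design options; the costs and weight savings are purely hypothetical placeholders, not data from this paper.

def cost_per_kg_saved(extra_cost_eur, weight_saved_kg):
    # Extra manufacturing/joining cost divided by the weight removed.
    if weight_saved_kg <= 0:
        raise ValueError("option saves no weight")
    return extra_cost_eur / weight_saved_kg

options = {  # option -> (extra cost in EUR, weight saved in kg); hypothetical
    "Al door + SPR/adhesive":  (120.0, 8.0),
    "Mg die-cast inner + SPR": (150.0, 6.0),
}

for name, (cost, saved) in options.items():
    print(f"{name}: {cost_per_kg_saved(cost, saved):.1f} EUR per kg saved")

A lower ratio indicates a more cost-effective lightweighting option relative to the steel-steel baseline.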


Fig. 8 Comparison of joining cost & speed for multi materials joining

VII. CONCLUSION

The multi-material concept has been shown to be very capable in terms of weight reduction and cost feasibility. Based on this, it is ultimately the manufacturer who has to decide which approach is most effective. It was shown that different approaches with different lightweight ratios (cost per unit of reduced weight) are possible. The paper also showed that extensive work is still needed on developing joining techniques for parts made of multiple materials. In this context, adhesive bonding has demonstrated high potential, but dynamic simulation capabilities still have to be achieved. Robust joining methodologies are the target for the future, always taking the issue of cost reduction into account. In order to reduce CO2 emissions, weight reduction must be considered, and weight reduction is closely connected with technical development. To reduce weight, three issues need to be considered:
i. Structure
ii. Material
iii. Process
In reducing car weight there is a trade-off among the communization strategy, performance, cost, etc., and a balance between these factors is required. For further weight reduction it is necessary to consider the car as a whole, not just one part or unit. Car structures will shift to multi-material designs, so optimization to apply the best material and process will be essential. The lightweight car is one of the needs of the present and especially the future automotive industry, and the multi-material car body is one of the main solutions. In the multi-material car body, one of the most challenging and important aspects is the joining technique between the materials. The choice among different techniques depends on the materials to be joined (e.g. Al, Mg, steel, plastics), the thickness of the workpiece at hand, the stress level, proper orientation, the geometrical assembly configuration (e.g. angle, edge, overlap) and, most importantly, on the joining speed affecting the cycle time.

REFERENCES
[1]. Martin Goede, Marc Stehlin, Lukas Rafflenbeul, Gundolf Kopp & Elmar Beeh, "Super Light Car - lightweight construction thanks to a multi-material design and function integration", European Conference of Transport Research Institutes (ECTRI), 2008.
[2]. Lutz Berger, Micha Lesemann, Christian Sahr, "SuperLIGHT-CAR - The Multi-Material Car Body", 7th European LS-DYNA Conference.
[3]. Catarina Ribeiro, José V. Ferreira and Paulo Partidário, "Life Cycle Assessment of a Multi-Material Car Component".
[4]. Claes Magnusson, Roger Andersson, "Stainless Steel as a Lightweight Automotive Material".
[5]. Ashby, M. F. (2000), Materials Selection in Mechanical Design, Butterworth-Heinemann, ISBN 0-7506-4357-9.


AUTHORS
Irfan Dost is a University Lecturer at the Department of Mechanical Engineering, Sarhad University of Science and Information Technology. He holds BSc and MSc degrees in Mechanical Engineering from the NWFP University of Engineering and Technology, Peshawar. Before joining Sarhad University, he was a Project Engineer with MIA Corporation Ltd at the S. Abdulla site, Islamabad. His main research interests include manufacturing processes, heat and mass transfer, industrial organization, engineering economics and CAD. He has taught courses on these and other subjects at all levels.

Majid Aziz is a master's degree student at Politecnico di Torino, a well-known Italian university. He completed his bachelor's degree in Automotive Engineering at the same university and has already been selected for a PhD in Mechanical Engineering at Politecnico di Torino, Italy.

Shoukat Alim Khan is a master's degree student at the well-known Italian university Politecnico di Torino. He completed his bachelor's degree in Automotive Engineering at the same university and is now a student of Mechanical Engineering. He has also been selected for a PhD in Mechanical Engineering at Politecnico di Torino.


IMPROVED AODV BASED ON LINK QUALITY METRICS


Balaji V1,2 and V. Duraisamy3
1 Research Scholar, Maharaja Institute of Technology, Coimbatore, India
2 Assistant Professor-II, SASTRA University, SRC Campus, Kumbakonam, India
3 Principal, Maharaja Institute of Technology, Coimbatore, India

ABSTRACT
The wireless interfaces in mobile ad hoc networks (MANETs) have limited transmission range, so communication traffic is relayed over several intermediate nodes to establish a communication link between two nodes. Since the destination is reached over multiple hops from the source, routing plays an important role in ad hoc network reliability. Because the network is dynamic in nature, conventional routing protocols may not perform well under adverse conditions such as poor link quality and high mobility. In this paper, a new MANET routing method based on Ad hoc On-demand Distance Vector (AODV) routing and Ant Colony Optimization (ACO) is proposed for networks with varying levels of link quality. ACO is inspired by the biological behaviour of ants: the achievement of complex solutions with limited intelligence and individual capacity within these communities can be emulated in ad hoc networks. A new link quality metric is defined to enhance the AODV routing algorithm so that it can take the link quality between nodes into account when evaluating routes.

KEYWORDS:

Mobile ad hoc network (MANET), Ad hoc On demand Distance Vector (AODV), Ant Colony Optimization (ACO), Link quality, Metrics.

I. INTRODUCTION

A mobile ad hoc network (MANET) is a decentralized group of mobile nodes which exchange information temporarily by means of wireless transmission. Since the nodes are mobile, the network topology may change rapidly and randomly over time. The topology is unstructured, and nodes may enter or leave the network at will. A node can exchange information with other nodes that are within its broadcast range. Such networks are flexible and suit several situations and applications, allowing the establishment of temporary communication without pre-installed infrastructure [1]. Because of the limited transmission range of wireless interfaces, communication traffic is relayed over several intermediate nodes to establish a communication link between two nodes; hence, such networks are also known as mobile multi-hop ad hoc networks. Nodes fulfil the functionality of hosts and also have to act as routers, forwarding packets for other nodes. The main issue in MANETs is the finding of routes between communication end-points, aggravated by node mobility. The literature reveals different approaches that try to handle this problem [2], but there is still no routing algorithm that suits all cases. Routing plays an important role in ad hoc network reliability, and routing protocols can be classified as proactive or reactive. The former discover routes for every node pair through continuous updating of routing tables at specific time intervals, irrespective of the data traffic between source and destination. A route must thus be available whenever communication is proposed between

source and destination. Popular proactive routing protocols are Destination-Sequenced Distance-Vector (DSDV) routing [3], Optimized Link State Routing (OLSR) [4] and Fisheye State Routing (FSR) [5]. Wireless networks based on proactive routing protocols carry additional network overhead because of the constant updating of route traffic, in exchange for minimum end-to-end delay. Reactive routing protocols, in contrast, establish a route to a destination only when needed. Although network control packet overheads are lower in reactive routing protocols, end-to-end delay increases because of the route discovery procedure [6]. Common reactive routing protocols include Ad hoc On-demand Distance Vector (AODV) routing [7], Dynamic Source Routing (DSR) [8] and Associativity Based Routing (ABR) [9]. The routing algorithms for mobile ad hoc networks are inherited from conventional algorithms, which are subject to much criticism as they do not consider all ad hoc network characteristics, including mobility and resource constraints. This paper proposes a new MANET routing method inspired by the biological behaviour of ants: the achievement of complex solutions with limited intelligence and individual capacity within ant communities can be emulated in ad hoc networks composed of small, limited-capacity nodes moving randomly in unpredictable environments. New metrics have been defined to enhance the AODV routing algorithm so that it can take link quality between nodes into account when evaluating routes. The performance of the suggested algorithm is compared to AODV. This paper is organized as follows: Section 2 looks into related work available in the literature, Section 3 gives a brief discussion of the AODV routing protocol, Section 4 discusses link metrics, Section 5 presents the experimental setup and results, and Section 6 concludes the paper with the scope for future work.

II. RELATED WORKS

For mobile, multi-hop ad hoc networks, Mesut Günes et al. [10] introduced a novel on-demand routing algorithm. The protocol is developed on the basis of swarm intelligence, with the main emphasis on the ant colony metaheuristic: the ability of swarms to solve complex mathematical and engineering problems is mapped onto the routing problem. The proposed routing protocol, termed the Ant-Colony-Based Routing Algorithm (ARA), is highly flexible, effective and scalable, and its major aim is to decrease the routing overhead. The good performance of ARA is revealed through simulation experiments. Hsin-Mu Tsai et al. [11] examine a hop-count based routing protocol that takes the quality of the employed links into account. Current ad hoc wireless routing protocols usually select the route with the shortest path or minimum hop count to the destination, but choosing routes by this criterion tends to include links of longer hop length, which tend to have bad signal quality. Such links usually have a poor signal-to-noise ratio (SNR), causing higher frame error rates and lower throughput. Routing through bad links must be eliminated to enhance the routing protocols. A modification of the Ad hoc On-demand Distance Vector (AODV) routing protocol is proposed to avoid routing through links of bad quality; the hand-off concept is also incorporated during route maintenance to prevent link breakages. OPNET simulations show promising results, giving the protocol a much lower routing overhead and better performance in terms of throughput and delay than the original AODV protocol. Designers of distributed systems experience several system optimization problems during construction. They have conventionally depended on optimization methods that require neither prior knowledge nor centrally managed runtime knowledge of the system's environment, since only such methods remain feasible in dynamic networks with frequent and unpredictable changes in topology, resource and node availability. To deal with this issue, Jim Dowling et al. [12] proposed collaborative reinforcement learning (CRL), a technique that facilitates solving system optimization problems online in dynamic, decentralized networks. The performance of CRL is evaluated in SAMPLE, an implementation of CRL in a routing protocol for MANETs. The simulation results show how feedback in the selection of links by routing agents allows SAMPLE to adapt and optimize its routing behaviour to differing network environments, resulting in optimization of throughput. Emergent properties such as traffic flows which utilize

stable routes and reroute around congestion are displayed by SAMPLE in the experiments; SAMPLE is cited as an exemplar of a complex adaptive distributed system. When services are implemented in a wireless network, the node used is necessarily supposed to be in good condition. Network partitioning is the first criterion in a wireless environment: the client can no longer reach the host of the service it is using. This criterion, or QoS attribute, can be handled efficiently by predicting partitioning and implementing service replication, which comprises electing a new host for the service and duplicating it on the new host. By means of service replication, wireless network QoS involving many criteria can be handled; the major remaining issue is the time at which replication must occur. Michaël Hauspie et al. [13] introduced a metric for link quality estimation together with a quick and reliable protocol to compute it in practice. Simulations and analysis of the proposed method reveal the robustness of the link metric, which does not degrade the effectiveness of the network as a broadcast storm would. Topology changes are also handled, as the algorithm is totally decentralized.

III. AD HOC ON-DEMAND DISTANCE VECTOR (AODV)

If a node using the Ad hoc On-demand Distance Vector (AODV) [7] routing protocol has no route to the destination with which it wants to communicate, that node starts route discovery to locate the destination node. During this process, the source node broadcasts a route request (RREQ) packet to its neighbours, as shown in Figure 1(a). When a node receives a RREQ packet, it forwards the packet to its neighbours until either the destination node or an intermediate node having a route to the destination is reached. When the destination node, or an intermediate node with a route to the destination, receives the RREQ message, it replies with a route reply (RREP) packet sent back to the source node, as shown in Figure 1(b). Sequence numbers are used in AODV to keep routes loop-free.
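The RREQ/RREP hand-shake just described can be sketched in a few lines. The following Python fragment is a toy illustration on a static graph, not the protocol implementation evaluated in this paper: the topology and node names are assumptions, the RREQ flood is modelled as a breadth-first search, and real AODV additionally maintains sequence numbers, route lifetimes and route error handling.

from collections import deque

ADJACENCY = {  # hypothetical topology: node -> neighbours in radio range
    "S": ["A", "B"], "A": ["S", "C"], "B": ["S", "C"],
    "C": ["A", "B", "D"], "D": ["C"],
}

def route_discovery(src, dst):
    # Flood RREQs breadth-first; the RREP then follows the reverse path.
    parent = {src: None}
    queue = deque([src])
    while queue:
        node = queue.popleft()
        if node == dst:                    # destination answers with an RREP
            path, n = [], dst
            while n is not None:
                path.append(n)
                n = parent[n]
            return path[::-1]              # forward route used by data packets
        for nbr in ADJACENCY[node]:
            if nbr not in parent:          # first RREQ copy sets the reverse route
                parent[nbr] = node
                queue.append(nbr)
    return []                              # no route found

print(route_discovery("S", "D"))           # e.g. ['S', 'A', 'C', 'D']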

(a) Propagation of Route Request (RREQ) Packet

(b) Path Taken by Route Reply (RREP) Packet

Figure 1 Route Discovery in AODV

IV. LINK QUALITY

Link quality is a dominant parameter, as it defines the ability of a given link and its devices to support the traffic density for the connected period. The link state between two neighbours is affected by parameters such as distance, battery power and mobility. A further parameter in route selection is the number of connections over the same path: choosing less-loaded paths saves the resources of the intermediate nodes along the stretch by distributing network traffic over other nodes, which increases system lifetime but also the end-to-end delay. The link quality between two neighbours is the ability of the link to remain stable as long as possible, to have few bit errors and to deliver the signal to the destination with high strength. The literature evaluates link quality according to the received signal strength, as the transmission power of a wireless medium is directly proportional to link quality: a high-strength signal is stable and has fewer bit errors. The following equation gives the reception power Pr for a signal transmitted with power Pt over a distance d:

Pr = Pt * Gt * Gr * (λ / (4πd))^2    (1)

where Gt is the antenna gain of the transmitter, Gr is the antenna gain of the receiver, and λ is the wavelength. From this equation, evaluating link quality based on received signal strength is also descriptive of other network factors:
Battery power: this is important, as a node with low energy in its battery has a limited transmission range, affecting its link quality with the neighbourhood; in addition, it cannot forward data for long. When the battery level is low, the transmission power is proportionately low, leading to low reception power; hence the link is not of high quality.
Distance: reception power is related to the distance between the nodes; as distance increases, link quality decreases.
Mobility: the link between two nodes is affected by node mobility, as link quality decreases when neighbours move away from each other and increases when they come closer.
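As an illustration of how Eq. (1) can drive a link metric, the Python sketch below computes the free-space reception power and maps it onto a normalized [0, 1] quality score. The antenna gains, carrier frequency and the threshold/reference powers used for normalization are illustrative assumptions, not values from this paper.

import math

def reception_power(pt_w, gt, gr, wavelength_m, distance_m):
    # Free-space reception power, Eq. (1).
    return pt_w * gt * gr * (wavelength_m / (4.0 * math.pi * distance_m)) ** 2

def link_quality(pr_w, pr_min_w, pr_ref_w):
    # Map received power onto [0, 1]; below pr_min the link is unusable.
    if pr_w <= pr_min_w:
        return 0.0
    return min(1.0, (pr_w - pr_min_w) / (pr_ref_w - pr_min_w))

wl = 3e8 / 2.4e9  # wavelength of an assumed 2.4 GHz carrier
for d in (10.0, 50.0, 100.0):
    pr = reception_power(0.1, 1.0, 1.0, wl, d)  # 100 mW, unity-gain antennas
    print(f"d = {d:5.1f} m  Pr = {pr:.3e} W  quality = "
          f"{link_quality(pr, 1e-12, 1e-7):.2f}")

A score of this kind can then be combined with the hop count when evaluating candidate routes, so that shorter but weaker links are penalized.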

V. EXPERIMENTAL SETUP AND RESULTS

Simulations are run to evaluate the performance of the proposed routing in terms of end-to-end delay, routing traffic and throughput, using the NS-2 simulation tool [15]. A network area of 670 m × 670 m is built on the simulation platform, and the simulation is run for 300 s. The performance of the proposed routing is compared with traditional AODV. Figures 2, 3 and 4 show the simulation results for end-to-end delay, routing traffic and throughput.

Figure 2: End to end delay in seconds

From Figure 2 it can be seen that the end-to-end delay in the proposed system is significantly reduced, which is advantageous for multimedia traffic. Due to the inclusion of additional overhead parameters, the control packet overhead incurred in finding the optimized route increases, as seen in Figure 3.

Figure 3: Routing Traffic bits/ sec

In the proposed routing, the ants tend to distribute the traffic over multiple paths, ensuring decreased end-to-end delay. Although the routing traffic is increased in the proposed routing, the throughput is better than that of AODV, as seen in Figure 4.

Figure 4: Throughput in bits/sec

VI. CONCLUSION

In this paper, it was proposed to improve link quality handling by incorporating ant colony optimization into the ad hoc on-demand distance vector routing protocol. A new link quality metric was defined to enhance the AODV routing algorithm so that it can take link quality between nodes into account when evaluating routes. Simulations were run and the proposed routing was compared with AODV. A 4% improvement in end-to-end delay, along with an average improvement of 6% in throughput, was observed. Further work needs to be carried out to reduce the control packet overhead.

REFERENCES
[1]. Bibhash Roy, Suman Banik, Parthi Dey, Sugata Sanyal, Nabendu Chaki, "Ant Colony based Routing for Mobile Ad-Hoc Networks towards Improved Quality of Services", Journal of Emerging Trends in Computing and Information Sciences, Vol. 3, No. 1, January 2012, ISSN 2079-8407.
[2]. C.-K. Toh, Ad hoc mobile wireless networks: protocols and systems, Prentice Hall, 2002.
[3]. C. E. Perkins and P. Bhagwat, "Highly Dynamic Destination-Sequenced Distance-Vector Routing (DSDV) for Mobile Computers", Proc. ACM SIGCOMM '94, vol. 24, Oct. 1994, pp. 234-244.
[4]. T. Clausen and P. Jacquet, "Optimized link state routing protocol (OLSR)", RFC 3626, Oct. 2003.
[5]. G. Pei, M. Gerla and T.-W. Chen, "Fisheye State Routing: A Routing Scheme for Ad Hoc Wireless Networks", Proc. IEEE ICC 2000, vol. 1, 2000, pp. 70-74.
[6]. Valerie Alandzi and Alejandro Quintero, "Proximity Aware Routing in Ad Hoc Networks", Journal of Computer Science 3 (7): 533-539, 2007.
[7]. C. E. Perkins, E. M. Royer and S. R. Das, "Ad Hoc On-Demand Distance Vector (AODV) Routing", Proc. of IEEE Workshop on Mobile Computing Systems and Applications, pp. 90-100, Feb. 1999.
[8]. Johnson, D. B., Maltz, D. A. and Hu, Y. (2003), "The Dynamic Source Routing Protocol for Mobile Ad hoc Networks (DSR)", IETF MANET.
[9]. C. K. Toh, "Associativity-Based Routing For Ad Hoc Mobile Networks", Wireless Personal Communications Journal, Special Issue on Mobile Networking and Computing Systems, Kluwer Academic Publishers, vol. 4, no. 2, pp. 103-139, 1997.
[10]. M. Gunes, U. Sorges, I. Bouazizi, "ARA - The Ant-Colony Based Routing Algorithm for MANETs", in: Proceedings of the International Conference on Parallel Processing Workshops (ICPPW'02), Workshop on Ad Hoc Networks (IWAHN 2002), August 2002, pp. 79-85.
[11]. Tsai, H.-M., Wisitpongphan, N., Tonguz, O. K., "Link-quality aware ad hoc on-demand distance vector routing protocol", in: International Symposium on Wireless Pervasive Computing 2006, p. 6.
[12]. Jim Dowling, Eoin Curran, Raymond Cunningham and Vinny Cahill, "Using Feedback in Collaborative Reinforcement Learning to Adaptively Optimize MANET Routing", IEEE Transactions on Systems, Man, and Cybernetics, Part A: Systems and Humans, Vol. 35, No. 3, May 2005.
[13]. Hauspie, M., Simplot, D., Carle, J., "Replication decision algorithm based on link evaluation services in MANET", CNRS UPRESA 8022 LIFL, University of Lille, 2002.
[14]. Dorigo, M. and Di Caro, G., "Ant colony optimization: a new meta-heuristic", in: Proceedings of the Congress on Evolutionary Computation, 1999.
[15]. The Network Simulator ns-2. Project web page available at http://www.isi.edu/nsnam/ns/

AUTHORS BIOGRAPHY
V. Duraisamy received his B.E. degree in Electrical & Electronics Engineering (1991), his M.E. degree in Electrical Machines (1997) and his Ph.D. degree (2006) from Anna University, Chennai. He has 21 years of teaching experience and is currently working as Professor and Principal at Hindusthan College of Engineering and Technology, Coimbatore. He is a life member of ISTE and SSI and a member of IE. He has published more than 40 research papers in journals and conferences. His research interests include soft computing, electrical machines and ad hoc networks.
V. Balaji received his B.Tech. degree in Electronics and Communication Engineering (2003) and his M.Tech. degree in Applied Electronics (2007) from Pondicherry University and Bharath University, respectively. He is currently working as Assistant Professor-II at SASTRA University, SRC Campus, Kumbakonam, and pursuing his Ph.D. degree at Maharaja Institute of Technology (affiliated to Anna University, Chennai), Coimbatore. He is a life member of ISTE and has published more than 4 research papers in journals and conferences. His research interests

include ad hoc networks and image processing.


IMPROVEMENT OF POWER QUALITY OF A DISTRIBUTED GENERATION POWER SYSTEM


Aruna Garipelly
Electrical and Electronics Engineering, Aurora's Engineering College (Affiliated to JNTUH), Bhuvanagiri, Nalgonda, Andhra Pradesh, India

ABSTRACT
The aim of this work is to improve the power quality of a Distributed Generation (DG) system with a power storage system. Power quality is the combination of voltage quality and current quality; it is the set of limits on electrical properties that allows electrical systems to function in their intended manner without significant loss of performance or life. Electrical power quality is an issue of growing concern: the main problems are stationary and transient distortions in the line voltage, such as harmonics, flicker, swells, sags and voltage asymmetries. Distributed Generation (DG), also called on-site generation, dispersed generation, embedded generation, decentralized generation, decentralized energy or distributed energy, generates electricity from many small energy sources. In recent years, micro electric power systems such as photovoltaic generation systems, wind generators and micro gas turbines have increased with the deregulation and liberalization of the power market. Under such circumstances, the environment surrounding the electric power industry has become ever more complicated, and providing high-quality power in a stable manner has become an important topic. Here, DG is assumed to include Wind power Generation (WG) and Fuel Cells (FC). The advantages of this system are a constant power supply, constant voltage magnitude, the absence of harmonics in the supply voltage and an uninterrupted power supply. In this work, the electric power quality is compared in two cases. Case I: with the storage battery introduced. Case II: without the storage battery. The storage battery executes a control that maintains the voltage in the power system, and it is found that the electric power quality is improved when the storage battery is introduced. The model system used in this work is composed of a wind turbine, an induction generator, fuel cells, an inverter and a storage battery. A miniature wind power generator is represented by WG, and a fuel cell module by FC. Transmission lines are simulated by resistors and coils; the combined length of the lines from the synchronous generator to the load terminal is 1.5 km. This model is simulated using MATLAB/SIMULINK.

KEYWORDS:

Power Quality, Voltage Sag, Energy Storage, VSC, SVPWM, Distributed Generation (DG) Power System, Wind Generation System, Fuel Cell Modules.

I. INTRODUCTION

One of the major concerns in the electricity industry today is power quality problems at sensitive loads. Presently, the majority of power quality problems are due to various fault conditions. These conditions cause voltage sag, which may lead to apparatus tripping, the shutdown of commercial, domestic and industrial equipment, and malfunctions of drive systems. The proposed system can provide a cost-effective solution to mitigate voltage sag by establishing the appropriate voltage quality level required by the customer, and it has recently been used as an active solution. Distributed generation consists of back-up electric power generating units that are used in many industrial facilities, hospitals, campuses, commercial buildings and department stores. Most of these back-up

units are used primarily by customers to provide emergency power during times when grid-connected power is unavailable, and they are installed within the consumer premises where the electric demand arises. The installation of back-up units close to the demand centre avoids the cost of transmitting the power and the associated transmission losses. Back-up generating units are currently defined as distributed generation to differentiate them from the traditional centralized power generation model. The centralized model has proven to be an economical and reliable source of energy production. However, with the lack of a significant increase in new generating capacity, or even in the expansion of existing capacity, to meet the demand of today's mega-cities, the whole electrical power industry is facing serious challenges and is looking for a solution. Technological advancement has shown modern industries a path to extract and develop innovative technologies within the limits of their industries for the fulfilment of their industrial goals. Their ultimate objective is to optimize production while minimizing production cost, thereby maximizing profits while ensuring continuous production throughout the period. As such, a stable supply of uninterruptible power has to be guaranteed during the production process. The reason for demanding high-quality power is basically that modern manufacturing and process equipment, which operates at high efficiency, requires a high-quality, defect-free power supply for the successful operation of its machines. More precisely, most of the machine components are designed to be very sensitive to power supply variations; adjustable speed drives, automation devices and power electronic components are examples of such equipment. Failure to provide the required power quality may sometimes cause a complete shutdown of an industry, resulting in a major financial loss to the industry concerned. Thus industry always demands high-quality power from the supplier or utility. But the blame for degraded quality cannot be put solely on the utility itself: it has been found that most of the conditions that can disrupt the processes are generated within the industry itself. For example, many of the non-linear loads within industries cause transients which can affect the reliability of the power supply.

II. DISTRIBUTED GENERATION (DG) SYSTEMS

Distributed Generation (DG), also called on-site generation, dispersed generation, embedded generation, decentralized generation, decentralized energy or distributed energy, generates electricity from many small energy sources. In recent years, micro electric power systems such as photovoltaic generation systems, wind generators and micro gas turbines have increased with the deregulation and liberalization of the power market. Under such circumstances, the environment surrounding the electric power industry has become ever more complicated, and providing high-quality power in a stable manner has become an important topic. Here, DG is assumed to include Wind power Generation (WG) and Fuel Cells (FC). Wind energy is the world's fastest-growing energy technology. It is a clean energy source that is reliable and efficient, and it reduces the cost of energy for homeowners, farmers and businesses. Wind turbines can be used to produce electricity for a single home or building, or they can be connected to an electricity grid for more widespread electricity distribution. They can even be combined with other renewable energy technologies. For utility-scale wind energy, a large number of turbines are usually built close together to form a wind farm; several electricity providers today use wind farms to supply power to their customers. Fuel cell systems have high energy efficiency: the efficiency of low-temperature proton exchange membrane (PEM) fuel cells is around 35-45%, high-temperature solid oxide fuel cells (SOFC) can have efficiencies as high as 65%, and the overall efficiency of an SOFC-based combined-cycle system can even reach 70%. Renewable energy and fuel cell systems are environmentally friendly: they have zero or low emissions of the pollutant gases that cause acid rain, urban smog and other health problems, and therefore no environmental clean-up or waste disposal costs are associated with them.

Why distributed generation (DG) systems?
The five major factors that contribute to the renewed interest in distributed generation (DG) systems are:
i. Electricity market liberalization
ii. Developments in DG technologies

iii. Increased customer demand for highly reliable electricity
iv. Environmental concerns
v. Constraints on the construction of new transmission lines

Advantages of DG systems:
Utility perspective:
i. On-site power supply avoids transmission and distribution losses.
ii. Increased efficiency compared with central power generation.
iii. Diversification of power sources.
iv. A possible solution to constraints on new transmission lines.
v. Cleaner power by using renewable sources such as wind and sun.
vi. Better quality of power.
vii. A hedge against uncertain load growth and high market prices.
Customer perspective:
i. Improved energy efficiency and reduced greenhouse-gas emissions through combined heat and power (CHP) plants and renewable sources.
ii. Improved reliability by having back-up generation.
iii. Compensation from the utility for making generation capacity available to the power system in areas with power shortages.
Commercial power producer perspective:
i. Distributed generation systems, with their comparatively small size and short lead times as well as their varied technologies, allow players in the electricity market to respond flexibly to changing market conditions.
ii. The ability to sell ancillary services (such as reactive power and standby capacity).

Why alternative/renewable energy?


The term alternative energy refers to energy produced in an environmentally friendly way (different from conventional means, i.e. fossil-fuel power plants, nuclear power plants and hydropower plants). The alternative energy considered in this work is either renewable or has high energy conversion efficiency. A broad range of energy sources can be classified as alternative energy, such as solar, wind, hydrogen (fuel cell), biomass and geothermal energy. Nevertheless, as mentioned in the previous section, at present the majority of the world's electricity is still generated from fossil fuels, nuclear power and hydropower. However, due to the following problems and concerns with conventional energy technologies, renewable/alternative energy sources will play an important role in electricity generation, and sooner or later today's alternatives will become tomorrow's main sources of electricity:
i. Conventional generation technologies are not environmentally friendly.
ii. Conventional energy sources are not renewable.
iii. The cost of using fossil and nuclear fuels will go higher and higher.
iv. Hydropower sources are not sufficient, and the sites are normally far away from load centres.
v. Political and social concerns about safety are pushing nuclear power away.

Alternative/renewable power sources have the following advantages:
1) Renewable energy resources are not only renewable but also abundant: for example, according to data from 2000, U.S. wind resources could produce more electricity than the entire nation uses, and the total solar energy reaching the earth's surface in a day is about 1000 times more than all fossil fuel consumption.
2) Fuel cell systems have high energy efficiency: the efficiency of low-temperature proton exchange membrane (PEM) fuel cells is around 35-45%; high-temperature solid oxide fuel cells (SOFC) can have efficiencies as high as 65%; and the overall efficiency of an SOFC-based combined-cycle system can even reach 70%.
3) Renewable energy and fuel cell systems are environmentally friendly: these systems have zero or low emissions of the pollutant gases that cause acid rain, urban smog and other health problems, and therefore carry no environmental clean-up or waste disposal costs.
4) Different renewable energy sources can complement each other: though renewable energy resources are not evenly distributed throughout the world, every region has some kind of renewable

energy resources; at the least, sunlight reaches every corner of the world. And different energy resources (such as solar and wind energy) can complement each other, which is important for improving energy security for a nation like the U.S. that is currently dependent on foreign energy sources. These renewable/alternative power generation systems normally have a modular structure and can be installed close to load centres as distributed generation sources (except for large wind and PV farms); therefore, no high-voltage transmission lines are needed for them to supply electricity. In general, due to the ever-increasing energy consumption, the rising public awareness of environmental protection, the depletion of fossil fuels, and the intense political and social concern over nuclear power safety, alternative (i.e. renewable and fuel cell based) power generation systems have attracted increased interest.

III. PROPOSED SYSTEM

The block diagram of the proposed model system is shown in Fig. 1 below. In this work, the electric power quality is compared in two cases. Case I: with the storage battery introduced. Case II: without the storage battery. The storage battery executes a control that maintains the voltage in the power system, and it is found that the electric power quality is improved when the storage battery is introduced. The proposed model system is composed of a wind turbine, an induction generator, fuel cells, an inverter and a storage battery. A miniature wind power generator is represented by WG, and a fuel cell module by FC. Transmission lines are simulated by resistors and coils; the combined length of the lines from the synchronous generator to the load terminal is 1.5 km. The model is simulated using MATLAB/SIMULINK.

Figure 1: Block diagram of proposed system

The proposed system consists of the following main sections:


1. Wind generator: A wind turbine and an induction generator are used as the wind generator, converting wind energy into electrical energy. The induction generator is the most common generator in wind energy applications due to its simplicity and ruggedness, a lifetime of more than 50 years, the fact that the same machine can be used as motor or generator without modification, high power per unit mass of material, and flexibility in the speed range of operation. The main drawbacks of the induction generator are its lower efficiency and the need for reactive power to build up the terminal voltage. However, the efficiency

can be improved by modern design, and solid-state converters can be used to supply the required reactive power. Induction generators are widely used in non-conventional power generation; self-excited, squirrel-cage induction generators are ideally suited for remote, stand-alone applications due to their robust construction. A new closed-loop IGBT-based PWM controller is proposed for a self-excited induction generator.
2. Fuel cell module: Fuel cell systems have high energy efficiency. The efficiency of low-temperature proton exchange membrane (PEM) fuel cells is around 35-45%; high-temperature solid oxide fuel cells (SOFC) can have efficiencies as high as 65%; and the overall efficiency of an SOFC-based combined-cycle system can even reach 70%. Renewable energy and fuel cell systems are environmentally friendly: they have zero or low emissions of the pollutant gases that cause acid rain, urban smog and other health problems, and therefore no environmental clean-up or waste disposal costs are associated with them.
3. Energy storage unit: This is responsible for energy storage in DC form. Flywheels, lead-acid batteries, superconducting magnetic energy storage (SMES) and super-capacitors can be used as energy storage devices. Estimates of the typical energy efficiency of four storage technologies are: batteries 75%, flywheels 80%, compressed air 80%, SMES 90%.
4. SVPWM inverter: A three-phase voltage source inverter (VSI) is used, converting the DC of the storage system to AC using the SVPWM technique. Single-phase VSIs cover low-power applications, and three-phase VSIs cover medium- to high-power applications. The main purpose of these topologies is to provide a three-phase voltage source in which the amplitude, phase and frequency of the voltages are always controllable. Although most applications require sinusoidal voltage waveforms (e.g. ASDs, UPSs, FACTS, var compensators), arbitrary voltages are also required in some emerging applications (e.g. active filters, voltage compensators). The standard three-phase VSI topology is shown in Fig. 2, and the eight valid switch states are given in Table 1. As in single-phase VSIs, the switches of any leg of the inverter (S1 and S4, S3 and S6, or S5 and S2) cannot be switched on simultaneously, because this would result in a short circuit across the DC link voltage supply. Similarly, to avoid undefined states in the VSI, and thus undefined AC output line voltages, the switches of any leg cannot be switched off simultaneously, as this would result in voltages that depend on the respective line current polarity. Of the eight valid states, two (7 and 8 in Table 1) produce zero AC line voltages; in this case, the AC line currents freewheel through either the upper or the lower components.

Figure 2: Three phase Voltage Source Inverter (VSI)

The remaining states (1 to 6 in Table 1) produce non-zero AC output voltages. In order to generate a given voltage waveform, the inverter moves from one state to another; thus the resulting AC output line voltages consist of the discrete values Vi, 0 and -Vi. The selection of states to generate the given waveform is done by the modulating technique, which should ensure that only valid states are used.
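The eight valid states and the line voltages they yield can be enumerated directly from this description. The short Python sketch below does so under the convention that 'p'/'n' denotes a leg tied to the positive/negative DC rail; it is a didactic aid, not part of the simulated model.

from itertools import product

VI = 1.0  # DC link voltage, normalised

for state in product("pn", repeat=3):          # legs a, b, c
    va, vb, vc = (VI if s == "p" else 0.0 for s in state)
    vab, vbc, vca = va - vb, vb - vc, vc - va  # line-to-line voltages
    zero = "  (zero state)" if vab == vbc == vca == 0.0 else ""
    print(f"{''.join(state)}: Vab={vab:+.0f} Vbc={vbc:+.0f} Vca={vca:+.0f}{zero}")

Running this reproduces the pattern described in the text: six states with line voltages taking the values Vi, 0 and -Vi, and the two states ppp and nnn producing zero output.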

Table 1: Valid switching states for three phase VSI

5. Voltage source converter (VSC): This is used in the storage system to inject AC voltages during stationary and transient distortions of the line voltage.
6. DC/DC converter: This is used to convert between DC levels for the storage.
7. Passive filters: Higher-order harmonic components distort the compensated output voltage. A filter is used to convert the PWM inverter pulse waveform into a sinusoidal waveform; this is achieved by removing the unnecessary higher-order harmonic components generated in the DC-to-AC conversion in the VSI.
8. Three-phase voltage injection transformers: In a three-phase system, three single-phase transformer units or one three-phase transformer unit can be used for voltage injection.
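As a rough illustration of item 7, the corner frequency of a second-order LC low-pass output filter is fc = 1/(2π√(LC)); it should sit well above the fundamental (50/60 Hz) and well below the switching frequency so the PWM harmonics are attenuated. The Python sketch below uses illustrative component values that are assumptions, not values from this paper.

import math

def lc_cutoff_hz(l_henry, c_farad):
    # Corner frequency of a second-order LC low-pass filter.
    return 1.0 / (2.0 * math.pi * math.sqrt(l_henry * c_farad))

L, C = 2e-3, 20e-6                   # 2 mH, 20 uF (illustrative)
fc = lc_cutoff_hz(L, C)
print(f"cut-off = {fc:.0f} Hz")      # ~796 Hz: passes 50 Hz, attenuates multi-kHz PWM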

IV. SPACE VECTOR PULSE WIDTH MODULATION

The topology of a three-leg voltage source inverter is shown in Fig. 3. Because of the constraints that the input lines must never be shorted and that the output current must always be continuous, a voltage source inverter can assume only eight distinct topologies, shown in Fig. 4. Six of these eight topologies produce a non-zero output voltage and are known as non-zero switching states; the remaining two produce zero output voltage and are known as zero switching states.

Figure 3: Topology of a three-leg voltage source inverter

Figure 4: Eight switching state topologies of a voltage source inverter.

4.1. Voltage Space Vectors
Space vector modulation (SVM) for the three-leg VSI is based on the representation of the three-phase quantities as vectors in a two-dimensional (α, β) plane. This is illustrated here for the sake of completeness. Considering topology 1 of Fig. 4, which is repeated in Fig. 5(a), we see that the line voltages Vab, Vbc and Vca are given by
Vab = Vg, Vbc = 0, Vca = -Vg    ... (1)
This can be represented in the α-β plane as shown in Fig. 5(b), where the voltages Vab, Vbc and Vca are three line voltage vectors displaced 120° in space. The effective voltage vector generated by this topology is represented as V1 (pnn) in Fig. 5(b). Here the notation pnn refers to the three legs/phases a, b, c being connected to either the positive DC rail (p) or the negative DC rail (n); thus pnn corresponds to phase a being connected to the positive DC rail and phases b and c being connected to the negative DC rail.
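One convenient way to reproduce this mapping numerically is the amplitude-invariant transform V = (2/3)(xa + a·xb + a²·xc) with a = e^(j2π/3). The Python sketch below applies it to the pole voltages of state pnn; the normalization convention is an assumption, as the text does not fix one.

import cmath
import math

def space_vector(xa, xb, xc):
    # V = (2/3) * (xa + a*xb + a^2*xc), with a = exp(j*2*pi/3)
    a = cmath.exp(1j * 2.0 * math.pi / 3.0)
    return (2.0 / 3.0) * (xa + a * xb + a * a * xc)

# Pole voltages for state pnn, normalised to Vg = 1: phase a on the positive
# rail, phases b and c on the negative rail.
v1 = space_vector(1.0, 0.0, 0.0)
print(f"|V1| = {abs(v1):.3f} Vg at {math.degrees(cmath.phase(v1)):.1f} deg")

With this convention V1 lies along 0° in the α-β plane, matching its position in Fig. 5(b); evaluating the remaining five non-zero states yields the hexagon of Fig. 6.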

Figure 5(a): Topology 1-V1 (pnn) of a voltage source inverter.

Figure 5(b): Representation of topology 1 in the (α, β) plane.

Proceeding on similar lines the six non-zero voltage vectors (V1 - V6) can be shown to assume the positions shown in Fig.6. The tips of these vectors form a regular hexagon (dotted line in Fig.6). We define the area enclosed by two adjacent vectors, within the hexagon, as a sector. Thus there are six sectors numbered 1 - 6 in Fig.6.

Figure 6: Non-zero voltage vectors in the (α, β) plane

Considering the last two topologies of Fig. 4, which are repeated in Fig. 7(a) for the sake of convenience, we see that the output line voltages generated by these topologies are given by

Vab = 0, Vbc = 0, Vca = 0    (2)

These are represented as vectors of zero magnitude and hence are referred to as zero-switching-state vectors or zero voltage vectors. They assume the position at the origin in the (α, β) plane, as shown in Fig. 7(b). The vectors V1-V8 are called the switching state vectors (SSVs).


Figure 7(a): Zero output voltage topologies

Figure 7(b): Representation of the zero voltage vectors in the (α, β) plane.

4.2. Space Vector Modulation


The desired three-phase voltages at the output of the inverter can be represented by an equivalent vector V rotating in the counter-clockwise direction, as shown in Fig. 8(a). The magnitude of this vector is related to the magnitude of the output voltage (Fig. 8(b)), and the time this vector takes to complete one revolution equals the fundamental period of the output voltage.

Figure 8(a): Output voltage vector in the (α, β) plane

Figure 8(b): Output line voltages in time domain.

Let us consider the situation when the desired line-to-line output voltage vector V is in sector 1, as shown in Fig. 9. This vector can be synthesized by pulse-width modulation (PWM) of the two adjacent SSVs V1 (pnn) and V2 (ppn), with duty cycles d1 and d2 respectively, together with the zero vector (V7 (nnn) / V8 (ppp)) of duty cycle d0:

d1V1 + d2V2 = V = mVg e^(jθ)    (3)
d1 + d2 + d0 = 1    (4)

Figure 9: Synthesis of the required output voltage vector in sector 1

All SVM schemes and most of the other PWM algorithms use Eqns. (3) and (4) for the output voltage synthesis. Modulation algorithms that use non-adjacent SSVs have been shown to produce higher THD and/or switching losses and are not analyzed here, although some of them, e.g. hysteresis, can be very simple to implement and can provide faster transient response. The duty cycles d1, d2 and d0 are uniquely determined from Fig. 9 and Eqns. (3) and (4); the only difference between PWM schemes that use adjacent vectors is the choice of the zero vector(s) and the sequence in which the vectors are applied within the switching cycle. The degrees of freedom available in the choice of a given modulation algorithm are: the choice of the zero vector, i.e. whether to use V7 (ppp) or V8 (nnn) or both; the sequencing of the vectors; and the splitting of the duty cycles of the vectors without introducing additional commutations. In Eq. (3), 0 ≤ m ≤ 0.866 is the modulation index. This corresponds to a maximum line-to-line voltage of 1.0 Vg, which is 15% more than conventional sinusoidal PWM.
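As a worked example, the sector-1 duty cycles follow from solving Eqns. (3) and (4). A minimal Matlab sketch, under the assumption that the active vectors V1 and V2 are both of magnitude Vg and located at 0° and 60° (the variable names are illustrative):

% Duty cycles in sector 1 from Eqns. (3) and (4), assuming |V1| = |V2| = Vg
% with V1 at 0 deg and V2 at 60 deg, so that d1*V1 + d2*V2 = m*Vg*exp(j*theta).
m     = 0.7;                            % modulation index, 0 <= m <= 0.866
theta = pi/9;                           % reference angle inside sector 1 (0..60 deg)
d1 = (2/sqrt(3))*m*sin(pi/3 - theta);   % duty cycle of V1 (pnn)
d2 = (2/sqrt(3))*m*sin(theta);          % duty cycle of V2 (ppn)
d0 = 1 - d1 - d2;                       % remaining time spent on the zero vector(s)
fprintf('d1 = %.4f, d2 = %.4f, d0 = %.4f\n', d1, d2, d0)

Setting d0 = 0 at theta = 30° recovers the stated limit m = 0.866 of the linear modulation range.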

V. MATLAB MODELLING & SIMULATION RESULTS

1. Simulation Models:

Figure 10: Simulation of the final model of the proposed system with storage battery

Figure 11: Simulation of the final model of the proposed system without storage battery

Detailed simulations are performed on the improvement of power quality of a distributed generation power system using MATLAB Simulink. System performance is analyzed for compensating voltage sag with different DC storage capacities so as to achieve rated voltage at a given load. Various cases of different load conditions are considered to study the impact of DC storage on sag compensation.

2. Output Waveforms
The short-duration transient from 0.15 s to 0.2 s is shown in Figure 12. At 8 kW load, it can be clearly observed that the voltage phases come forward from their zero-crossing points and the voltage decreases, as shown in Figure 13. Variations in the load voltage with a fluctuating RLC load are shown in Figure 14. Output waveforms of the 3-phase load voltages when the battery is introduced are shown in Figure 15. Output waveforms of the load currents are shown in Figures 16 and 17 below.

Figure 12: Short-duration transient from 0.15 s to 0.2 s

Figure 14: Variations in the load voltage with fluctuating RLC load

Figure 13: At 8 kW load the voltage phases come forward from the zero-crossing point and the voltage decreases

Figure 15: Output wave forms of 3-phase load voltages when battery is introduced


Figure 16: Output waveforms of load currents

Figure 17: Output waveforms of three-phase load currents

VI. CONCLUSION

Based on the analysis of the test system, it is suggested that voltage sag values are a major factor in estimating the DC storage value. Investigations were carried out for various cases of load. The effectiveness of the proposed system mainly depends upon the rating of the DC storage and the percentage voltage sag. In the test system it is observed that, beyond a particular amount of voltage sag, the voltage level at the load terminal decreases. The major advantage is that the system can supply constant power to varying loads, sag is reduced completely, and the THD of the system is very low. This scheme of implementation can be applied to various kinds of loads and systems in future.

VII. FUTURE SCOPE
As future scope of this project, a storage battery with a good control structure can be added to the DC bus of a hybrid distributed generation (DG) power system. By designing with high-quality components and a good controller, the storage battery system can be integrated into the hybrid system to improve power quality, which is not covered in the scope of this project. This scheme of implementation can also be applied to higher ratings of power generation and to various kinds of loads and systems in future.


[7]. R.C. Bansal, T.S. Bhatti, and D. P. Kothari, Bibliography on the application of induction generator in non conventional energy systems, IEEE Trans. Energy Convers., vol. EC-18, no. 3, pp. 433439, Sep. 2003. [8]. G. K. Singh, Self-excited induction generator researchA survey, Electr. Power Syst. Res., vol. 69, no. 2/3, pp. 107114, May 2004. [9]. R. C. Bansal, Three phase isolated asynchronous generators: An overview, IEEE Trans. Energy Convers., vol. 20, no. 2, pp. 292299, Jun. 2005. [10]. O. Ojo, O. Omozusi, and A. A. Jimoh, The operation of an inverter assisted single phase induction generator, IEEE Trans. Ind. Electron., vol. 47, no. 3, pp. 632640, Jun. 2000. [11]. Yazhou Lei; Mullane, A. Lightbody, G. Yacamini, R. Modeling of the Wind Turbine with a Doubly Fed Induction generator for Grid Integration Studies, Dept of Electr & Electron. Eng., Univ. Coll. Cork, Ireland, 21 February 2006, pp.257 264. [12]. Ropp, M.E. Gonzalez, S. Development of a MATLAB/Simulink Model of a Single-Phase GridConnected Photovoltaic System, Dept of Electr. Eng., South Dakota State Univ., Brookings, SD, February 2009, pp. 195 202. [13]. J. Holtz, Pulse width modulation for electronic power conversion, Proc. IEEE, vol. 82, pp. 1194 1214, Aug. 1994. [14]. O. Ogasawara, H. Akagi, and A. Nabel, A novel PWM scheme of voltage source inverters based on space vector theory, in Proc. EPE European Conf. Power Electronics and Applications, 1989, pp. 11971202. [15]. M. Depenbrock, Pulsewidth control of a 3-phase inverter with nonsinusoidal phase voltages, in Proc. IEEE-IAS Int. Semiconductor Power Conversion Conf., Orlando, FL, 1975, pp. 389398. [16]. J. A. Houldsworth and D. A. Grant, The use of harmonic distortion to increase the output voltage of a three-phase PWM inverter, IEEE Trans. Ind. Applicat., vol. 20, pp. 1224 1228, Sept./Oct. 1984. [17]. Analysis, Simulation and Implementation of Space Vector Pulse Width Modulation Inverter E Hendawi, F Khater, A Shaltout - Power, 2006 - wseas.us [18]. Modern Power Electronics and AC Drives, by Bimal K. Bose. Prentice Hall Publishers, 2001 [19]. Power Electronics by Dr. P.S. Bimbhra. Khanna Publishers, New Delhi, 2003. 3rd Edition. [20]. A Power Electronics Handbook by M.H. Rashid. Academic Press 2001. [21]. Non-conventional energy sources by G.D.Rai. Khanna Publishers, New Delhi, 2009. 4th Edition.

Author
Aruna Garipelly was born in 1985 in India. She received the B. Tech. degree in Electrical and Electronics Engineering from Jawaharlal Nehru Technological University, Hyderabad, India. She is currently pursuing the M. Tech. (Power Electronics) degree in Electrical and Electronics Engineering at Aurora's Engineering College (affiliated to JNTUH), India.


FINDING CRITICAL BUCKLING LOAD OF RECTANGULAR PLATE USING INTEGRATED FORCE METHOD
G. S. Doiphode1 and S. C. Patodi2
1 Asst. Prof., Dept. of Applied Mechanics, Faculty of Tech. & Engg., M. S. University of Baroda, Vadodara, India
2 Professor, Dept. of Civil Engineering, Parul Institute of Engg. and Tech., Limda, Vadodara, India

ABSTRACT
A method which couples equilibrium equations and compatibility conditions, the latter developed from the equilibrium equations by a systematic concatenation procedure, is proposed here for plate buckling analysis. A RECT_9F_12D plate bending element having 9 force unknowns and 12 displacement degrees of freedom is used, with the necessary matrix formulation based on the Integrated Force Method (IFM). The geometric stiffness matrix required for buckling analysis is explicitly derived. Matlab software is used to develop the compatibility conditions, whereas the other calculations are carried out in a program developed in VB.NET. A rectangular plate under uniaxial loading is analysed under 7 different boundary conditions. A case of biaxial loading of a simply supported plate with loading ratio equal to one is also attempted using the proposed formulation. Results are obtained by considering either a 2 x 2 discretization of a quarter plate or a 4 x 2 discretization of a half plate, depending upon the type of symmetry available based on the support conditions. Results are compared with the available classical solutions to demonstrate the effectiveness of the proposed method; a good agreement is indicated.

KEYWORDS: Buckling Problems, Hybrid Plate Element, Integrated Force Method, Matlab

I. INTRODUCTION

Linear Elastic Stability Analysis (LESA) is an approach in which the critical intensity of the applied in-plane loading is calculated. The internal distribution of the induced orthogonal moments and the possible nodal displacements at any point in the isotropic plate are considered as independent variables in a secondary linear analysis, which is carried out after calculation of the critical loading. In practice, buckling-based collapse involves a nonlinear aspect of instability associated with post-buckling behavior and large inelastic deformations. Even so, LESA thoroughly describes the complete circumstances of failure, which are of design importance for a number of thin structural forms generally used in naval and aeronautical structures. It also furnishes the fundamental basis for a large part of practical design methodology, even where nonlinear phenomena must be taken into account to define accurately the magnitude of the load that causes failure. Thus, the complete form of the solution is frequently provided by the linear analysis procedure alone. The following are the three major approaches by which the classical buckling problem of plates can be formulated.

1.1 EQUILIBRIUM METHOD


In this method, it is assumed that the plate has buckled slightly, and the differential equation is written for the buckled form, with bending and stretching included simultaneously. The method thereby converts the complete problem into an eigenvalue problem, in which one can evaluate the

multiplying factor (λcr) of the external line load applied parallel to the neutral plane. The solution involves the homogeneous equation for w(x, y), with a few arbitrary constants (C0, C1, C2, ..., Cn) which are evaluated using the boundary conditions. Equating the determinant of the coefficients to zero, a polynomial (characteristic) equation is developed and the critical load is calculated from the following equation [1]:

Pcr = λcr P0    (1)

The trivial solution w = 0 of the characteristic equation corresponds to the unbuckled state, while w ≠ 0 relates to the buckled form.
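As a point of reference (not part of the formulation that follows), the classical closed-form result for a simply supported plate under uniaxial compression, Ncr = k·π²D/b² with k = (mb/a + a/(mb))² [2], can be evaluated for the plate data used later in this paper (4 m square, 200 mm thick, E = 2.01 x 10^11 N/m², ν = 0.23). A minimal Matlab sketch:

% Classical critical load of a simply supported plate under uniaxial
% compression: Ncr = k*pi^2*D/b^2, k = (m*b/a + a/(m*b))^2, minimized over m.
a = 4; b = 4; t = 0.2; E = 2.01e11; nu = 0.23;
D = E*t^3 / (12*(1 - nu^2));            % flexural rigidity of the plate
m = 1:5;                                % number of half-waves in x
k = (m*b/a + a./(m*b)).^2;              % buckling coefficients
Ncr = min(k) * pi^2 * D / b^2           % critical load per unit width (N/m)

For a square plate the minimum occurs at m = 1, giving the well-known coefficient k = 4.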

1.2 ENERGY METHOD


As per the energy method, whenever, due to loading, a plate passes from the stable to the unstable equilibrium stage, it goes through a neutral state of equilibrium, which is characterized by conservation of energy. It also emphasizes that the plate changes from the flat to the curved shape without gaining or losing energy. The corresponding energy equation is written as [2]

ΔU = ΔW,  i.e.  ΔU - ΔW = 0    (2)

where ΔU is the strain energy gained in bending and ΔW is the work done by the external forces. Since the small bending is caused without stretching or contracting of the middle surface, the work done by the external compressive force is due to the in-plane displacement produced by bending. Further, it is assumed that during buckling the intensity of the external forces remains the same. The work of the external forces is usually given as a function of the load parameter λ, as discussed above. It is evident that Eq. (2) can be used only when the expression for the deflection surface contains merely one undetermined coefficient. More often, one can formulate the buckling problem using a variational principle, in which the plate is said to be in a state of equilibrium to which an infinitesimally small disturbance is applied. Since the work of the external and internal forces must vanish, Eq. (2) can be written in the form

δΠ = 0    (3)

where Π denotes the total potential of the plate-load system pertinent to the stable stage of equilibrium and ΔΠ is the increment in the total potential, representing the neighboring state of equilibrium, in which the middle surface is slightly curved due to the small increase in the load. It is evident that for the stable equilibrium condition the first variation of the total potential must vanish (δΠ = 0). Expressing the incremental part of the total potential by a Taylor series expansion, minimizing it with respect to the arbitrary constants (C0, C1, C2, ..., Cn) and discarding higher-order terms, the smallest non-zero solution for λ is calculated, which is the same as λcr.

1.3 DYNAMIC METHOD


The stability problem can also be formulated by a dynamic approach [3], in which the analogy of dynamic equilibrium is implemented directly with numerical ease. The characteristic of a stable state of equilibrium is that, even after a small incremental external force is applied, the complete system oscillates but returns to its initial position of equilibrium. Alternatively, if the buckling shape and the free-vibration modal shapes are analogous, then the lowest natural frequency directly represents the lowest critical load for the different variant conditions. Thus, in developing the differential equation of transverse vibration, the effect of the in-plane forces must be considered, and the equation of motion will contain the load factor λ. The smallest value of λ producing lateral deflection that increases without limit is the critical load factor. In addition to the above classical methods, researchers have attempted plate buckling problems by using numerical methods such as the finite difference method [4] and the finite element method [5-7]. Some problems which are difficult to solve by classical methods have also been attempted by using the finite element method. Singh et al. [8] presented the elastic buckling behavior of simply supported and clamped thin rectangular isotropic plates having central cutouts subjected to uni-axial partial edge compression. It was concluded that the buckling strength of square plates is highly influenced by partial edge compression, as compared to plates subjected to uniform edge compression. Monfared [9] investigated buckling of circular and rectangular plates with different boundary conditions under

sinusoidal and axial compressive loading using a differential equivalent direct method and FEM-based ANSYS software. Good agreement was found between the analytical and numerical predictions for the critical buckling loads. In the present paper, an approach known as the Integrated Force Method (IFM), which has been successfully applied by Patnaik [10] and Patnaik and Yadagiri [11] for static and dynamic analysis of discrete and continuum structures, is extended to deal with plate buckling problems of rectangular geometry. The method combines the Equilibrium Equations (EEs) and the Compatibility Conditions (CCs) that are developed from the EEs by using a systematic concatenation procedure. By using this approach, one can calculate the internal moments and then the nodal displacements of isotropic and orthotropic plate bending problems [12-13]. The authors have also developed an IFM-based formulation for the buckling analysis of a variety of framed structures [14]. Here the same approach is extended to deal with plate buckling problems after development of the geometric stiffness matrix for a rectangular element. In the current work, after giving the formulation of the element equilibrium matrix, element flexibility matrix, global compatibility matrix and geometric stiffness matrix for a rectangular element having 9 force unknowns and 12 displacement degrees of freedom, the types of uniaxially and biaxially loaded plate problems considered in the paper are described. The steps required for finding the solution are discussed with reference to a simply supported plate, and the results obtained using the proposed integrated force based methodology for 8 different square plate problems are compared with the available classical solutions [15].

II. FORMULATION FOR BUCKLING ANALYSIS

In the integrated force method, the element forces {F} and the external load vector {P} are related as [12]

[S]{F} = {P*}    (4)

or, written out with its constituents,

[[B]; [C][G]]{F} = {{P}; {0}}    (5)

where [B] is the basic equilibrium matrix of size m x n, [C] is the compatibility matrix of size (n - m) x n and [G] is the concatenated flexibility matrix of size n x n, with n being the force degrees of freedom and m being the displacement degrees of freedom. The nodal displacement vector {X} is related to the element force vector {F} as follows:

{X} = [S^-1]^T [G]{F}    (6)

The eigenvalue-based stability analysis equation is obtained by the usual perturbation theory, and is given by

[S]{F} = λ[Kg][J][G]{F},  or  [[S] - λ[Sb]]{F} = {0}    (7)

where [Kg] is the geometric stiffness matrix and λ is the stability parameter. The matrix [Sb] is referred to as the IFM stability matrix, and [J] consists of rows taken from the [S^-1]^T matrix. After calculating the eigenvectors of size m, each λ is substituted in Eq. (7) for the calculation of {F}. Nodal displacements {X} are then worked out for each vector by substituting {F} in Eq. (6). The stability-based IFM procedure comprises the development of four matrices, in which the equilibrium matrix [B] links internal forces to external loads, the compatibility matrix [C] governs the deformations, the flexibility matrix [G] relates deformations to forces, and the geometric stiffness matrix [Kg] acts as the eigenvalue-supporting operator for dynamic and buckling analysis. Both the equilibrium and compatibility matrices of the IFM are unsymmetrical, having full row rank irrespective of the type of problem, whereas the material constitutive matrix, the flexibility matrix and the geometric stiffness matrix are symmetrical.

2.1 ELEMENT EQUILIBRIUM MATRIX


The element equilibrium equations, written in terms of forces at the grid points, represent the vectorial summation of the n internal forces {F} and the m external loads {P}. The nodal EE in matrix notation can

be stored as rectangular matrix [Be] of size m x n. The variational functional is evaluated as a portion of IFM functional which yields the basic element equilibrium matrix [Be] as follows [12]:
U_p = ∫∫_D [ Mx (∂²w/∂x²) + My (∂²w/∂y²) + 2 Mxy (∂²w/∂x∂y) ] dx dy = ∫_s {M}^T {κ} ds    (8)

where {M}^T = (Mx, My, Mxy) are the internal moments and {κ}^T represents the curvatures corresponding to each internal moment.

Consider a four-noded, 12-ddof (δ1 to δ12) rectangular element of thickness t with dimensions 2a x 2b along the x and y axes, as shown in Figure 1. The force field is chosen in terms of nine independent forces F1 to F9.    (9)

Fig. 1 Nodal Displacements

Relations between the internal moments and the independent forces are written, arranging in matrix form, as

{M} = [Y]{Fe}    (10)

where {Fe} = [F1, F2, F3, ..., F9]^T.

The displacement field satisfies the continuity condition and the selected forces also satisfy the mandatory requirement. The polynomial function for the lateral displacement of the rectangular element is written as

w(x, y) = α1 + α2x + α3y + α4x² + α5xy + α6y² + α7x³ + α8x²y + α9xy² + α10y³ + α11x³y + α12xy³    (11)

or it can be written as

w(x, y) = [A]{α}    (12)

where [A] is a row vector of size 1 x 12, which is a function of x and y, and {α} is a vector of size 12 x 1 consisting of the constants to be calculated. Substituting the nodal coordinates, one can find the constants and finally the interpolation matrix [N], following the usual finite element procedure. Here each component of [N] is associated with the nodal displacements δ1, δ2, δ3, ..., δ12 shown in Fig. 1. Eq. (12) is now expressed in terms of the nodal displacements as follows:

w(x, y) = [N]{δ}    (13)

By arranging all force and displacement functions properly, one can discretize Eq. (8) to obtain the elemental equilibrium matrix as follows:

Ue = {δ}^T [Be]{F}    (14)

where

[Be] = ∫_s [Z]^T [Y] ds    (15)

Here [Z] = [L][N], where [L] is the differential operator matrix, [N] is the displacement interpolation function matrix and [Y] is the force interpolation function matrix. Substituting Eq. (11) in Eq. (15) and integrating within the necessary limits, a non-symmetrical equilibrium matrix [Be] can be obtained. The matrix [Be] should have full row rank as a mathematical property.

2.2 ELEMENT FLEXIBILITY MATRIX [GE]


The element flexibility matrix for isotropic material is obtained by discretizing the complementary strain energy, which gives [13]

[Ge] = ∫_s [Y]^T [D][Y] dx dy    (16)

where [Y] is the force interpolation function matrix and [D] is the material property matrix. Substituting the values in Eq. (16) and integrating, one can calculate the [Ge] matrix. The size of [Ge] is fdof x fdof and it is a symmetrical matrix.
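Numerically, Eq. (16) is conveniently evaluated by Gauss quadrature. A minimal Matlab sketch, in which [Y] is an illustrative linear force interpolation (the paper's actual RECT_9F_12D interpolation is not reproduced here) and [D] is the isotropic moment-curvature compliance:

% Evaluation of Eq. (16), [Ge] = integral of [Y]'[D][Y] over the element,
% by 2x2 Gauss quadrature on a 2a x 2b rectangle centred at the origin.
a = 0.5; b = 0.5; t = 0.2; E = 2.01e11; nu = 0.23;
D = (12/(E*t^3)) * [1 -nu 0; -nu 1 0; 0 0 2*(1+nu)];  % compliance matrix
Y = @(x,y) [1 x y 0 0 0 0 0 0;                        % Mx row (assumed form)
            0 0 0 1 x y 0 0 0;                        % My row (assumed form)
            0 0 0 0 0 0 1 x y];                       % Mxy row (assumed form)
g  = [-1 1]/sqrt(3);                                  % Gauss points (weights = 1)
Ge = zeros(9);
for xi = g
    for eta = g
        x = a*xi; y = b*eta;                          % map to physical coords
        Ge = Ge + Y(x,y)' * D * Y(x,y) * (a*b);       % weight x Jacobian = a*b
    end
end

With a linear [Y], the 2 x 2 rule integrates the quadratic integrand exactly, and the resulting [Ge] is symmetric, as required.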

2.3 GLOBAL COMPATIBILITY MATRIX [C]


The compatibility matrix is obtained from the deformation-displacement relation (DDR), {β} = [B]^T {X}. In the DDR, all the deformations are expressed in terms of all possible nodal displacements, and the r compatibility conditions are developed in terms of the internal forces F1, ..., F2n, where 2n is the total number of internal forces in a given problem. The concatenated global compatibility matrix can be evaluated by multiplying the compatibility matrix [C] by the global flexibility matrix [G].

2.4 GEOMETRIC STIFFNESS MATRIX [KG]


Figure 2 shows a rectangular plate of thickness t subjected to in-plane compressive forces σx·t and σy·t acting along the neutral plane. The geometric stiffness matrix of the element for the force in the x direction can be obtained from

[Kge(x-x)] = ∫_s [Nx]^T [Nx] σx·t dx dy    (17)

Fig. 2 Plate Under In-plane Forces

where [Nx] is the vector of size (12 x 1) developed by differentiating the shape functions [N] with respect to x, and σx·t is the in-plane line load acting along the neutral plane. Carrying out the operations as per Eq. (17), the geometric

stiffness matrix for an element is obtained, which is a symmetric matrix of size 12 x 12. The explicit terms of the upper triangular part of [Kge] follow directly from this integration.

III. PLATE BUCKLING EXAMPLES

In total, seven examples of uniaxially loaded plates are considered here to validate the proposed method under different boundary conditions, i.e., simply supported (S), clamped (C), free (F) and their combinations, as shown in Figure 3. All the plates are subjected to an in-plane force along the x-x direction. Each plate has geometrical dimensions of 4000 mm x 4000 mm x 200 mm. The modulus of elasticity is taken as 2.01 x 10^11 N/m² and the Poisson ratio as 0.23. Figures 4 and 5 show the different discretization schemes, considering either one- or two-way symmetry depending upon the support conditions. One example of a biaxially loaded plate is also included, as shown in Figure 2.

Fig. 3 Plates with Different Boundary Conditions: (a) all edges simply supported; (b) all edges clamped; (c) two edges clamped and two edges SS; (d) two edges SS and two edges clamped; (e) one edge clamped and the other three SS; (f) one edge free and the other three SS; (g) two edges SS, one free and one clamped

Fig. 4 Discretization using Two-way Symmetry

Fig. 5 Discretization using One-way Symmetry

IV. STEPS AND RESULTS

For buckling analysis through the IFM, the steps used are explained here with reference to a simply supported plate. STEP 1 DEVELOP GLOBAL EQUILIBRIUM MATRIX [B]: A four-noded rectangular element (2a x 2b) with 12 ddof and 9 fdof is used, discretizing the problem into four elements. The element equilibrium matrix [Be] is obtained by using the method described above. The assembled global equilibrium matrix, for quarter symmetry of the S-S-S-S plate, is of size 12 x 36. STEP 2 DEVELOP GLOBAL COMPATIBILITY CONDITIONS [C]: The compatibility matrix for all discretized elements is obtained from the displacement-deformation relations (DDR), i.e. {β} = [B]^T {δ}. In the DDR, the 36 deformations, which correspond to the 36 force variables, are expressed in terms of the 12 displacements (δ1, δ2, ..., δ12) (Figure 6). The problem requires 24 compatibility conditions [C], which are obtained by using an auto-generated Matlab-based computer program, giving as input the upper part of the global equilibrium matrix.

Fig. 6 Force and Displacement Unknowns in a Quarter Plate (element force groups F1~F9, F10~F18, F19~F27 and F28~F36; Pcrit applied along x-x; 0.5 m discretization with symmetry lines)
STEP 3 DEVELOP GLOBAL FLEXIBILITY MATRIX [G]: The flexibility matrix for the problem is obtained by diagonal concatenation of the four element flexibility matrices:

[G] = diag([Ge1], [Ge2], [Ge3], [Ge4])    (18)

STEP 4 DEVELOP GLOBAL GEOMETRIC STIFFNESS MATRIX [KG]: The global geometric stiffness matrix is worked out by assembling the four elemental geometric matrices [Kg1] to [Kg4]. Using the standard stiffness-based assembly procedure, a global geometric stiffness matrix of size 36 x 36 is developed.

STEP 5 CALCULATE BUCKLING LOAD (PCRIT): The concatenated global CC matrix of size (24 x 36) is obtained after normalizing with respect to the components of [Be] of size (12 x 36); it is developed by multiplying the [C] matrix of size 24 x 36 by the global flexibility matrix of size 36 x 36. Substituting all the necessary matrices in Eq. (7), one gets the solution for the eigenvector of size (12 x 1) corresponding to the 12 global displacements of the quarter plate. The Buckling Load Ratio (BLR) is then calculated as the ratio of the IFM-based critical load to the exact solution [15], as reported in Table 1.

STEP 6 CALCULATE FORCE MODE SHAPE {F}: The internal unknowns (F1, F2, ..., F36) are auto-calculated in a Matlab-based eigenvalue analysis, i.e. [F, Pcrit] = eig(Smatrix, KJG), where [F] is a matrix of size 36 x 36, Pcrit is a diagonal matrix of size 36 x 36, and KJG is the product of the global geometric stiffness matrix [Kg], the matrix Jmatrix (the transpose of [Sinv]) and the global flexibility matrix [G] of size (36 x 36). Taking the sixth column of [F] (corresponding to the minimum critical load) and substituting in Eq. (10), the moments at points I, F, C and G are worked out (Figure 6). The values are normalized with respect to point C and are depicted in Table 1.
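A minimal Matlab sketch of this eigen solution, with random stand-ins for the assembled 36 x 36 operators (in the actual analysis, Smatrix, Kg and G come from Steps 1-4, and Jmatrix would keep only the relevant rows of [Sinv]'; here the full transpose is used for simplicity):

% Sketch of Step 6: the generalized eigenproblem of Eq. (7) in Matlab.
% Random matrices stand in for the assembled operators.
n = 36;
Smatrix = rand(n); Kg = rand(n); G = rand(n);
Sinv = inv(Smatrix);
KJG  = Kg * Sinv' * G;              % [Kg][J][G], here with [J] = [Sinv]'
[F, Pcrit] = eig(Smatrix, KJG);     % eigenvectors F, eigenvalues on diag(Pcrit)
[lam, k] = min(abs(diag(Pcrit)));   % smallest stability parameter
Fmode = F(:, k);                    % corresponding force mode shape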

Table 1. Results for S_S_S_S Plate

Buckling Load Ratio (BLR): 1.0608

Normalized moments (with reference to Point C):
Point I: Mx = 0.2854, My = 0.09244, Mxy = 0.8668
Point F: Mx = 0.9935, My = 0.6321, Mxy = 3.5133
Point C: Mx = 1.000, My = 1.000, Mxy = 1.000
Point G: Mx = 0.5862, My = 0.6931, Mxy = 3.6198

Normalized displacements δ1 to δ12: -0.6651, -1.136, 0.992, -0.7718, 1.4313, 1.0000, 0.8506, -1.0611, -0.7956, 0.6553, 0.5076, 0.6377

Following the above procedure, the results obtained for the critical load of the remaining six uniaxially loaded plate problems are compared with the classical solutions [15] in Table 2. The result obtained for a biaxially loaded plate, having a load ratio of 1, is also included in Table 2.
Table 2. Buckling Load Ratio (BLR = IFM/Exact [15])

Case (Uniaxial)   BLR
C_C_C_C           1.0534
C_S_C_S           1.0252
S_C_S_C           1.0354
C_S_S_S           1.0487
F_S_S_S           1.0634
F_S_C_S           1.0143

Case (Biaxial)    BLR
S_S_S_S           1.0112

V. CONCLUSIONS

The development of the compatibility conditions is the most crucial part of the IFM formulation. It is facilitated in the present work by developing an algorithm in VB.NET and linking it to the Matlab software. The generation of field and boundary compatibility conditions for any large-scale continuum problem with finer discretization may require more time, and there the suggested approach may prove very efficient. For problems involving a large number of unknowns, if the numerical difference between the components of the equilibrium and compatibility parts is large, it may drive the displacements to uncertain values; so, before proceeding further, normalization of the compatibility conditions is strongly recommended. It may be noted, however, that this does not make much difference in the calculation of the internal force vector. A number of plate buckling problems under uniform compressive loading in the x direction are attempted, with a variety of support conditions. The result for the critical buckling load is found to differ by 1 to 6.34% from the available classical solutions. The maximum difference of 6.34% is found in the case of an axially compressed thin square plate having one edge free and three edges simply supported. In the case of a fully simply supported plate subjected to biaxial loading, with a 2 x 2 discretization of the quarter plate, the value of the critical buckling load using the IFM is found to differ from the exact value by 1.12%. Thus, the IFM can be considered a viable alternative to the popular displacement-based finite element method for finding the in-plane critical load of rectangular plates. An extension of this method to buckling analysis of orthotropic rectangular plate problems is also straightforward.

REFERENCES
[1]. Szilard, R. (2004) Theories and Applications of Plate Analysis, John Wiley & Sons Inc., New Jersey.
[2]. Timoshenko, S. P. & Gere, J. M. (1961) Theory of Elastic Stability, McGraw-Hill Book Co., New York.
[3]. Reddy, J. N., Wang, C. M. & Wang, C. Y. (2005) Exact Solution for Buckling of Structural Members, CRC Press, Texas.

[4]. Iyenger, N. G. R. and Gupta, S. K. (1980) Programming Methods in Structural Design, Affiliated East-West Press Ltd., New Delhi.
[5]. Zienkiewicz, O. C. (1979) The Finite Element Method, Tata McGraw-Hill Publishing Co. Ltd., New Delhi.
[6]. Kapur, K. K. & Hartz, B. J. (1966) Stability of Thin Plates using the Finite Element Method, Proceedings of the American Society of Civil Engineers, Journal of the Engineering Mechanics Division, Vol. 2, pp. 177-195.
[7]. Carson, W. G. & Newton, R. E. (1969) Plate Buckling Analysis using a Fully Compatible Finite Element, AIAA Journal, Vol. 8, pp. 527-529.
[8]. Singh, S., Kulkarni, K., Pandey, R. & Singh, H. (2003) Buckling Analysis of Rectangular Plates with Cutouts subjected to Partial Edge Compression using FEM, Journal of Engineering, Design and Technology, Vol. 10, Issue 1, pp. 128-142.
[9]. Monfared, V. (2012) Analysis of Buckling Phenomenon under Different Loadings in Circular and Rectangular Plates, World Applied Sciences Journal, Vol. 17, Issue 12, pp. 1571-1577.
[10]. Patnaik, S. N. (1973) An Integrated Force Method for Discrete Analysis, International Journal of Numerical Methods in Engineering, Vol. 451, pp. 237-251.
[11]. Patnaik, S. N. & Yadagiri, S. (1982) Frequency Analysis of Structures by Integrated Force Method, Journal of Sound and Vibration, Vol. 83, pp. 93-109.
[12]. Doiphode, G. S., Kulkarni, S. M. & Patodi, S. C. (2008) Improving Plate Bending Solutions using Integrated Force Method, 6th Structural Engineering Convention, Chennai, pp. 227-235.
[13]. Doiphode, G. S. & Patodi, S. C. (2011) Integrated Force Method for Fiber Reinforced Composite Plate Bending Problems, International Journal of Advanced Engineering Technology, Vol. II, Issue 4, pp. 289-295.
[14]. Doiphode, G. S. & Patodi, S. C. (2012) Integrated Force Method for Buckling Analysis of Skeletal Structures, The Indian Journal of Technical Education, Special Issue of NCEVT'12, pp. 143-150.
[15]. Pilkey, W. D. (2005) Formulas for Stress, Strain and Structural Matrices, 2nd Edition, John Wiley & Sons Inc., New Jersey.

AUTHORS
Ganpat S. Doiphode is currently an Assistant Professor with the Applied Mechanics Department, Faculty of Technology & Engineering, M. S. University of Baroda. He received his B.E. (Civil) and M.E. (Structures) degrees in 1992 and 1996 from M. S. University of Baroda and the National Institute of Technology, Surat, respectively. He is pursuing a Ph.D. in the field of the Integrated Force Method and its application to structural engineering and has published 22 research papers in national and international conferences and journals.

Subhash C. Patodi received his Ph.D. from IIT Bombay in 1976. After serving for 30 years as Professor of Structural Engineering at the M. S. University of Baroda, he is currently working as Professor in Civil Engineering Department at the Parul Institute of Engineering and Technology, Vadodara. He has published 292 research papers in National and International Journals and Conferences. His current research interest includes Cementitious Composites, Numerical Methods and Soft Computing Tools.


INFLUENCE OF TYPE OF CHEMICAL ADMIXTURES ON SAND AND CEMENT CONTENT OF ORDINARY GRADE CONCRETE
M. K. Maroliya
Assistant professor, Applied Mechanics Dept, Faculty of Technology & Engineering, M. S. University of Baroda, Vadodara, India

ABSTRACT
Tests were conducted on concrete with the addition of chemical admixtures to observe the change in ingredient contents of concrete, such as sand and cement, under the influence of plasticizers and superplasticizers at various dosage levels. The results of the treated mixes were compared with the control mix. Observations were made on the fresh phase of concrete to note the variation in workability at constant and at reduced water-cement ratio. From the experience and knowledge gained from this course of study, both plasticizers and superplasticizers not only improved workability at constant water-cement ratio but also considerably enhanced the compressive strength at reduced water-cement ratio; however, an increase in sand content is required to overcome bleeding and segregation, and it is noted that for the same strength it becomes possible to reduce the cement content.

KEYWORDS: slump loss, density, compressive strength, workability, sand, cement content.
I. GENERAL INTRODUCTION

Many exciting innovations in materials and construction procedures have appeared in the last few decades. All round the globe, efforts are being made to make concrete a more exact material, and the introduction of admixtures has been one of the most notable contributions to concrete technology. Today efforts are made to improve not only the compressive strength of concrete but also its durability. Durability has gained worldwide concern because experts believe that the expenditure on rehabilitation and resurrection of concrete structures in the near future is going to equal the expenditure on new construction. Admixtures are used to change the rheological properties of concrete or mortar to make them suitable for the work at hand, or for economy, or for other purposes such as saving energy. In many instances, e.g. very high strength, resistance to freezing and thawing, or retardation and acceleration of setting time, an admixture may be the only feasible means of achieving the desired result. In other instances, certain desired objectives may be best achieved by changes in the composition or proportion of the concrete mix, if doing so results in greater economy than using an admixture. Of the different types of admixtures used, plasticizers and superplasticizers top the chart. Hence, some effort was made to understand the effect of both plasticizers and superplasticizers in concrete in a comprehensive manner. Due to certain limitations, more stress was laid on understanding the modifications in workability and compressive strength, because a better understanding of these two properties helps to gauge their effect on other important properties as well.


II. EXPERIMENTAL STUDY

Best efforts were made to understand the effects of different types of plasticizers and superplasticizers. A plasticizer, calcium lignosulphonate (CLS), and the superplasticizers sulphonated melamine formaldehyde condensate (SMF) and sulphonated naphthalene formaldehyde condensate (SNF) were used to understand their effect on the behavior of concrete and to highlight the differences between them. Many times the information given by manufacturers might appear to be exaggerated. It is quite necessary for a structural engineer to study the quality effects claimed by investigators and manufacturers and then quantify the benefits of plasticizers and superplasticizers to produce a novel and economical design of structural units. The main theme behind conducting the series of experiments was to study the modifications in the proportions of concrete, along with the compressive strength, due to the presence of plasticizers and superplasticizers. A control mix of proportion 1:1.67:3.33 by mass, obtained by the nominal mix design procedure, was used, which gives normal workability (55 to 60 mm slump at 0.54 water-cement ratio) and M20 grade concrete. Different types of water-reducing admixtures at different dosage levels were used at constant and at reduced water-cement ratio. Due to their narrow range, plasticizers were used at dosage levels of 0.3, 0.45 and 0.6 percent by weight of cement; for superplasticizers, 0.5, 0.75 and 1.0 percent dosage levels were selected in view of their high range of dosage application. Slump and slump loss at different dosage levels were also observed at different intervals of time. In the first step, the w/c ratio was kept constant and CLS, SNF and SMF were applied at different dosage levels to observe the change in workability with the help of the slump test. In the second step, the plasticizers and superplasticizers were applied at the same dosage levels as before, but the w/c ratio was reduced so as to keep the slump constant. Once positive signs of strength gain started to appear, a certain quantity of cement was reduced to understand the effect of the reduction of cement content on workability and compressive strength. The sole idea behind reducing the cement content was to understand the economic benefits of using WRAs. During the course of the investigation, the effect of WRAs in requiring a higher sand content to overcome bleeding and segregation was noted.

III. MATERIALS SPECIFICATIONS

Ordinary Portland cement, 53 Grade, conforming to IS: 269-1967. River sand (Goma sand) passing through an IS 4.75 mm sieve. Dried basalt crushed stones (kapchi) with a maximum size of 20 mm.
Table 1.0 Properties of Plasticizers

Properties          CLS                         SMF                  SNF
Specific Gravity    1.18 ± 0.01                 1.22 ± 0.1           1.22 to 1.225 @ 25 °C
Chloride Content    Nil (i.e. less than 0.2%)   Nil                  Nil (BS 5075 and IS: 456)
Air entrainment     Less than 2%                Less than 2%         Less than 1%
Shelf life          12 Months                   12 Months            12 Months
Standards           IS: 9103-1979               ASTM: C 494          IS: 9103-1979, BS: 5075-III

IV. MIX PROPORTIONING

Using sand and gravel conforming to IS 383-1979, cubes were cast using a mix proportion of 1:1.67:3.33 by weight, which yields M20 grade concrete on 28 days of curing. When the cement was reduced by 10%, the proportion changed to 1:1.86:3.72. With the increase in sand content, mixes of proportion 1:2.03:3.03 (40% sand of total aggregate) and 1:2.28:2.78 (45% sand of total aggregate) were used. Samples were weighed to an accuracy of 50 grams (0.1% of the total weight of the batch).

V. RESULTS AND DISCUSSION

5.1 EFFECTS DUE TO REDUCTION IN CEMENT CONTENT.

With a reduction in the water-cement ratio, the compressive strength improved. This benefit can be exploited by reducing the cement content, so that we have a concrete with the same strength and workability at a reduced water-cement ratio and reduced cement content (in the presence of water-reducing admixtures). To arrive at the exact proportion achieving the above-mentioned conditions was difficult, taking into consideration the availability of time, resources and facilities. But for the sake of academic interest, the cement content was reduced by 10% and the proportion 1:1.67:3.33 changed to 1:1.86:3.72 by mass. The compressive strength at 3, 7 and 28 days was recorded. As observed, the strength reduced compared to the mix of proportion 1:1.67:3.33 at reduced water-cement ratio. But compared to the control mix, the strengths at 3, 7 and 28 days were higher by 82.6%, 29.3% and 22.8% respectively. The results are as under:
Table 2.0 comp. Strength of concrete with age and cement content AGE (DAYS ) 3 7 28 COMP. STRENGTH Mpa CF = 400 kg/m3 15.24 26.83 30.23 COMP. STRENGTH Mpa after reducing cement factor by 10 % CF = 360 kg/m3 15.3 23.8 30.2

The results indicate that the cement content can still be reduced to attain an ultimate strength equal to that of the control mix. This is an important point regarding the economic benefits of using water-reducing admixtures.

5.2 INFLUENCE OF SAND CONTENT:


While using water-reducing admixtures at higher dosage levels at constant water-cement ratio, signs of segregation and bleeding started to surface; this prompted an increase in the sand content at the cost of coarse aggregate, keeping the cement factor constant (400 kg/m3). The signs of segregation started reducing when the sand was increased to 40% and almost disappeared at 45% of the total aggregate content. This is a reason which advocates a higher sand content in flowing concrete. The mix was then very workable and plastic. The reason could be that the cement slurry, which would separate out under normal conditions, mixes with the extra sand to give an effectively higher paste volume, which gives higher workability. To understand its effect on compressive strength, cubes were cast to record the 7-day and 28-day strengths only (for the sake of academic knowledge). The proportion was then 1:2.03:3.03 by mass when the sand content was 40%, and 1:2.28:2.78 when the sand content was increased to 45% of the total aggregate. With 40% sand, 0.75% of SNF and a water-cement ratio of 0.42 giving a slump equal to 60 mm, the 7-day strength was noted to be 33.7 N/mm2, whereas the 28-day strength was 40.73 N/mm2. With 45% sand and other things in common, the water-cement ratio required for a 60 mm slump was 0.425, and the 7-day and 28-day strengths were recorded as 34.1 N/mm2 and 41 N/mm2 respectively. The above results, in comparison to those obtained from the mix 1:1.67:3.33 at a similar dosage of SNF, are tabulated below:
Table 3.0 Comp. Strength of concrete with age and water cement ratio. mix 1 : 2.03 : 3.03 1 : 2.03 : 3.03 %in comp. strength 7 days 17.10 18.51 %in comp. strength 28 days 9.23 10.10 Change in water cement ratio -1.2% -2.4%

The required water-cement ratio increased slightly as the surface area increased with the increase in the volume of fine particles. The increase in the compressive strength might be because the mix became better graded.

VI. ECONOMIC ASPECTS

The use of plasticizers and superplasticizers enables us to increase the strength of concrete at reduced water content. A certain amount of cement can thus be reduced, resulting in a cost saving which at times can be higher than the additional cost of the admixture.

Considering a normal grade concrete (1:1.67:3.33) with a cement factor of 400 kg/m3, the weights of sand and aggregate accompanying it would be 668 kg and 1336 kg respectively per cubic meter of concrete. Considering the bulk density of sand and aggregate as 1400 kg/m3, the volumes of sand and aggregate required per cubic meter of concrete would be 0.477 cubic meters and 0.954 cubic meters respectively. The dosage rate of the plasticizer (CLS) was 0.3 to 0.6% by weight; with a specific gravity of 1.18, the requirement per cubic meter of concrete is 1 to 2 liters. Similarly, for superplasticizers the range is 0.4 to 1.5%; with a specific gravity of 1.22, the requirement per cubic meter of concrete is 1.35 to 5 liters. It was calculated that, per 1 N/mm2 of compressive strength, the cost was reduced by 28.5% with CLS and by 34% and 38% with SNF and SMF respectively; thus chemical admixtures prove to be economical. The reduction in the cost of concrete per N/mm2 was more with superplasticizers compared to plasticizers.
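The per-cubic-meter quantities quoted above follow from simple arithmetic; a minimal Matlab sketch (variable names are illustrative; the bulk density of 1400 kg/m3 and the mix 1:1.67:3.33 are taken from the text, and the aggregate figure of 1336 kg corresponds to twice the sand weight, as quoted):

% Per-cubic-metre material and admixture quantities for the 1:1.67:3.33 mix.
cement   = 400;                      % cement factor, kg per m3 of concrete
sand     = 1.67 * cement;            % = 668 kg
aggr     = 2 * sand;                 % = 1336 kg, as quoted in the text
vol_sand = sand / 1400;              % ~0.477 m3 (bulk density 1400 kg/m3)
vol_aggr = aggr / 1400;              % ~0.954 m3
% CLS plasticizer at 0.3-0.6% by weight of cement, specific gravity 1.18:
cls_L = [0.003 0.006] * cement / 1.18   % ~1 to 2 litres per m3
% Superplasticizer at 0.4-1.5% by weight of cement, specific gravity 1.22:
sp_L  = [0.004 0.015] * cement / 1.22   % ~1.3 to 4.9 litres per m3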

VII. CONCLUSIONS

Based on the present investigation, the results are summarized below. There is a marked improvement in the workability of fresh concrete: the normal slump of 63 mm could be increased to 134 mm using the plasticizer (CLS) and to greater than 190 mm by the superplasticizers (SMF or SNF), though this apparent rise in workability is short-lived. Initially the slump loss was very high, but the slump of the treated concrete at all ages was greater than that of the control mix. The slump loss was found to be higher for a treated mix than for the control mix. The following relation was noted:

Slump loss with superplasticizers > slump loss with plasticizers > slump loss with control concrete

The slump loss also increased with increase in the dosage level. At higher dosages, signs of segregation and bleeding were noticed. Once the sand content was increased from 33% of the total aggregate to 40% and 45%, the signs of segregation almost disappeared, even at 0.54 water-cement ratio and 1.0% of superplasticizer. The most noticeable advantage seemed to be the increase in the compressive strength: not only the amount but also the rate of strength development increased. Using CLS, the 7- and 28-day strengths of the control concrete were attained in 3 and 7 days respectively. The ultimate strength was 33.4 N/mm2, which was 35.8% higher than the control mix. Using superplasticizers, the 3- and 7-day strengths were greater than the 3- and 7-day strengths obtained with CLS, and much higher than the 7- and 28-day strengths of the control mix; SNF performed better than SMF. The ultimate strength at reduced water-cement ratio at the 1.0% dosage level was 41.3 N/mm2 and 43.6 N/mm2 with SNF and SMF respectively. Thus the percent gain in ultimate strength was 67.9% with SMF and 77.2% with SNF. In the presence of a water-reducing agent, the variation in the 3-day strength was greater than the variation in the 7-day strength, which was again greater than the variation in the 28-day strength, whether the water-cement ratio was reduced or not. At constant water-cement ratio, with superplasticizers the strength was always higher compared to the control mix, but with CLS a certain reduction in strength was observed at constant water-cement ratio at the maximum prescribed dosage level. A considerable amount of cement can be saved if the benefit of higher strength development is exploited. The cost of unit strength of concrete (cost per N/mm2) decreased by 28.5% for lignosulphonates, 38% for SMF and 34% for SNF. Considering all the plasticizers and superplasticizers, both superplasticizers performed much better than CLS. Although the comparison is carried out taking into consideration the absolute numerical values, it should be borne in mind that the molecular weight and monomer (repeating unit) content were unknown; these factors greatly influence the performance of any water-reducing admixture. Amongst the

water-reducing admixtures obtained, the SNF sample was highly condensed and can be applied for concrete of grade M30 and above.


M. K. Maroliya is an Assistant Professor in the Applied Mechanics Department, Faculty of Technology & Engineering, M. S. University of Baroda, Vadodara. He has over 15 years of teaching experience and has published about 14 papers in various journals and conferences. He is involved in research in the field of concrete and fibre-reinforced concrete.


ENHANCEMENT OF SAFETY PERFORMANCE AT CONSTRUCTION SITE


Aref Charehzehi1, Alireza Ahankoob2
Department of Structure and Materials, Faculty of Civil Engineering, Universiti Teknologi Malaysia (UTM), Johor Bahru, Johor, Malaysia

ABSTRACT
Recently, the issue of safety performance has been a focus of construction projects in both developed and developing countries. The construction industry contributes a significant proportion of economic and social value; however, it is also considered the most hazardous industry in terms of personal safety and health. Many factors are involved in accident occurrence at construction sites. Some important elements that create a significant portion of accidents include safety management error, poor training programs, the human element, acts of god, outdated procedures and the absence of a clear monitoring policy. Although some of these items are inevitable, the occurrence of the largest part can be prevented. Therefore, to improve safety in a project, each of these items should be analysed and a practical approach introduced. In general, near miss, incident and accident are three dependent levels that mainly lead to injury. Risk and hazard are allocated to the first level, the near miss; failure to identify hazards and risks on time gives rise to an incident, and the accident must then be prevented at the incident stage. The aim of this paper is to focus on the factors influencing the improvement of safety performance at construction sites and to suggest a clear procedure to develop safety performance by reducing risks and hazards.

KEYWORDS: safety performance, continuous improvement, construction industry.

I. INTRODUCTION

Development of safety for personnel in the construction environment is recognized as a major factor for the peace of mind of staff and should be adhered to precisely in accordance with safety regulations. Despite mechanization, the construction industry is still labor-intensive, while working environments are often changing and include several different parties. Construction workers are among the most vulnerable members of a project and are faced with a wide variety of hazards during their work. A common approach for the prevention of construction accidents is to predict the upcoming event under given circumstances; the accuracy of such predictions is based on knowledge about past accidents. It has been proved that the main reasons for accidents in the construction industry result from the unique nature of the industry, human behavior, difficult work-site conditions and poor safety management, which result in unsafe work methods and procedures [4]. Construction engineers and project managers should be fully aware of hazards and be prepared to deal with accidents when they occur. They should apply proper investigation and reporting

procedures afterwards, because the probability and severity of accidents in construction are higher in comparison with other industries. From another aspect, accidents not only cause horrible human disasters but also create substantial economic losses. These financial losses are due to the impact of accidents and damage on plant, equipment and workers. Moreover, there is also a loss of productive work time until the normal site working environment and morale return to their initial state. A hazard is defined as a potential situation that may cause unintentional injuries or deaths to people, or damage to, or loss of, an item or belongings. Therefore, the estimation of the safety level at construction sites can be performed by specifying all on-site hazardous elements. The safety performance of each element should then be studied and measured by evaluating the relevant on-site hazard factors; by reducing the potential hazard of an element, its safety performance improves [2]. On the other hand, safety development in the construction industry occurs only when all workers in the operation of construction sites change their behavior, respect the regulations and try to improve the safety level in their personal activities. Moreover, management support to the workers is also very important in providing the best solution for safety-related problems. The main purpose of this paper is to identify the main factors which contribute to safety development and to provide a continuous approach to reduce the risk and potential hazard of elements in six stages, which will be discussed later. For the purpose of this study, first the causes of accidents are explained from the literature; the second part describes the eight factors influencing the safety performance level in construction projects; and finally an applicable approach to obtain a safer workplace is discussed.

II. CAUSES OF ACCIDENT IN CONSTRUCTION INDUSTRY

Nowadays, accident statistics in the construction industry encourage researchers to find new ways of improving safety performance. Furthermore, both the direct and indirect costs of accidents add expense to construction projects, expense that stems from improper safety performance on site. Most of these accidents, nearly 99 percent, are caused by unsafe acts, unsafe conditions or both [11]. In order to improve safety performance in the construction industry, we need to identify the root causes of construction accidents. According to Pipitsupaphol and Watanabe (2000) [10], the kind of equipment and machinery, site conditions, the nature of the industry, management attitudes and methods, and human elements can directly influence safety performance in the construction industry. Working at height, inadequate safety devices, poor management, lack of discipline on site, worker negligence, and the employment of unskilled workers are so common in the construction industry that they increase the risk of accidents, damage and injuries. Kartam and Bouz (1998) [7] stated that the causes of accidents are related to worker turnover and wrong acts, lack of safety performance, unsuitable or unclean materials, unmaintained tools, and weak supervision and inspection. The causes of accidents can also be divided into human and physical factors. Human factors relate to personal duty and responsibility, such as neglecting to use protective equipment, using machines and equipment without permission, rushing work, personal factors, servicing moving or energized equipment, removing safety devices, selecting unsafe working positions, using improper equipment and other unsafe acts. Physical factors include the wrong act of another person, unconsidered accident sources, disregard of special procedures, clothing hazards, environmental hazards, fire hazards, wrong methods or arrangements, assignment of personnel to the wrong position, absence of safety guards on site and other unsafe conditions [1]. Lubega et al. (2000) [9] mentioned that the causes of accidents on construction sites are directly related to inadequate safety regulation, lack of enforcement of regulations on site, lack of safety consideration by personnel on site, failure to encourage professional people to work on site, mechanical problems of construction machinery and equipment, and chemical or physical disturbances. The causes of accident implied by the above statements can be exhibited in a fishbone model (Fig. 1):



Figure 1. Root Causes of Accident

III. MAIN FACTORS CONTRIBUTING TO SAFETY DEVELOPMENT

In general, several items influence safety performance and should be analyzed and specified in the design and pre-construction stage in order to increase safety. Sawacha, Naoum and Fong (1999) [12] explained the different variables that affect safety on construction sites. Their research examined the impact of historical, economic, technical, procedural, organizational and environmental factors in terms of how these items are connected to the level of site safety. The results showed that variables regarding organization strategy are the most influential group of factors affecting safety performance in the United Kingdom construction industry. Another study, conducted by Evelyn, Florence and Adrian (2005) [3], discussed the results of a postal survey of contractors in Singapore. The results showed that site accidents take place when there are insufficient company policies, unsafe procedures, poor attitudes among construction personnel, low management commitment and inadequate safety knowledge and training of staff. The study recommended that project managers pay more attention to the factors identified above in order to enhance safety performance on construction sites and reduce the frequency of accidents. From the above investigations, it can be understood that having the right policies, in conjunction with safety management in the design and pre-construction phases, can greatly reduce accidents. One practice an owner should apply is to hire contractors with a proven record of good safety performance. This factor should be considered when qualifying contractors to bid for work and when ranking contractors for a contract award. A prospective contractor with an acceptable history of commitment to safety performance is more likely to perform safely in the future than a contractor with a poor safety record. In the following sections we discuss the main elements of improving safety in construction projects.

3.1 Risk Analysis in the Design Stage


Identifying future risks in the design stage will greatly decrease accident losses to people and property. Collaboration between the designer and the client results in a safety risk analysis for each project option. This approach is applied by assessing the relationships among the stakeholders, the public, the final users of the facilities and the environment. The strategy concentrates on what can happen, and how and why it can happen, in the implementation of the tasks. It also separates acceptable risks from the risks of dangerous activities. Moreover, the level of each risk is classified by comparing its severity and probability, so that risks can be ranked for further analysis. Finally, a wide range of options for treating risk is determined, with the aim of reducing or eliminating it.


3.2 Training Strategy
It is clear that training plays a contributing role in defining management practices that enhance safety performance. Providing regular training sessions increases employees' awareness of hazardous tasks. Safety training is also very useful because it allows employees to anticipate future accidents or near misses. In order to improve the quality of safety and health on a large scale, the management level should adopt a systematic and comprehensive safety approach at the construction site. This approach should be clearly expressed as a specific procedure for each hazardous activity identified in the design stage. The process should be clear and understandable for everyone. Moreover, the organization should hold safety and health training programs for new employees. This strategy orients the organization toward a preventive process. Workers who are properly trained will make correct decisions when dealing with incidents in their workplace [5]. Through training, the organization can prevent accidents and injuries, as it informs employees about adherence to safety regulations.

3.3 Reward Policy
To improve the safety culture in the construction workplace, it is necessary to create a reward system that runs parallel to safety education and training. An incentive-based safety program also reinforces the reporting of accidents and of any unsafe act that could lead to an accident. The policy within the organization should be based on the prevention of accidents, not on punishment after an accident takes place. The rewards can be monetary (economic) or take the form of job promotion.

3.4 Management Commitment to the Implementation of Safety Culture


The policy selected by managers in relation to safety issues affects the development of the safety level within an organization. Defining clear procedures and adopting safety standards such as the Occupational Safety and Health Act 1970 (OSHA) helps run the process properly. In addition, management is responsible for allocating people with a sufficient level of competency and knowledge as representatives in each part of the work. This approach responds to the needs of workers in terms of problem solving.

3.5 Contractor Compliance with Safety Regulations


The contractor's commitment to complying with safety rules should be established in every construction project. Therefore, hiring contractors with a record of good safety performance should be prioritized by the client during the bidding process. Contractor attitudes toward safety range from minimal compliance to total commitment, so concerned owners should consider the past safety performance of contractors during the bidding process and when awarding the contract. All owners have a legal duty to use reasonable care to correct or warn contractors of any non-apparent hazards present on the site which could affect the safe performance of the construction, and to use reasonable care to prevent contractors from injuring others on the site. Owners must make sure that contractors recognize their contractual responsibility to perform safely. On the other hand, increased owner involvement, if not handled adroitly, can interfere with the contractor's productivity and may cause ill will between an owner and the contractor. Owners can implement the following strategies to achieve better safety performance:
- Identify safety rules and guidelines with which the contractor must comply.
- Provide a permit system for potentially hazardous tasks.
- Require the contractor to appoint an accountable supervisor to coordinate safety on the site.
- Discuss safety issues at regular meetings between owner and contractor.
- Develop safety monitoring during construction.

3.6 Providing Safe Equipment and Tools


Use of safe machinery and facilities is essential to maintaining the health and safety of site personnel. With the advent of technology in the construction industry, the design of machinery and plant has been


improved. Technological intervention has resulted in automation and comprehensive facility redesign [8]. Although this approach has reduced a large number of accidents, it has at the same time created new types of accident. For instance, new workers who are not yet familiar with the technology of plants and facilities cause accidents on construction projects. To overcome this problem, new control techniques have emerged in the form of emergency switches operated by workers to control the operation. Such a system involves a sensor that detects the presence of workers in the workplace. Another fundamental solution is the proper layout of plant and materials on construction projects. With proper arrangement of machinery and exact implementation, a large percentage of hazards can be eliminated. In a construction project, re-designing and re-engineering the work station altogether helps eliminate dangerous incidents.

3.7 Personnel Selection


The concept of the accident has roots in personal behavior. Some employees are more accident-prone than others, while other employees have a preventive attitude toward accidents. Researchers have identified certain variables, such as personal mismatch, social deviance, impulsive behavior, family stability, and alcohol and drug tests, that should be examined and analyzed when evaluating prospective employees [6].

3.8 Take Responsibility to Report Near-Miss Accidents


A near-miss is defined as an event that does not result in injury or illness to people, or damage to assets or the environment. The ability of workers to report near-miss accidents makes a significant contribution to the prevention of hazards. Near-miss reporting and investigation allows experts and specialists to review safety and health procedures on a site before deficiencies cause a more serious incident. Some supervisors in the industry attach little importance to near-miss reporting, but investigation of accidents shows that for each accident there are several near-misses with different levels of impact. Thus, workers need to understand what they should report and when they should report it.

IV. DEVELOPMENT OF SAFETY PERFORMANCE

Based on the eight preceding elements, this part recommends an efficient approach as a guideline to assist team members in the construction industry in managing safety in their workplaces. Accidents must be prevented for three reasons:
1) Humanitarian reason: to ensure that people are safe and healthy at work and that nobody suffers an accident due to the work activity.
2) Legal reason: to comply with provisions of law which specify standards to ensure safety and health at work.
3) Economic reason: to prevent losses due to accidents in terms of medical expenses, compensation, property damage, downtime, etc.
This guideline measures safety on site and includes six steps (Fig. 2), described one by one below. These steps are joined together as a chain.


[Figure 2 shows a six-stage cycle: Safety & Health Regulation -> Identify Hazards -> Assess Risks -> Decide Precautions -> Record Findings -> Review and Update.]
Figure 2. Guidelines on Risk Assessment and Continuous Safety Improvement

1) Creating safety and health regulations
2) Identifying hazards
3) Assessing and evaluating risks
4) Deciding precautions
5) Recording findings
6) Reviewing and updating

4.1 Step 1: (Creating safety and health regulation)


The safety policy contains statements that set out the responsibilities, commitments, culture, behavior and requirements needed to ensure that a workplace is safe, healthy and acceptable. This statement encourages all employees, and other people affected by site conditions, to pay attention to these provisions and so increase safety performance. The responsibilities under this policy are as follows:
- Create conditions that ensure workers operate in a safe and healthy environment
- Reduce situations that create risk
- Provide safe tools and equipment
- Provide reliable methods and procedures for doing the work
- Provide the information, training and instruction needed for the site conditions and the type of construction project
- Emphasize the use of suitable clothing and safety equipment
- Assign personnel according to their abilities and skills
- Establish compulsory site-entry regulations for the general public

4.2 Step 2: (Identify the Hazard)


Hazards can cause various injuries to workers and sometimes death. Therefore, identifying hazards is important in controlling risk and decreasing accidents on site. On site, all materials, equipment and machinery, as well as the work activities themselves, can create hazards. We therefore have to evaluate the workplace and work activities to identify hazards or find the sources of hazards. Hazards can be physical, health-related, chemical, biological or human [13]. Some regular causes of physical hazards are falls from scaffolding, manual handling of heavy loads, cuts from machines, burns from flammable materials, strains, and injury by another person. Chemical hazards relate to the chemical materials used in a project, from glues and correction fluids to industrial solvents, dyes and acids. Regulation is required for the use of chemical


materials by workers. This regulation should be created according to the effect of chemical materials on the skin, which is the initial problem, and should also examine the long-term effects of these materials. Biological hazards include all kinds of viruses and bacteria that may lead to infection, and substances from animals that can cause health problems; biological regulation is therefore required for greater protection and increased safety. Human factors relate to the mental and physical capacity of the worker. Workers must have the ability to carry out their duties, and the workplace and work system should be comfortable and free of stress. For instance, pregnant women, people with disabilities, older workers, and young workers with no experience have higher accident rates. All employees must be informed about the hazards that can exist on the site for the type of work concerned. Records of previous accidents, the experience of expert people, and various standards can assist employees in determining the sources of hazards. Furthermore, professional people can be engaged to prepare the safety statement and identify hazards, but such an advisor must know the situation and the kind of work, and must have adequate experience.

4.3 Step 3: (Assessment and evaluation of risk)


Risk is the possibility of harm to people from a hazard, and it varies in severity and frequency. Risk is also related to the number of people who would be affected by the hazard. The magnitude and seriousness of the harm, together with the number of workers affected, are therefore important in assessing risk. Risk assessment should be done by the employees involved in the work; if the experience and expertise of the workers are insufficient, the company must provide a competent person to assist them. Different quantitative and qualitative risk assessment methods exist, and a suitable one must be chosen according to the project and site conditions [13].
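The paper does not fix a particular scoring scheme, but the severity-and-frequency ranking described above is often implemented as a simple severity times likelihood matrix. The following is a minimal illustrative sketch in Python; the 1-5 scales, the example hazards and the action bands are our assumptions, not values from the paper.

# Illustrative risk ranking: score = severity x likelihood (both on a 1-5 scale).
# The scales, hazard names and action bands below are assumptions for
# illustration, not values taken from the paper.

def risk_score(severity: int, likelihood: int) -> int:
    """Combine severity and likelihood of harm into a single rank."""
    return severity * likelihood

def action_band(score: int) -> str:
    """Map a score onto a coarse treatment priority."""
    if score >= 15:
        return "intolerable - eliminate or redesign the task"
    if score >= 8:
        return "moderate - add precautions and monitor"
    return "acceptable - manage by routine procedures"

hazards = [
    ("fall from scaffold", 5, 3),
    ("manual handling of heavy loads", 3, 4),
    ("contact with solvents", 2, 2),
]

# Rank hazards from highest to lowest risk for further analysis.
for name, sev, lik in sorted(hazards, key=lambda h: -risk_score(h[1], h[2])):
    s = risk_score(sev, lik)
    print(f"{name}: score {s} -> {action_band(s)}")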

4.4 Step 4: (Decide What Precautions Are Required)


Proper methods and tools, appropriate to the situation, must be used to prevent risk. Legal requirements form one of the important strategies and must be followed by all employers; the law provides a guideline on how to evaluate risk and increase safety. Most of the time, improving safety and protecting against hazards is not so much expensive as it requires creativity; for example, using non-slip material on a slippery surface, or changing the method and procedure of the work, can be simple and effective. Some of the precautions are as follows:
- Reliable and clean work conditions
- Use of safeguards when working at height
- Use of skilled workers
- Sufficient training for workers
- Reliable inspection
- Availability of emergency aid
- Availability of protective equipment

4.5 Step 5: (Record Findings)


All findings of the risk assessment must be recorded in the safety statement; that is, the hazards and dangerous situations that can affect employees in the workplace must be set down. Company rules, manufacturing instructions, and the choice of an appropriate attitude are all related to these records. The findings must be kept up to date and relevant to the work situation in order to increase safety and reduce risk. Documents that can help add useful provisions to the safety policy used in the organization include the following:
- Manual instructions for materials and plant
- Company regulations
- Operating instructions
- Manufacturers' instructions
- Company safety and health procedures

4.6 Step 6: (Review and Update)

Using the safety statement should be an important part of the work, and the statement should be available for inspection in the workplace every day. The statement should be clear and relevant to the work. Significant changes in the workplace, or in the kind of work, that can expose employees to new hazards require a new statement covering those hazards. The employer is responsible for amending the safety statement where necessary; where this cannot be done internally, help should be sought from professional persons. Some important issues to consider when revising the safety statement are:
- The safety statement must relate to the actual work conditions
- Examine hazards, risks and risk assessments, and identify the essential safety protections
- Use practical methods that can be implemented on site
- All provisions should conform to safety and health performance standards
- Consider all the humanitarian, legal and economic reasons for preventing hazards and risks
- Examine how safety and health performance can be improved
As mentioned, this guideline is intended to assist the employer in managing safety and health in the workplace.

V. FUTURE WORK AND APPLICATIONS

Because the complexity and extent of construction projects are increasing, the causes of accidents need to be studied in more detail in the near future. This objective is achieved when management and experts consider the following in the construction workplace:
- Improvement of site conditions
- Integration of client, designer and contractor in the design stage to eliminate adversarial relationships and prevent conflict in the early stages of projects, thereby mitigating destructive risk during the building process
- Providing new methods in construction and planning, such as the Building Information Modeling methodology and IBS technology
- Providing a high level of safety training for employees
- Avoiding the use of outdated equipment and plant during the construction stages

VI. RESULTS AND DISCUSSION

As mentioned before, the many fatal accidents in the construction industry compel people to find the causes of these accidents in order to prevent or minimize risk. According to Ridley (1986) [11], unsafe acts, unsafe conditions or both lead to more than 99 percent of accidents in the construction industry. In addition, site conditions, the kind of equipment and machinery, management attitudes and methods, and human elements can increase or decrease the rate of risk and accidents in the construction industry [10]. The causes of accidents can be divided into human and physical factors. Human factors include personal factors, servicing moving or energized equipment, removing safety devices, selecting unsafe working positions, and so on, while physical factors include the wrong act of another person, negligence of accident sources, disregard of special procedures, clothing hazards, environmental hazards, fire hazards, wrong methods or arrangements, assignment of personnel to the wrong position, absence of safety guards on site and other unsafe conditions [1]. In view of all these factors, experts have developed different contributors to improving safety performance in the construction industry. The main elements of improving safety in construction projects are risk analysis in the design stage, training strategy, reward policy, management commitment to the implementation of a safety culture, contractor compliance with safety regulations, provision of safe equipment and tools, personnel selection, and taking responsibility for reporting near-miss accidents. With attention to these eight elements, continuous improvement of safety is achieved. This continuous improvement contains six steps joined together as a chain. Following these steps, creating policy and identifying hazards can help decrease risk and increase safety.

VII. CONCLUSIONS
In conclusion, danger and risk in complex construction projects are inevitable and will always accompany workers. We must carefully examine all the factors that can create hazards and increase risk

on construction sites. It is clear that enhancing safety performance on construction sites is not easy, but it is possible. In this paper we presented various elements and strategies for improving construction safety performance, such as risk analysis and assessment in the design stage, training strategy, management commitment, and others. Across all these strategies, the key to increasing safety performance and reducing risk is to identify the root causes of construction hazards and accidents, and to apply proper precautionary tools and equipment appropriate to the type of construction project and site conditions. We therefore introduced a continuous safety development approach of six steps: creating safety regulations, identifying hazards, assessing and evaluating risks, deciding precautions, recording findings, and updating the findings in relation to the work conditions. This continuous strategy starts by examining the conditions and kind of work to identify hazards and risks in the construction industry. For the hazards present on site, employees must use their own experience, apply the advice of experts, and draw on previous safety reports to create regulations and a training strategy for workers. Overall, increasing safety performance and creating safer conditions in construction projects requires more attention to finding the hazards and kinds of risk that can cause damage to property and people.

REFERENCES
[1]. Abdelhamid, T.S. and Everett, J.G. (2000) "Identifying Root Causes of Construction Accidents", Journal of Construction Engineering and Management, ASCE, pp. 52-60.
[2]. Fang, D.P., Xie, F., Huang, X.Y. & Li, H. (2004) "Factor Analysis-Based Studies on Construction Workplace Safety Management in China", International Journal of Project Management 22, pp. 43-49.
[3]. Teo, Evelyn Ai Lin, Ling, Florence Yean Yng & Chong, Adrian Fook Weng (2005) "Framework for Project Managers to Manage Construction Safety", International Journal of Project Management 23 (4), pp. 329-341.
[4]. Farooqi, R.U. (2008) "Safety Performance in Construction Industry of Pakistan", First International Conference on Construction Education, Research & Practice.
[5]. Vredenburgh, G. & Cohen, H.H. (1995) "High-risk recreational activities: skiing and scuba - what predicts compliance with warnings", International Journal of Industrial Ergonomics 15, pp. 123-128.
[6]. Guastello, S.J. (1993) "Do we really know how well our occupational accident prevention programs work?", Safety Science 16 (3-4), pp. 445-463.
[7]. Kartam, N.A., Flood, I. & Koushki, P. (2000) "Construction Safety in Kuwait: procedures, problems, and recommendations", Journal of Safety Science 36, pp. 163-184.
[8]. Kjellen, U. (1990) "Safety Control in Design: Experiences of an Offshore Project", Journal of Occupational Accidents 12, pp. 49-61.
[9]. Lubega, H.A., Kiggundu, B.M. & Tindiwensi, D. (2000) "An Investigation into the Causes of Accidents in the Construction Industry in Uganda", 2nd International Conference on Construction in Developing Countries: Challenges Facing the Construction Industry in Developing Countries, pp. 1-12 [online] Available: http://buildnet.csir.co.za
[10]. Pipitsupaphol, T. and Watanabe, T. (2000) "Identification of Root Causes of Labor Accidents in the Thai Construction Industry", Proceedings of the 4th Asia Pacific Structural Engineering and Construction Conference (APSEC 2000), 13-15 September 2000, Kuala Lumpur, pp. 193-202.
[11]. Ridley, J. (1986) Safety at Work, 2nd Edition. London: Butterworth Ltd.
[12]. Sawacha, E., Naoum, S. & Fong, D. (1999) "Factors affecting safety performance on construction sites", International Journal of Project Management 17 (5), pp. 309-315.
[13]. Gabel, Mark, P.E. (2010) Project Risk Management, Washington State Department of Transportation, Administrative and Engineering Publications.

AUTHORS
Aref Charehzehi received his B.Sc. degree in Civil Engineering from Islamic Azad University (IAU), Zahedan branch (Zahedan, Iran) in 2008 and his M.Sc. in Construction Management from Universiti Teknologi Malaysia (UTM), Malaysia in 2013.


Alireza Ahankoob received his B.Sc. degree in Industrial Engineering from Islamic Azad University (IAU), Qazvin branch (Qazvin, Iran) in 2006 and his M.Sc. in Construction Management from Universiti Teknologi Malaysia (UTM), Malaysia in 2013.



A MODIFIED SWIFTER START ALGORITHM FOR EVALUATING HIGH BANDWIDTH DELAY PRODUCT NETWORKS
Ehab Aziz Khalil
Department of Computer Science & Engineering, Faculty of Electronics Engineering, Menoufiya University, Menouf-32952, EGYPT

ABSTRACT
It is well known that the TCP congestion control algorithm has been remarkably successful in making the current TCP/IP function better and more efficiently. However, it can perform poorly in networks with high Bandwidth Delay Product (BDP) paths. This paper presents a modification to the Swifter Start congestion control algorithm that may help TCP better utilize the bandwidth provided by huge-bandwidth, long-delay links. It also presents results comparing the original Swifter Start algorithm and the modified Swifter Start algorithm; these results are promising.

KEYWORDS: TCP congestion control, Swift Start algorithm, round trip time, high BDP, performance evaluation

I. INTRODUCTION

Today, as well as tomorrow, a main problem in the design of networks is the development of congestion control algorithms. Conventional congestion control algorithms were deployed for two principal reasons: the first is to ensure avoidance of network congestion collapse [1], [2]; the second is to ensure a degree of network fairness. Roughly speaking, network fairness refers to the situation whereby a data source receives a fair share of the available bandwidth, whereas congestion collapse refers to the situation whereby an increase in network load results in a decrease in useful work done by the network (usually due to retransmission of data). Attempts to deal with network congestion have resulted in the widely applied transmission control protocol [3]. While the current TCP congestion control algorithm has proved remarkably durable, it is likely to be less effective on next generation networks featuring gigabit-speed connectivity and heterogeneous traffic and sources. These considerations have led to widespread acceptance that new congestion control algorithms must be developed to accompany the realization of next generation systems, and perhaps also to better exploit the resources of existing networks [4]. In recent years, several more aggressive versions of TCP have been proposed [5-23], and research investigating congestion control and long-delay bandwidth products, such as [20-33], has been published, with many studies still in progress. All of them investigate and discuss the congestion control mechanisms of the Internet, which consist of the congestion window algorithms of TCP, running at the end systems, and Active Queue Management (AQM) algorithms at routers, seeking to obtain high network utilization, small amounts of queuing delay, and some degree of fairness among users. This paper presents a comparative performance evaluation of a modified TCP congestion algorithm [19-23].


The remainder of the paper is organized as follows: Section 2 explains the background and motivation. Slow start over high bandwidth delay product networks is presented in Section 3. Section 4 describes the simulation and results. Section 5 presents the conclusions of the paper. Finally, Section 6 outlines the future research direction.

II. BACKGROUND AND MOTIVATION

It is well known that network congestion occurs when too many sources attempt to send data at too high a rate. At the sender end, this is detected by packet loss. Due to congestion, the network experiences large queue delays, and consequently the sender must retransmit in order to compensate for the lost packets; hence the average transmission capacity of the upstream routers is reduced. A TCP connection controls its transmission rate by limiting its number of unacknowledged segments: the TCP window size W. TCP congestion control is based on dynamic window adjustment. The TCP connection begins in the slow start phase: the congestion window is doubled every RTT until the window size reaches the slow-start threshold. After this threshold, the window is increased at a much slower rate of about one packet each RTT. The window cannot exceed a maximum threshold size that is advertised by the receiver. If there is a packet loss, the threshold drops to half of the present value and the window size drops to one Maximum Segment Size (MSS). A congestion avoidance mechanism maintains the network at an operating point of low delay and high throughput. There are four basic congestion algorithms that should be included in any modern implementation of TCP: Slow Start, Congestion Avoidance, Fast Retransmit, and Fast Recovery [15]. The last two algorithms were developed to overcome the shortcomings of earlier implementations, like TCP Tahoe [34], which entered the slow start phase every time a packet was lost, thereby wasting valuable bandwidth; the slow start algorithm is used to gradually increase the size of the TCP congestion window. It operates by observing that the rate at which new packets should be injected into the network is the rate at which acknowledgments are returned by the other end. Indeed, a modern TCP implementation that includes the above four algorithms is known as TCP Reno, which is the dominant TCP version. J.C. Hoe [35] modified the Reno version of TCP to improve the start-up behavior of the TCP congestion control scheme. These improvements include finding the appropriate initial threshold window (ssthresh) value to minimize the number of packets lost during the start-up period, and creating a more aggressive fast retransmit algorithm to recover from multiple packet losses without waiting unnecessarily for the retransmission timer to expire. We note here that TCP Reno is like TCP Tahoe except that if the source receives three "duplicate" ACKs, it considers this indicative of transient buffer overflow rather than congestion, and does the following: (1) it immediately (i.e., without waiting for a timeout) resends the data requested in the ACK; this is called fast retransmit. (2) It sets the congestion window and slow start threshold to half the previous congestion window (i.e., it avoids the slow start phase); this is called fast recovery. The two variables, congestion window (CWND) and slow start threshold (ssthresh), are used to throttle the TCP input rate in order to match the available network bandwidth. All these congestion control algorithms exploit the Additive Increase Multiplicative Decrease (AIMD) paradigm, which additively increases the CWND to grab the available bandwidth and suddenly decreases the CWND when the network capacity is hit and congestion is experienced via segment losses, i.e., a timeout or duplicate acknowledgments. AIMD algorithms ensure network stability but they do not guarantee fair sharing of network resources [36-38]. Recently, research papers have highlighted and studied many TCP versions [20, 21, 39-56].

2.1- Swift Start Algorithm


Swift Start is a congestion control algorithm proposed and designed by BBN Technologies [57] to increase the performance of TCP over high delay-bandwidth product networks by improving its start-up. Swift Start tries to solve the congestion control problems by using the packet pair and pacing algorithms together. The traditional packet pair algorithm has a known problem: it assumes that the ACK path does not affect the delay between ACKs, but the two ACKs may be subjected to different queuing delays on their path, which may cause an over- or under-estimate of the bottleneck capacity. However,


to avoid congestion due to over-estimation, Swift Start uses only a fraction of the calculated bandwidth; this fraction is determined by a variable which takes values between 1 and 8. The following sub-section presents the proposed modification to the packet pair algorithm, which avoids the above defects of the traditional packet pair, and uses the modified packet pair algorithm in Swift Start.

2.2- Modified Swift Start Algorithm [23]


The objective of this modification is to avoid the error sources of the traditional packet pair algorithm. The idea is that, instead of depending on the interval between the acknowledgments, which may introduce errors, the time between the original segments is measured by the receiver when they arrive, and the receiver then sends this information back to the source when acknowledging them. The sender sends its data in the form of packet pairs and identifies them by a First/Second (F/S) flag. When the receiver receives the first segment, it records its sequence number and arrival time, and it acknowledges this segment normally according to its settings. When it receives the second one, it checks whether it is the second of the recorded pair; if so, the receiver calculates the interval Δt between the arrival time of the second segment and that of the first one:

Δt = t_seg2 - t_seg1 sec ..... (1)

where t_seg1 and t_seg2 are the arrival times of the first and second segments, respectively. When the receiver sends the acknowledgment for the second segment, it inserts the value of Δt into the transport header option field. The sender's TCP extracts Δt from the header and calculates the available bit rate BW:

BW = SegSize / Δt ..... (2)

where SegSize is the length of the second segment. If the receiver uses the DACK technique, it records the first segment arrival time and waits for 200 ms; when it receives the second segment it calculates Δt and waits for a new 200 ms; if it receives another packet pair it calculates another Δt. Whenever it sends an acknowledgment, it sends Δt in it. In this way the error sources are avoided and the estimated capacity is the actual capacity, with neither over-estimation nor under-estimation, so it is not necessary to use only a fraction of the capacity as in traditional Swift Start.
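To make equations (1) and (2) concrete, the following sketch traces the receiver-side interval measurement and the sender-side bandwidth and initial-window estimate. The function names and the simplified flow are our own; in the actual scheme, Δt travels back to the sender in a TCP header option, as described above.

# Sketch of the modified packet-pair estimate of equations (1) and (2).
# Names and the simplified flow are ours; a real implementation would carry
# delta_t back to the sender in a TCP header option, as the paper describes.

def receiver_interval(t_seg1: float, t_seg2: float) -> float:
    """Equation (1): interval between the arrivals of a packet pair (seconds)."""
    return t_seg2 - t_seg1

def sender_bandwidth(delta_t: float, seg_size: int) -> float:
    """Equation (2): available rate estimated from the echoed interval."""
    return seg_size / delta_t          # bytes per second

def initial_cwnd(rtt: float, mss: int, delta_t: float) -> float:
    """Data that can be in flight in one RTT at the estimated rate."""
    return rtt * mss / delta_t

# Numbers taken from the paper's T1 example (Section 4.1).
delta_t = receiver_interval(0.0, 0.007772)
print(sender_bandwidth(delta_t, 1460))        # about 187,855 bytes/sec
print(initial_cwnd(0.11674, 1460, delta_t))   # about 21,930 bytes, close to the 21929 of Fig. 4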

2.3 Motivation
Swift Start faces several problems when combined with other techniques such as Delayed Acknowledgment and acknowledgment compression.
I- Effect of Delayed Acknowledgment (DACK). The majority of TCP receivers implement the delayed acknowledgment algorithm [58], [59] to reduce the number of pure, data-less acknowledgment packets sent. A TCP receiver using this algorithm will only send acknowledgments for every other received segment; if no segment is received within a specific time (typically 200 ms), an ACK is sent. The algorithm directly influences packet pair estimation, because the ACK is not sent promptly but may be delayed by up to 200 ms. If the second segment of the packet pair arrives within 200 ms, the receiver sends a single ACK for the two segments instead of an ACK for each segment, and the sender cannot make the estimate. A sketch of this receiver behaviour follows this list.
II- Effect of Acknowledgment Compression [60], [61]. A router that supports acknowledgment compression will send only one ACK if it receives two consecutive ACK messages of a connection within a small interval. This also affects Swift Start and causes it to falsely estimate the path capacity.
III- Effect of the ACK Path. As noted above, the traditional packet pair algorithm assumes that the ACK path does not affect the delay between ACKs, but the two ACKs may be subjected to different queuing delays on their path, which may cause an over- or under-estimate of the bottleneck capacity. To avoid congestion due to over-estimation, Swift Start uses only a fraction of the calculated bandwidth; this fraction is determined by a variable which takes values between 1 and 8.
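As an illustration of point I, a rough sketch of a delayed-ACK receiver follows. The 200 ms timer matches the value quoted in the text; the class and method names are illustrative, not taken from any real TCP stack.

# Illustrative delayed-ACK receiver (ACK every second segment, or after a
# 200 ms timer). With a packet pair arriving close together, both segments
# are covered by ONE ack, so the sender sees no ACK spacing to measure.

ACK_DELAY = 0.200          # seconds; the typical delayed-ACK timer cited above

class DelayedAckReceiver:
    def __init__(self):
        self.unacked = 0
        self.timer_start = None

    def on_segment(self, now: float) -> bool:
        """Return True when an ACK is emitted."""
        self.unacked += 1
        if self.unacked >= 2:              # ACK every other segment
            self.unacked, self.timer_start = 0, None
            return True
        self.timer_start = now             # otherwise arm the 200 ms timer
        return False

    def on_timer(self, now: float) -> bool:
        """Emit a delayed ACK if the timer has expired."""
        if self.timer_start is not None and now - self.timer_start >= ACK_DELAY:
            self.unacked, self.timer_start = 0, None
            return True
        return False

rx = DelayedAckReceiver()
print(rx.on_segment(0.000))   # False: first of the pair, ACK is delayed
print(rx.on_segment(0.008))   # True: a single ACK covers both probe segments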



III. SLOW START OVER HIGH BDP NETWORKS

As stated in TCP congestion control [13], the slow start and congestion avoidance algorithms must be used by a TCP sender to control the amount of outstanding data being injected into the network. To implement these algorithms, two variables are added to the TCP per-connection state. The congestion window (CWND) is a sender-side limit on the amount of data the sender can transmit into the network before receiving an acknowledgment (ACK), while the receiver's advertised window (RWND) is a receiver-side limit on the amount of outstanding data. The minimum of CWND and RWND governs data transmission. Another state variable, the slow start threshold (ssthresh), is used to determine whether the slow start or congestion avoidance algorithm is used to control data transmission. When a new connection is established with a host, the congestion window is initialized to a value called the Initial Window (IW), which typically equals one segment. Each time an acknowledgement (ACK) is received, the CWND is increased by one segment, so TCP increases the CWND by a factor of 1.5 to 2 each round trip time (RTT). The sender can transmit up to the minimum of the CWND and the advertised window RWND. When the congestion window reaches ssthresh, congestion avoidance starts in order to avoid the occurrence of congestion. Congestion avoidance increases the CWND on receipt of an acknowledgment according to equation (3):

CWND += SMSS * SMSS / CWND ..... (3)

where SMSS is the sender maximum segment size. TCP uses slow start and congestion avoidance until the CWND reaches the capacity of the connection path and an intermediate router starts discarding packets. Timeouts of these discarded packets inform the sender that its congestion window has become too large and congestion has occurred. At this point TCP resets CWND to the IW, ssthresh is divided by two, and the slow start algorithm starts again. Many other additions, such as fast retransmit, fast recovery, the NewReno modification to the TCP fast recovery algorithm, and increasing TCP's initial window, were added to TCP congestion control. The current implementations of the slow start algorithm are suitable for common links with low delay and modest bandwidth, because it then takes little time to correctly estimate the available capacity and begin transmitting data at that capacity. Over high delay-bandwidth product networks, however, it may take several seconds to complete the first slow start and estimate the available path capacity. Figure 1 shows the network model used to illustrate the effect of Round Trip Time (RTT) on the connection time, using different RTTs and a bottleneck bandwidth of 49.5 Mbps. Figure 2 shows the effect of RTT on the connection time for a connection that transmits a 10 Mbyte file. Figure 3 shows the bandwidth utilization for the same RTTs.
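The window dynamics just described can be condensed into a few lines. The following is a schematic sketch of the standard updates, including equation (3); it is not the simulation code used in this paper, and the constants are illustrative.

# Schematic of the window updates described above; a teaching sketch,
# not the paper's OPNET simulation code.

SMSS = 1460          # sender maximum segment size (bytes)
IW = SMSS            # initial window: one segment

class TcpWindow:
    def __init__(self, ssthresh: int = 65535):
        self.cwnd = IW
        self.ssthresh = ssthresh

    def on_ack(self):
        if self.cwnd < self.ssthresh:
            self.cwnd += SMSS                      # slow start: +1 segment per ACK
        else:
            self.cwnd += SMSS * SMSS // self.cwnd  # congestion avoidance, eq. (3)

    def on_timeout(self):
        self.ssthresh = max(self.cwnd // 2, 2 * SMSS)  # halve the threshold
        self.cwnd = IW                                  # restart from slow start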

Figure 1 Network Model

From Figures 2 and 3 it is clear that, as RTT increases, the bandwidth utilization decreases and the connection time increases, even though a large amount of bandwidth is available. The second observation, shown in Figure 3, is that as the RTT increases, the time to reach the maximum transfer rate increases. These are the two problems of slow start over high delay-bandwidth product connections.



Figure 2 Effect of RTT on the TCP Connection Time
Figure 3 Effect of RTT on the TCP Transmission Rate

The first problem is due to the fact that in each RTT slow start cannot send more than the minimum of CWND and RWND; the latter is limited to 65535 bytes because it is advertised by the receiver in a 16-bit field. It can be overcome by using the Window Scaling option in TCP. The second problem is the longer time needed to reach the maximum transfer rate; many algorithms were proposed to overcome this problem, such as increasing TCP's initial window, Fast TCP, TCP Fast Start, the Explicit Control Protocol (XCP), HighSpeed TCP, Quick-Start for TCP and IP [34], and Swift Start for TCP.

IV. SIMULATION AND RESULTS

We implemented the modified swift start model using the OPNET Modeler to compare its performance with that of slow start under different network conditions of bandwidth and path delay; we then compared them using single and multiple flows to show the effect of these flows on each other.

4.1- Single Flow Low BDP Networks


The network shown in Figure 1 was used to show the performance of Swift Start TCP and compare it with slow start using a single flow between the sender and the receiver. The sender uses FTP to send a 10 MB file to the receiver. The TCP parameters of both the sender and the receiver are shown in Table 1. The sender and the receiver are connected to the routers with 100 Mbps Ethernet connections. Both routers are Cisco 3640 with a forwarding rate of 5000 packets/second and a memory size of 265 MB. The two routers are interconnected with a point-to-point link; that link is used as the bottleneck by changing its data rate, and the path delay is also controlled using this link.
Table 1 TCP Parameters of the Sender and Receiver
Maximum Segment Size: 1460 Bytes
Receive Buffer: 100000 Bytes
Receive Buffer Usage Threshold: 0
Delayed ACK Mechanism: Segment/Clock Based
Maximum ACK Delay: 0.200 Sec
Slow-Start Initial Count: 4
Fast Retransmit: Disabled
Fast Recovery: Disabled
Window Scaling: Disabled
Selective ACK (SACK): Disabled
Nagle's SWS Avoidance: Disabled


Karn's Algorithm: Enabled
Initial RTO: 1.0 Sec
Minimum RTO: 0.5 Sec
Maximum RTO: 64 Sec
RTT Gain: 0.125
Deviation Gain: 0.25
RTT Deviation Coefficient: 4.0
Persistence Timeout: 1.0 Sec

Figure 4 shows the congestion window for both slow start TCP and modified swift start TCP when the bottleneck data rate is 1.5 Mbps (T1) and the path RTT is 0.11674 seconds, i.e., a low-rate, low-delay network. It is clear that modified swift start is faster and better than slow start TCP at estimating the path congestion window, which is about 22120 bytes, after only one RTT; the packet pair is then disabled and slow start runs normally. The estimated congestion window is proportional to the link bandwidth and round trip time, and it can be calculated as follows. Assuming that the packet pair delay difference is D, CWND is the amount of data that can be sent in one RTT: CWND = RTT * MSS / D. Theoretically, the packet pair delay difference is the frame transmission time on the bottleneck link, so D = frame length / link rate = (1460+20+7) * 8 / 1544000 = 0.007705 sec, and the RTT is measured for the first pair (RTT = 0.11674 sec), so CWND = 0.11674 * 1460 / 0.007705 = 22120.75 bytes. We neglect the processing delay, which may affect the value of D and so decrease CWND. The simulation shows that the delay difference is 0.007772 sec and the CWND is 21929 bytes; these results are very close to the mathematical results. In Figure 4 we also note that the next value of the congestion window is 24849 bytes, because this window was calculated on receiving the ACK for the second pair; this pair was buffered in the network, so its round trip time increased. The simulation results show that the RTT for the second pair is 0.13228 sec, while the delay between the first and second packets is the same, 0.007772 sec.
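The arithmetic above is easy to check directly; the following few lines reproduce the paper's T1 numbers.

# Reproduce the paper's packet-pair window estimate for the T1 case.
frame_bytes = 1460 + 20 + 7        # payload + overhead, as in the paper
link_rate = 1_544_000              # T1 rate in bits per second
D = frame_bytes * 8 / link_rate    # frame time on the bottleneck: ~0.007705 s
RTT, MSS = 0.11674, 1460
cwnd = RTT * MSS / D               # ~22,121 bytes, matching the ~22120 above
print(D, cwnd)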

Figure 4 Congestion Window for Slow Start TCP and Modified Swift Start TCP for BW = 1.5 Mbps and Path RTT= 0.11674 Sec

Figures 5-a and 5-b show the sent segment sequence numbers for this connection. Both algorithms start the connection by sending 4 segments. After 1 RTT (0.11674 sec), slow start sends 6 segments within the second RTT, while modified swift start sends a large number of segments because of its large congestion window of 20722 bytes, which is about 14 segments. These segments were paced along the second RTT until the sender received another ACK indicating the end of the second RTT and the beginning of the third; at this point the pacing was stopped and slow start was used to complete the connection.
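Pacing, as used here, simply spreads the estimated window evenly over one RTT instead of emitting it as a burst. A minimal sketch follows; the names and the sleep-based timing are our simplifications.

# Minimal pacing sketch: spread CWND over one RTT instead of bursting.
import time

def paced_send(send_segment, cwnd_bytes: int, mss: int, rtt: float):
    """Send cwnd_bytes worth of MSS-sized segments evenly spaced across rtt."""
    n_segments = max(cwnd_bytes // mss, 1)
    interval = rtt / n_segments            # inter-segment gap
    for i in range(n_segments):
        send_segment(i)
        time.sleep(interval)               # a real stack would use a timer, not sleep

# With the second-RTT numbers above (20722-byte window, i.e. 14 segments,
# RTT = 0.11674 s) the gap is about 8.3 ms per segment.
paced_send(lambda i: print("segment", i), 20722, 1460, 0.11674)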


In Figure 5-a we note that after a certain time both algorithms reach a constant transmission rate; we can roughly check this rate as follows: transmission rate = 187848 bytes/sec, and rate * RTT = 187848 * 0.343 = 64431 bytes, which is approximately equal to the maximum RWND.

Figure 5-a The Sent Segment Sequence Number for Slow Start TCP and Modified Swift Start TCP for BW = 1.5 Mbps and Path RTT = 0.11674 Sec
Figure 5-b The Sent Segment Sequence Number for Slow Start TCP and Modified Swift Start TCP for BW = 1.5 Mbps and Path RTT = 0.11674 Sec

We also note that the modified swift start algorithm reaches this rate much faster than slow start, so it enhances the start-up of the connection. Figure 6 shows the received sequence numbers at the receiver, combined with the sent sequence numbers from the sender. From this figure we make two observations. The first is that for modified swift start the sending rate from the sender during the first and second RTTs is equal to the receiving rate at the receiver; this means no buffering in the routers, which is the objective of using pacing in the connection: to avoid overflow in the router buffers. Slow start likewise tries to avoid congestion by slowly increasing the window with each ACK. The second observation is that with slow start there are idle intervals that are not used, so network resources are wasted; this wasted time is avoided in modified swift start.

Figure 6 The Received Segment Sequence Number at the Receiver and the Sent Segment Sequence Number at the Sender for Slow Start TCP and Modified Swift Start TCP for BW = 1.5 Mbps and path RTT=0.11674 Sec

Figure 7 Calculated Mean RTT for Slow Start TCP and Modified Swift Start TCP for BW = 1.5 Mbps and Path RTT= 0.11674 Sec

Figure 7 shows the calculated RTT for both slow start and modified swift start. It is clear that modified swift start calculates the mean RTT more quickly because it starts sending data at a high rate sooner than the former, which leads to more buffering in the routers, so each segment's RTT increases.


4.2- Low Bandwidth, Long Delay Networks
We also tested the modified swift start model on this connection with the same bandwidth but with longer delays, to check the performance over long-delay paths. For a link delay of 0.1 sec, the first RTT was 0.31281 sec, the second RTT was 0.32836 sec, the first CWND was 58762 bytes and the second was 61683 bytes. Figure 8 shows the congestion window for this connection; it is clear that modified swift start is much faster than slow start in estimating the congestion window. Figure 9 shows the sent segment sequence numbers for RTT = 0.32836 sec. Comparing Figure 4 with Figure 8, and Figure 5 with Figure 9, we conclude that as the RTT increases, the difference between slow start and modified swift start increases. To test modified swift start on high bandwidth networks we use the same model as in Figure 1 with a PPP link of rate OC1 (51,840,000 bps) and with different RTTs. Table 2 summarizes the results for link bandwidth T1 and different delays. The table shows that as RTT increases the first estimated congestion window also increases, and the connection time difference between slow start and modified swift start also increases, which means that the larger the RTT, the better the performance of modified swift start.

Figure 8. Congestion Window for Slow Start TCP and Modified Swift Start TCP for BW = 1.5 Mbps and Path RTT= 0.32836 Sec

Figure 9 The Sent Segment Sequence Number for Slow Start TCP and Modified Swift Start TCP for BW = 1.5 Mbps and Path RTT = 0.32836 Sec

Table 2 Information for Connection with Bandwidth T1 and Different Link Delays
Link Delay   RTT       First CWND   Second CWND   Conn. Time Diff.
0.0001       0.11674   21929        24849         0.0326
0.0005       0.11381   21379        24300         0.0236


0.001        0.11481   21567        24488         0.0362
0.005        0.12281   23070        25991         0.0893
0.01         0.13281   24948        27869         0.0902
0.05         0.21281   39977        42898         0.3697
0.1          0.31281   58762        61683         0.8290
0.2          0.51281   96333        99254         0.2451
0.3          0.71281   133903       136824        1.0378
0.4          0.91281   171474       174395        1.8887
0.5          1.11281   209045       211966        26.0561
0.6          1.31336   246719       249638        30.8050
0.7          1.51281   284186       287107        35.9797
0.8          1.71281   321757       324678        40.9113

4.3- High Bandwidth Networks


First we check a small RTT, to test low-delay, high-bandwidth networks. We check RTT = 0.07327 sec. Figure 10 shows the congestion window for this connection; note the large congestion windows of 462128 bytes and 539251 bytes estimated by the modified swift start TCP.

Figure 10 Congestion Window for Slow Start TCP and Modified Swift Start TCP for BW = OC1 and Path RTT = 0.07327 Sec

This congestion window can be calculated as follows:
CWND = RTT * MSS / D
D = (1460+20+7) * 8 / 51840000 = 0.0002295 sec
CWND = 0.07327 * 1460 / 0.0002295 = 466168 bytes
Figure 11 shows the sent sequence numbers for this connection and the effect of the large congestion window on the traffic sent in the second RTT: slow start transmits only six segments, while modified swift start sends about 44 segments, which is equal to the maximum RWND. For long-delay, high-bandwidth networks we increase the link delay to achieve a longer RTT.



Figure 11 The Sent Segment Sequence Number for Slow Start TCP and Modified Swift Start TCP for BW = OC1 Mbps and Path RTT= 0.07327 Sec

Table 3 summarizes the results for the OC1 connection with different RTTs. The table shows that modified swift start finishes before slow start TCP.
Table 3 Information for OC1 Connection with Different Link Delays
Link Delay   RTT       First CWND   Second CWND   Conn. Time Diff.
0.0001       0.07327   462128       539251        0.281211
0.0005       0.07407   467174       545091        0.282627
0.001        0.07507   473481       552391        0.229694
0.005        0.08307   523939       610791        0.317579
0.01         0.09307   587011       683791        0.284387
0.05         0.17307   1091587      1267791       0.522022
0.1          0.27307   1722307      1997791       0.873903
0.2          0.47307   2983747      3457791       1.42374
0.3          0.67307   4245187      4917791       2.024241
0.4          0.87307   5506627      6377791       3.478777
0.5          1.07307   6768067      7837791       26.86863
0.6          1.27307   8029507      9297791       31.85851
0.7          1.47307   9290947      10757791      36.81996
0.8          1.67307   10552387     12217791      41.82369

V. CONCLUSIONS

This paper has presented a modification to the Swift Start congestion control algorithm that may help TCP better utilize the bandwidth provided by huge-bandwidth, long-delay links, together with results comparing it against the original algorithm. The modified swift start algorithm combines three algorithms to enhance the connection start-up: it uses packet pair to quickly estimate the available bandwidth and calculate the congestion window; it then uses pacing to avoid overflowing the network nodes, which could occur if this window were sent in a burst; and it then uses slow start to probe the available buffer capacity of the network nodes. The algorithm avoids the drawbacks of each component by using each of them at the appropriate time. It succeeded in enhancing the start-up of the connection even in low-speed or moderate networks. Modified swift start maintains the core of current TCP implementations; it needs only a simple modification to current TCP.



VI. FUTURE RESEARCH DIRECTION

The future research directions of this work include the following:
- Applying the modified swift start algorithm and evaluating its performance in real networks.
- Comparing the MSS with other developed algorithms, such as Quick-Start TCP and BIC-TCP.
- Studying the effect of the aggressive behavior of MSS on the performance of other algorithms.
- Studying the effect of the redundant data added by MSS for each connection on the queues of the network routers, and the possibility of congestion due to this redundant traffic.

REFERENCES
[1] V. Jacobson, "Congestion Avoidance and Control," Proceedings of the ACM SIGCOMM '88 Conference, August 1988, pp. 314-329.
[2] S. Floyd and K. Fall, "Promoting the Use of End-to-End Congestion Control in the Internet," IEEE/ACM Transactions on Networking, Vol. 7, No. 4, 1999, pp. 458-472.
[3] R.N. Shorten, D.J. Leith, J. Foy, R. Kiduff, "Analysis and Design of Congestion Control in Synchronized Communication Networks," Hamilton Institute, NUI Maynooth, June 20, 2003.
[4] Steven H. Low, F. Paganini, and John C. Doyle, "Internet Congestion Control," IEEE Control Systems Magazine, Vol. 22, No. 1, Feb. 2002, pp. 28-39.
[5] J. Postel, "Transmission Control Protocol," RFC 793, September 1981.
[6] C. Jin, D. Wei, S.H. Low, G. Buhrmaster, J. Bunn, D.H. Choe, R.L.A. Cottrell, J.C. Doyle, W. Feng, O. Martin, H. Newman, F. Paganini, S. Ravot, and S. Singh, "FAST TCP: From Theory to Experiments," IEEE Network, Vol. 19, No. 1, Jan./Feb. 2005, pp. 4-11.
[7] Y.-T. Li, D. Leith, and R.N. Shorten, "Experimental Evaluation of TCP Protocols for High-Speed Networks," IEEE Trans. on Networking, 2005.
[8] Y.J. Zhu and L. Jacob, "On Making TCP Robust Against Spurious Retransmissions," Computer Communications, Vol. 28, Issue 11, Jan. 2005, pp. 25-36.
[9] F. Paganini, Z. Wang, J.C. Doyle and S.H. Low, "Congestion Control for High Performance, Stability and Fairness in General Networks," IEEE/ACM Transactions on Networking, Vol. 13, No. 1, Feb. 2005, pp. 43-56.
[10] C. Jin, D.X. Wei, and S. Low, "FAST TCP: Motivation, Architecture, Algorithms, Performance," Proc. of IEEE INFOCOM'04, 2004.
[11] C. Jin, D. Wei, and S. Low, "FAST TCP: Motivation, Architecture, Algorithms, Performance," Caltech CS Report CaltechCSTR:2003:010, 2003.
[12] S. Floyd and T. Henderson, "The NewReno Modification to TCP's Fast Recovery Algorithm," RFC 2582, April 1999.
[13] M. Allman, W. Richard Stevens, "TCP Congestion Control," RFC 2581, NASA Glenn Research Center, April 1999.
[14] V. Padmanabhan and R. Katz, "TCP Fast Start: A Technique for Speeding up Web Transfers," Globecom, Sydney, Australia, Nov. 1998.
[15] W.R. Stevens, "TCP Slow Start, Congestion Avoidance, Fast Retransmit, and Fast Recovery Algorithms," RFC 2001, Jan. 1997.
[16] S. Floyd, "TCP and Successive Fast Retransmits," ftp://ftp.ee.lbl.gov/papers/fastretransmit.pps, Feb. 1995.
[17] V. Jacobson, "Berkeley TCP Evolution from 4.3-Tahoe to 4.3-Reno," Proceedings of the British Columbia Internet Engineering Task Force, July 1990.
[18] V. Jacobson, "Fast Retransmit," Message to the End2End IETF Mailing List, April 1990.
[19] Ehab A. Khalil, "Simulation-Based Comparisons of TCP Congestion Control," IJAET, Vol. 4, Issue 2, pp. 84-96, September 2012.
[20] Ehab A. Khalil, "Simulation and Analysis Studies for a Modified Algorithm to Improve TCP in Long Delay Bandwidth Product Networks," IJAET, Vol. 1, Issue 4, pp. 73-85, September 2011.
[21] Ehab A. Khalil, "A Modified Congestion Control Algorithm for Evaluating High BDP Networks," IJCSNS International Journal of Computer Science and Network Security, Vol. 10, No. 11, November 2010.
[22] Ehab A. Khalil et al., "A Modification to Swifter Start Algorithm for TCP Congestion Control," VI International Enformatika Conference IEC 2005, Budapest, Hungary, October 26-28, 2005.
[23] Ehab A. Khalil, "Comparison Performance Evaluation of a Congestion Control Algorithm," 2nd IEEE International Conference on Information & Communication Technologies: From Theory to Applications (ICTTA'06), Damascus, Syria, April 24-28, 2006.
[24] R. El-Khoury, E. Altman, R. El-Azouzi, "Analysis of Scalable TCP Congestion Control Algorithm," IEEE Computer Communications, Vol. 33, pp. 41-49, November 2010.
[25] K. Srinivas, A.A. Chari, N. Kasiviswanath, "Updated Congestion Control Algorithm for TCP Throughput Improvement in Wired and Wireless Network," Global Journal of Computer Science and Technology, Vol. 9, Issue 5, pp. 25-29, Jan. 2010.
[26] G. Carofiglio, F. Baccelli, M. Piancino, "Stochastic Analysis of Scalable TCP," Proceedings of INFOCOM, 2009.

323

Vol. 5, Issue 1, pp. 313-325

International Journal of Advances in Engineering & Technology, Nov. 2012. IJAET ISSN: 2231-1963
[27] A. Warrier, S. Janakiraman, Sangtae Ha, I. Rhee, "DiffQ.: Practical Differential Backlog Congestion Control for Wireless Networks," Proceedings of INFOCOM 2009. [28] Sangtae Ha, Injong Rhee, and Lisong Xu, "CUBIC: A New TCP-Friendly High-Speed TCP Variant," ACM SIGOPS Operating System Review, Vol.42, Issue 5, pp.64-74, July 2008. [29] Injong Rhee, and Lisong Xu, "Limitation of Equation Based Congestion Control," IEEE/ACM Transaction on Computer Networking, Vol.15, Issue 4, pp.852-865, August 2007. [30] L-Wong, and L. Y. Lau, "A New TCP Congestion Control with Weighted Fair Allocation and Scalable Stability," Proceedings of 2006 IEEE International Conference on Networks, Singapore, September 2006. [31] Y. Ikeda, H. Nishiyama, Nei. Kato, "A Study on Transport Protocols in Wireless Networks with Long Delay," IEICE, Rep. Vol.109, No.72, pp.23-28, June 2009. [32] Yansheng Qu, Junzhou Luo, Wei Li, Bo Liu, Laurence T. Yang, " Square: A New TCP Variant for Future High Speed and Long Delay Environments," Proceedings of 22nd International Conference on Advanced Information Networking and Applications, pp.636-643, (aina) 2008. [33] Yi- Cheng Chan, Chia Liang Lin, Chen Yuan Ho, "Quick Vegas: Improving Performance of TCP Vegas for High Bandwidth Delay Product Networks," IEICE Transactions on Communications Vol.E91-B, No.4, pp.987-997, April, 2008. [34] T. V. Lakshman, U. Madhow, "The Performance of TCP/IP For networks with High Bandwidth delay Products and Random loss," IEEE/ACM Trans. on Networking, June 1997. [35] J. C. Hoe, "Improving the Start-up Behavior of a Congestion Control Scheme for TCP," Proce., of ACM SIGCOMM'96, Antibes, France, August 1996, pp.270-280. [36] K. Chandrayana, S. Ramakrishnan, B. Sikdar, S. Kalyanaraman, "On Randomizing the Sending Times in TCP and other Window Based Algorithm," Vol.50, Issue5, Feb. 2006, pp.422-447. [37] C. Dah-Ming and R. Jain, "Analysis of the Increase and Decrease Algorithms for Congestion Avoidance in Computer Networks,"Computer Networks and ISDN Systems, Vol.17, No.1, 1989, pp.1-14. [38] J. Padhye, V. Firoiu, D. Towsley, J. Kurose, "Modeling TCP Throughput: A Simple Model and it's Empirical Validation," Proceedings of ACM SIGCOMM'98, Sept.1998, pp.303-314.

[39] Yanping Teng, Haizhen Wang, Mei Jing, Zuozheng Lian, A Study of Improved Approaches for TCP Congestion Control in Ad Hoc Networks, Procedia Engineering, Volume 29, 2012, pp. 1270-1275. [40] Dmitri Moltchanov, A Study of TCP Performance in Wireless Environment Using Fixed-Point Approximation, Computer Networks, Volume 56, Issue 4, March 2012, pp. 1263-1285. [41] Hui-Wen Kan, Ming-Hsin Ho, Adaptive TCP Congestion Control and Routing Schemes Using Cross-Layer Information for Mobile Ad Hoc Networks, Computer Communications, Volume 35, Issue 4, February, 2012, pp. 454-474. [42] Adnan Majeed, Nael B. Abu-Ghazaleh, Saquib Razak, Khaled A. Harras, Analysis of TCP performance on multi-hop Wireless Networks: A Cross Layer Approach, Ad Hoc Networks, Volume 10, Issue 3, May 2012, pp.586-603. [43] Jie Feng, Lisong Xu, Stochastic TCP Friendliness: Expanding the Design Space of TCP-Friendly Traffic Control protocols, Computer Networks, Volume 56, Issue 2, February, 2012, pp. 745-761. [44] Fu XIAO, Li-juan SUN, Ru-chuan WANG, Yue-chao FANG, BIPR: a New TCP New Variant Over Satellite Networks, The Journal of China Universities of Posts and Telecommunications, Volume 18, Supplement 1, September, 2012, pp.34-39. [45] Venkataramana Badarla, C. Siva Ram Murthy, Learning-TCP: A Stochastic Approach for Efficient Update in TCP Congestion Window in Ad Hoc Wireless Networks, Journal of Parallel and Distributed Computing, Volume 71, Issue 6, June, 2011, pp. 863-878. [46] Minsu Shin, Mankyu Park, Byungchul Kim, Jaeyong Lee, Deockgil Oh, Online Loss Differentiation Algorithm with One-Way Delay for TCP Performance Enhancement, IJCSNS International Journal of Computer Science and Network Security, VOL.11 No.4, April 2011, pp. 26-36. [47] Sumedha Chokhandre, Urmila Shrawankar, TCP over Multi-Hop Wireless Mesh Network, International Conference on Computer Communication and Management Proc .of CSIT vol.5 (2011), IACSIT Press, Singapore, 2011, pp. 461-465. [48] Ghassan A. Abed, Mahamod Ismail and Kasmiran Jumari, A Survey on Performance of Congestion Control Mechanisms for Standard TCP Versions, Australian Journal of Basic and Applied Sciences, 5(12)., ISSN 1991-8178, 2011, pp. 1345-1352. [59] Yew, B.S., B.L. Ong and R.B. Ahmad, Performance Evaluation of TCP Vegas versus Different TCP Variants in Homogeneous and Heterogeneous Wired Networks, World Academy of Science, Engineering and Technology, 2011, pp.74. [50] Abed, G.A., M. Ismail and K. Jumari, A Comparison and Analysis of Congestion Window for HSTCP, Full-TCP and TCP-Linux in Long Term Evolution System Model. ICOS 2011, Langkawi, Malaysia, 2011, pp.364-368. [51] Ekiz, N., A.H. Rahman and P.D. Amer, Misbehaviors in TCP SACK generation. ACM SIGCOMM Computer Communication Review, 41(2):, 2011, pp.16-23. [52] R. El Khoury, E. Altman, R. El Azouzi, Analysis of scalable TCP congestion control algorithm, Journal of Computer Communications, Volume 33, November, 2010

324

Vol. 5, Issue 1, pp. 313-325

International Journal of Advances in Engineering & Technology, Nov. 2012. IJAET ISSN: 2231-1963
[53] Zvi Rosberg, John Matthews, Moshe Zukerman, A Network Rate Management Protocol With TCP Congestion Control and Fairness For All, Computer Networks, Volume 54, Issue 9, June, 2010, pp. 13581374. [54] William H. Lehr, John M. Chapin, On the Convergence of Wired and Wireless Access Network Architectures, Information Economics and Policy, Volume 22, Issue 1, March 2010, pp. 33-41. [55] Haiyan Luo, Song Ci, Dalei Wu, Hui Tang, End-to-end optimized TCP-friendly Rate Control For RealTime Video Streaming Over Wireless Multi-Hop Networks, Journal of Visual Communication and Image Representation, Volume 21, Issue 2, February, 2010, pp. 98-106. [56] Jehan.M, G.Radhamani, T.Kalakumari, A survey on congestion control algorithms in wiredand wireless networks, Proceedings of the International conference on mathematical computing and management (ICMCM 2010), Kerala, India, June 2010.
[57] C. Partridge, D. Rockwell, M. Allman, R. Krishnan, J. Sterbenz "A Swifter Start For TCP" BBN Technical Report No. 8339, 2002. [58] Afifi, H., Elloumi, O., Rubino, G. " A Dynamic Delayed Acknowledgment Mechanism to Improve TCP Performance for Asymmetric Links,"Computers and Communications, 1998. ISCC '98. Proceedings. 3 rd IEEE Symposium, on 30 June- 2 July 1998, pp.188 192. [59] D. D. Clark, "Window and Acknowledgement Strategy in TCP," RFC 813, July 1982. [60] Mogul, J.C., "Observing TCP Dynamics in Real Networks," Proc. ACM SIGCOMM 92, Baltimore, MD, August 1992, pp. 305-317. [61] Zhang, L., S. Shenker, and D. D. Clark, "Observations on the Dynamics of a Congestion Control Algorithm: The Effects of Two-Way Traffic," Proc. ACM SIGCOMM91, Zurich, Switzerland, August 1991, pp. 133-148.

AUTHOR
Ehab Aziz Khalil (B.Sc. '78, M.Sc. '83, Ph.D. '94) received his Ph.D. in Computer Networks and Multimedia from the Dept. of Computer Science & Engineering, Indian Institute of Technology (IIT) Bombay-400076, India, in July 1994, where he was a research scholar from 1988 to 1994. He received his M.Sc. in Systems and Automatic Control from the Faculty of Electronic Engineering, Minufiya University, Menouf 32952, Egypt, in October 1983, and his B.Sc. from the Dept. of Industrial Electronics of the same faculty in May 1978. Since July 1994 he has been working as a Lecturer with the Dept. of Computer Science & Engineering, Faculty of Electronic Engineering, Minufiya University. He participated in the TPC of the IASTED Conference, Jordan, in March 1998, and in the TPC of IEEE IC3N, USA, from 2000 to 2002, and was a Consulting Editor with Who's Who in 2003-2004. He has been a member of the IEC since 1999 and is a member of the Internet2 group. He is the manager of the Information and Link Network of Minufiya University and of the Information and Communication Technology Project (ICTP), which is currently being implemented in the Arab Republic of Egypt by the Ministry of Higher Education and the World Bank. He has published more than 90 research papers and review articles in international conferences, journals and local newsletters.


INFLUENCE OF SOIL-INDUSTRIAL EFFLUENTS INTERACTION ON SUBGRADE STRENGTH OF AN EXPANSIVE SOIL - A COMPARATIVE STUDY
A. V. Narasimha Rao1, M. Chittaranjan2

1 Professor, Department of Civil Engineering, S.V. University, Tirupati, India
2 Senior Lecturer, Bapatla Engineering College, Bapatla, Guntur District, India

ABSTRACT
The rapid growth in population and industrialization causes the generation of large quantities of effluents. The bulk of the effluents generated from industrial activities are discharged, either treated or untreated, over the soil, leading to changes in soil properties and causing improvement or degradation of the engineering behaviour of the soil. If there is an improvement in the engineering behaviour of the soil, there is value addition to the industrial wastes, serving the three benefits of safe disposal of the effluent, its use as a stabilizer, and a return of income on it. If there is degradation of the engineering behaviour of the soil, then a solution for decontamination has to be found. Hence, an attempt is made in this investigation to study the effect of certain industrial effluents, namely textile effluent, tannery effluent and battery effluent, on the California Bearing Ratio value of an expansive soil.

KEYWORDS: Expansive Soil, Textile Effluent, Tannery Effluent, Battery Effluent, C.B.R. Values

I. INTRODUCTION

The index and engineering properties of the ground get modified in the vicinity of industrial plants, mainly as a result of contamination by the industrial wastes disposed there. The major sources of surface and subsurface contamination are the disposal of industrial wastes and the accidental spillage of chemicals during the course of industrial operations. The leakage of industrial effluent into the subsoil directly affects the use and stability of the supported structure. Results of some studies indicate that the seepage of acids and bases into the subsoil can cause severe foundation failures. Extensive cracking damage to the floors, pavements and foundations of light industrial buildings in a fertilizer plant in Kerala state was reported by Sridharan (1981). Severe damage to the interconnecting pipe of a phosphoric acid storage tank, and to the adjacent buildings, due to differential movements between the pump and acid tank foundations of a fertilizer plant in Calgary, Canada, was reported by Joshi (1994). A similar case of accidental spillage of highly concentrated caustic soda solution from cracked drains in an industrial establishment in Tema, Ghana, which caused considerable structural damage to light industrial buildings in the factory in addition to localized subsidence of the affected area, has been reported by Kumapley (1985). Therefore, it is better to start ground monitoring from the beginning of a project instead of waiting for complete failure of the ground to support human activities and only then starting remedial actions.

In many situations, soils in their natural state do not present adequate geotechnical properties for use in road service layers, in foundation layers or as a construction material. In order to adjust their geotechnical parameters to meet the requirements of the technical specifications of the construction industry, soil stabilization is widely studied. Hence, attempts have been made by researchers to use industrial wastes as soil stabilizers, so that there is value addition to the industrial wastes while environmental pollution is also minimised. Shirsavkar (2010) made experimental investigations to study the suitability of molasses for improving the geotechnical properties of soil, and observed that the CBR value increases with the addition of molasses. Kamon Masashi (2001) reported that the durability of pavement is improved when stabilized with ferrum lime-aluminium sludge. Ekrem Kalkan (2006) investigated and concluded that cement-red mud waste can be successfully used for the stabilization of clay liners in geotechnical applications. The thickness of a pavement depends on the subgrade strength, which is expressed in terms of the C.B.R value of the soil. Hence, an attempt is made in this investigation to study the effect of certain industrial effluents, namely textile effluent, tannery effluent and battery effluent, on the California Bearing Ratio value of an expansive soil. The experimental investigations, results and discussion, the mechanism involved in the modification of C.B.R values, the summary and conclusions, and the scope for future work are discussed in the following sections.

II. EXPERIMENTAL INVESTIGATIONS

2.1. Materials used


2.1.1. Soil
Expansive soils, owing to their swelling nature, cause a lot of damage to civil engineering structures constructed over them. These types of soils are very sensitive to changes in their environment, such as changes in applied stress, pore fluid chemistry and the surrounding environmental conditions. Hence, an expansive soil is considered for this investigation. The soil used for this investigation was obtained from CRS near Renigunta, Tirupati. The dried and pulverized material passing through an I.S. 4.75 mm sieve is taken for the study. The properties of the soil are given in Table 1. The soil is classified as SC as per the I.S. Classification (IS 1498:1970), indicating that it is a clayey sand. It is highly expansive in nature, as the Differential Free Swell Index (DFSI) is about 255%.
Table 1: Properties of untreated soil

Sl. No.  Property                                        Value
1        Grain size distribution
         (a) Gravel (%)                                  3
         (b) Sand (%)                                    65
         (c) Silt + Clay (%)                             32
2        Atterberg limits
         (a) Liquid Limit (%)                            77
         (b) Plastic Limit (%)                           29
         (c) Plasticity Index (%)                        48
3        Differential Free Swell Index (%)               255
4        Swelling Pressure (kN/m2)                       210
5        Specific Gravity                                2.71
6        pH Value                                        9.20
7        Compaction characteristics
         (a) Maximum Dry Unit Weight (kN/m3)             18.3
         (b) Optimum Moisture Content (%)                12.4
8        California Bearing Ratio value (%) at
         (a) 2.5 mm penetration                          9.98
         (b) 5.0 mm penetration                          9.39
9        Unconfined Compressive Strength (kN/m2)         173.2

2.1.2. Industrial Effluents
2.1.2.1 Textile effluent
Textile effluent is a coloured liquid and is soluble in water. The chemical properties of the effluent are shown in Table 2.
2.1.2.2 Tannery effluent
Tannery industry effluent is a dark coloured liquid and is soluble in water. The chemical composition of the tannery effluent is given in Table 3.
2.1.2.3 Battery effluent
Battery effluent is a colourless liquid and is soluble in water. The chemical properties of the effluent are shown in Table 4.
Table 2: Chemical composition of textile effluent

Sl. No.  Parameter          Value
1        Colour             Yellow
2        pH                 9.83
3        Chlorides          380 mg/l
4        Alkalinity         2400 mg/l
5        Suspended solids   1500 gm
6        Total solids       13.50
7        BOD                150 mg/l
8        COD                6200 mg/l

Table 3: Chemical composition of tannery effluent

Sl. No.  Parameter          Value
1        Colour             Black
2        pH                 3.15
3        Chromium           250 mg/l
4        Chlorides          200 mg/l
5        Sulphates          52.8 mg/l
6        Total Hardness     520 mg/l
7        BOD                120 mg/l
8        COD                450 mg/l
9        Suspended Solids   1200 mg/l

Table 4: Chemical composition of battery effluent

Sl. No.  Parameter       Value
1        Colour          White
2        pH              8.45
3        Sulphates       250 mg/l
4        Chlorides       30 mg/l
5        Lead Sulfate    63.08%
6        Free Lead       7.44%
7        Total Lead      75.42%
8        BOD             110 mg/l
9        COD             320 mg/l

III. PROCEDURE FOR MIXING

The soil from the site is dried and hand sorted to remove pebbles and vegetative matter, if any. It is further dried, pulverized and sieved through a 4.75 mm sieve to eliminate any gravel fraction. The dried and sieved soil is stored in airtight containers, ready for mixing with the effluents. The soil sample so prepared is then mixed with solutions of different concentrations of textile, tannery and battery effluent; the effluent percentage is varied from 20% to 100% in increments of 20%. The soil-effluent mixtures are mixed thoroughly before testing.

IV. TESTS CONDUCTED ON TREATED SOIL

4.1. Standard Proctor Test


The compaction parameters, i.e. the optimum moisture content and the maximum dry unit weight, play a vital role in changing the strength characteristics of an expansive soil. In practice, the C.B.R. test is also conducted at the optimum pore fluid content and the corresponding maximum dry unit weight. But these two parameters are strongly influenced by the pore fluid chemistry. Hence, in this investigation, standard Proctor compaction tests are carried out on the expansive soil treated with textile effluent, tannery effluent and battery effluent at the effluent percentages of 0%, 20%, 40%, 60%, 80% and 100% in the pore fluid.

4.2. California Bearing Ratio Tests
The strength of the subgrade is an important factor in determining the thickness required for a flexible pavement. It is expressed in terms of the California Bearing Ratio, usually abbreviated as CBR. The results obtained from these tests are used in conjunction with empirical curves, based on experience, for the design of flexible pavements. The California Bearing Ratio value is determined corresponding to both 2.5 mm and 5.0 mm penetrations, and the greater value is used for the design of the flexible pavement. In this investigation, California Bearing Ratio tests are carried out on the expansive soil treated with textile effluent, tannery effluent and battery effluent varying from 0% to 100% in increments of 20%. The tests are conducted on remoulded soil specimens at their respective optimum pore fluid contents and maximum dry unit weights, compacted according to I.S. light compaction.
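For reference, the defining relation, which the text does not write out explicitly, is the ratio of the load sustained by the soil specimen at a given penetration to the load sustained by a standard crushed-stone material at the same penetration:

$$\mathrm{CBR}\,(\%) = \frac{P_{\text{test}}}{P_{\text{standard}}} \times 100$$

where $P_{\text{test}}$ is the load carried by the specimen at 2.5 mm (or 5.0 mm) penetration and $P_{\text{standard}}$ is the corresponding standard load, commonly taken per IS 2720 (Part 16) as 1370 kgf at 2.5 mm and 2055 kgf at 5.0 mm penetration.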

V. RESULTS AND DISCUSSIONS

5.1. Standard Proctor Test


Standard Proctor compaction tests are carried out on the expansive soil treated with textile effluent, tannery effluent and battery effluent at the effluent percentages of 0%, 20%, 40%, 60%, 80% and 100%, and the results, i.e. the optimum pore fluid content and the maximum dry unit weight, are obtained. The variation of the optimum pore fluid content at different percentages of textile, tannery and battery effluents is shown in Table 5. The variation of the maximum dry unit weight at different percentages of textile, tannery and battery effluents is shown in Table 6.
Table 5: Optimum Pore fluid Content (O.P.C.) at different percentages of effluents

Effluent (%) : Water (%)    O.P.C. (%)
                            Textile   Tannery   Battery
0:100                       12.4      12.4      12.4
20:80                       12.6      12.1      13.5
40:60                       12.9      11.9      13.6
60:40                       13.4      11.6      13.7
80:20                       14.4      11.3      13.9
100:0                       15.4      11.1      14.1

Table 6: Maximum Dry Unit Weight (M.D.U.) at different percentages of effluents

Effluent (%) : Water (%)    M.D.U. (kN/m3)
                            Textile   Tannery   Battery
0:100                       18.30     18.3      18.30
20:80                       18.27     18.6      17.71
40:60                       18.22     18.8      17.51
60:40                       18.14     19.1      17.41
80:20                       18.09     19.5      17.37
100:0                       18.03     19.8      17.2

From Table 5 it is observed that the optimum pore fluid content increases with an increase in the percentage of textile effluent and battery effluent, whereas it decreases with an increase in the percentage of tannery effluent. From Table 6 it is observed that there is a reduction in the maximum dry unit weight with a percentage increase in textile effluent and battery effluent, whereas it increases with an increase in the percentage of tannery effluent.

5.2. California Bearing Ratio Test


The load penetration curves of treated and untreated soil obtained from California Bearing Ratio tests at different percentages of Textile effluent are presented in Fig.1.


Fig.1: Load Penetration curves of treated soil at different percentages of Textile Effluent

The load penetration curves of treated and untreated soil obtained from California Bearing Ratio tests at different percentages of tannery effluent are presented in Fig. 2. The topmost curve corresponds to 100% effluent, followed by 80%, 60%, 40%, 20% and 0% respectively.

Fig.2: Load Penetration curves of treated soil at different percentages of Tannery Effluent

The load penetration curves of treated and untreated soil obtained from California Bearing Ratio tests at different percentages of Battery effluent are presented in Fig.3.


Fig.3: Load Penetration curves of treated soil at different percentages of Battery Effluent

5.2.1. CBR Values at 2.5mm penetration


The variation of the CBR values with different percentages of textile, tannery and battery effluents is shown in Table 7. The percent increase/decrease in CBR values at the different effluent percentages is shown in Table 8 (the underlying computation is restated after Table 8). From Table 8 it is observed that the maximum percent increase in CBR value at 2.5 mm penetration is about 45% for 100% textile effluent and about 50% for 100% tannery effluent. The maximum percent decrease in CBR value, about 21%, is found for 100% battery effluent.
Table 7: C.B.R. values at 2.5 mm penetration at different percentages of effluents

Effluent (%) : Water (%)    C.B.R. values (%)
                            Textile   Tannery   Battery
0:100                       9.98      9.98      9.98
20:80                       10.59     10.38     9.76
40:60                       10.93     11.28     9.31
60:40                       14.02     12.18     8.84
80:20                       14.12     13.58     8.37
100:0                       14.43     14.98     7.9

Table 8: Percent increase/decrease in CBR values at 2.5 mm penetration at different percentages of effluents

Effluent (%) : Water (%)    Change in C.B.R. value (%)
                            Textile   Tannery   Battery
0:100                       -         -         -
20:80                       6.11      4.00      -3.57
40:60                       9.52      13.02     -11.54
60:40                       40.48     22.04     -16.90
80:20                       41.48     36.07     -18.91
100:0                       44.58     50.10     -20.83
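For clarity, the entries of Table 8 (and of Table 10 below) follow the usual relative-change computation with respect to the untreated soil; the formula is our restatement, not the authors':

$$\Delta\mathrm{CBR}\,(\%) = \frac{\mathrm{CBR}_{\text{treated}} - \mathrm{CBR}_{\text{untreated}}}{\mathrm{CBR}_{\text{untreated}}} \times 100$$

For example, for 100% textile effluent at 2.5 mm penetration, Table 7 gives $(14.43 - 9.98)/9.98 \times 100 \approx 44.6\%$, which matches the 44.58% entry in Table 8 up to rounding.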

The variation of the CBR values at 2.5 mm penetration with the different percentages of the three effluents is shown in Fig. 4. From the figure it is observed that the CBR value increases with the percentage increase of textile and tannery effluent, whereas it decreases in the case of battery effluent. The maximum percentage increase or decrease occurs at 100% effluent in all three cases.

Fig.4: Variation of CBR values at 2.5mm penetration at different percentages of Effluents

5.2.2. CBR Values at 5.0mm penetration


The variation of the CBR values with different percentages of textile effluent, tannery effluent and battery effluent is shown in Table 9. The percent increase/decrease in CBR values at the different effluent percentages is shown in Table 10. From Table 10 it is observed that the maximum percent increase in CBR value at 5.0 mm penetration is about 39.82% for 100% textile effluent and about 45.47% for 100% tannery effluent. The maximum percent decrease in CBR value, about 16.50%, is found for 100% battery effluent.
Table 9: CBR values at 5.0 mm penetration at different percentages of effluents

Effluent (%) : Water (%)    C.B.R. values (%)
                            Textile   Tannery   Battery
0:100                       9.39      9.39      9.39
20:80                       10.24     9.69      9.08
40:60                       10.43     10.61     8.46
60:40                       10.94     11.22     8.09
80:20                       12.77     11.83     7.96
100:0                       13.13     13.66     7.84

Table 10: Percent increase/decrease in C.B.R values at 5.0 mm penetration at different percentages of effluents

Effluent (%) : Water (%)    Change in C.B.R. value (%)
                            Textile   Tannery   Battery
0:100                       -         -         -
20:80                       9.05      3.20      -3.30
40:60                       11.07     12.9      -10.47
60:40                       16.50     19.48     -13.88
80:20                       35.99     25.98     -15.22
100:0                       39.82     45.47     -16.50

The variation of the CBR value at 5.0 mm penetration with different percentages of the three effluents is shown in Fig. 5. From the figure it is observed that the CBR value increases with the percentage increase of textile and tannery effluent, whereas it decreases in the case of battery effluent. The maximum percent increase or decrease in C.B.R. value occurs at 100% effluent in all three cases.

Fig.5: Variation of CBR values at 5.0mm penetration at different percentages of Effluents

VI. MECHANISM INVOLVED IN MODIFICATION OF C.B.R VALUES OF TREATED SOIL

In the case of expansive soils, the engineering behaviour of the soil is governed by the thickness of the diffuse double layer. The thickness of the double layer is in turn affected by the pore fluid chemistry, i.e. the dielectric constant, electrolyte concentration, ion valence, hydrated ion radius, etc. When the soil interacts with industrial effluents, the interaction changes the pore fluid chemistry and subsequently the thickness of the diffuse double layer. These changes are likely to be reflected in variations of the engineering properties.
6.1 Textile effluent
When the soil is mixed with textile effluent, the dry density decreases and the optimum pore fluid content increases. This can be attributed to ion exchange at the surface of the clay particles. The chlorides in the additives react with the lower-valence metallic ions in the clay microstructure and cause a decrease in the double layer thickness. The decrease in double layer thickness increases the attractive forces and decreases the repulsive forces, leading to a flocculated structure; hence the dry density decreases. Because water is retained within the voids of the flocculated structure, the water-holding capacity of the soil increases, and hence the optimum moisture content increases. The CBR values of the soil treated with textile effluent increase at both 2.5 mm and 5.0 mm penetration. This is because textile effluent is capable of forming covalent linkages with cellulose, amino, thiol and hydroxyl groups (Srimurali, 2001). Textile effluents also contain Cl- or O-SO3 leaving groups, enabling the dyes to form covalent bonds with fibre (Srimurali, 2001). Clay minerals contain hydroxyl groups at their surface, and a bond possibly forms between the hydroxyls in the clay minerals and the dyes or the O-SO3Na of the dyes. This chemical bonding may be responsible for the increase in the CBR values of the soil treated with textile effluent.
6.2 Tannery effluent
The CBR values of the soil treated with tannery effluent increase at both 2.5 mm and 5.0 mm penetration. When the soil is mixed with tannery effluent, the dry density increases due to the adsorption of the chromium (CrO4) ions present in the tannery effluent onto the clay particles. Owing to its higher valence, the adsorption of divalent or trivalent chromium decreases the double layer thickness.

The reduction of the double layer thickness brings the particles closer and increases the mechanical contact between them. The increase in dry density and in the mechanical contacts between the particles increases the inter-particle friction, which in turn leads to a high resistance to penetration of the plunger. Hence, high CBR values are observed for the soil treated with tannery effluent.
6.3 Battery effluent
The CBR values of the soil treated with battery effluent decrease at both 2.5 mm and 5.0 mm penetration. This is attributed to the adsorption of the sulphates present in the battery effluent onto the clay particles. Adsorption of divalent negative sulphate ions causes the entire clay particle to become negatively charged. If the entire clay particle becomes negatively charged, the activity of the clay mineral increases; the particle holds a larger quantity of water as double layer water and promotes expansion of the double layer. Expansion of the diffuse double layer increases the distance between individual soil grains, which may decrease the electrostatic and electromagnetic attractive forces that hold the soil particles together. Hence, the decrease in the maximum dry unit weight, together with the weak chemical bonding developed between the clay minerals and the reactive chemicals present in the battery effluent, leads to a low resistance to penetration of the plunger. Hence, low CBR values are observed for the soil treated with battery effluent.

VII. SUMMARY AND CONCLUSIONS

Industrial activity is necessary for the socio-economic progress of a country, but at the same time it generates large amounts of solid and liquid wastes. The disposal of solid or liquid effluents and waste by-products over the land, and the accidental spillage of chemicals during the course of industrial processes and operations, cause alterations of the physical and mechanical properties of the ground in the vicinity of industrial plants. If the soil-waste interaction causes an improvement in the soil properties, then the industrial wastes can be used as soil stabilizers. On the other hand, if it causes a degradation of the soil properties, then a solution for the decontamination of the soil has to be found. In this investigation, an attempt has been made to study the effect of certain industrial effluents, namely textile, tannery and battery effluents, on the CBR values of an expansive soil. From the results presented in this investigation, the following conclusions are drawn:
- The expansive clay considered in this investigation is sensitive when treated with industrial effluents.
- When the soil is mixed with textile and tannery effluents separately, an increase in CBR values is observed; when it is mixed with battery effluent, the CBR values decrease.
- The maximum improvements in CBR values corresponding to 2.5 mm and 5.0 mm penetration are about 45% and 40% respectively when the soil is treated with textile effluent, and occur at 100% textile effluent.
- The maximum improvements in CBR values corresponding to 2.5 mm and 5.0 mm penetration are about 50% and 45% respectively when the soil is treated with tannery effluent, and occur at 100% tannery effluent.
- The maximum reductions in CBR values corresponding to 2.5 mm and 5.0 mm penetration are about 21% and 17% respectively when the soil is treated with battery effluent, and occur at 100% battery effluent.
- Textile and tannery effluents raise the hope of value addition to the industrial wastes, whereas battery effluent is to be treated as a contaminant.

VIII. SCOPE FOR FUTURE WORK

In the present investigation, the effect of textile, tannery and battery effluents on CBR values has been studied. Studies can also be made of the influence of these effluents on the plasticity, swelling, compaction and strength characteristics, pH, and drainage and consolidation characteristics, so that a comprehensive understanding of the behaviour of an expansive soil treated with textile, tannery and battery effluents can be obtained. The work can be extended to other contaminants, pollutants, effluents and industrial wastes, namely workshop waste, sugar mill waste, pharmaceutical plant waste, dairy waste, paper and pulp mill waste, fertilizer plant waste, steel mill waste, oil refinery waste, petrochemical complex waste, soap industry waste, etc.


REFERENCES
[1] Ekrem Kalkan, "Utilization of red mud as a stabilization material for the preparation of clay liners," Engineering Geology, Vol. 87, No. 3-4, 2006, pp. 220-229.
[2] IS 1498-1970 (First Revision), Classification of Soils for General Engineering Purposes.
[3] Joshi, R.C., Pan, and Lohita, P., "Volume change in calcareous soils due to phosphoric acid contamination," Proc. of the XIII ICSMFE, New Delhi, Vol. 4, 1994, pp. 1569-1574.
[4] Kumapley, N.K., and Ishola, A., "The effect of chemical contamination on soil strength," Proc. of the XI ICSMFE, San Francisco, Vol. 3, 1985, pp. 1199-1201.
[5] Kamon Masashi, Gu Huanda, and Masahiro, "Improvement of mechanical properties of ferrum lime stabilized soil with the addition of aluminium sludge," Materials Science Research International, ISSN 1341-1683, Vol. 7, 2001, pp. 47-53.
[6] Sridharan, A., Nagaraj, T.S., and Sivapullaiah, P.V., "Heaving of soil due to acid contamination," Proc. of the X ICSMFE, Stockholm, Vol. 2, 1981, pp. 383-386.
[7] Shirsavkar, S.S., and Koranne, S.S., "Innovation in road construction using natural polymer," EJGE Journal, Vol. 15, Bund. O, 2010, pp. 1614-1624.
[8] Srimurali, M., "Removal of Colour from Dye Wastes by Chemical Coagulation," Ph.D. Thesis, Dept. of Civil Engg., S.V. University, India, 2001.

AUTHORS BIOGRAPHIES
A. V. Narasimha Rao is presently a Professor of Civil Engineering at the S.V.U. College of Engineering, Tirupati, with 33 years of teaching, research and consultancy experience. He obtained his Master of Engineering degree and Ph.D. from IIT Chennai. He has held various administrative posts in S.V. University, such as Head of the Civil Engineering Department and Vice-Principal. He has published more than 100 research papers and two books. He received the Eminent Engineer Award conferred by the Institution of Engineers (India), the Engineer of the Year Award 2007 conferred jointly by the Government of A.P. and the Institution of Engineers (India), and the State Teacher Award 2012. His interests are geosynthetics, ground improvement techniques, environmental geotechniques and marine geotechnology.

M. Chittaranjan is a research scholar working for a Ph.D. degree in Civil Engineering; his area of research is environmental geotechniques. Presently he is working as a Senior Lecturer in the Department of Civil Engineering of Bapatla Engineering College, Bapatla, India. He has 8 years of experience in academics. He obtained his Master of Technology degree from S.V. University, Tirupati, India. He has published research papers in national and international venues.


IMPLEMENTATION OF BROWSER BASED IDE TO CODE IN THE CLOUD


Lakshmi M. Gadhikar, Deepa Vincent, Lavanya Mohan, Megha V. Chaudhari

Department of Information Technology, Fr. CRIT, Vashi, Mumbai University, Navi Mumbai, Maharashtra, India

ABSTRACT
Cloud computing is one of the latest computing paradigms, and many companies are making it an integral component of their computing strategy. Cloud computing provides a way of taking applications online, so that the applications and their associated data can be accessed with just an Internet connection and a web browser. Like many other software applications, an Integrated Development Environment (IDE) can also be hosted on the cloud. This paper conveys the details of the implementation of a cloud based IDE for the Java language. This browser based IDE empowers users to write, compile and run their Java language code from various devices, such as smart phones, laptops or desktops, that allow Internet access. The IDE is implemented to accommodate sharing of projects and files among users. It also supports real time collaboration with peers, by which two or more people who have access to the same file can modify it at the same time, with the changes reflected to the others in real time. The IDE also provides users with the facility to download files so as to keep a copy on the user's local machine, and it integrates forums and blogs. Users who require instant help related to coding can use the integrated forum to post their queries, while users who wish to share their knowledge can post on the integrated technical blog. This IDE thus provides many features under a single roof that users can utilize even on the go, with mobile devices that have Internet access such as laptops or smart phones. It eliminates the need to download any software or desktop IDE, because the application is present on the cloud, and it permits people working under various heterogeneous environments to code, collaborate and share knowledge with ease.

KEYWORDS: Browser Based IDE, Cloud Computing, Integrated Development Environment.

I. INTRODUCTION

The latest trend is to take desktop applications online and to provide them as a service. Many desktop applications are being hosted on the cloud to make them easily available and accessible; the Google Docs editor is one such application. Today, people feel an increasing need to write software programs on the go. They may also find, at times, that their machines do not have the software required for coding. For example, they may want to code a Java program on a device that does not have an IDE or a JDK installed. In such cases, they would have to download hundreds of megabytes of software and go through a lengthy installation process, which can be very inconvenient. The solution to this problem can be an online IDE. (An IDE is a program for software developers that combines the functionality of a text editor, a compiler, etc. [1])

An online IDE, which can also be called a browser based IDE [2], is an online programming environment that is accessible to everyone through a web browser and an Internet connection.

II. RELATED WORK

There are a few browser based IDEs that support coding in various languages like C#.NET, HTML, CSS, JavaScript, etc. The concept of a browser based IDE to code, compile and run Java language programs is a very recent development. A few existing browser based coding environments are Cloud9 IDE, CodeRun Studio, ideone, Eclipse Orion, eXo Cloud IDE, etc. Cloud9 IDE supports HTML, CSS, JavaScript, etc. [3]; it is mainly for web development and has support for real time collaboration. CodeRun Studio [4] supports ASP.NET, C#.NET and Silverlight as well as the languages supported by Cloud9 IDE, and it allows sharing of code through hyperlinks. Eclipse Orion [5] is mainly for web development and supports HTML and JavaScript. None of the above IDEs supports the Java language. ideone [6] is not an IDE; it is like a pastebin that supports compilation and debugging of code in many languages, including Java, but it does not permit the creation of projects. eXo Cloud IDE is the only cloud-based IDE that supports programming in the Java language [7], but it does not support real time collaboration.

III. PROPOSED SYSTEM


This paper explains the implementation details of the browser based IDE which is present on the cloud as shown in Fig 1.

Fig 1. IDE present on the cloud accessible from various devices that provide internet access

This IDE is accessible from various devices, like desktops, laptops and smart phones, that have an Internet connection and a web browser. An Internet connection and a web browser are the only requirements to access this IDE present on the cloud, which removes the need to download and install software. Also, as the application is present on the cloud, most operating system or hardware compatibility issues are eliminated [8]. This IDE allows users to collaborate to write Java language programs and to compile and run them. Nowadays, organizations conduct global projects in which employees from across the world participate, and they find the need for improved collaboration techniques. Collaboration can be made easy with applications hosted on the cloud. The IDE implemented here provides real time collaboration, by which multiple users with appropriate access rights for the files can access and modify the same files at the same time, with their changes reflected to all the others in real time. This IDE also provides a way to share projects with peers by granting appropriate access rights. The IDE also gives users the facility to take a backup of all the projects they have created by downloading the project files onto their local machines, in the format of their choice. For example, some users who have created Java language files may want those files saved as text files; they can do so by specifying the .txt extension, or any other extension of their choice. Also, users developing applications often encounter errors and exceptions in their programs. The browser based IDE provides, apart from easy sharing and collaboration, a method to get immediate help: it integrates an online forum, to which all the users who have registered with the cloud based IDE have access. They can ask queries or hold discussions in order to get their problems solved. The IDE also provides an integrated technical blog facility for all users who wish to share their knowledge on various technical topics.

This paper describes how the cloud based IDE, with additional features like easy sharing and collaboration and integrated forums and blogs, was implemented to write, compile and run programs in the Java language. The rest of the paper is organized as follows: Section IV describes the advantages of the browser based IDE, Section V describes the architecture, Section VI describes the implementation details, and Section VII describes the experimental results.

IV. ADVANTAGES OF THE BROWSER BASED IDE


1. Virtually limitless computing power
The browser based IDE is present on the cloud. This means that the IDE has all the advantages of cloud hosting, like virtually limitless computing power, scalability, risk protection, etc. [9]. Also, there is no large initial investment.
2. Easy collaboration with peers
The IDE provides for easy pair programming and sharing. It provides the feature of real time collaboration, by which different users working on the same project or module can share it and modify it simultaneously; the changes made by one are visible to the others in real time. This enables easy collaboration.
3. Downloading and saving of files
The developers can download their files and save them on their own machines in the format of their choice. For example, certain developers who are using a smart phone may want to save a Java file as a .txt file.
4. Integrated forums
The developers can hold discussions and ask or answer queries with the help of the integrated forums. This aids developers who require technical help.
5. Knowledge sharing with blogs
The developers can share their knowledge by using the integrated blog facility. This is of great use for sharing new ideas, knowledge or know-how.
6. Accessibility via a variety of devices
The IDE is accessible from anywhere with devices like smart phones or laptops that have an Internet connection.
7. Increased portability
This IDE also eliminates the operating system issues that may arise while downloading and installing different software, because no download or installation is needed to use it. The IDE needs only a browser and an Internet connection.

V. SYSTEM ARCHITECTURE

The browser based IDE has four important modules [10] that themselves have many sub-modules as shown in Fig 2.

5.1 Registration and Login module


The first module is the registration and login module. This module makes use of a table called Users. The users who want to access the cloud IDE have to first register themselves; their details are stored in the Users table. The details from this table are used again to authenticate the user at the time of login.

5.2 Editor module


There are many sub-modules in the second module.

5.2.1 Create, open, save and delete modules
The create module permits the users to create new files. The details of the file are stored in the Documents table, and the access details of the file for the user are stored in the Access Rights table; the owner is, by default, given "all access" rights. Users who have already created files, or who have shared access to files created by someone else, can open those files; the permissions for the user are checked against the Access Rights table. The next module is the save module. Saving is an automatic process in this IDE: the user does not have to explicitly hit a save button. Files are saved automatically at regular intervals of time, or when the user makes changes in the file and an event is generated. Users who have appropriate permissions can delete files; when they do so, the entire file details are removed from the Documents table. The users are also given a special type of delete option called "remove from the list". When a user hits this option, the access rights of that user for that particular file are removed from the Access Rights table, but the file details and the other users' access rights are retained in the tables.

Fig 2 Block diagram of Browser Based IDE to Code in the Cloud

5.2.2 Download module
Users who have "all access" rights are allowed to download files onto their local machines, for backup or other purposes. The user provides the extension with which the file is to be stored on their machine. A new file with the extension of the user's choice is created on the user's machine, and all the contents of the original file are copied into it.

5.3 Compilation and Execution module
The most important use of the IDE is to compile and run Java programs. Users who create Java language programs can execute them by hitting the compile-and-run button. On doing so, the compiler checks for compilation errors. If there are errors, they are reported to the user in a text area. If there are no errors, the .class file is created and the output of the execution is displayed to the user.

5.4 Add-ons module


This module contains all the additional features of the IDE.
5.4.1 Sharing and real time collaboration
This module allows users to share their projects and files with others. The details of the users with whom a particular file is shared are stored in the Access Rights table; either "read only" or "all access" rights can be provided. After a file is shared among multiple users, all of them can access the same file and make changes to it at the same time, and the changes made by one are immediately reflected to all the others.
5.4.2 Integrated forums
The IDE also integrates forums. The details of the queries that users ask are stored in the Forums table, and the responses from other users are stored in the Answers table.
5.4.3 Technical blogs
The last module in the figure is the integrated blogs. Users can post technical blogs, whose details are stored in the Posts table. These blogs are accessible to all the users who have registered with the cloud IDE.

VI. IMPLEMENTATION DETAILS

The browser based IDE is a cloud application that is provided as a service. The IDE is mainly built to allow developers to code, compile and run Java language programs. It has been implemented using the CakePHP framework [11]. XAMPP Server 1.0, Apache Tomcat Server 6, a MySQL database and the Internet Explorer 6 web browser were used during the implementation, along with an Internet connection. The implementation details of all the modules, options and features of the IDE are given below.

6.1 Registration and Login


The users can register themselves by providing details like username, password, etc., as shown in Fig 5. The password is hashed using the SHA1 algorithm, the US Secure Hash Algorithm. These details are stored in the database tables and are used for authenticating the user at the time of login. After successful login, the user's home page is displayed. In this page, the list of all the files that the user has access to is displayed. This list contains the files that the user has created and also the files that someone else has shared with that user. The user can click on any file name in the list to open and view its contents and also make changes to the file.
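As an aside, the hashing step can be illustrated with a minimal Java sketch built on the standard MessageDigest API. This is only an illustration of SHA-1 hashing; the class and method names are ours, and the paper's own implementation performs the hashing inside the CakePHP framework rather than in Java:

    import java.math.BigInteger;
    import java.nio.charset.StandardCharsets;
    import java.security.MessageDigest;
    import java.security.NoSuchAlgorithmException;

    public class PasswordHasher {
        // Returns the SHA-1 digest of the given password as a 40-character hex string.
        public static String sha1Hex(String password) throws NoSuchAlgorithmException {
            MessageDigest md = MessageDigest.getInstance("SHA-1");
            byte[] digest = md.digest(password.getBytes(StandardCharsets.UTF_8));
            return String.format("%040x", new BigInteger(1, digest)); // zero-padded hex
        }

        public static void main(String[] args) throws Exception {
            // Only the hash, never the plain password, is what the Users table would store.
            System.out.println(sha1Hex("example-password"));
        }
    }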

6.2 Create, Open and Save Files


The users can create new files by clicking on the appropriate button. The user is asked to provide the name of the file at the time of creation, as shown in Fig 7.

The details of the new file, including its blank content, are stored in the database tables. After the successful creation of the file, it is opened. Also, when a new file is created, the user who creates it is given "all access" rights for that file. The users have access to the files listed on the user home page, as shown in Fig 6. The names of the files are actually links to a new page where the file contents are displayed and can be edited if the user has the appropriate access rights, as shown in Fig 8; the user clicks on the name of a file in order to open it. Open files are saved through an automatic process. The original file contents are displayed to the user in a text area when the file is opened, as shown in Fig 8, and the user can then modify the contents of the text area. The new contents of the text area are sent by the client to the server at set intervals of time (500 ms), and the server stores the new contents; this way, the files are saved automatically.

6.3 Compilation and Execution


The main functionality provided by the browser based IDE is the compilation and execution of Java language programs written by its users. When a user presses the compile-and-run button, the contents of the text area, i.e. the program, and the class name are retrieved. Then it is checked whether a file with that particular name already exists. If no such file exists, a new file is created on the cloud; the user is not given any information regarding the location of the file. The new file is given a .java extension and the same name as the retrieved class name. This file is then opened in read-write mode and filled with the contents that were initially retrieved. If the file already exists on the cloud, no new file is created: the existing file is opened in read-write mode and the retrieved contents are written over its previous contents. Every time the user clicks on the compile-and-run option, the .java file created on the cloud is rewritten, so that the changes the user made to the file before compiling are used and the latest copy of the file is compiled. After the .java file is created, the actual compilation takes place using the javac command. If errors are present, they are displayed to the user, as shown in Fig 9. If there is no error, a new .class file with the same name is created if it doesn't already exist. If a .class file with the same name as the retrieved class name already exists, that .class file is first opened in read-write mode and all its contents are erased; then compilation takes place and the file is loaded with the new bytecode contents. After the .class file is generated, the class file is run using the java command. The output is obtained and displayed to the user on his screen, as shown in Fig 10 and Fig 11.
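A minimal sketch of this compile-and-run step is given below, using Java's ProcessBuilder to invoke the javac and java commands. The class name, the paths and the modern (Java 11+) file APIs are our assumptions for illustration; the actual server-side code belongs to the CakePHP application and would additionally need sandboxing and time limits, which this sketch omits:

    import java.io.IOException;
    import java.nio.file.Files;
    import java.nio.file.Path;

    public class CompileAndRun {
        // Writes the submitted source to <className>.java, compiles it with javac and,
        // if compilation succeeds, runs it with java, returning the resulting output.
        public static String compileAndRun(Path workDir, String className, String source)
                throws IOException, InterruptedException {
            Path javaFile = workDir.resolve(className + ".java");
            Files.writeString(javaFile, source); // overwrite with the latest contents

            Process compile = new ProcessBuilder("javac", javaFile.toString())
                    .redirectErrorStream(true).start();
            String compileOutput = new String(compile.getInputStream().readAllBytes());
            if (compile.waitFor() != 0) {
                return "Compilation errors:\n" + compileOutput; // shown in the text area
            }

            Process run = new ProcessBuilder("java", "-cp", workDir.toString(), className)
                    .redirectErrorStream(true).start();
            String runOutput = new String(run.getInputStream().readAllBytes());
            run.waitFor();
            return runOutput; // program output displayed to the user
        }
    }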

6.4 Sharing Projects


This IDE facilitates the sharing of files and projects among users. To share a project with another user, the email id of that user has to be specified, and the project that is to be shared has to be selected, as shown in Fig 13. When this is done, the other user is given access to the project. While sharing a project, the person who shares it must specify the access rights to be given to the other user: either "read only" or "all access" rights can be provided. If a user is given "read only" access, he cannot make changes to the contents of the files of that project; also, if that user wishes to share the project with someone else, he is only allowed to give the new user "read only" access. But if the creator of the project shares it with some other user providing him "all access" rights for that project, then that user can share the project with others providing them either "read only" or "all access" rights. (A small sketch of this grant rule follows.)
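The grant rule just described, namely that a sharer may pass on at most their own level of access, can be captured in a few lines of Java. The enum and method names are illustrative, not taken from the implementation:

    public class AccessPolicy {
        enum Access { READ_ONLY, ALL_ACCESS }

        // A "read only" sharer may grant only "read only"; an "all access" sharer
        // may grant either "read only" or "all access".
        static boolean canGrant(Access sharer, Access requested) {
            return sharer == Access.ALL_ACCESS || requested == Access.READ_ONLY;
        }

        public static void main(String[] args) {
            System.out.println(canGrant(Access.READ_ONLY, Access.ALL_ACCESS)); // false
            System.out.println(canGrant(Access.ALL_ACCESS, Access.READ_ONLY)); // true
        }
    }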

6.5 Real Time Collaboration


Real time collaboration is one of the most important features of this IDE. It provides various users with the ability to modify the same file at the same time and to view the changes made by the others in real time. The feature is implemented through the automatic saving of files and the retrieval of their contents at regular intervals. When a user opens a file that he has access to, the contents of that file are retrieved from the database tables and displayed to him. If some other user who has access to the same file opens it, the contents of the file are displayed to him as well. So, multiple users who open the same file at the same time each have their own copy of the actual contents of the file.

Now, when any user makes changes to the file, the modified content is retrieved from the text area and stored into the database table again. The server then reflects the modified content to all the users in their text areas. This entire process uses AJAX technology in order to refresh just the relevant part of the page with the modified contents. The process is initiated by the client, and the client pulls the new contents of the files.
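In the implementation this pull loop runs as AJAX JavaScript in the browser, but the pattern can be sketched in Java for concreteness. The endpoint URL and class name below are hypothetical assumptions, and a real client would poll per open file and merge edits rather than print a message:

    import java.net.URI;
    import java.net.http.HttpClient;
    import java.net.http.HttpRequest;
    import java.net.http.HttpResponse;

    public class CollaborationPoller {
        public static void main(String[] args) throws Exception {
            HttpClient client = HttpClient.newHttpClient();
            // Hypothetical endpoint returning the latest stored contents of one file.
            HttpRequest request = HttpRequest.newBuilder(
                    URI.create("http://example.com/ide/files/42/contents")).GET().build();

            String lastSeen = "";
            while (true) {
                String latest = client.send(request, HttpResponse.BodyHandlers.ofString()).body();
                if (!latest.equals(lastSeen)) {
                    lastSeen = latest;
                    System.out.println("File changed; refreshing the editor view");
                }
                Thread.sleep(500); // the IDE polls at 500 ms intervals
            }
        }
    }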

6.6 Deleting and Removing the Projects from the List


There are two versions of the delete option provided to the users: an actual delete option, and an option called "remove from list". Users who have "all access" to certain projects can delete those projects completely. When such a user presses the delete button, the project details are removed from the table, together with the details of the access rights of the different users for that project. The delete option is provided only to those users who have all the access rights for that particular project. The other variant, "remove from the list", is an option given to all users who have access to a particular project, irrespective of whether the access right granted to them is "read only" or "all access". When a user clicks this option, the access right of that user for that project is deleted, so the project name no longer appears in the list of projects that the user sees; but the actual project and the other users' access rights are retained in the database tables. This means that the other users still have access to the project. If the user who removed the project from his list wants access to the project again, some other user who currently has access to it will have to share it with him again. Also, if the user who removes the project from his list was the only user with access to that project, a warning is displayed, and then both the access rights of that user and the project itself are removed from the tables.

6.7 Downloading Files


Only users who have all access to a particular file can download that file; users with read-only access are not permitted to download files. The user has to select the file to download and can also specify the format in which the file should be downloaded. For example, if the user wants the file to be downloaded with a .txt extension, he should specify that extension. If no extension is provided, by default the file is stored on the local machine with the .java extension. After the user selects the file and specifies the extension, the name of the file on the cloud and the contents of the file are retrieved. A new file is then created on the user's local machine at the location he specified for saving the downloaded file; it is given the retrieved name and the specified extension, and the retrieved contents are written into it. After this process completes, the user can view the file on his machine. This downloading process is shown in Fig 14.
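The extension-defaulting behaviour described above takes only a few lines; in the sketch below, download() is a hypothetical helper and the paths are illustrative:

```python
# Illustrative sketch of the download step; download() and the example
# arguments are assumptions, not the IDE's actual code.
import os

def download(file_name, contents, save_dir, extension=None):
    """Write the retrieved cloud file to the local machine.

    If no extension is supplied, the file defaults to '.java'.
    """
    ext = extension if extension else ".java"
    local_path = os.path.join(save_dir, file_name + ext)
    with open(local_path, "w") as f:
        f.write(contents)
    return local_path

# e.g. download("HelloWorld", "class HelloWorld {}", "/tmp", ".txt")
```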

6.8 Integrated Forums


This IDE integrates forums for all users who face various problems during the implementation of their programs or who have queries related to various technical topics. The forums can be accessed by all users of the IDE after logging in, as shown in Fig 15. Any user is allowed to ask questions or answer them. Only the user who posts a question can delete or modify it. Similarly, when users post answers to a particular question, only the owner of an answer can delete or modify it, as shown in Fig 16.

6.9 Integrated Blogs


Another feature that the IDE provides is that of integrated technical blogs. All users, after login, can also avail this integrated blogs facility. They can use it to write about various new technologies, how best to approach a type of problem, algorithms, etc. All users will be able to view the posts from every other user, but only the creator of a post is given the rights to modify or delete it, as shown in Fig 18.

VII. EXPERIMENTAL RESULTS

As the IDE is provided as a service, the users need an Internet connection to access it. The users have to type the correct URL, and the home page will be displayed, as shown in Fig 3.

Fig 3. Home Page

Fig 4. Home Page accessible through a mobile phone

This IDE is accessible from all devices that have Internet access, such as smart phones, laptops, desktops, etc. Fig 4 shows that this Browser Based IDE can be accessed from a mobile phone with an Internet connection.

7.1 Registration and Login


The users can register themselves by providing details like username, password, etc., as shown in Fig 5.

7.2 Users Home Page


After successful login, the user will be redirected to a page called the user home page shown in Fig 6.

Fig 5 New User Registration Page

Fig 6 User Home Page (After Login)

7.3 Creation of New Files


The users can create new files by clicking on the appropriate button. The user is asked to provide the name of the file at the time of creation as shown in Fig 7.

7.4 Opening Existing Files
The names of the files are links to a new page where the file contents are displayed; the user has to click on the name of a file in order to open it, as displayed in Fig 8.

Fig 7 Create New Projects Page

Fig 8 Page upon opening of a file

7.5 Compilation and Execution


When the user hits the Compile and Run button, the source file is compiled using the Java compiler. If errors are present, they are displayed to the user as shown in Fig 9.

Fig 9 Compilation error displayed to the users

If there is no error, the .class file is generated and run using the java command. The output is obtained and displayed to the user on his screen [12], as shown in Fig 10 and Fig 11.
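This compile-then-run step can be sketched as two subprocess calls, assuming the JDK's javac and java binaries are available on the server; the sketch below is illustrative, not the IDE's actual code:

```python
# Illustrative sketch of compile-and-run via javac/java; file and class
# names are placeholders.
import subprocess

def compile_and_run(source_path, class_name):
    """Compile a Java source file; return compiler errors, or program output."""
    compile_proc = subprocess.run(["javac", source_path],
                                  capture_output=True, text=True)
    if compile_proc.returncode != 0:
        # Compilation errors are shown to the user (cf. Fig 9).
        return compile_proc.stderr
    # No errors: run the generated .class file with the java command.
    run_proc = subprocess.run(["java", class_name],
                              capture_output=True, text=True)
    return run_proc.stdout or run_proc.stderr

print(compile_and_run("HelloWorld.java", "HelloWorld"))
```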


Fig 10. Compilation and execution output

Fig 11. Compilation and execution of results through a mobile phone

7.6 Real Time Collaboration


Real time collaboration provides various users with the ability to modify the same file at the same time and also view the changes made by others in real time.

Fig 12 Real Time Collaboration by different users using different browsers (Mozilla Firefox and Google Chrome)

7.7 Sharing Projects and Downloading Files


This IDE enables users to share their files and projects with others. To share a project with another user, the email id of that user has to be specified along with the project to be shared, as shown in Fig 13. Only users who have all access to a particular file can download that file, as shown in Fig 14.


Fig 13 Share File option

Fig 14 Download option for the users by specifying the extension

7.8 Integrated Forums and Blogs


The forums can be accessed by all the users of the IDE after logging into it as shown in Fig 15. Any user is allowed to ask questions or answer them as shown in Fig. 16.

Fig 15 Forums Home Page

Fig 16 View the questions and answers in detail and also post new answers to that question.

Another feature that the IDE provides is that of integrated technical blogs, as shown in Figs. 17 and 18. All users, after login, can avail this integrated blogs facility.


Fig 17 Blog Post titles Page

Fig 18 Opening blog post to view, edit or delete it.

VIII. CONCLUSION

This paper describes the implementation of a browser-based IDE to code in the cloud. The IDE is software provided as a service, mainly for writing, compiling and running Java programs. The IDE also has integrated forum and blog facilities, along with additional features of sharing, downloading and real-time collaboration. This IDE eliminates the need to have software like the JDK installed on the device that developers use for writing their Java programs. Also, it can be accessed from anywhere, at any time, and with various devices like desktops, laptops and smart phones that have a web browser and an Internet connection.

IX. FUTURE SCOPE

The Browser Based IDE currently supports programming in the Java language, with the additional features of real-time collaboration and integrated forums and blogs. The IDE can be extended to support many more languages and features in the future, such as support for advanced Java (J2EE), support for other programming languages like PHP, C#, etc., syntax highlighting, automatic code completion, a code-to-UML-diagram converter to understand legacy code, and a UML-diagram-to-code converter.

REFERENCES
[1] Adam Jimenez, Shift-Edit the online IDE, http://shiftedit.net/
[2] Richelle Charmaine G, Audrey Elaine G., Marc Anthony M., Thesis on An architecture of web based IDE, De La Salle University Manila, netcentric.dlsu.edu.ph/CtrlSpace/DOC/MAIN/Main_Document.pdf
[3] Cloud tweaks, Joyent and cloud9 partner to provide ready to deploy node js ide, http://www.cloudtweaks.com/2011/07/joyent-and-cloud9-partner-to-provide-ready-to-deploy-node-js-ide/
[4] Gilad Khen, Dan-El Khen and Alon Weiss, http://www.coderun.com
[5] Hugo Bruneliere, Hands on with eclipse orion, http://jaxenter.com/hands-on-with-eclipse-orion-44900.html
[6] Joel Spolsky, Jeff Atwood, Good online IDEs for C++, http://stackoverflow.com/questions/6924600/good-online-ides-for-c
[7] SD Times Newswire, eXo cloud IDE gives developers an on-ramp to VMware cloud foundry platform as a service, http://www.sdtimes.com/link/35860
[8] Simon Slangen, The top three browser based IDEs to code in the cloud, http://www.makeuseof.com/tag/top-3-browser-based-ides-code-cloud-2
[9] Andrew McCombs, Engineering the Cloud, www.uwplatt.edu/csse/.../Engineering_the_Cloud_mccombsa.ppt
[10] Mrs Lakshmi M. Gadhikar, Lavanya Mohan, Megha Chaudhari, Pratik Sawant, Yogesh Bhusara, Design paper of browser Based IDE to Code in the Cloud, International conference, IIMT, Goa, 7/4/12.
[11] php-webmaster@lists.php.net, http://php.net/manual/en/function.fopen.php

[12] Xaprb, What Does Devnull 21 Mean, http://www.xaprb.com/blog/2006/06/06/what-does-devnull-21-mean/

AUTHORS
Lakshmi M. Gadhikar is Associate Professor in Information Technology department of Fr.CRIT College, Vashi, Navi Mumbai. She completed her M.E. in Information Technology from Goa University. Her Current areas of interest are Parallel Processing , Distributed Computing, Cloud Computing and Middleware.

Deepa Vincent is Assistant Professor in Information Technology department of Fr.CRIT College, Vashi, Navi Mumbai. She completed her B.E. in Electronics and Communication from Calicut University and pursuing M.E from Mumbai University. Her current areas of interest are automation in power electronics and drives and cloud computing.

Lavanya Mohan is a Quality Analyst trainee at ThoughtWorks University, Bangalore, India. She completed her B.E. in Information Technology from Mumbai University. Her current areas of interest are Cloud Computing, Software Testing and Debugging, and Agile Technology.


HAMMING DISTANCE BASED COMPRESSION TECHNIQUES WITH SECURITY


Atul S. Joshi1, Prashant R. Deshmukh2
1 Associate Professor, Department of Electronics and Telecommunication Engineering, Sipna College of Engineering and Technology, Amravati, Maharashtra State, India
2 Professor & Head, Department of Computer Science and Engineering, Sipna College of Engineering and Technology, Amravati, Maharashtra State, India

ABSTRACT
The proposed algorithm provides lossless data compression with security. The input bit stream is divided into groups of 8 bits each. The encoder deviates the available randomly generated key according to the input bit stream, which results in encrypted data. The encrypted key of 64 bits is treated as units of 4 bits each. Each 4-bit unit of the deviated key is at a Hamming distance of two from the corresponding unit of the key. There are a total of six 4-bit numbers at a Hamming distance of two from a given 4-bit unit, and these numbers can be indexed using 3 bits. The 3-bit index of a number is placed on the channel as the compressed bit stream. The proposed algorithm encrypts the file, compresses it, decompresses it, and finally decrypts it back to the original file. The algorithm does not require prior knowledge of the data to be compressed. The algorithm reverses the usual order of compression and encryption (i.e., encryption prior to compression) without compromising compression efficiency or information-theoretic security, and with lower computational cost. The proposed algorithm is suitable for images and audio as well as text.

KEYWORDS: Compression, Decompression, Encryption, Decryption, Key

I. INTRODUCTION

Developing technology has increased the need for storing data. Several applications in the field of multimedia [1] have drawn considerable attention to data compression as a means to conserve bandwidth. With the increasing amount of data stored on computers, the need for security in transmission has also drawn attention to encryption. Compression aids encryption by reducing the file size: the compression scheme shortens the input file, which shortens the output file and reduces the amount of CPU time required by the encryption algorithm, so even if there were no enhancement of security, compression before encryption would be worthwhile. However, concerning compression after encryption it is stated: if an encryption algorithm is good, it will produce output which is statistically indistinguishable from random numbers, and no compression algorithm will considerably compress random numbers [2]. On the other hand, if a compression algorithm succeeds in finding a pattern to compress in an encryption's output, then a flaw in that algorithm has been found. This algorithm reverses the order of compression and encryption (i.e., encryption prior to compression) without compromising compression efficiency or information-theoretic security. The proposed algorithm is designed to achieve both compression and confidentiality by using symmetric keys, and is suitable for images and videos as well as text. We develop a framework based on Hamming distance for joint encryption and compression. Distributed source coding [3] has emerged as an alternative to achieve low-complexity compression for correlated sources. Johnson et al. proved that reversing the order of compression and encryption to compress the encrypted data can still achieve significant compression [4]. The computational cost of the present work is also lower. The paper is organized as follows: Section 2 discusses related work in this area. Section 3 presents the proposed scheme. Section 4 describes the evaluation methodology along with comparison results with existing methods. Future scope is described in Section 6.

II. RELATED WORK

The wavelet predictive algorithm performs well for relatively small data sets; if a high degree of image compression is to be achieved, the wavelet algorithm closely approximates the original data sets. However, the algorithm is of no use for text compression because there is no underlying deterministic process in natural language text [5]. The Huffman procedure proposed by Wolfe and Chanin [6] creates an optimal code for a set of symbols and probabilities, subject to the constraint that one symbol is coded at a time. It is very effective, as both the frequency and the probability of occurrence of the source symbols are taken into account; but since the tree progressively grows sparse, it results in a lengthy search procedure for locating a symbol. Daniel Hillel Schonberg [7] presents practical distributed source codes that provide a framework for the compression of encrypted data; since encryption masks the source, traditional compression algorithms are ineffective. M. A. Haleem, K. P. Subbalakshmi and R. Chandramouli [8] proposed a joint encryption and compression scheme; it reduces the complexity of the compression process and at the same time uses cryptographic principles to ensure security. V. K. Govindan and B. S. Shajee Mohan [9] proposed a better encoding scheme which offers a higher compression ratio and better security against all possible kinds of attack; this compression algorithm transforms the text into some intermediate form which can be compressed with better efficiency. Dictionary-based encoding techniques suggested by H. Lekatsas and J. Henkel [10] provide good compression efficiency as well as a fast decompression mechanism. The basic idea is to take advantage of commonly occurring instruction sequences by using a dictionary; repeated occurrences are replaced by codewords that point to the index of the dictionary [11].

III. PROPOSED SCHEME

Notation:
X - input binary data of 8 bits
Y - randomly generated binary key of 8 bits
ZE - encrypted binary output
ZC - compressed binary output
WC - decompressed binary output
WD - decrypted binary output

X = {Xi} and Y = {Yi}, where i = 0 to 7. The encoder deviates each 2-bit pair of the key, (Y1 Y0), (Y3 Y2), (Y5 Y4) and (Y7 Y6), according to the input bit stream, giving the deviated key Y' = {Y'i}:

ZE = Y' = {Y'i} ..........(1)

with Hd[(Y'1 Y'0), (Y1 Y0)] = 1, Hd[(Y'3 Y'2), (Y3 Y2)] = 1, Hd[(Y'5 Y'4), (Y5 Y4)] = 1 and Hd[(Y'7 Y'6), (Y7 Y6)] = 1. Thus

Hd[(Y'3 Y'2 Y'1 Y'0), (Y3 Y2 Y1 Y0)] = Hd[(Y'7 Y'6 Y'5 Y'4), (Y7 Y6 Y5 Y4)] = 2.

Exactly six 4-bit values lie at Hamming distance two from a given 4-bit key unit, so each deviated nibble can be indexed by a 3-bit codeword from the set S = {Si} = {001, 010, 011, 100, 101, 110}. The compressed output is the pair of indices

ZC = (Z'C, Z''C), where Z'C, Z''C ∈ S ..........(2)

Decompression recovers the deviated key nibbles from the indices and the key:

WC = (W'C, W''C), where W'C = f[Z'C, (Y3 Y2 Y1 Y0)] = (Y'3 Y'2 Y'1 Y'0) and W''C = f[Z''C, (Y7 Y6 Y5 Y4)] = (Y'7 Y'6 Y'5 Y'4) ..........(3)

Finally, decryption compares the recovered deviated key with the original key to restore the input:

WD = {WDi}, where each WDi is obtained from Y'i and Yi such that WDi = Xi; thus WD = X ..........(4)
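The indexing step above rests on a simple combinatorial fact: a 4-bit value has exactly C(4,2) = 6 neighbours at Hamming distance two, so 3 bits suffice to identify which one occurred. The Python sketch below illustrates this counting argument; the variable names and example values are ours, not the paper's:

```python
# Enumerate the six 4-bit values at Hamming distance two from a key nibble
# and index them with 3 bits, as the scheme above requires.
from itertools import combinations

def neighbours_at_distance_two(nibble):
    """Return the six 4-bit values at Hamming distance 2 from 'nibble'."""
    return [nibble ^ (1 << i) ^ (1 << j) for i, j in combinations(range(4), 2)]

key_nibble = 0b1010
S = neighbours_at_distance_two(key_nibble)
assert len(S) == 6                 # six candidates -> a 3-bit index suffices

deviated = 0b0110                  # example deviated-key nibble
index = S.index(deviated)          # 3-bit index sent on the channel
recovered = S[index]               # decoder rebuilds the same list from the key
assert recovered == deviated
print(f"index = {index:03b}")      # 3 bits transmitted instead of 4
```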

IV. EVALUATION METHODOLOGY OF PROPOSED WORK

In the proposed joint encryption and compression scheme, the MATLAB tool is used for simulation. Text, audio and images are provided as input to the encoder. Another input to the encoder is a binary key of 64 bits generated through a pseudorandom generator; it is transmitted to the receiver through a secure channel. The decoder, which is a joint decryptor and decompressor, reconstructs the data. Compression ratio and security performance are tested.

4.1 DEGREE of SECURITY


The key size and the randomness of the encrypted plaintext are two major factors in analyzing the degree of security. The amount of time required to break a cryptosystem can be measured by T = 2^(k-1) × t, where k is the size of the encryption key and t is the amount of time needed for one encryption of the plaintext; for the 64-bit key used here, an attacker must on average try 2^63 keys. In the proposed algorithm the key size is 64 bits, and key generation is random and independent of the input bit stream. A symmetric key [12] is used to process the data, resulting in fast encryption compared to asymmetric-key algorithms. The technique used for encrypting the input data with the key is not the same for all chunks; the encryption method changes depending on the parity of the input bit stream sequence. This gives the proposed algorithm good encryption strength.

4.2 COMPRESSION RATIO & SAVING IN COMPUTATION


The speed of both compression and decompression is important. Implementation of this algorithm achieves an average compression ratio of about 70%, with savings in compression and decompression time [13]. In most other algorithms, data undergoes initial key addition and substitution, and each round requires row shifting and column mixing operations followed by the addition of a round key and substitution. In the proposed scheme, compression is based on the Hamming distance of two between the key and the encrypted data. A total of six numbers at a Hamming distance of two are indexed using three bits, and these bits are used for compression. In short, since the algorithm has to search among only six numbers at a time, its computational cost is low. The bar chart given below is drawn on the basis of the observed compression-ratio differences between the various algorithms.

4.3 COMPARISON of RESULTS


Figure 1: Percentage Compression of Various Algorithms

The Y-axis reflects the percentage compression of the file after converting it to a binary bit stream and applying the algorithm, while the X-axis reflects the bit size. Results are taken for a key size of 64 bits. Each bar has a unique color in order to identify the algorithm: blue represents our proposed algorithm, red represents the WinZip algorithm, and green represents the WinRar algorithm. The text input of 40976 bits is taken from the file Geeta in English. The audio input of 71062 bits is taken from the Audio Word Wave file. The stored image input of 186384 bits is taken from a Flower image. The fourth input of 1399824 bits is taken from a Photo Gallery. For the first input we obtained 62% compression, whereas WinZip and WinRar provide 88.02% and 83.34% respectively. For the second input we obtained 72.01% compression, whereas WinZip and WinRar provide 98% and 98.16% respectively. For the third input our algorithm provides compression of 72%, whereas WinZip and WinRar provide 89% and 89.42% respectively. For the fourth input our algorithm provides compression of 71.06%, whereas WinZip and WinRar provide 96.08% and 94.02% respectively.

V. CONCLUSION

The proposed work, a symmetric-key joint encryption and compression algorithm, provides lossless data compression with security. Reversing the order of encryption and compression does not affect compression efficiency or information-theoretic security. Computational complexity is lower compared to other algorithms. On average, compression of about 69.026% (approximately 70%) is achieved; the proposed scheme shows better performance compared to WinZip and WinRar.

VI. FUTURE SCOPE

The authors believe that there are additional research opportunities in this work with variable key sizes and with variable chunk sizes of the input binary bit stream.

ACKNOWLEDGEMENT
First of all I would like to record my immense gratitude toward Respected Supervisor Dr. Prashant Deshmukh whose guidance and conclusive remarks had a remarkable impact on my work. I am also indebted to all my colleagues who supported me during this work. Last but not least, as always, I owe more than I can say to my exceptionally loving Guru Achyut Maharaj, my Parents, my wife & daughter Adya whose support pave every step of my way.

REFERENCES
[1]. C.-P. Wu and C.-C. J. Kuo, Efficient multimedia encryption via entropy codec design, in Security and Watermarking of Multimedia Contents III, vol. 4314 of Proceedings of SPIE, pp. 128-138, San Jose, Calif, USA, January 2001.

[2]. A. Hauter, M.V.C., R. Ramanathan, Compression and Encryption, CSI 801 Project Fall 1995, December 7, 1995 [cited 10 March 2006]; Available from: http://www.science.gmu.edu/~mchacko/csi801/proj-ckv.html
[3]. Seon-Won Seong & P. Mishra, Bitmask based code compression for embedded systems, IEEE Transactions on Computer Aided Design of Integrated Circuits & Systems, vol. 27, no. 4, April 2008, pp. 673-685.
[4]. S. Shani, B.C. Vemuri, F. Chenc Kapoor, State of art image compression algorithms, October 30, 1997.
[5]. Chairat Rittirong, Yuttapong Rangsanseri & Punya Thitimajshima, Multispectral Image Compression using Median Predictive Coding and Wavelet Transform, in GIS Proc., 1999.
[6]. A. Wolf & A. Chanin, Executing compressed programs on embedded RISC architecture, in Proc. Int. Symp. Micro, 1992, pp. 81-91.
[7]. Daniel Hillel Schonberg, Practical Distributed Source Coding & its Application to the Compression of Encrypted Data, Technical Report No. UCB/EECS-2007-93, July 2007.
[8]. M.A. Haleem, K.P. Subbalakshmi, R. Chandramouli, Joint encryption & compression of correlated sources, EURASIP Journal on Information Security, Jan. 2007.
[9]. Dr. V.K. Govindan, B.S. Shajee Mohan, IDBE: An Intelligent Dictionary Based Encoding Algorithm for Text Data Compression for High Speed Data Transmission, Proceedings of the International Conference on Intelligent Signal Processing, Feb 2004.
[10]. H. Lekatsas, J. Henkel, Design of one cycle decompression hardware, in Proc. Des. Conf. 2002, pp. 34-39.
[11]. C. Castelluccia and A. Francillon, TinyRNG, a cryptographic random number generator for wireless sensor network nodes, in: 5th International Symposium on Modeling and Optimization in Mobile, Ad Hoc, and Wireless Networks, IEEE, New York, NY, USA, 2007.
[12]. Gred E. Keiser, Local Area Networks, Tata McGraw Hill Edition, 1997, pp. 443-497.
[13]. R. L. Dobrushin, An asymptotic bound for the probability error of information transmission through a channel without memory using the feedback, Problemy Kibernetiki, vol. 8, pp. 161-168, 1962.
[14]. Riyad Shalabi and Ghassan Kanaan, Efficient data compression scheme using dynamic Huffman code applied on Arabic, in Journal of Computer Science, Dec 2006.
[15]. Irina Chihaia and Thomas Gross, An analytical model for software-only main memory compression, in Proc. ACM Int. Conf. Series, vol. 68.
[16]. Behrouz Forouzan, Introduction to Data Communication and Networking, Tata McGraw Hill Edition, 1999.
[17]. J. Prakash & C. Sandeep, A simple & fast scheme for code compression for VLIW processors, in Proc. DCC, 2003.
[18]. Montserrat Ros & Peret, A Hamming distance based VLIW/EPIC code compression technique, Proceedings of the International Conference on Compilers, Architecture & Synthesis for Embedded Systems, 2004.
[19]. Chia Wei Lin, Ja Ling Wu & Yuh Jue Chang, Two algorithms for constructing efficient Huffman-code-based reversible variable length codes, IEEE Transactions on Communications, vol. 56, no. 1, January 2008, pp. 81-88.
[20]. E. Celikel and M. E. Dalklc, Computer and Information Sciences - ISCIS 2003, Lecture Notes in Computer Science, Vol. 2869/2003.

AUTHORS INFORMATION
Atul Joshi is currently working as an Associate Professor in the Department of Electronics & Telecommunication Engineering at Sipna College of Engineering & Technology. He is pursuing his PhD in Electronics. His areas of interest are Communication Engineering, Communication Networks & Electronic Circuit Design.

Prashant Deshmukh is currently working as Head of the CMPS & IT Department at Sipna College of Engineering & Technology, Amravati (India). He completed his Ph.D. in the faculty of Electronics Engineering from SGB Amravati University, Amravati (India). His areas of interest are Digital Signal Processing, VLSI Design and Embedded Systems.


ASSOCIATION MODELS FOR PREDICTION WITH APRIORI CONCEPT


Smitha. T1, V. Sundaram2
1 PhD Research Scholar, Karpagam University, Coimbatore; Asst. Prof., Department of Computer Application, SNGIST, N. Paravoor, Kerala, India
2 Director-MCA, Karpagam College of Engineering, Coimbatore, India

ABSTRACT
Data mining techniques have produced various methods for gaining knowledge from vast amounts of data, and different research tools and techniques, such as classification algorithms, decision trees, association rules, etc., are available for bulk amounts of data. Association rules are mainly used in mining transaction data to find interesting relationships between attribute values, and they are a central topic of data mining. Candidate generation is a great challenge for large data with a low support threshold. Through this paper we make a study to show how association rules can be effective with dense data and a low support threshold. The data set used in this paper is real-time data of a certain area; we apply the data set to association rules to predict the chance of a disease hit in that area using the Apriori algorithm. In this paper three different sets of rules are generated from the dataset by applying the Apriori algorithm to it. With the algorithm, we found the relations between the parameters in the database.

KEYWORDS: Apriori algorithm, Association rules, Data mining, item-based partitioning, multidimensional analysis.

I. INTRODUCTION

Association rules discover correlations among data items in a transactional database. The task involves the discovery of rules that satisfy defined thresholds from a tabular database. Here, how often a rule occurs in the database, known as its frequency, is important. Association rule mining is the process of finding frequent sets with minimum support and confidence. The first phase is the support counting phase, where the frequent sets are generated; effective partitioning may help in this process. We also have to create a border set to avoid frequent updating of real-time data. In real-life applications the number of frequent sets is large, and as a result the number of association rules is also very large. We select only the rules of interest for disease prediction in this context. The discovery of frequent item sets with item constraints is also very important. Many data mining algorithms, such as the Apriori algorithm, the Partition algorithm, the Pincer-Search algorithm, the Dynamic Itemset Counting algorithm [2], FP-Tree Growth, etc., are used for the discovery of frequent sets related to association rules. Here we apply the Apriori algorithm to the dataset to find the frequent sets, and with the help of the algorithm we predict the chances of a disease hit in the particular area.

1.1 ASSOCIATION RULE DEFINITION
The basic definition of an association rule states: let A = {l1, l2, l3, ..., ln} be a set of items and T be the transactional database, where each transaction t is a set of items; then t ⊆ A. A transaction t is said to support an item li if li is present in t, and t is said to support a subset of items X ⊆ A if t supports each item in X. X has support s in T, denoted by s(X)T, if s% of the transactions in T support X [4]. The key feature of association rule algorithms is that each method assumes that the underlying database size is enormous; they must require a minimum number of passes over the database and must handle thousands of transactions per second. So, for efficient computing, the problem of mining association rules must be decomposed into subproblems. Association rule mining searches for interesting relationships among items in a given set. Here, the main rule interestingness measures are rule support and confidence, which reflect the usefulness and certainty of discovered rules. An association rule mining algorithm is a two-step process: find all the frequent item sets, and generate strong association rules from the frequent item sets [9]. If a rule concerns associations between the presence or absence of items, it is a Boolean association rule. If a rule describes associations between quantitative items or attributes, then it is known as a quantitative association rule; here the quantitative values of items are partitioned into intervals. Algorithms can be classified based on dimensions, based on the levels of abstraction involved in the rule set, and also based on various extensions to association mining such as correlation analysis [27].

Figure 1. - Architecture of Associative Classifier

1.2 RELATED WORKS IN THIS AREA


Many works related to this area are ongoing. In the article "Item based partitioning approach of soybean data for association rule mining", the authors applied a data mining classification technique to agricultural land soil. The article "A study on effective mining of association rules from huge databases" by V. Umarani [20] aims at finding interesting patterns in databases; the paper also provides an overview of techniques that are used to improve the efficiency of Association Rule Mining (ARM) over huge databases. In another article, "K-means v/s K-medoids: A Comparative Study", Shalini S Singh explained that partition-based clustering methods are suitable for spherical-shaped clusters in medium-sized datasets and also showed that K-means is sensitive to noisy data and outliers [21]. There are many research works on data mining technology in prediction, such as financial stock market forecasting, rainfall forecasting, applications of data mining techniques in health care, base oils biodegradability prediction with data mining techniques, etc. [23].

II. DISCOVERY OF ASSOCIATION RULES

The problem of mining association rules can be decomposed into different subproblems.

First, find the frequent item sets by selecting all the item sets whose support is greater than the minimum support specified by the user; then use the frequent item sets to generate the desired rules. The frequent sets can be determined by the following rule: let T be the transaction database and σ be the user-specified minimum support; then an item set X ⊆ A is said to be a frequent set in T with respect to σ if s(X)T ≥ σ. We cannot establish a definite relationship between the set of maximal frequent sets and the set of border sets [3].

Consider the example of a dataset which contains information on the inhabitants of an area. Some of the attributes included in the database are age, income, education, family history of any disease, sex, environmental condition, area of the house, hygiene, source of water, etc. With the help of an association rule algorithm, we will be able to discover associations and sequences to predict a disease hit in that area; for example, a male person between the ages of 30 and 60, living in an urban area with a poor drinking water facility, has a chance of being hit by typhoid. Each rule has a left-hand side, called the antecedent, and a right-hand side, called the consequent; both sides can contain multiple items. An association rule has two measures, called confidence and support [6]. Let T consist of 1000 records: 250 records contain the value 0 for disease history and 750 contain the value 1 for the same parameter. Similarly, suppose 380 records contain the value 0 for hygiene and 620 contain the value 1 for the same attribute. By applying an association rule algorithm, we will be able to predict what type of people are affected by the disease, i.e., how the attributes are correlated, or whether there is a correlation between the parameters disease history and hygiene in the case of disease prediction. Thus we measure the confidence and support from the dataset. The pruning step eliminates the item sets which are not frequent from being considered for support counting [13]. The Apriori frequent set discovery algorithm uses the functions candidate generation and pruning at every iteration. It moves upward in the lattice starting from level 1 up to level k, where no candidate set remains after pruning [8].

2.1 APRIORI ALGORITHM FOR CANDIDATE GENERATION AND PRUNING

The Apriori frequent set discovery algorithm is also known as the level-wise algorithm, which is used to find all the frequent sets. It uses a bottom-up approach, moving upward level by level in the lattice; at each level the candidate sets are pruned to retain the frequent sets [25]. The candidate generation procedure, given Lk-1, is as follows:

Ck = ∅
for all itemsets l1 ∈ Lk-1 do
  for all itemsets l2 ∈ Lk-1 do
    if l1[1] = l2[1] ∧ l1[2] = l2[2] ∧ ... ∧ l1[k-2] = l2[k-2] ∧ l1[k-1] < l2[k-1] then
      c = l1[1], l1[2], ..., l1[k-1], l2[k-1]
      Ck = Ck ∪ {c} ..........Equ(2)

The pruning step eliminates the extensions of (k-1)-item sets which are infrequent from the support counting [10]. The pruning algorithm is as follows:

Prune(Ck)
for all c ∈ Ck
  for all (k-1)-subsets d of c do
    if d ∉ Lk-1
    then Ck = Ck \ {c} ..........Equ(3)
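A compact sketch of this level-wise procedure, mirroring the join and prune steps above, is given below; the toy transaction data is illustrative only:

```python
# Apriori sketch mirroring the candidate-generation and pruning steps above;
# 'transactions' and 'min_support' are illustrative example values.
from itertools import combinations

transactions = [{2, 3, 5}, {3, 5, 7}, {2, 5, 6, 7}, {5, 6, 7}, {2, 3, 4, 7}]
min_support = 2  # minimum support count

def support(itemset):
    return sum(1 for t in transactions if itemset <= t)

def apriori():
    items = {i for t in transactions for i in t}
    Lk = [frozenset({i}) for i in items if support(frozenset({i})) >= min_support]
    frequent = list(Lk)
    k = 2
    while Lk:
        # Join step: union pairs of (k-1)-itemsets that differ in one item.
        Ck = {a | b for a in Lk for b in Lk if len(a | b) == k}
        # Prune step: every (k-1)-subset of a candidate must be frequent.
        Ck = {c for c in Ck
              if all(frozenset(s) in Lk for s in combinations(c, k - 1))}
        Lk = [c for c in Ck if support(c) >= min_support]
        frequent.extend(Lk)
        k += 1
    return frequent

for itemset in apriori():
    print(sorted(itemset), support(itemset))
```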


III. MODELS USED IN PREDICTIVE ASSOCIATION RULE MINING

Association rules allow analysts to identify behavior patterns with respect to a particular event, whereas frequent item sets are used to find how a group is segmented for a specific set. Clustering is used to find the similarity between entities having multiple attributes and to group similar entities, and classification rules are used to categorize data using multiple attributes [13].

3.1 APRIORI ALGORITHM BY EXAMPLE

We applied our data set to the Apriori algorithm to check its reliability. Initially k := 1. Read the database to count the support of 1-itemsets and find the frequent item sets and their support. Find L1 with k = 1, then set k = 2, perform the candidate generation step and find C2. Apply the pruning step and check whether there is any change in C2. Read the database to count the support of the elements in C2. Then set k to 3 and find C3. Read the database to count the support of the itemsets in C3 to get L3. Find the set of frequent sets along with their respective support values and apply them to the association rules [22].
Table 1: Support counts of the 1-itemsets obtained by reading the database

Itemset   Support count
{1}       2
{2}       6
{3}       6
{4}       4
{5}       8
{6}       5
{7}       7
{8}       4
{9}       2

k := 1. L1 := {{2}->6, {3}->6, {4}->4, {5}->8, {6}->5, {7}->7, {8}->4, {9}->2}; L1 contains 8 elements.
k := 2. Calculate C2 and L2: L2 := {{2,3}->3, {2,4}->3, {3,5}->3, {3,7}->3, {5,6}->3, {5,7}->5, {6,7}->3}.
k := 3. Calculate C3 and L3: C3 = {{3,5,7}, {5,6,7}} and L3 := {{3,5,7}->3}.
k := 4. As L3 contains only one element, the candidate set C4 is empty, so the algorithm can stop.
L := L1 ∪ L2 ∪ L3 ..........Equ(4)

3.2 Generating 1-itemset Frequent Pattern

If the database consists of 900 patterns, calculate the minimum support count: minimum support count = 200, i.e., 200/900 ≈ 22% of the transactions.

Let the minimum confidence required be 70%. We have to find the frequent item sets using the Apriori algorithm and generate the association rules with minimum support and minimum confidence. So scan the data set and count each candidate, then compare each candidate's support count with the minimum support count.
Table 2: Generating the 1-itemset frequent pattern

Itemset   Support count
{I1}      6
{I2}      7
{I3}      6
{I4}      2
{I5}      2

Every candidate in C1 meets the minimum support count, so L1 = C1.

In the first iteration of the algorithm, each item is a member of the set of candidate 1-itemsets. We then generate the 2-itemset frequent pattern.

Step 2: Generating the 2-itemset Frequent Pattern

To discover the set of frequent 2-itemsets, L2, the algorithm uses L1 join L1 to generate a candidate set of 2-itemsets, C2. Next, the transactions in D are scanned and the support count for each candidate itemset in C2 is accumulated. The set of frequent 2-itemsets, L2, is then determined, consisting of those candidate 2-itemsets in C2 having minimum support.
Table 3: Generating the 2-itemset frequent pattern

C2 (candidate 2-itemsets after L1 join L1): {I1,I2}, {I1,I3}, {I1,I4}, {I1,I5}, {I2,I3}, {I2,I4}, {I2,I5}, {I3,I4}, {I3,I5}, {I4,I5}

Support counts after scanning D: {I1,I2}->4, {I1,I3}->4, {I1,I4}->1, {I1,I5}->2, {I2,I3}->4, {I2,I4}->2, {I2,I5}->2

L2 (candidates with minimum support): {I1,I2}->4, {I1,I3}->4, {I1,I5}->2, {I2,I3}->4, {I2,I4}->2, {I2,I5}->2

Step 3: Generating the 3-itemset Frequent Pattern

This step involves the use of the Apriori property. Find C3 by computing L2 join L2:
C3 = L2 join L2 = {{I1, I2, I3}, {I1, I2, I5}, {I1, I3, I5}, {I2, I3, I4}, {I2, I3, I5}, {I2, I4, I5}} ..........Equ(4)
Now the join step is complete and the prune step is used to reduce the size of C3. Based on the Apriori property that all subsets of a frequent itemset must also be frequent, we can determine that the four latter candidates cannot possibly be frequent. Consider {I1, I2, I3}: its 2-item subsets are {I1, I2}, {I1, I3} and {I2, I3}. Since all 2-item subsets of {I1, I2, I3} are members of L2, we keep {I1, I2, I3} in C3. In contrast, {I3, I5} is not a member of L2 and hence not frequent, violating the Apriori property; thus we remove {I2, I3, I5} from C3. Therefore, C3 = {{I1, I2, I3}, {I1, I2, I5}} after checking all members of the join result for pruning. Now the transactions in D are scanned in order to determine L3, consisting of those candidate 3-itemsets in C3 having minimum support [24].

Step 4: Generating the 4-itemset Frequent Pattern

The algorithm uses L3 join L3 to generate a candidate set of 4-itemsets, C4. Although the join results in {{I1, I2, I3, I5}}, this itemset is pruned since its subset {{I2, I3, I5}} is not frequent. Thus C4 = ∅, and the algorithm terminates, having found all of the frequent itemsets. This completes the Apriori algorithm. These frequent itemsets will be used to generate strong association rules which satisfy both minimum support and minimum confidence [22]. Association rules are generated from the frequent itemsets as follows.

Step 5: Generating Association Rules from Frequent Itemsets

For each frequent itemset l, generate all nonempty subsets of l. For every nonempty subset s of l, output the rule s -> (l - s) if support_count(l)/support_count(s) >= min_conf, where min_conf is the minimum confidence threshold. Let the minimum confidence threshold be, say, 70%. The resulting association rules are shown below, each listed with its confidence.
R1: I1 ∧ I2 -> I5, confidence = sc{I1,I2,I5}/sc{I1,I2} = 2/4 = 50% ..........Equ(5). R1 is rejected.
R2: I1 ∧ I5 -> I2, confidence = sc{I1,I2,I5}/sc{I1,I5} = 2/2 = 100% ..........Equ(6). R2 is selected.
R3: I2 ∧ I5 -> I1, confidence = sc{I1,I2,I5}/sc{I2,I5} = 2/2 = 100% ..........Equ(7). R3 is selected.
R4: I1 -> I2 ∧ I5, confidence = sc{I1,I2,I5}/sc{I1} = 2/6 = 33% ..........Equ(8). R4 is rejected.
R5: I2 -> I1 ∧ I5, confidence = sc{I1,I2,I5}/sc{I2} = 2/7 = 29% ..........Equ(9). R5 is rejected.
R6: I5 -> I1 ∧ I2, confidence = sc{I1,I2,I5}/sc{I5} = 2/2 = 100% ..........Equ(10). R6 is selected.
In this way, we have found three strong association rules.
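As a cross-check of the confidence computations R1 through R6, the short sketch below enumerates every rule derivable from the frequent itemset {I1, I2, I5} using the support counts from the worked example; the code and names are illustrative:

```python
# Enumerate rules from the frequent itemset {I1, I2, I5} and keep those
# meeting the 70% confidence threshold, reproducing R1-R6 above.
from itertools import combinations

sc = {frozenset(k): v for k, v in {
    ("I1",): 6, ("I2",): 7, ("I5",): 2,
    ("I1", "I2"): 4, ("I1", "I5"): 2, ("I2", "I5"): 2,
    ("I1", "I2", "I5"): 2}.items()}

l = frozenset({"I1", "I2", "I5"})
min_conf = 0.70

for r in range(1, len(l)):
    for s in map(frozenset, combinations(l, r)):
        conf = sc[l] / sc[s]          # confidence of rule s -> (l - s)
        status = "selected" if conf >= min_conf else "rejected"
        print(f"{set(s)} -> {set(l - s)}: {conf:.0%} ({status})")
```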

IV. RESULT AND DISCUSSION

Three different strong association rules were generated from the data set by applying the Apriori algorithm. The study reveals that there are certain associations between different parameters in the database, such as age, sex, environmental conditions and humidity, for the prediction of disease in an area. The study predicts that male persons between the ages of 30 and 60 living in poor environmental conditions have a tendency to be hit by the contagious disease. The study also reveals that family history of the disease is not an important factor for contracting a contagious disease.

V. FUTURE ENHANCEMENT

The same mining technique can also be applied to the data set without the candidate generation process. The candidate generation process requires repeated database scans; to avoid these costly scans, a frequent-pattern (FP) tree structure can be used. The same algorithm can also be applied to different datasets.

REFERENCES
[1]. Arijay Chaudhry and Dr. P.S. Deshpande, Multidimensional Data Analysis and Data Mining, Black Book.
[2]. Oulbourene G, Coenen F and Leng P, Algorithms for Computing Association Rules using a Partial Support Tree, Knowledge Based Systems 13 (2000), pp. 141-149.
[3]. R. Agarwal, T. Imielinski and A. Swamy, Mining Association Rules between Sets of Items in Large Databases, in ACM SIGMOD International Conference on Management of Data.
[4]. en.wikipedia.org/wiki/Data_mining
[5]. David Hand, Heikki Mannila, Padhraic Smyth, Principles of Data Mining.
[6]. Smitha.T, Dr. V. Sundaram, Case Study on High Dimensional Data Analysis using Decision Tree Model, International Journal of Computer Science Issues, Vol. 9, Issue 3, May 2012.
[7]. Smitha.T, Dr. V. Sundaram, Classification Rules by Decision Tree for Disease Prediction, International Journal of Computer Applications, Vol. 43, No. 8, April 2012.

[8]. Smitha.T, Dr. V. Sundaram, Knowledge Discovery from Real Time Database using Data Mining Technique, IJSRP, Vol. 2, Issue 4, April 2012.
[9]. Hyndman R and Koehler A, Another Look at Measures of Forecast Accuracy, 2005.
[10]. S. Weng, C. Zhang, Z. Lin, X. Zhang, Mining the Structural Knowledge of High-Dimensional Medical Data using Isomap.
[11]. Bhattachariee. A, Classification of Human Lung Carcinomas by mRNA Expression Profiling Reveals Distinct Adenocarcinoma Subclasses, Proc. Nat. Acad. Sci. USA, 98, pp. 13790-13795; Blake, C. L. and Merz, C. J., 2001.
[12]. Borg. T and Groenen. P, Modern Multidimensional Scaling: Theory and Application, Springer-Verlag, New York, Berlin, Heidelberg, 1997.
[13]. Adomavicius G, Tuzhilin A, 2001, Expert-driven Validation of Rule-based User Models in Personalization Applications, Data Mining and Knowledge Discovery 5(1/2): 33-58.
[15]. Shekar B, Natarajan R, A Transaction-based Neighbourhood-driven Approach to Quantifying Interestingness of Association Rules, Proc. Fourth IEEE Int. Conf. on Data Mining (ICDM 2004), Washington, DC: IEEE Computer Society Press, pp. 194-201.
[16]. Mohammed J. Zaki, Srinivasan Parthasarathy, Mitsunori Ogihara, and Wei Li, Parallel Algorithms for Discovery of Association Rules, Data Mining and Knowledge Discovery: An International Journal, special issue on Scalable High-Performance Computing for KDD, 1(4): 343-373, December 2001.
[17]. Refaat, M., Data Preparation for Data Mining Using SAS, Elsevier, 2007.
[18]. El-taher, M., Evaluation of Data Mining Techniques, M.Sc. thesis (partial fulfillment), University of Khartoum, Sudan, 2009.
[19]. Lee, S and Siau, K., A Review of Data Mining Techniques, Journal of Industrial Management & Data Systems, Vol. 101, No. 1, 2001, pp. 41-46.
[20]. Moawia Elfaki Yahia, Murtada El-mukashfi El-taher, A New Approach for Evaluation of Data Mining Techniques, IJCSI International Journal of Computer Science Issues, Vol. 7, Issue 5, September 2010.
[21]. V. Umarani, A Study on Effective Mining of Association Rules from Huge Databases, IJCSR International Journal of Computer Science and Research, Vol. 1, Issue 1, 2010.
[22]. Shalini S Singh, K-means v/s K-medoids: A Comparative Study, National Conference on Recent Trends in Engineering & Technology, May 2011.
[23]. C. Márquez-Vera, Predicting School Failure Using Data Mining, IJCSI International Journal of Computer Science Issues, Vol. 7, Issue 5, September 2010.
[24]. K. Srinivas et al., Applications of Data Mining Techniques in Healthcare and Prediction of Heart Attacks, International Journal on Computer Science and Engineering, Vol. 02, No. 02, 2010, pp. 250-255.
[25]. Smitha.T, Dr. V. Sundaram, Comparative Study of Data Mining Algorithms for High Dimensional Data Analysis, International Journal of Advances in Engineering & Technology, Vol. 4, Issue 2, ISSN 2231-1963, Sept 2012, pp. 173-178.
[26]. Arun K Pujari, Data Mining Techniques.
[27]. Jie Tang, Hang Li, Yunbo Cao and Zhaohui Tang, 2005, Email Data Cleaning, KDD'05, Chicago, USA.
[28]. G. SenthilKumar, Online Message Categorization using Apriori Algorithm, International Journal of Computer Trends and Technology, May-June Issue 2011.
[29]. Han, J. and M. Kamber, 2001, Data Mining: Concepts and Techniques, Morgan Kaufmann Publishers.

AUTHORS BIOGRAPHY
Smitha. T: She acquired her Post Graduate Degree in Computer Application and M.Phil in Computer Science from M. K. University. She is now doing her PhD in Computer Science at Karpagam University under Dr. V. Sundaram. She has 10 years of teaching experience and 4 years of industrial and research experience. She has attended many national and international conferences and workshops and presented many papers on data mining, and she has also published many articles on data mining techniques in international journals with high impact factor. She is now working as an Asst. Professor in the MCA Department of Sree Narayana Guru Institute of Science and Technology, N. Paravoor, Kerala. Her areas of interest are Data Mining and Data Warehousing.
V. Sundaram: He is a postgraduate in Mathematics with a PhD in Applied Mathematics. He has 45 years of teaching experience in India and abroad and is guiding more than 15 scholars in PhD and M.Phil at Karpagam and Anna University. He has organized and presented more than 40 papers in national as well as international conferences and has many publications in international and national journals. He is a life member of many associations. His areas of specialization include Fluid Mechanics, Applied Mathematics, Theoretical Computer Science, Data Mining, and Networking.


A STUDY OF MULTIPLE HUMAN TRACKING FOR VISUAL SURVEILLANCE


Shalini Agarwal, Shaili Mishra
Department of CS, Banasthali University, Rajasthan

ABSTRACT
Visual surveillance has become a very active research topic in computer vision. This paper deals with the problem of detecting and tracking multiple moving people against a static background. Detection of foreground objects is done by background subtraction. Tracking multiple humans in complex situations is challenging; in our approach, the difficulties are tackled with appropriate knowledge in the form of various models. Human motion is decomposed into its global motion and limb motion. Our objective in this paper is to segment multiple human objects and track their global motion in complex situations where they may move in small groups, have inter-occlusions, cast shadows on the ground, and where reflections may exist.

KEYWORDS: Background subtraction method, blobs, optical flow, multiple-human segmentation, multiple-human tracking, human locomotion model.

I. INTRODUCTION

Automatic visual surveillance in dynamic scenes has recently received considerable interest from researchers. Technology has reached a stage where mounting a video camera is cheap, leading to widespread deployment of cameras in public and private areas. It is very costly for an organization to have its surveillance done by humans. Besides cost, other factors such as accuracy and negligence make manual surveillance inappropriate, so automatic visual surveillance has become inevitable in the current scenario. It allows us to detect unusual events in the scene and draw the attention of security officers to take preventive actions. The purpose of visual surveillance is not to replace human skill and intuition but to assist humans in the smooth running of the security system. The object can be represented as:
- Points: The object is represented by a point, that is, the centroid (Figure 1(a)). In general, the point representation is suitable for tracking objects that occupy small regions in an image.
- Primitive geometric shapes: Object shape is represented by a rectangle, ellipse (Figure 1(c), (d)), etc. Though primitive geometric shapes are more suitable for representing simple rigid objects, they are also used for tracking non-rigid objects.
- Object silhouette and contour: Contour representation defines the boundary of an object (Figure 1(g), (h)). The region inside the contour is called the silhouette of the object (see Figure 1(i)). Silhouette and contour representations are suitable for tracking complex non-rigid shapes.
- Articulated shape models: Articulated objects are composed of body parts that are held together with joints. For example, the human body is an articulated object with torso, legs, hands, head, and feet connected by joints. In order to represent an articulated object, one can model the constituent parts using cylinders or ellipses as shown in Figure 1(e).
- Skeletal models: Object skeleton can be extracted by applying the medial axis transform to the object silhouette. This model is commonly used as a shape representation for recognizing objects. Skeleton representation can be used to model both articulated and rigid objects (Figure 1(f)) [1][2].

It is difficult to obtain a background model from video because the background keeps changing due to factors such as illumination and shadow [3], so a static background is assumed. The well-known background subtraction method is used for detecting moving objects, because it gives the maximum number of moving pixels in a frame. Object tracking methods can be divided into four groups:
- Region-based tracking
- Active-contour-based tracking
- Feature-based tracking
- Model-based tracking

Tracking is not easy because of several problems that commonly occur. The occlusion handling problem, i.e., overlapping of moving blobs, has to be dealt with carefully [6][7]. Other problems such as lighting conditions, a shaking camera, shadow detection, and the similarity of people in shape, color and size also pose a great challenge to efficient tracking.

Fig 1: Object Representation

The rest of the paper is organized as follows: Section II gives a survey of techniques used for human tracking in surveillance systems. Section III presents the theoretical background of the tracking system. Section IV presents some of the problems that occur in existing technologies and the problem formulation. Section V presents the solution approach. In Section VI we address the problem of occlusion in multiple-human tracking. Conclusion and future work are given in Section VII.

II. RELATED WORK

Most of the work on tracking for visual surveillance is based on change detection [44][36][40][15][13][11][21][38] or frame differencing [23] if the camera is stationary. Additional stabilization is required if the camera is mobile [7][42]. These methods usually infer global motion only and can be roughly grouped as follows:
- Perceptual grouping techniques are used to group the blobs in the spatio-temporal domain, as in Cohen and Medioni [7] and Kornprobst and Medioni [20]. However, these methods still suffer from the deficiencies of blob-based analysis discussed earlier.
- In Lipton et al. [23], a moving blob is classified into a single human, multiple humans or a vehicle according to its shape. However, the positions of the people in a multi-human blob are not inferred.
- Some work (Rosales and Sclaroff [36], Elgammal and Davis [11], and McKenna et al. [25], etc.) assumes people are isolated when they enter the scene so that an appearance model can be initialized to help in tracking when occlusion happens. These methods cannot be applied where a few people are observed walking together in a group.
- Some methods try to segment multiple people in a blob. The W4 system [15] uses blob vertical projection to help segment multiple humans in one blob. It only applies to data where multiple people are distributed horizontally in the scene (stepping on one's head does not happen, usually from a ground-level camera). It handles shadows by use of stereo cameras [14]. Siebel and Maybank [38] extend the Leeds human tracker [1] by the use of a head detection method similar to the approach taken in our system.
- Tao et al. [41] and Isard and MacCormick [18] track multiple people using the CONDENSATION algorithm [17]. The system in [18] also uses a human shape model and the constraints given by camera calibration. It does not involve any object-specific representation; therefore, the identities of humans are likely to be confused when they overlap. Besides, the performance of a particle filter is limited by the dimensionality of the state space, which is proportional to the number of objects.

Other related work includes Tao et al. [42], which uses a dynamic layer representation to track objects. It combines compact object shape, motion, and appearance in a Bayesian framework; however, it does not explicitly handle occlusion of multiple objects since it was designed mainly for airborne video. Much work has been done on estimating human body postures in the context of video motion capture (a recent review is available in [26]). This problem is difficult, especially from a single view, because 3D pose may be under-constrained from one viewpoint. Most successful systems (e.g., [9]) employ multiple viewpoints, good image resolution, and heavy computation, which is not always feasible for applications such as video surveillance. Use of constrained motion models can reduce the search space, but it only works on the type of motion defined in the model. Rohr [35] describes pioneering work on motion recognition using motion-captured data. In each frame, the joint angle values are searched for on the motion curves of a walking cycle. Results are shown only on an isolated human walking parallel to the image plane. Motion subspace is used in Sidenbladh et al. [37] to track human walking using a particle filter. Both [35] and [37] operate in an online mode. Bregler [4] uses HMMs (hidden Markov models) to recognize human motion (e.g., running), but the recognition is separated from tracking. Brand [3] maps 2D shadows into 3D body postures by inference in an HMM learnt from 3D motion-captured data, but the observation model is for isolated objects only. In Krahnstover et al. [21], human tracking is treated as an inference problem in an HMM; however, this approach is appearance-based and works well only for the viewpoints for which the system was trained. For motion-based human detection, motion periodicity is an important feature since human locomotion is periodic; an overview of these approaches is given in [8]. Some of the techniques are view-dependent, and usually require multiple cycles of observation. It should be noted that the motion of human shadow and reflection is also periodic. In Song et al. [39], human motion is detected by mapping the motion of some feature points to a learned probabilistic model of the joint position and velocity of different body features; however, joints are required to be detected as features. Recently, an approach similar to ours has been proposed by Efros et al. [10] to recognize actions. It is also based on flow-based motion description and temporal integration.

III. THEORETICAL BACKGROUND

3.1 Object Segmentation
Most of the work on foreground object segmentation is based on three basic methods: frame differencing, background subtraction, and optical flow. Only background subtraction requires modeling of the background; it is faster than the other methods and can extract the maximum number of feature pixels. Our system uses a hybrid of frame differencing and background subtraction for effective foreground segmentation. A considerable amount of work has been done on modeling dynamic backgrounds; researchers usually use a Gaussian, a mixture of Gaussians, a kernel density function, or temporal median filtering to model the background. We assume that surveillance takes place in a scene with a static background. Object extraction, i.e., foreground segmentation, is done by background subtraction: object detection is achieved by building a representation of the scene, called the background model, and then finding deviations from the model for each incoming frame. Any significant change in an image region from the background model signifies a moving object. Usually, a connected component algorithm is applied to obtain connected regions corresponding to the objects. This process is referred to as background subtraction [30].
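As a concrete illustration of this process, the following minimal Python sketch (using OpenCV; the function name, threshold, and minimum-area values are illustrative assumptions, not taken from the paper) extracts connected foreground regions from a grayscale frame given a background image:

import cv2

def segment_foreground(frame, background, thresh=30, min_area=200):
    """Return bounding boxes of moving objects via background subtraction."""
    diff = cv2.absdiff(frame, background)            # deviation from the background model
    _, mask = cv2.threshold(diff, thresh, 255, cv2.THRESH_BINARY)
    # Connected components give one region per candidate object
    n, labels, stats, _ = cv2.connectedComponentsWithStats(mask)
    boxes = []
    for i in range(1, n):                            # label 0 is the background
        x, y, w, h, area = stats[i]
        if area >= min_area:                         # discard small noise blobs
            boxes.append((x, y, w, h))
    return boxes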


3.2 Background Subtraction
Background subtraction is a computational vision process for extracting foreground objects from a particular scene. A foreground object can be described as an object of attention; isolating it reduces the amount of data to be processed and provides important information to the task under consideration. Often, the foreground object can be thought of as a coherently moving object in a scene. We must emphasize the word coherent here: if a person is walking in front of moving leaves, the person forms the foreground object, while the leaves, though they have motion associated with them, are considered background due to their repetitive behavior. In some cases, the distance of a moving object can also make it part of the background; e.g., if one person is close to the camera while another is far away, the nearby person is considered foreground while the distant one is ignored due to its small size and the little information it provides [35][36]. Identifying moving objects in a video sequence is a fundamental and critical task in many computer vision applications. A common approach is to perform background subtraction, which identifies moving objects as the portion of a video frame that differs from the background model.
3.2.1 Background Subtraction Algorithms
Most background subtraction algorithms follow the simple flow diagram shown in Fig. 2.
3.2.1.1 Pre-processing
Frame preprocessing is the first step of a background subtraction algorithm. Its purpose is to prepare the video by removing noise and unwanted objects from each frame, in order to increase the amount of information gained from the frame and the sensitivity of the algorithm. Preprocessing is a collection of simple image processing tasks that change the raw input video into a format that can be processed by the subsequent steps. Preprocessing of the video is necessary to improve the detection of moving objects; for example, spatial and temporal smoothing can remove snow from the video, and small moving objects such as moving leaves on a tree can be removed by morphological processing of the frame after the objects are identified [37][39].

Fig 2: Flow diagram of a generic background subtraction algorithm

Another key issue in preprocessing is the data format used by the background subtraction algorithm. Most algorithms handle luminance intensity, which is one scalar value per pixel. However, color images, in either the RGB or the HSV color space, are becoming more popular in background subtraction algorithms. Six operations can be performed: 1. addition; 2. subtraction; 3. multi-image averaging; 4. multi-image modal filtering; 5. multi-image median filtering; 6. multi-image averaging filtering.
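The following illustrative numpy snippet (the array names and the synthetic frame stack are ours) shows how several of these multi-image operations act on a stack of grayscale frames:

import numpy as np

# A synthetic (T, H, W) stack of grayscale frames as a stand-in for real video
frames = np.random.randint(0, 256, (50, 240, 320)).astype(np.float32)

added      = frames[0] + frames[1]        # 1. addition
subtracted = frames[1] - frames[0]        # 2. subtraction
mean_img   = frames.mean(axis=0)          # 3. multi-image averaging
# 4. multi-image modal filtering would take the per-pixel mode over axis 0
median_img = np.median(frames, axis=0)    # 5. multi-image median filtering
run_avg    = 0.9 * mean_img + 0.1 * frames[-1]  # 6. one interpretation of an averaging filter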

3.2.1.2 Background Modeling
Background modeling and subtraction is a core component of motion analysis. The central idea behind such a module is to create a probabilistic representation of the static scene that is compared with the current input to perform subtraction. Background modeling is at the heart of any background subtraction algorithm; it uses each new video frame to calculate and update a background model. Background modeling techniques can be classified into two main categories, non-recursive and recursive [37][39][41]. A sketch of the two model families is given after this list.
1) Non-recursive techniques: A non-recursive technique uses a sliding-window approach for background estimation. It stores a buffer of the previous video frames and estimates the background image based on the temporal variation of each pixel within the buffer. Non-recursive techniques are highly adaptive, as they do not depend on history beyond the frames stored in the buffer. On the other hand, the storage requirement can be significant if a large buffer is needed to cope with slow-moving traffic. Commonly used non-recursive techniques are the median filter, the linear predictive filter, and frame differencing.
2) Recursive techniques: Recursive techniques do not maintain a buffer for background estimation. Instead, they recursively update a single background model based on each input frame; as a result, input frames from the distant past exert a diminishing influence on the current background model. Compared with non-recursive techniques, recursive techniques require less storage, but any error in the background model can linger for a much longer period of time.
3) Foreground detection: Foreground detection compares the input video frame with the background model and identifies candidate foreground pixels from the input frame: the pixels that cannot be adequately explained by the background model are output as a binary candidate foreground mask.
4) Data validation: Data validation examines the candidate mask, eliminates those pixels that do not correspond to actual moving objects, and outputs the final foreground mask.
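A minimal sketch of the two model families follows; the class names, buffer length, and learning rate are illustrative assumptions, not values from the paper:

import numpy as np
from collections import deque

class MedianBackground:
    """Non-recursive: sliding window of the last n frames, per-pixel median."""
    def __init__(self, n=25):
        self.buf = deque(maxlen=n)
    def update(self, frame):
        self.buf.append(frame.astype(np.float32))
        return np.median(np.stack(self.buf), axis=0)

class RunningAverageBackground:
    """Recursive: one model updated per frame, so old frames fade out."""
    def __init__(self, alpha=0.05):
        self.alpha, self.bg = alpha, None
    def update(self, frame):
        f = frame.astype(np.float32)
        self.bg = f if self.bg is None else (1 - self.alpha) * self.bg + self.alpha * f
        return self.bg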

3.3 Tracking
Tracking is the problem of generating inferences about the motion of an object given a sequence of images. A good solution to this problem has a variety of applications:
Motion capture: If we can track a moving person accurately, we can make an accurate record of their motion. Once we have this record, we can use it to drive a rendering process; for example, we might control a cartoon character, or thousands of virtual extras in a crowd scene [10]. Furthermore, we could modify the motion record to obtain slightly different motion.
Recognition from motion: The motion of an object is quite characteristic. We may be able to determine the identity of an object from its motion, and we should be able to tell what it is doing.
Surveillance: Knowing what objects are doing can be very useful. For example, different kinds of trucks should move in different, fixed patterns in an airport; if they do not, then something is going wrong. It could be helpful to have a computer system that can monitor activities and give a warning if it detects a problem [11].
Targeting: A significant fraction of the tracking literature is oriented toward (a) what to shoot, and (b) hitting it.
3.4 Optical Flow
Optical flow (or optic flow) is the pattern of apparent motion of objects, surfaces, and edges in a visual scene caused by the relative motion between an observer (an eye or a camera) and the scene [45][46]. The concept of optical flow was first studied in the 1940s and ultimately published by the American psychologist James J. Gibson as part of his theory of affordance. Optical flow techniques such as motion detection, object segmentation, time-to-collision and focus-of-expansion calculations, motion-compensated encoding, and stereo disparity measurement utilize this motion of the objects' surfaces and edges.
3.4.1 Estimation of the Optical Flow
Sequences of ordered images allow the estimation of motion as either instantaneous image velocities or discrete image displacements; the emphasis is on the accuracy and density of the measurements. Optical flow methods try to calculate the motion between two image frames, taken at times t and t + Δt, at every voxel position. These methods are called differential, since they are based on

local Taylor series approximations of the image signal; that is, they use partial derivatives with respect to the spatial and temporal coordinates. Motion estimation and video compression have developed as a major aspect of optical flow research. While the optical flow field is superficially similar to a dense motion field derived from the techniques of motion estimation, optical flow is the study not only of the determination of the optical flow field itself, but also of its use in estimating the three-dimensional nature and structure of the scene, as well as the 3D motion of objects and the observer relative to the scene. Optical flow has been used by robotics researchers in many areas, such as object detection and tracking, image dominant plane extraction, movement detection, robot navigation, and visual odometry. Optical flow information has also been recognized as useful for controlling micro air vehicles. The applications of optical flow include the problem of inferring not only the motion of the observer and objects in the scene, but also the structure of the objects and the environment. Since awareness of motion and the generation of mental maps of the structure of our environment are critical components of animal (and human) vision, the conversion of this innate ability to a computer capability is similarly crucial in the field of machine vision.
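As an example of one differential method, the sketch below computes a dense flow field between two grayscale frames with OpenCV's implementation of the Farneback algorithm; the parameter values shown are common defaults rather than values prescribed here:

import cv2

def dense_flow(prev_gray, next_gray):
    """Per-pixel (dx, dy) motion between two consecutive grayscale frames."""
    flow = cv2.calcOpticalFlowFarneback(
        prev_gray, next_gray, None,
        pyr_scale=0.5, levels=3, winsize=15,
        iterations=3, poly_n=5, poly_sigma=1.2, flags=0)
    magnitude, angle = cv2.cartToPolar(flow[..., 0], flow[..., 1])
    return flow, magnitude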

IV. PROBLEM DEFINITION AND FORMULATION

4.1 Problem Definition
Dealing with multiple moving objects against a static background is a crucial challenge in object detection. It is especially relevant in automatic surveillance applications, where accurate tracking is very important even under crowded conditions where multiple objects are in motion. An efficient and robust algorithm for multiple object (human) detection from surveillance video is developed; for this, we perform a number of operations in a step-wise and systematic manner.

4.2 Scope
The implementation can be used in video surveillance where the video is stable with a simple background. It can be applied to videos from a fixed, stable camera with very little fluctuation. The implementation can be used for many applications where the above conditions are met.

4.3 Problem Formulation


We approach the problem with the help of the following steps, as shown in the flow chart.

V. SOLUTION APPROACHES

Our surveillance activity goes through three phases. In the first phase, the target is detected in each video frame; in the second phase, features are extracted for matching; and in the third phase, the detected target is tracked through the sequence of video frames.

5.1 Assumptions
The background is almost static; it should not change during the whole test video clip. Since changes can occur due to shadows, the video is taken in an indoor environment.

The scene should be free from illumination changes. The camera lens should not shake during the process; this must be avoided as far as possible. The overlapping of two people must be avoided so that the problem of occlusion never arises. Moving objects in the video should not be very far from the camera.

5.2 Computer Algorithm:


In our algorithm, we first take a suitable video having no moving object in it, so that the background (reference) image can be extracted easily. We build an initial statistical model of the background scene that allows us to detect foreground regions even when the background scene is not completely stationary; the system updates the background model parameters adaptively to decrease the number of false positives. Then we model the background image, which contains the non-moving objects in the video. Obtaining a background model is done in two steps: first, background initialization, where we obtain the background image at a specific time from the video sequence; second, background maintenance. A median filter is applied afterwards to reduce noise. We then apply the background subtraction method for object detection: the background is subtracted from the current image to obtain the objects, and the detected objects are converted into image blobs, defined as bounding boxes representing the foreground objects, so that significant features can be extracted from them. These features are for matching blobs with the corresponding blobs in the sequence of frames. Coherent pixels are grouped together into image blobs by a seeded region growing approach; after finding all the image blobs, smaller ones are discarded. Many features can be used for matching. Significant blob features for matching include: size of the blob; average of the individual RGB components; coordinates of the centre of the blob; motion vector; and distance between blobs. We consider the size of the blob and the coordinates of the centre of the blob as the features for matching. We then calculate the feature vector for every blob in the corresponding frame, and this is applied to all the frames in the video. The overall procedure is as follows (a code sketch is given after the list):
1. Take a background image.
2. Model the background image.
3. Apply a median filter to remove noise.
4. Use background subtraction: image = current image - background image.
5. Find the blobs for feature extraction.
6. Calculate the feature vector for each blob.
7. Calculate the Euclidean distance between blob pairs.
8. Find the minimum Euclidean distance.
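A compact Python/OpenCV sketch of these steps is given below; the original simulation used the MATLAB image toolbox, so the function names, threshold, and minimum blob area here are illustrative assumptions:

import cv2
import numpy as np

def blob_features(mask, min_area=200):
    """One feature vector per blob: (size, centre x, centre y)."""
    n, _, stats, centroids = cv2.connectedComponentsWithStats(mask)
    feats = []
    for i in range(1, n):                           # label 0 is the background
        if stats[i, cv2.CC_STAT_AREA] >= min_area:  # discard small blobs
            cx, cy = centroids[i]
            feats.append(np.array([stats[i, cv2.CC_STAT_AREA], cx, cy], float))
    return feats

def process_frame(frame, background, thresh=30):
    """Median-filter the frame, subtract the background, return blob features."""
    diff = cv2.absdiff(cv2.medianBlur(frame, 5), background)
    _, mask = cv2.threshold(diff, thresh, 255, cv2.THRESH_BINARY)
    return blob_features(mask)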

5.3 Mathematical Analysis


Tracking is performed by matching the features of blobs in the current frame with the features of blobs in the previous frame. The difference between the feature vectors of each blob in the current frame and each blob in the previous frame is calculated. We do an exhaustive matching among the N blobs in the current frame and the M blobs in the previous frame, so a total of N x M matchings is required; as we do not have many objects in the scene, this exhaustive matching is not time consuming. The difference is obtained using the Euclidean distance given by equation (1):

$$\mathrm{Dist}(E_i, E_j) = \sqrt{\sum_{k=1}^{d} \left(E_{i,k} - E_{j,k}\right)^2} \qquad (1)$$

where $E_i$ and $E_j$ are feature vectors and $d$ indicates the dimension of the vectors.
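The exhaustive N x M matching with equation (1), and the minimum-distance selection described next, can be sketched as follows (the helper names are ours; scipy's cdist computes the pairwise Euclidean distances):

import numpy as np
from scipy.spatial.distance import cdist

def match_blobs(curr_feats, prev_feats):
    """For each current blob, the index of the previous blob at minimum distance.
    Note: this greedy form does not enforce a one-to-one assignment."""
    D = cdist(np.stack(curr_feats), np.stack(prev_feats))  # N x M distance matrix
    return {i: int(np.argmin(D[i])) for i in range(D.shape[0])}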

The blob pair with the minimum distance between feature vectors is selected and the remaining pairs are discarded; the selected pair links the tracked blob in the previous frame to the current one. This process continues over the complete video, and thus tracking of multiple people is achieved.

Fig 3: A video (240x320) is captured for the simulation. A background image is taken from the scene, as shown in Fig. (a). At a time t, a frame containing the foreground objects along with the background is taken from the video, as shown in Fig. (b). The foreground image (Fig. (c)) is calculated by differencing the current image and the background using the MATLAB image toolbox, and the detected blob (Fig. (d)) is found.

In the above algorithm, we have presented methods for segmentation of foreground objects by background subtraction and for tracking of multiple people in an indoor environment. We selected the background subtraction method because it gives the maximum number of moving pixels, and we used feature-based tracking, as it is faster than other methods. There are some problems associated with this method: occlusion handling, i.e., overlapping of moving blobs, has to be dealt with carefully; human locomotion tracking; lighting conditions; a shaking camera; shadow detection; and variation of people in shape, color, and size also poses a great challenge to efficient tracking. We propose to solve the problem of human locomotion tracking in complex situations by taking advantage of the available camera, scene, and human models. We believe that the models we use are generic and applicable to a wide variety of situations. The models used are: a statistical background appearance model, which directs the system's attention to the regions showing difference from the background; a camera model, which provides a transformation from the world to the image (in conjunction with the assumption that humans move on a known ground plane, it helps transform positions between the image and the physical world and allows reasoning with invariant 3D quantities, e.g., height and shape); a 3D coarse human shape model, which constrains the shape of an upright human and is critical for human segmentation and tracking; and a 3D human articulated locomotion model, which helps recover the locomotion modes and phases and recognize walking humans, eliminating false hypotheses formed by the static analysis. The overview block diagram of the system is shown in Fig. 4. First, the foreground blobs are extracted by a change detection method. Human hypotheses are computed by boundary analysis and shape analysis using the knowledge provided by the human shape model and the camera model. Each hypothesis is tracked in 3D in the subsequent frames with a Kalman filter using the object's appearance constrained by its shape. Two-dimensional positions are mapped onto the 3D ground plane, and the trajectories are formed and filtered in 3D. Depth ordering can be inferred from the 3D information, which facilitates the tracking of multiple overlapping humans and occlusion analysis.


Fig. 4: The system diagram. Shaded box: program module; plain box: model; thick arrow: data flow; thin line: model association.

VI. SEGMENTATION AND TRACKING OF MULTIPLE HUMANS

6.1 Background Model, Camera/Scene Model, and Human Shape Model


We incorporate a statistical background model [44] in which the color of each pixel in the image is modeled by a Gaussian distribution. The background model is first learnt in a period when there are no moving objects in the scene and is then updated for each incoming frame with the non-moving pixels; a single initial background frame is sufficient to start. The background model can easily be replaced with a more complex one (e.g., one with a multi-Gaussian model [40], or one which can start with moving objects in the scene [15]) if needed. Change detection is performed on each incoming frame. The pixels whose values are sufficiently different from the corresponding background models are classified as foreground pixels. The binary map is filtered with a median filter and the morphological close operator to remove isolated noise, resulting in the foreground mask F. Connected components are then computed, resulting in the moving blobs (or, simply, blobs) of that frame. In contrast to the ground-level camera setup used in some of the previous work (e.g., [15], [25], etc.), we deploy the camera a few meters above the ground, looking down. This allows a larger coverage and less occlusion, especially avoiding the situation where the entire scene is occluded by one object; such a setup is also in accordance with most commercial surveillance systems. To compute the camera calibration, the traditional approach requires enough 3D feature points (at least six points, with two of them out of a plane) and their corresponding image points. A linear calibration method described in [12] works satisfactorily if the selected points are distributed evenly in the image. If the number of feature points is not enough, or measurement of 3D points is not possible, methods based on projective invariance (e.g., vanishing points) can be used (e.g., [22], [24]); it has also been shown in [24] that humans walking in more than one direction can provide enough information for an approximate camera calibration. Both methods have been used in our experiments. We assume that people move on a known ground plane. The camera model and the ground plane together serve as a bridge to transform 2D and 3D quantities: three-dimensional quantities can be projected into 2D by the camera model, and the camera model and the ground plane define a transformation (i.e., a homography) between points on the image plane and points on the ground plane. The measurements of the objects (such as position, velocity, and height) in the image can thus be transformed into 3D. Sometimes we only know the position of a human's head instead of his/her feet; then the transformation can be carried out approximately by assuming that the humans are of an average height. The transformation degenerates when the projection of the reference plane is (or is close to) a line in the image, i.e., when the optical axis lies on the reference plane; such a case does not occur in our camera setup. We model gross human shape by a vertical 3D ellipsoid. The two short axes are of the same length and have a fixed ratio to the length of the long axis; the parameters of an object are its position on the ground plane and its height. Assuming an ellipsoid is represented by a 4x4 matrix Q in

homogeneous coordinates, its image under the camera projection P (a 3x4 matrix) is an ellipse, represented by a 3x3 matrix C. The relation between them is given in [16] by $C^{-1} = P\,Q^{-1}P^{T}$. An object mask M is defined by the pixels inside the ellipse. The 3D human shape model also enables geometric shadow analysis.
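A small numpy sketch of this projection (the function name is ours, and Q is assumed to be invertible):

import numpy as np

def project_ellipsoid(Q, P):
    """Project a 4x4 quadric Q through a 3x4 camera matrix P to a 3x3 conic C,
    using the relation C^{-1} = P Q^{-1} P^T."""
    C_inv = P @ np.linalg.inv(Q) @ P.T
    return np.linalg.inv(C_inv)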

6.2 Segmenting Multiple Humans

We attempt to interpret the foreground blobs with the ellipsoid shape model. Human hypotheses are generated by analysing the boundary and the shape of the foreground blobs. The process is described below and shown step by step graphically in Fig. 5.
6.2.1 Locating People by Head Top Candidates
In scenes with the camera placed several meters above the ground, the head of a human is less likely to be occluded; we find that recognizing the head top on the foreground boundary is a simple and effective way to locate multiple, possibly overlapping humans. A point is a head-top candidate if it is a peak, i.e., the highest point in the vertical direction (the direction towards the vertical vanishing point) along the boundary within a range defined by the average size of a human head, assuming an average height (Fig. 5a). A human model of average height is placed at each peak.

Fig. 5. The process of multi-human segmentation. (a) Unscreened head top candidates; (b) screened head top candidates; (c) the first four segmented people; (d) the foreground residue after the first four people are segmented; (e) head top candidates after the first four people are segmented; (f) the final segmentation; (g) an example of a false hypothesis.

Those peaks which do not have sufficient foreground pixels within the model are discarded (Fig. 5b). If a head is not overlapped by the foreground regions of other objects, it is usually detected with this method (Fig. 5c). For each head-top candidate, we find its potential height by finding the first point that turns into a background pixel along the vertical direction, within the range determined by the minimum and maximum human height. We do this for all points in the head area and take the maximum value; this enables finding the height of different human postures. Given the head-top position and the height, an ellipsoid human hypothesis is generated.
6.2.2 Geometrical Shadow Analysis
Assuming that the sun is the only light source and its direction is known (it can be computed from knowledge of the time, date, and geographical location, e.g., using [29]), the shadow of an ellipsoid on the ground, which is an ellipse, can be easily determined. Any foreground pixel which lies in the shadow ellipse and whose intensity is lower than that of the corresponding pixel in the background by a threshold Ts is classified as a shadow pixel. Most current shadow removal approaches are based on the assumption that shadow pixels have the same hue as the background but lower intensity (see [33] for a review) and ignore the shadow geometry. The color-based approaches are not expected to work well on very dark sun-cast shadows, as the hue computation will be highly inaccurate.
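A minimal sketch of this shadow test, assuming the predicted shadow ellipse has already been rendered as a boolean mask (the names and the threshold value are illustrative):

import numpy as np

def shadow_pixels(frame, background, shadow_ellipse_mask, Ts=20):
    """Mark foreground pixels inside the predicted shadow ellipse whose
    intensity is lower than the background by at least Ts as shadow."""
    darker = frame.astype(np.int32) < background.astype(np.int32) - Ts
    return shadow_ellipse_mask & darker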

6.2.3 The Algorithm
Segmenting multiple humans is an iterative process. We denote the foreground mask after removing the existing human masks and their shadows as the foreground residue map Fr. At the beginning of the segmentation, Fr is initialized with F. The head-top candidate set Hc is computed from Fr. We choose the candidate which has the minimum depth value (closest to the camera) to form a human hypothesis. Figs. 5c and 5d show the first four segmented humans and the foreground after their masks and shadow pixels are removed; as can be seen, a large portion of the shadow pixels is removed correctly. A morphological open operation is performed on Fr to remove isolated small residues (Fig. 5e). This process iterates until no new head candidates are found (Fig. 5f) [35][44]. This approach works well for a small number of overlapping people that do not have severe occlusion; a severely occluded object will be detected when it becomes more visible in a subsequent frame. The method is not sensitive to blob fragmentation if a large portion of the object still appears in the foreground. In our experiments, we found that this scheme tends to have a very low false alarm rate. The false alarms usually correspond to large foreground regions not (directly) caused by a human; for example, when people move with their reflections, the reflections are also hypothesized as humans.

6.3 Tracking Multiple Humans


Once segmented, the objects are tracked in the subsequent frames. Tracking is a loop consisting of prediction of the positions from the previous frame, search for the best match, and update of the object representation. Multiple objects are matched one by one according to their depth order.
Object representation for tracking: An elliptic shape mask (M) projected from the ellipsoid model represents the gross human shape; the shape/scale of the mask changes automatically according to the human's position and the geometry. A texture template (T) represents the appearance of a human by the RGB value of each pixel. Not every pixel inside the elliptic mask corresponds to the foreground; we also keep a foreground probability template (Fp) for each human object, which stores the probability of each pixel in the elliptic mask being foreground. It enables handling of some variations of body shape/pose. Fig. 6b shows examples of the representation. Due to the camera perspective effect, the elliptic masks of the same ellipsoid have different shapes (i.e., orientations and lengths of the axes) when the human is at different locations; therefore, a mapping is needed to align different ellipses for matching and updating. Suppose we have two ellipses $e_1(u_1, \alpha_1, \beta_1, \phi_1)$ and $e_2(u_2, \alpha_2, \beta_2, \phi_2)$ in their parametric forms, where $u$, $\alpha$, $\beta$, and $\phi$ are the center, long axis, short axis, and rotation, respectively. A mapping $u' = W(u)$ transforms a point $u$ in $e_1$ to its corresponding point $u'$ in $e_2$ by aligning $e_1$ and $e_2$ with their centers and corresponding axes through translation, rotation, and scaling, as in equations (2), (3) and (4):

$$W(u) = u_2 + R(\phi_2)\,S\,R(\phi_1)^{-1}\,(u - u_1) \qquad (2)$$

$$S = \mathrm{diag}\!\left(\frac{\alpha_2}{\alpha_1},\; \frac{\beta_2}{\beta_1}\right) \qquad (3)$$

$$R(\phi) = \begin{bmatrix} \cos\phi & -\sin\phi \\ \sin\phi & \cos\phi \end{bmatrix} \qquad (4)$$
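A sketch of the alignment mapping W under the reconstruction above (the function name is ours):

import numpy as np

def make_ellipse_map(u1, a1, b1, phi1, u2, a2, b2, phi2):
    """Map points of ellipse e1(u1, a1, b1, phi1) onto e2(u2, a2, b2, phi2) by
    aligning centres and axes via translation, rotation, and scaling."""
    def rot(p):
        c, s = np.cos(p), np.sin(p)
        return np.array([[c, -s], [s, c]])
    S = np.diag([a2 / a1, b2 / b1])      # axis-wise scaling
    A = rot(phi2) @ S @ rot(phi1).T      # into e1's axis frame, scale, out to e2's
    return lambda u: np.asarray(u2) + A @ (np.asarray(u) - np.asarray(u1))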


Fig. 6. Examples of object representation for tracking and its evolution: (a) texture template, (b) shape mask, and (c) foreground probability template. From top to bottom: 1st, 25th, 100th, 200th frame, respectively

6.4 Handling Occlusions


Occlusion of multiple objects is addressed in several places in the algorithm, for example, in matching and updating. Furthermore, we compute r, the visible fraction of the object, defined as Nv/Ne, where Nv is the number of visible (i.e., unoccluded) foreground pixels in the elliptic mask and Ne is the area, in pixels, of the elliptic mask of the object. The measurement noises n1, n2 of the Kalman filter are set proportional to 1/r. Using two thresholds To1 and To2: if To1 > r > To2, the object is said to be partially occluded; if r < To2, the object is said to be completely occluded. In the case of complete occlusion, the object follows the prediction of the Kalman filter. If an object is completely occluded for a certain number of frames, it is discarded [27][47].
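A minimal sketch of this occlusion bookkeeping; the threshold values shown are placeholders, since the paper does not state To1 and To2:

def occlusion_state(n_visible, n_ellipse, To1=0.7, To2=0.2):
    """Classify occlusion from r = Nv / Ne and scale measurement noise by 1/r."""
    r = n_visible / float(n_ellipse)
    if r < To2:
        state = "complete"        # the object follows the Kalman prediction
    elif r < To1:
        state = "partial"
    else:
        state = "visible"
    noise_scale = 1.0 / max(r, 1e-3)  # Kalman measurement noise proportional to 1/r
    return state, noise_scale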

VII. CONCLUSION & FUTURE WORK

We have presented methods for segmentation of foreground objects by background subtraction and for tracking of multiple people in an indoor environment. We selected the background subtraction method because it gives the maximum number of moving pixels, and we used feature-based tracking, as it is faster than other methods. We then described our methods for segmentation and tracking of multiple humans in complex situations, and the estimation of human locomotion models that addresses the problem of occlusions in the tracking process. There are a few interesting directions to be explored in the future. A joint likelihood might be needed for segmentation and tracking of more overlapping objects. Further, using two cameras to construct 3D human models would give more precise results. In the future, extraction of foreground objects from dynamic scenes will be emphasized, along with variable lighting conditions and different camera angles. Motion parameters and body parameters can be optimized locally to best fit the images.

REFERENCES
[1] A.M. Baumberg, Learning Deformable Models for Tracking Human Motion, PhD thesis, Univ. of Leeds, 1995.
[2] G.A. Bekey, Walking, The Handbook of Brain Theory and Neural Networks, M.A. Arbib, ed., MIT Press, 1995.
[3] M. Brand, Shadow Puppetry, Proc. Int'l Conf. Computer Vision, vol. 2, pp. 1237-1244, 1999.
[4] C. Bregler, Learning and Recognizing Human Dynamics in Video Sequences, Proc. IEEE Conf. Computer Vision and Pattern Recognition, pp. 568-574, 1997.
[5] A.F. Bobick and J.W. Davis, The Recognition of Human Movement Using Temporal Templates, IEEE Trans. Pattern Analysis and Machine Intelligence, vol. 23, no. 3, Mar. 2001.
[6] Character Studio: Software Package, http://www.discreet.com/products/cs/, 2002.
[7] I. Cohen and G. Medioni, Detecting and Tracking Moving Objects for Video Surveillance, Proc. IEEE Conf. Computer Vision and Pattern Recognition, vol. 2, pp. 319-325, 1999.
[8] R. Cutler and L.S. Davis, Robust Real-Time Periodic Motion Detection, Analysis, and Applications, IEEE Trans. Pattern Analysis and Machine Intelligence, vol. 22, no. 8, Aug. 2000.
[9] J. Deutscher, A. Davison, and I. Reid, Automatic Partitioning of High Dimensional Search Spaces Associated with Articulated Body Motion Capture, Proc. IEEE Conf. Computer Vision and Pattern Recognition, vol. 2, pp. 669-676, 2001.
[10] A.A. Efros, A.C. Berg, G. Mori, and J. Malik, Recognizing Action at a Distance, Proc. IEEE Int'l Conf. Computer Vision, pp. 726-733, 2003.
[11] A.M. Elgammal and L.S. Davis, Probabilistic Framework for Segmenting People under Occlusion, Proc. Int'l Conf. Computer Vision, vol. 1, pp. 145-152, 2001.
[12] D. Forsyth and J. Ponce, Computer Vision: A Modern Approach, Prentice-Hall, 2001.
[13] S. Hongeng and R. Nevatia, Multi-Agent Event Recognition, Proc. Int'l Conf. Computer Vision, vol. 2, pp. 84-91, 2001.
[14] I. Haritaoglu, D. Harwood, and L.S. Davis, W4S: A Real-Time System for Detecting and Tracking People in 2 1/2 D, Proc. European Conf. Computer Vision, pp. 962-968, 1998.
[15] S. Haritaoglu, D. Harwood, and L.S. Davis, W4: Real-Time Surveillance of People and Their Activities, IEEE Trans. Pattern Analysis and Machine Intelligence, vol. 22, no. 8, Aug. 2000.
[16] R. Hartley and A. Zisserman, Multi View Geometry, Cambridge Press, 2000.
[17] M. Isard and A. Blake, Condensation - Conditional Density Propagation for Visual Tracking, Int'l J. Computer Vision, vol. 29, no. 1, pp. 5-28, 1998.
[18] M. Isard and J. MacCormick, BraMBLe: A Bayesian Multiple-Blob Tracker, Proc. Int'l Conf. Computer Vision, vol. 2, pp. 34-41, 2001.
[19] R. Kalman, A New Approach to Linear Filtering and Prediction Problems, J. Basic Eng., vol. 82, pp. 35-45, 1960.
[20] P. Kornprobst and G. Medioni, Tracking Segmented Objects Using Tensor Voting, Proc. IEEE Conf. Computer Vision and Pattern Recognition, pp. 118-125, 2000.
[21] N. Krahnstover, M. Yeasin, and R. Sharma, Towards a Unified Framework for Tracking and Analysis of Human Motion, Proc. IEEE Workshop Detection and Recognition of Events in Video, 2001.
[22] D. Liebowitz, A. Criminisi, and A. Zisserman, Creating Architectural Models from Images, Proc. EUROGRAPH Conf., vol. 18, pp. 39-50, 1999.
[23] A.J. Lipton, H. Fujiyoshi, and R.S. Patil, Moving Target Classification and Tracking from Real-Time Video, Proc. DARPA IU Workshop, pp. 129-136, 1998.
[24] F. Lv, T. Zhao, and R. Nevatia, Self-Calibration of a Camera from a Walking Human, Proc. Int'l Conf. Pattern Recognition, vol. 1, pp. 562-567, 2002.
[25] S.J. McKenna, S. Jabri, Z. Duric, A. Rosenfeld, and H. Wechsler, Tracking Groups of People, Computer Vision and Image Understanding, vol. 80, no. 1, pp. 42-56, 2000.
[26] T.B. Moeslund and E. Granum, A Survey of Computer Vision-Based Human Motion Capture, Computer Vision and Image Understanding, vol. 81, pp. 231-268, 2001.
[27] G. Mori and J. Malik, Estimating Human Body Configurations Using Shape Context Matching, Proc. European Conf. Computer Vision, pp. 666-681, 2002.
[28] R. Murray, Z.X. Li, and S. Sastry, A Mathematical Introduction to Robotic Manipulation, CRC Press, 1994.
[29] NOVAS: Naval Observatory Vector Astrometry Subroutines, http://aa.usno.navy.mil/software/novas/novas_info.html, 2003.
[30] Data Set Provided by IEEE Workshop on Performance Evaluation of Tracking and Surveillance (PETS 2001), 2001.
[31] S. Pingali and J. Segen, Performance Evaluation of People Tracking Systems, Proc. Third IEEE Workshop Applications of Computer Vision, pp. 33-38, 1996.
[32] P.J. Phillips, S. Sarkar, I. Robledo, P. Grother, and K.W. Bowyer, The Gait Identification Challenge Problem: Data Sets and Baseline Algorithm, Proc. Int'l Conf. Pattern Recognition, pp. 385-388, 2002.
[33] A. Prati, R. Cucchiara, I. Mikic, and M.M. Trivedi, Analysis and Detection of Shadows in Video Streams: A Comparative Evaluation, Proc. IEEE Conf. Computer Vision and Pattern Recognition, vol. 2, pp. 571-576, 2001.
[34] L.R. Rabiner, A Tutorial on Hidden Markov Models and Selected Applications in Speech Recognition, Proc. IEEE, vol. 77, no. 2, 1989.
[35] K. Rohr, Towards Model-Based Recognition of Human Movements in Image Sequences, CVGIP: Image Understanding, vol. 59, no. 1, pp. 94-115, 1994.
[36] R. Rosales and S. Sclaroff, 3D Trajectory Recovery for Tracking Multiple Objects and Trajectory Guided Recognition of Actions, Proc. IEEE Conf. Computer Vision and Pattern Recognition, vol. 2, pp. 117-123, 1999.
[37] H. Sidenbladh, M.J. Black, and D.J. Fleet, Stochastic Tracking of 3D Human Figures Using 2D Image Motion, Proc. European Conf. Computer Vision, pp. 702-718, 2000.
[38] N.T. Siebel and S. Maybank, Fusion of Multiple Tracking Algorithms for Robust People Tracking, Proc. European Conf. Computer Vision, pp. 373-387, 2002.
[39] Y. Song, X. Feng, and P. Perona, Towards Detection of Human Motion, Proc. IEEE Conf. Computer Vision and Pattern Recognition, pp. 810-817, 2000.
[40] C. Stauffer and W.E.L. Grimson, Learning Patterns of Activity Using Real-Time Tracking, IEEE Trans. Pattern Analysis and Machine Intelligence, vol. 22, no. 8, Aug. 2000.
[41] H. Tao, H.S. Sawhney, and R. Kumar, A Sampling Algorithm for Tracking Multiple Objects, Proc. IEEE Workshop Vision Algorithms, 1999.
[42] H. Tao, H.S. Sawhney, and R. Kumar, Object Tracking with Bayesian Estimation of Dynamic Layer Representations, IEEE Trans. Pattern Analysis and Machine Intelligence, vol. 24, no. 1, Jan. 2002.
[43] A.M. Tekalp, Digital Video Processing, Prentice Hall, 1995.
[44] C.R. Wren, A. Azarbayejani, T. Darrell, and A.P. Pentland, Pfinder: Real-Time Tracking of the Human Body, IEEE Trans. Pattern Analysis and Machine Intelligence, vol. 19, no. 7, July 1997.
[45] T. Zhao, R. Nevatia, and F. Lv, Segmentation and Tracking of Multiple Humans in Complex Situations, Proc. IEEE Conf. Computer Vision and Pattern Recognition, vol. 2, pp. 194-201, 2001.
[46] T. Zhao and R. Nevatia, 3D Tracking of Human Locomotion: A Tracking as Recognition Approach, Proc. Int'l Conf. Pattern Recognition, vol. 1, pp. 546-551, 2002.
[47] T. Zhao, Model-Based Segmentation and Tracking of Multiple Humans in Complex Situations, PhD thesis, Univ. of Southern California, Los Angeles, 2003.

AUTHORS:
Shalini Agarwal: I am a 2nd-year M.Tech (Computer Science) student at Banasthali Vidhyapeeth, Rajasthan. I completed my B.Tech (Computer Science and Engineering) in 2009 at B.S.A.C.E.T., Mathura (U.P.). My areas of interest are Pattern Recognition & Image Processing and Data Mining.

Shaili Mishra: I am a 2nd-year M.Tech (Computer Science) student at Banasthali Vidhyapeeth, Rajasthan. I completed my MCA in 2009 at S.R.M.C.E.M., Lucknow (U.P.). My areas of interest are Pattern Recognition & Image Processing and Algorithms.


MODELLING AND PARAMETRIC STUDY OF GAS TURBINE COMBUSTION CHAMBER


M. Sadrameli & M. Jafari
Department of Chemical Engineering, Tarbiat Modares University, Tehran, Iran

ABSTRACT
In order to determine the pollutants created by combustion in a gas turbine, the coupled CFD equations for turbulent mixing and combustion are solved. The overall conservation equations for mass, momentum, energy, and the combustion process are combined with large eddy simulation (LES) and a chemical reaction rate method. For the numerical solution, a structured, staggered grid in cylindrical coordinates is used; the equations are discretized on this grid and solved with the QUICK scheme and the numerical algorithm. To verify the numerical solution, the geometry and boundary conditions of a gas turbine combustor with controlled analytical and experimental results are considered, and the comparison with existing analytical models and experimental results shows acceptable error. The NO production at the combustor outlet is then studied as a function of variables such as the temperature, the amounts of fuel and air entering the gas turbine, and the output power. Keywords: Combustion Chamber, Gas Turbine, Large Eddy Simulation, Chemical Reaction Rate, Discretization, Structured Grid, Staggered Grid

I. INTRODUCTION

Environmental concerns have led to limits on the increasing emissions from gas turbine systems. Investigations of pollutant formation, mainly for the various steady flows in combustion chambers, have been carried out by Rizk, Nandula, and their co-workers [1,2,3]. Moin, Mahesh, and Menzies have worked in detail on the analysis of combustion in gas turbine combustion chambers, and simulation approaches for finding the amount of NO produced in a gas turbine combustion chamber have been analysed by Fiechtner and Pekkan [4,5,7]. The first motivation of this article is the lack of sufficiently detailed studies on this topic that produce a reliable flow field for the gas combustion chamber. With the strategy adopted in this article, most of the physical features of turbulent flows are retained, and the complicated free-shear flows, and the deviations that follow from them, become largely predictable. The next step in turbulence modelling is a complete Reynolds stress model [6]; although this level of precision leads to a complicated model, it requires much more computation time, a laborious but essential task for the analysis of gas turbine combustion chambers [8].

II. GOVERNING EQUATIONS OF COMBUSTION

For an ideal mixture of fuel and air, the governing chemical and thermodynamic equations are as follows:


- Species:

$$\frac{\partial(\rho Y_i)}{\partial t} + \nabla\cdot\big(\rho(\mathbf{v} + \mathbf{V}_i)\,Y_i\big) = \dot{w}_i \qquad (1)$$

where $\rho$ is the density, $Y_i$ the mass fraction of species $i$, $\mathbf{v}$ the velocity vector, $\mathbf{V}_i$ the diffusion velocity of species $i$, and $\dot{w}_i$ the chemical production rate (the combustion output) of species $i$, with $i = 1, \dots, N_s$.

- Mass:

$$\frac{\partial\rho}{\partial t} + \nabla\cdot(\rho\mathbf{v}) = 0 \qquad (2)$$

- Momentum:

$$\frac{\partial(\rho\mathbf{v})}{\partial t} + \nabla\cdot(\rho\mathbf{v}\mathbf{v}) = -\nabla p + \nabla\cdot\boldsymbol{\tau} + \rho\sum_{i} Y_i\,\mathbf{f}_i \qquad (3)$$

where $p$ is the pressure, $\boldsymbol{\tau}$ the viscous stress tensor, and $\mathbf{f}_i$ the body (buoyancy) force per unit mass of species $i$.

- Energy:

$$\frac{\partial(\rho e_t)}{\partial t} + \nabla\cdot(\rho\mathbf{v}e_t) = -\nabla\cdot\mathbf{q} - \nabla\cdot(p\mathbf{v}) + \nabla\cdot(\boldsymbol{\tau}\cdot\mathbf{v}) + \rho\sum_i Y_i\,\mathbf{f}_i\cdot(\mathbf{v}+\mathbf{V}_i) \qquad (4)$$

where $e_t = e + \tfrac{1}{2}|\mathbf{v}|^2$ is the sum of the internal energy $e$ and the kinetic energy per unit mass.

- Viscous stress:

$$\boldsymbol{\tau} = 2\mu\mathbf{S} + \left(\mu_B - \tfrac{2}{3}\mu\right)(\nabla\cdot\mathbf{v})\,\mathbf{I} \qquad (5)$$

where $\mu$ is the molecular viscosity, $\mu_B$ the bulk viscosity, $\mathbf{S} = \tfrac{1}{2}\big(\nabla\mathbf{v} + (\nabla\mathbf{v})^{T}\big)$ the strain-rate tensor, and $\mathbf{I}$ the identity tensor.

- Heat flux:

$$\mathbf{q} = -K\nabla T + \rho\sum_i h_i Y_i\,\mathbf{V}_i + \mathbf{q}_R \qquad (6)$$

where $K$ is the thermal conductivity, $h_i$ the enthalpy of species $i$, and $\mathbf{q}_R$ the radiative heat flux vector.

- Diffusion velocities:

$$\mathbf{V}_i = -\sum_j \frac{D_{ij}}{X_i}\,\nabla X_j - \frac{D_{T,i}}{\rho Y_i}\,\frac{\nabla T}{T} \qquad (7)$$

where $X_i$ is the mole fraction of species $i$, $D_{ij}$ the multicomponent mass diffusion coefficient matrix, $D_{T,i}$ the thermal diffusion coefficient, $M_i$ the molecular mass of species $i$, and $R$ the universal gas constant; the last term represents the effect of thermal diffusion.

- Equation of state:

$$p = \rho R T \sum_i \frac{Y_i}{M_i} \qquad (8)$$

Omitting the effects of compressibility and acoustics, the heat released by viscous dissipation, the bulk viscosity, the buoyancy force, pressure-gradient diffusion, and radiation, the equations in the combustion chamber simplify to the following set:

- Continuity equation:

$$\frac{\partial\rho}{\partial t} + \nabla\cdot(\rho\mathbf{v}) = 0 \qquad (9)$$

- Momentum equation:

$$\frac{\partial(\rho\mathbf{v})}{\partial t} + \nabla\cdot(\rho\mathbf{v}\mathbf{v}) = -\nabla p + \nabla\cdot\big[\mu\big(\nabla\mathbf{v} + (\nabla\mathbf{v})^{T}\big)\big] \qquad (10)$$

- Scalar mass transport:

$$\frac{\partial(\rho\phi)}{\partial t} + \nabla\cdot(\rho\mathbf{v}\phi) = \nabla\cdot(\rho D\,\nabla\phi) + \dot{w}_\phi \qquad (11)$$

- State relation:

$$p = \rho R T \sum_i \frac{Y_i}{M_i} \qquad (12)$$

III. TURBULENT FLOW MODEL AND ITS EFFECT ON THE CHEMISTRY

In the LES strategy, the choice of scale rests on the separation of small and large scales; to define these two groups, a suitable separation length must be identified. The continuity equation is unchanged by filtering, but filtering the momentum equation produces an extra subgrid-scale term in the equation. This term must be modeled so that the small, unresolved turbulent scales exert the proper effect on the resolved flow. One advantage of this simulation approach is that the model responds at all times. The density is decomposed in the Reynolds sense as

$$\rho = \bar{\rho} + \rho'$$

and, following Favre [9], the filtered velocity is density-weighted:

$$\tilde{u} = \frac{\overline{\rho u}}{\bar{\rho}}$$

Applying the filter to equations (9)-(12) gives the LES equations:

- Continuity equation:

$$\frac{\partial\bar{\rho}}{\partial t} + \nabla\cdot(\bar{\rho}\tilde{\mathbf{v}}) = 0 \qquad (13)$$

- Momentum equation:

$$\frac{\partial(\bar{\rho}\tilde{\mathbf{v}})}{\partial t} + \nabla\cdot(\bar{\rho}\tilde{\mathbf{v}}\tilde{\mathbf{v}}) = -\nabla\bar{p} + \nabla\cdot\bar{\boldsymbol{\tau}} - \nabla\cdot\mathbf{t} \qquad (14)$$

- Scalar mass equation:

$$\frac{\partial(\bar{\rho}\tilde{\phi})}{\partial t} + \nabla\cdot(\bar{\rho}\tilde{\mathbf{v}}\tilde{\phi}) = \nabla\cdot(\bar{\rho}D\,\nabla\tilde{\phi}) - \nabla\cdot\mathbf{q}^{sgs} + \overline{\dot{w}}_\phi \qquad (15)$$

- And the state relation:

$$\bar{p} = \bar{\rho}\,R\,\tilde{T}\sum_i \frac{\tilde{Y}_i}{M_i} \qquad (16)$$

Here $t_{ij}$ is the residual (subgrid-scale) stress tensor and $q^{sgs}_{ik}$ the residual scalar flux, both of which must be modeled [10]. The residual stress is modeled with an eddy viscosity as follows [11]:

$$t_{ij} - \tfrac{1}{3}\,t_{kk}\,\delta_{ij} = -2\,\mu_t\,\tilde{S}_{ij} \qquad (17)$$

where $k_{sgs} = \tfrac{1}{2}t_{kk}$ is the subgrid kinetic energy and $\mu_t$ is the eddy viscosity, defined by the following equations.

IV. THE SMAGORINSKY MODEL

The eddy viscosity is given by the Smagorinsky model:

$$\mu_t = \bar{\rho}\,(C_s\Delta)^2\,\big|\tilde{S}\big|, \qquad \big|\tilde{S}\big| = \sqrt{2\,\tilde{S}_{ij}\tilde{S}_{ij}} \qquad (18)$$

The subgrid kinetic energy is described by equation (19):

$$k_{sgs} = C_k\,\Delta^2\,\big|\tilde{S}\big|^2 \qquad (19)$$

The subgrid thermal (scalar) flux is modeled as [12]

$$q^{sgs}_{k} = -\frac{\mu_t}{Pr_t}\,\frac{\partial\tilde{\phi}}{\partial x_k} \qquad (20)-(21)$$

The model coefficients $C_s$, $C_k$, and $C$ are obtained from the dynamic procedure mentioned above.
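As an illustration of equation (18), the following numpy sketch evaluates the Smagorinsky eddy viscosity on a 2D uniform grid; the function name and the value Cs = 0.17 are assumptions, not taken from this paper:

import numpy as np

def smagorinsky_mu_t(u, v, dx, dy, rho=1.0, Cs=0.17):
    """Eddy viscosity mu_t = rho * (Cs * Delta)^2 * |S| on a uniform 2D grid."""
    dudx, dudy = np.gradient(u, dx, dy)   # derivatives of u along the two axes
    dvdx, dvdy = np.gradient(v, dx, dy)
    S11, S22 = dudx, dvdy
    S12 = 0.5 * (dudy + dvdx)
    S_mag = np.sqrt(2.0 * (S11**2 + S22**2 + 2.0 * S12**2))  # |S| = sqrt(2 Sij Sij)
    delta = np.sqrt(dx * dy)              # filter width taken from the cell size
    return rho * (Cs * delta) ** 2 * S_mag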

V. NUMERICAL THEORY

The staggered grid used in this work is shown in Figure 1: the velocity components u and v are located on the faces of the main control volume. The first and most important consequence of this arrangement is that the mass flux through each face of the control volume is obtained without any intermediate interpolation of the velocity components.

Figure 1. The staggered grid arrangement

For a coordinate system in the two directions x and y, on a uniform grid with spacings $\Delta x$ and $\Delta y$ indexed by $i$ and $j$ [13,14], the interpolation (averaging) operators are defined as

$$\bar{\phi}^{x}\big|_{i,j} = \tfrac{1}{2}\big(\phi_{i+1/2,\,j} + \phi_{i-1/2,\,j}\big) \qquad (22)$$

and the difference operators as

$$\frac{\delta\phi}{\delta x}\bigg|_{i,j} = \frac{\phi_{i+1/2,\,j} - \phi_{i-1/2,\,j}}{\Delta x} \qquad (23)$$
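An illustrative numpy rendering of the averaging and difference operators (22)-(23) between adjacent nodes (the function names are ours):

import numpy as np

def interp_x(phi):
    """Midpoint interpolation along x, as in equation (22)."""
    return 0.5 * (phi[1:, :] + phi[:-1, :])

def diff_x(phi, dx):
    """Central difference along x between adjacent nodes, as in equation (23)."""
    return (phi[1:, :] - phi[:-1, :]) / dx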

VI. DISCRETIZATION OF THE GOVERNING EQUATIONS

We assume a staggered grid on which velocity and density are located at the cell faces, and the other transported quantities mentioned above are defined in space-time [15] (Figure 2). Every cell of this grid occupied by velocity and density is called a continuity cell; the relations between these quantities reduce to the continuity equation.

Figure 2. The discretized grid in space-time

With the definition of the cell-face mass fluxes

$$g_x \equiv \rho u, \qquad g_y \equiv \rho v \qquad (24)$$

the discretized governing equations become:

- Continuity:

$$\frac{\rho^{n+1} - \rho^{n}}{\Delta t} + \frac{\delta g_x}{\delta x} + \frac{\delta g_y}{\delta y} = 0 \qquad (25)$$

- Momentum (x component; the y component is analogous):

$$\frac{g_x^{n+1} - g_x^{n}}{\Delta t} + \frac{\delta\big(\bar{g}_x^{\,x}\,\bar{u}^{\,x}\big)}{\delta x} + \frac{\delta\big(\bar{g}_y^{\,x}\,\bar{u}^{\,y}\big)}{\delta y} = -\frac{\delta p}{\delta x} + \frac{\delta}{\delta x}\left(\mu\,\frac{\delta u}{\delta x}\right) + \frac{\delta}{\delta y}\left(\mu\,\frac{\delta u}{\delta y}\right) \qquad (26)$$


- Scalar mass transport:

$$\frac{(\rho\phi)^{n+1} - (\rho\phi)^{n}}{\Delta t} + \frac{\delta\big(g_x\,\bar{\phi}^{\,x}\big)}{\delta x} + \frac{\delta\big(g_y\,\bar{\phi}^{\,y}\big)}{\delta y} = \frac{\delta}{\delta x}\left(\rho D\,\frac{\delta\phi}{\delta x}\right) + \frac{\delta}{\delta y}\left(\rho D\,\frac{\delta\phi}{\delta y}\right) \qquad (27)$$

As an illustration, the discrete continuity equation (25) written out in full is

$$\frac{\rho^{n+1}_{i,j} - \rho^{n}_{i,j}}{\Delta t} + \frac{(\rho u)_{i+1/2,\,j} - (\rho u)_{i-1/2,\,j}}{\Delta x} + \frac{(\rho v)_{i,\,j+1/2} - (\rho v)_{i,\,j-1/2}}{\Delta y} = 0 \qquad (28)$$

Figure 3 shows the combustion chamber considered, and Figure 4 the cylindrical grid in the x-r coordinates used to set up the problem:

Figure 3. Suggested combustion chamber for the numerical solution

Figure 4. Cylindrical grid: p, ρ, and φ are at the cell centres (solid points), and u is on the cell boundaries

In the cylindrical coordinate system, the derived equations take the following form:

- Mass fluxes:

$$g_x = \rho u_x, \qquad g_r = \rho u_r, \qquad g_\theta = \rho u_\theta \qquad (29)$$


- Velocity divergence:

$$\nabla\cdot\mathbf{u} = \frac{\partial u_x}{\partial x} + \frac{1}{r}\frac{\partial(r\,u_r)}{\partial r} + \frac{1}{r}\frac{\partial u_\theta}{\partial\theta} \qquad (30)$$

- Continuity:

$$\frac{\partial\rho}{\partial t} + \frac{\partial g_x}{\partial x} + \frac{1}{r}\frac{\partial(r\,g_r)}{\partial r} + \frac{1}{r}\frac{\partial g_\theta}{\partial\theta} = 0 \qquad (31)$$

- Momentum (the viscous flux terms f are given below):

$$\frac{\partial g_x}{\partial t} + \nabla\cdot(g_x\mathbf{u}) = -\frac{\partial p}{\partial x} + (\nabla\cdot\mathbf{f})_x \qquad (32)$$

$$\frac{\partial g_r}{\partial t} + \nabla\cdot(g_r\mathbf{u}) - \frac{\rho u_\theta^2}{r} = -\frac{\partial p}{\partial r} + (\nabla\cdot\mathbf{f})_r \qquad (33)$$

$$\frac{\partial g_\theta}{\partial t} + \nabla\cdot(g_\theta\mathbf{u}) + \frac{\rho u_r u_\theta}{r} = -\frac{1}{r}\frac{\partial p}{\partial\theta} + (\nabla\cdot\mathbf{f})_\theta \qquad (34)$$

- Scalar transport:

$$\frac{\partial(\rho\phi)}{\partial t} + \frac{\partial(g_x\phi)}{\partial x} + \frac{1}{r}\frac{\partial(r\,g_r\phi)}{\partial r} + \frac{1}{r}\frac{\partial(g_\theta\phi)}{\partial\theta} = \frac{\partial q_x}{\partial x} + \frac{1}{r}\frac{\partial(r\,q_r)}{\partial r} + \frac{1}{r}\frac{\partial q_\theta}{\partial\theta} \qquad (35)$$

- The viscous flux terms f (equations (36)-(44)) are the nine components of the viscous stress tensor in cylindrical coordinates; the distinct components have the standard form

$$f_{xx} = 2\mu\frac{\partial u_x}{\partial x}, \qquad f_{rr} = 2\mu\frac{\partial u_r}{\partial r}, \qquad f_{\theta\theta} = 2\mu\left(\frac{1}{r}\frac{\partial u_\theta}{\partial\theta} + \frac{u_r}{r}\right)$$

$$f_{xr} = \mu\left(\frac{\partial u_x}{\partial r} + \frac{\partial u_r}{\partial x}\right), \qquad f_{x\theta} = \mu\left(\frac{1}{r}\frac{\partial u_x}{\partial\theta} + \frac{\partial u_\theta}{\partial x}\right), \qquad f_{r\theta} = \mu\left(r\frac{\partial}{\partial r}\!\left(\frac{u_\theta}{r}\right) + \frac{1}{r}\frac{\partial u_r}{\partial\theta}\right)$$

- The scalar diffusive fluxes q are:

$$q_x = \rho D\,\frac{\partial\phi}{\partial x} \qquad (45)$$

$$q_r = \rho D\,\frac{\partial\phi}{\partial r} \qquad (46)$$

$$q_\theta = \frac{\rho D}{r}\,\frac{\partial\phi}{\partial\theta} \qquad (47)$$

- Boundary conditions:
Following the proposals of Akselvoll and Moin [16] and of Mohseni and Colonius [17], the value of $u_r$ on the centerline is computed, for every angle, by averaging across the $r$ coordinate:

$$u_r(0,\theta) = \tfrac{1}{2}\big[u_r(\Delta r,\theta) - u_r(\Delta r,\theta+\pi)\big] \qquad (48)$$

- Wall boundary condition:
The walls are rigid and the no-slip condition applies, so all velocity components vanish at the wall:

$$u_x = u_r = u_\theta = 0 \qquad (49)$$

At the outflow, a convective boundary condition is applied:

$$\frac{\partial\phi}{\partial t} + U_c\,\frac{\partial\phi}{\partial n} = 0 \qquad (50)$$

where $\phi$ is a scalar or a velocity component, $U_c$ is the convection velocity, and $n$ is the direction normal to the boundary.

VII. VALIDATION OF THE COMPUTER CODE

For this purpose we use the experimental combustion chamber of Owen and co-workers, shown in Figure 3; these researchers analyzed their results with several procedures [18]. In their model, the temperature, velocity, and combustion products are carefully controlled. The inlet air is preheated to 750 K at a pressure of 3.8 atm. A set of perforated metal disks at the air and fuel injection location produces a uniform flow of air and fuel. The combustion chamber walls are cooled by cold water, which keeps them at a constant temperature of 500 K. Natural gas and standard air are assumed in this process; all values are given in Table 1.
Table 1. Dimensions and operating conditions of the combustion chamber

Central Pipe Radius (R1)            3.157 cm
Annular Inner Radius (R2)           3.175 cm
Annular Wall Thickness (R2 - R1)    0.018 cm
Annular Outer Radius (R3)           4.685 cm
Combustor Radius (R4)               6.115 cm
Combustor Length                    100.0 cm
Mass Flow Rate of Fuel              0.00720 kg/s
Mass Flow Rate of Air               0.137 kg/s
Bulk Velocity of Fuel (V1)          0.9287 m/s
Bulk Velocity of Air (V2)           20.63 m/s
Overall Equivalence Ratio           0.9
Temperature of Fuel                 300 K
Combustor Pressure                  3.8 atm

The combustion products obtained in the experimental model (on a molar basis) are NO, CO, CH4, and O2; the H2O and H2 products follow from the stoichiometric relations, and the molar quantity of nitrogen is computed from the assumed air composition.

Figures 5 and 6 show the mean combustion product mass fractions at radial locations in the combustion chamber; each figure contains the experimentally measured profiles together with the numerical results. For the comparison, distances are normalized by the combustor radius R4, giving the dimensionless coordinates r/R and x/R. Moving downstream in the chamber, the numerical results approach the measurements and the numerical error decreases; the average deviation from the measurements is 7.3 percent, which is acceptable.

Figure 5. Dimensionless radius

Figure 6. Average combustion product mass at the radial locations, x/R = 7.41

VIII. RESULTS OF THE NUMERICAL SOLUTION

Figure 7 shows the influence of variations in the inlet air temperature on the amount of NO produced, indicating a reduction of NO with increasing inlet temperature.


Figure 7. Effect of the inlet air temperature on the production of NO

Figure 8 illustrates the amount of NO produced as a function of the inlet temperature.

Figure 8. Effect of the inlet air temperature on the production of NO

Figure 9 shows clearly the effect of the output power on the quantity of NO produced.


Figure 9. Effect of the output power on the quantity of NO produced

IX. CONCLUSIONS AND RECOMMENDATIONS

NO formation can be controlled by several strategies: reducing the combustion temperature, and operating the combustion chamber at fuel-rich conditions relative to the ideal stoichiometry, which limits the amount of available oxygen.

REFERENCES
[1]. Correa, S.M., "Carbon Monoxide Emissions in Lean Premixed Combustion", 1992, Journal of Propulsion and Power, Vol. 8, No. 6, pp. 1144-1151.
[2]. Rizk, N.K., Mongia, H.C., "Semi-analytical Correlations for NOx, CO and UHC Emissions", 1993, Journal of Engineering Gas Turbine and Power, Transactions of the ASME, 115, 612-619.
[3]. Nandula, S.P., Pitz, R.W., Barlow, R.S., Fiechtner, G.J., "Rayleigh/Raman/LIF Measurements in a Turbulent Lean Premixed Combustor", 1996, AIAA Paper No. 96-0937.
[4]. Moin, P. & Apte, S.V., "Large-eddy simulation of realistic gas turbine combustors", 2006, Am. Inst. Aeronaut. Astronaut. J., 44 (4), 698-708.
[5]. Mahesh, K., Constantinescu, G., Apte, S., Iaccarino, G., Ham, F. & Moin, P., "Large eddy simulation of reacting turbulent flows in complex geometries", 2006, ASME J. Appl. Mech., vol. 73, pp. 374-381.
[6]. Menzies, K., "Large eddy simulation applications in gas turbines", 2009, Phil. Trans. Roy. Soc. Lond. A Math. Phys. Eng. Sci., 367:2827-2838.
[7]. Pekkan, K., Nalim, R., "Two-Dimensional Flow and NOx Emissions in Deflagrative Internal Combustion Wave Rotor Configurations", 2002, Journal of Engineering for Gas Turbines and Power, also ASME Paper No. GT-2002-30085.
[8]. Fichet, V., Kanniche, M., Plion, P., Gicquel, O., "A reactor network model for predicting NOx emissions in gas turbines", 2010, Journal of Fuel, Volume 89, Issue 9, pp. 2202-2210.
[9]. Favre, A., "Turbulence: space-time statistical properties and behavior in supersonic flows", 1983, Physics of Fluids A, 23 (10): 2851-2863.
[10]. Moin, P., Squires, K., Cabot, W. & Lee, S., "A dynamic subgrid-scale model for compressible turbulence and scalar transport", 1991, Phys. Fluids, A3, 2746-2757.
[11]. Moin, P., "Progress in Large Eddy Simulation of Turbulent Flows", 1997, AIAA Paper 97-0749.
[12]. Moin, P., "Fundamentals of Engineering Numerical Analysis", 2001, Cambridge University Press.
[13]. Piacsek, S.A., Williams, G.P., "Conservation Properties of Convection Difference Schemes", 1970, J. Comput. Phys., 6, 392-405.
[14]. Morinishi, Y., Lund, T.S., Vasilyev, O.V. & Moin, P., "Fully Conservative Higher Order Finite Difference Schemes for Incompressible Flow", 1998, J. Comput. Phys., 143, 90-124.

[15]. Mahesh, K., Constantinescu, G., Apte, S., Iaccarino, G., Ham, F. & Moin, P., "Large Eddy Simulation of Reacting Turbulent Flows in Complex Geometries", 2006, ASME J. Appl. Mech., vol. 73, pp. 374-381.
[16]. Akselvoll, K., Moin, P., "Large-eddy simulation of turbulent confined coannular jets and turbulent flow over a backward facing step", 1995, Mech. Eng. Dept. Report TF-63, Stanford University.
[17]. Mohseni, K., Colonius, T., "Numerical treatment of polar coordinate singularities", 2000, J. Comput. Phys., 157, 787-795.
[18]. Owen, F.K., Spadaccini, L.J., Bowman, C.T., "Pollutant formation and energy release in confined turbulent diffusion flames", 1976, Proc. Combust. Inst., 16, 105-117.

AUTHORS
Mojtaba Sadrameli is a full professor at Tarbiat Modares University, Tehran, Iran. His research areas are Heat Recovery, Petrochemicals, and Biofuels.

Mehdi Jafari Harandi received the B.Sc. degree in Chemical Engineering from Isfahan University of Technology, Iran, in 2007, and the M.Sc. degree in Chemical Engineering from Tarbiat Modares University, Tehran, Iran. His research interests include the modelling and parametric study of gas turbine combustion chambers.


STATISTICAL TECHNIQUES IN ANOMALY INTRUSION DETECTION SYSTEM


Hari Om & Tanmoy Hazra
Department of Computer Science & Engineering, Indian School of Mines, Dhanbad, India

ABSTRACT
In this paper, we analyze an anomaly-based intrusion detection system (IDS) for outlier detection in hardware profiles using statistical techniques: the Chi-square distribution, the Gaussian mixture distribution, and principal component analysis. Anomaly detection based methods can detect new intrusions, but they suffer from false alarms. Host-based intrusion detection systems (HIDSs) use anomaly detection to identify malicious attacks, i.e., intrusions. The features span a large set of dimensions, and the system becomes extremely slow when processing this huge amount of data (especially host-based data). We show comparative results using three different approaches: principal component analysis (PCA), the Chi-square distribution, and clustering with a Gaussian mixture distribution. We obtain good results using these techniques.

KEYWORDS: Principal Component Analysis, Outlier, Mahalanobis Distance, Confusion Matrix, Anomaly Detection, Gaussian Mixture Distribution, Chi-square Distribution, Expectation Maximization Algorithm.

I. INTRODUCTION

The process of monitoring the events that occur in a computer system and analyzing them for signs of intrusions is known as intrusion detection. An intrusion is defined as an attack on a network or system by an intruder that hampers security goals such as integrity, confidentiality and authentication, i.e. a violation of the security policy of a system. There are various types of attacks: external attacks, internal penetrations, and misfeasors. An intruder tries to gain unauthorized access to a valid user's system. An intrusion detection system (IDS) is a program that analyzes what happens, or has happened, during an execution and tries to find indications that the computer has been misused. With the rapid expansion of computers in the past few years, their security has become an important issue. Host based intrusion detection systems (HIDSs) are used to monitor suspicious activity on a system. HIDSs can be classified into two types: anomaly detection, which is based on statistical measures, and misuse detection, which is based on signatures. Anomaly detection is used to capture changes in behavior that deviate from normal behavior. These methods take training data as input to build models of normal system behavior; alarms are raised when any activity deviates from the normal model. These models may be generated using statistical analysis, data mining algorithms, genetic algorithms, artificial neural networks, fuzzy logic, rough sets, etc. Anomaly detection methods may raise alarms for normal activity (false positives) or fail to sound alarms during attacks (false negatives). Nowadays, the number of new attacks is increasing, and variations of known attacks cannot be recognized by misuse detection. Here, we develop an intrusion detection system using the Chi-square distribution and the Gaussian mixture distribution to detect outlier data, and we report results for three different approaches. Outlier detection is one of the most important tasks in data analysis. Outliers describe abnormal data behavior, i.e. data which deviate from the natural data variability. The cut-off value or threshold which numerically divides anomalous from non-anomalous data is often the basis for important decisions. Many methods have been discussed for univariate outlier detection. They are based on (robust) estimation of location and scatter, or on data quantiles. Their major disadvantage is that these rules are independent of the sample size. Moreover, by the definition of most rules, outliers are identified even for clean data, or at least no distinction is made between outliers and extremes of a distribution. The basis for multivariate outlier detection is the Mahalanobis distance. The standard method for multivariate outlier detection is robust estimation of the parameters in the Mahalanobis distance and comparison with a critical value of the Chi-square distribution. However, values larger than this critical value are not necessarily outliers; they could still belong to the data distribution. In order to distinguish between extremes of a distribution and outliers, Garrett introduced the Chi-square plot, which draws the empirical distribution of the robust Mahalanobis distances against the Chi-square distribution. The rest of the paper is organized as follows. Section 2 discusses the related work and Section 3 discusses the proposed work.

II. RELATED WORK

Many designs have been developed for intrusion detection [1-6]. Shyu has developed a network based intrusion predictive model using principal component analysis (PCA) and the Chi-square distribution for the KDD1999 dataset [1]. Denning describes an intrusion detection model that is capable of detecting break-ins, penetrations and other types of computer attacks. This model is based on the hypothesis that security violations can be detected by monitoring the audit records of the system for abnormal patterns of system usage [2]. Errors in multivariate data have been detected using PCA [3]. Ye discusses an anomaly detection technique based on a Chi-square statistic for information systems that achieves a 100% detection rate [4]. Puketza discusses methodologies to test an intrusion detection system and obtains satisfactory results in the course of testing IDSs [6]. Ye further finds experimentally that the performance of the Chi-square distribution is better than that of Hotelling's T² [14]. In [34], the PCA methodology is discussed to detect intrusion. Filzmoser applies the Chi-square method to detect multivariate outliers in exploration geochemistry [17]. Garrett has made a tool for multivariate outlier detection [18]. Gascon et al. have shown a comprehensive statistical analysis in relation to vulnerability disclosure time, updates of vulnerability detection systems (VDS), software patch releases and publication of exploits [22]. Davis and Clark have shown a trend toward deeper packet inspection to construct more relevant features through targeted content parsing [23]. Jin et al. discuss direct utilization of the covariance matrices of sequential samples to detect multiple network attacks [24]. Yeung and Ding show in their experimental results that the dynamic modeling approach is better than the static modeling approach for system call datasets, while the dynamic modeling approach is worse for shell command datasets [25]. Hussein and Zulkernine present a framework to develop components with intrusion detection capabilities [26]. Wang et al. present several cross frequency attribute weights to model user and program behaviors for anomaly intrusion detection [27]. Chen et al. discuss an efficient filtering scheme that can reduce the system workload, such that only 0.3% of the original traffic volume needs to be examined for anomalies [28]. Casas et al. present an unsupervised network intrusion detection system that is capable of detecting unknown network attacks without using any kind of signatures, labeled traffic, or training [29]. Trailovic discusses variance estimation and ranking methods for stochastic processes modeled by Gaussian mixture distributions; it is shown that the variance estimate from a Gaussian mixture distribution has the same properties as a variance estimate from a single Gaussian distribution based on a reduced number of samples [31]. Dasgupta presents provably the first correct algorithm for learning a mixture of Gaussians, which is very simple, returns the true centers of the Gaussians to within the precision specified by the user with high probability, and has linear complexity in the dimension of the data and polynomial complexity in the number of Gaussians [32]. Dempster et al. present a general approach to iterative computation of maximum-likelihood estimates for incomplete data, where each iteration of the algorithm consists of an expectation step followed by a maximization step [33]. Chandola et al. have presented a brief survey of anomaly detection [35]. Teodoro et al. have discussed anomaly detection methodologies and different types of problems [36]. In the next section, we discuss statistical techniques for intrusion detection.

III. STATISTICAL TECHNIQUES IN INTRUSION DETECTION

In this work, we discuss three approaches, which are based on PCA, the Chi-square distribution and the Gaussian mixture distribution, and then discuss their comparative performances for a host based intrusion detection system. We may mention that these techniques have been discussed in the literature for network based intrusion detection systems, not for host based systems. In [34], the PCA has been used for outlier detection in hardware profiles. We now discuss PCA, the Chi-square distribution and the Gaussian mixture distribution, each for a host based intrusion detection system.

3.1 Principal Component Analysis

The principal component analysis (PCA) is a common technique to find patterns in high dimensional data by reducing its dimensions without losing the information contained in it. It produces a set of principal components, which are orthonormal eigenvalue/eigenvector pairs, i.e., it projects a new set of axes that best suit the data. In our proposed scheme, this set of axes represents the normal feature data. Outlier detection occurs by mapping the data onto these normal axes and calculating the distances from these axes. If the distance is greater than a certain threshold, we may confirm there is an attack, i.e. an outlier is detected. Principal components are particular linear combinations of the m random variables X_1, X_2, ..., X_m with two important properties: (a) they are uncorrelated and sorted in descending order of variance; (b) the total variance \lambda_X of X_1, X_2, ..., X_m is given by

\lambda_X = \sum_{i=1}^{m} \lambda_{X_i}    (1)

These components are found from an eigenanalysis of the correlation matrix of the original variables X_1, X_2, ..., X_m. The values of the correlation matrix and the covariance matrix are not the same [9-10], [15]. Let the original data X be an n x m data matrix of n observations, each observation consisting of m fields (dimensions) X_1, X_2, ..., X_m, and let R be the m x m correlation matrix of X_1, X_2, ..., X_m. The eigenvalues of R are the roots of the polynomial equation

|R - \lambda I| = 0

Each eigenvalue \lambda of R has a corresponding non-zero vector e, called an eigenvector:

R e = \lambda e

If (\lambda_1, e_1), (\lambda_2, e_2), ..., (\lambda_m, e_m) are the m eigenvalue-eigenvector pairs of the correlation matrix R, the i-th principal component is given by

y_i = e_i^T (x - \bar{x}) = e_{i1}(x_1 - \bar{x}_1) + e_{i2}(x_2 - \bar{x}_2) + ... + e_{im}(x_m - \bar{x}_m),  i = 1, 2, ..., m

where the eigenvalues are ordered as \lambda_1 \ge \lambda_2 \ge ... \ge \lambda_m \ge 0, e_i^T = (e_{i1}, e_{i2}, ..., e_{im}) is the i-th eigenvector, x = (x_1, x_2, ..., x_m)^T is the observation data and \bar{x} = (\bar{x}_1, \bar{x}_2, ..., \bar{x}_m)^T is the sample mean vector of the observation data. For any attribute x_i, let the observed data be a_1, a_2, ..., a_n; then

\bar{x}_i = \frac{1}{n} \sum_{j=1}^{n} a_j    (2)

The principal components derived from the covariance matrix are usually different from the principal components generated from the correlation matrix. When some values are much larger than others, their corresponding eigenvalues have larger weights. One useful metric is the Mahalanobis distance

d^2(x, y) = (x - y)^T R^{-1} (x - y)    (3)

where R^{-1} is the inverse of the sample correlation matrix and x and y are vectors. Using the correlation matrix, the relationships between the data fields are represented more effectively. There are two main issues in applying PCA: interpretation of the set of principal components and calculation of distance. Each eigenvalue of a principal component corresponds to the amount of variation it encompasses. The larger eigenvalues are more significant and correspond to their projected eigenvectors. The principal components are sorted in descending order. The eigenvectors of the principal components represent axes which best suit a data sample. Points which lie at a far distance from these axes are assumed to exhibit abnormal behavior, and they can be easily identified. Using a threshold value, data generated by the normal system with a Mahalanobis distance greater than the threshold are considered outliers and, here, intrusions. But sometimes the user is alerted to an intrusion when the data lie on the threshold boundary. Consider the sample principal components y_1, y_2, ..., y_m of an observation X, where y_i = e_i^T (X - \bar{x}), i = 1, 2, ..., m. The sum of squares of the partial principal component scores

\sum_{i=1}^{m} \frac{y_i^2}{\lambda_i} = \frac{y_1^2}{\lambda_1} + \frac{y_2^2}{\lambda_2} + ... + \frac{y_m^2}{\lambda_m}    (4)

equates to the Mahalanobis distance of the observation X from the mean of the normal sample dataset [15]. The major principal component score is used to detect extreme deviations with large values on the original features. The minor principal component score is used to detect attacks which may not follow the same correlation model. As a result, two thresholds are needed to detect attacks. Let q be the number of major (most significant) principal components and r the number of minor (least significant) principal components; the major principal component score threshold is denoted T_q, while the minor principal component score threshold is denoted T_r. An attack occurs for an observation X if

\sum_{i=1}^{q} \frac{y_i^2}{\lambda_i} > T_q  or  \sum_{i=m-r+1}^{m} \frac{y_i^2}{\lambda_i} > T_r    (5)
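As a concrete illustration of Eq. (5), the following minimal NumPy sketch builds the correlation-matrix eigenmodel from normal data and scores new observations by their major and minor principal component sums. The choices q = r = 3 and the percentile-based thresholds are illustrative assumptions, not values taken from the paper.

```python
import numpy as np

def pca_outlier_scores(X_train, X_test, q=3, r=3):
    """Major/minor principal-component scores of Eq. (5) for each test row."""
    mean = X_train.mean(axis=0)
    std = X_train.std(axis=0, ddof=1)
    R = np.corrcoef(X_train, rowvar=False)   # m x m correlation matrix
    lam, E = np.linalg.eigh(R)               # eigenvalues in ascending order
    lam, E = lam[::-1], E[:, ::-1]           # re-sort descending, as in the text
    Y = ((X_test - mean) / std) @ E          # principal component scores y_i
    major = (Y[:, :q] ** 2 / lam[:q]).sum(axis=1)    # sum over the q major PCs
    minor = (Y[:, -r:] ** 2 / lam[-r:]).sum(axis=1)  # sum over the r minor PCs
    return major, minor

# Thresholds T_q, T_r can be set from the normal data itself, for example as
# high percentiles of the training scores (an assumption for illustration):
# Tq = np.percentile(pca_outlier_scores(X_train, X_train, q, r)[0], 99)
```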

3.2 Chi-square distribution


The Chi-square distribution is an asymmetric distribution that has a minimum value of 0, but no maximum value. The curve reaches a peak to the right of 0 and then gradually declines. The mean of the Chi-square distribution is its number of degrees of freedom, and its variance is twice the degrees of freedom. For each number of degrees of freedom, there is a different χ² distribution: the Chi-square distribution is more spread out, with a peak farther to the right, for larger degrees of freedom. As a result, for any given level of significance, the critical region begins at a larger Chi-square value the larger the degrees of freedom. The size and shape of multivariate data are quantified by the covariance matrix. A well-known distance measure which takes the covariance matrix into account is the Mahalanobis distance. For a p-dimensional multivariate sample x_i (i = 1, ..., n), the Mahalanobis distance is defined as

MD_i = ((x_i - t)^T C^{-1} (x_i - t))^{1/2},  i = 1, ..., n

where t is the estimated multivariate location and C the estimated covariance matrix. Usually, t is the multivariate arithmetic mean and C the sample covariance matrix. For multivariate normally distributed data, the squared distances are approximately Chi-square distributed with p degrees of freedom (χ²_p). Multivariate outliers can now simply be defined as observations having a large (squared) Mahalanobis distance. For this purpose, a quantile of the Chi-square distribution (e.g., the 99.5% quantile) could be considered. However, this approach has several limitations. The Mahalanobis distances need to be estimated by a robust procedure in order to provide reliable measures for the recognition of outliers, because single extreme observations, or groups of observations departing from the main data structure, can have a severe influence on this distance measure when both location and covariance are estimated in a non-robust manner. The minimum covariance determinant (MCD) estimator is probably the most commonly used in practice. Using robust estimators of location and scatter in the above equation leads to robust distances (RD). If the squared RD for an observation is larger than, say, χ²_{p;0.995}, it can be declared an outlier. In the Chi-square plot, the squared Mahalanobis distances (which have to be computed on the basis of robust estimates of location and scatter) are plotted against the quantiles of χ²_p, and the most extreme points are deleted until the remaining points follow a straight line; the deleted points are identified as outliers. This method needs user interaction and experience on the part of the analyst; moreover, especially for large datasets, it can be time consuming and to some extent subjective. In the next subsection, a procedure that does not require analyst intervention, and is therefore reproducible and objective, is introduced.
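The following sketch shows one way to realize this robust procedure, using scikit-learn's MinCovDet as the MCD estimator together with the χ²_{p;0.995} cutoff; the library choice is ours, not the paper's.

```python
import numpy as np
from scipy.stats import chi2
from sklearn.covariance import MinCovDet

def robust_chi2_outliers(X, quantile=0.995):
    """Flag rows of X whose squared robust distance exceeds chi2_{p;quantile}."""
    p = X.shape[1]
    mcd = MinCovDet().fit(X)     # robust location t and scatter C via MCD
    rd2 = mcd.mahalanobis(X)     # squared robust Mahalanobis distances RD^2
    return rd2 > chi2.ppf(quantile, df=p)
```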
3.3 Gaussian Mixture Distribution

Assuming the covariance matrix is non-singular, the probability density function (pdf) of X can be written as

N_x(\mu, \Sigma) = |2\pi\Sigma|^{-1/2} \exp\left(-\frac{1}{2}(x - \mu)^T \Sigma^{-1} (x - \mu)\right)    (6)

where \mu denotes the mean value, \Sigma the covariance matrix and |.| the determinant. It is possible to have multivariate Gaussian distributions with a singular covariance matrix; in that case, the above expression cannot be used for the probability density function. We assume non-singular covariance matrices. For a mixture model with K components, each z_i is between 1 and K. The sum to be maximized is

\sum_i \sum_k p(k | x_i; \theta_t) [\log p(k; \theta) + \log p(x_i | k; \theta)]    (7)

Using the mixture model notation from above, we have p(k; \theta) = \pi_k and p(x_i | k; \theta) = f(x_i; \theta_k). Then the sum to be maximized can be written as

E = \sum_i \sum_k p(k | x_i; \theta_t) [\log \pi_k + \log f(x_i; \theta_k)]    (8)

For the E-step, we use Bayes' rule:

w_{ik} = p(k | x_i; \theta_t) = \frac{f(x_i; \theta_k)\,\pi_k}{\sum_j f(x_i; \theta_j)\,\pi_j}    (9)

For the M-step, the two terms inside the square brackets in E involve disjoint sets of parameters, so we can do two separate maximizations. The first term to be maximized is

\sum_i \sum_k w_{ik} \log \pi_k = \sum_k c_k \log \pi_k    (10)

where c_k = \sum_i w_{ik}, subject to the constraint \sum_k \pi_k = 1. Using a Lagrange multiplier, its solution is given by

\pi_k = \frac{c_k}{\sum_j c_j}    (11)

The second term to be maximized is

\sum_i \sum_k w_{ik} \log f(x_i; \theta_k)    (12)

This can be divided into K separate maximizations, each of the form

\theta_k = \arg\max_\theta \sum_i w_{ik} \log f(x_i; \theta)    (13)

Clustering is used to automatically estimate the parameters of a Gaussian mixture model from sample data. This process is essentially similar to conventional clustering, except that it allows the cluster parameters to be accurately estimated even when the clusters overlap substantially.
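A compact EM loop implementing Eqs. (9), (11) and (13) for the Gaussian case might look as follows; the random initialization and the small ridge added to the covariances are our own assumptions for numerical stability, not part of the paper's method.

```python
import numpy as np
from scipy.stats import multivariate_normal

def em_gmm(X, K, n_iter=100, seed=0):
    """EM for a K-component Gaussian mixture: E-step Eq. (9), M-step Eqs. (11), (13)."""
    rng = np.random.default_rng(seed)
    n, d = X.shape
    pi = np.full(K, 1.0 / K)                         # mixing weights pi_k
    mu = X[rng.choice(n, size=K, replace=False)]     # means from random samples
    Sigma = np.stack([np.cov(X, rowvar=False)] * K)  # start from global covariance
    for _ in range(n_iter):
        # E-step (Eq. 9): responsibilities w_ik via Bayes' rule
        dens = np.stack([multivariate_normal.pdf(X, mu[k], Sigma[k])
                         for k in range(K)], axis=1)
        w = pi * dens
        w /= w.sum(axis=1, keepdims=True)
        # M-step (Eqs. 11 and 13): closed-form updates for pi_k, mu_k, Sigma_k
        c = w.sum(axis=0)
        pi = c / n
        mu = (w.T @ X) / c[:, None]
        for k in range(K):
            diff = X - mu[k]
            Sigma[k] = (w[:, k, None] * diff).T @ diff / c[k] + 1e-6 * np.eye(d)
    return pi, mu, Sigma
```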

IV. EXPERIMENT

To carry out the experiments, the performance logs need to be generated. The steps for generating the performance logs are as follows:
1. On the Start menu, go to Settings, then Control Panel.
2. Double click Administrative Tools and then double click Computer Management.

391

Vol. 5, Issue 1, pp. 387-398

International Journal of Advances in Engineering & Technology, Nov. 2012. IJAET ISSN: 2231-1963
3. Expand Performance Logs and Alerts, right click Counter Logs, and then click New Log Settings.
4. Type a name for the counter log and then click OK.
5. Click Add Counters. In the Performance object box, select the performance object that needs to be monitored, and add the counters required for the experiment.
6. On the General tab, under Sample data every, a sampling interval of 15 seconds is configured.
7. On the Log Files tab, the log file properties are configured as comma delimited files that can be viewed later in reporting tools such as Microsoft Excel.

After the performance log has been generated for each day, the log is divided into 4 groups and the average values for each column of the table are calculated. After finding the average values, the values are maintained in another table; these values are used as our normal dataset. Meanwhile, for one day the system is left to work with the graphics driver, audio driver and USB driver disabled. This generates logs for system performance that have been considered as intruded data. We have taken the same number and the same types of attributes in our experiments. The normal dataset and the testing dataset, i.e. a mixture dataset (normal and intrusion), used in our experiment are shown in Tables 1 and 2, respectively.
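A small pandas sketch of this averaging step is given below; the file name, the use of four equal-sized groups per day and the column handling are assumptions for illustration, since the paper performs the step manually.

```python
import pandas as pd

# Hypothetical file name; the counter log is the comma delimited file produced
# by Performance Logs and Alerts at the 15-second sampling interval.
log = pd.read_csv("perf_log_day1.csv")

# Split the day's samples into 4 equal groups and average each counter column,
# as described above; each averaged row becomes one observation of the dataset.
log["group"] = pd.qcut(log.index, 4, labels=False)
daily_profile = log.groupby("group").mean(numeric_only=True)
daily_profile.to_csv("normal_dataset_day1.csv", index=False)
```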
Table 1: Normal dataset with some selective attributes

[Table 1 reports, for each logging interval of the normal dataset, the averaged values of the selected counters: Committed bytes in use, Available MBytes, Cache faults/sec, Page faults/sec, Page writes/sec, Page operations/sec, Pool Nonpaged Allocs, Pool Paged Allocs, System driver total bytes, and Write copies/sec.]

Table 2: Testing dataset with some selective attributes

[Table 2 reports the testing (mixture) dataset observations over the same counters as Table 1.]

4.1. False alarm rate

The false alarm rate and the detection rate can be calculated using the confusion matrix, shown below.

                 Predicted Class
                 C        NC
Actual    C      TN       FP
Class     NC     FN       TP

Fig. 1 Confusion matrix

where C denotes the anomaly class, NC the normal class, TN true negative, FN false negative, TP true positive and FP false positive. Then

Recall (R) = TP / (TP + FN),  Precision (P) = TP / (TP + FP)

F-measure = (1 + \beta^2) R P / (\beta^2 R + P)

where R and P denote recall and precision, respectively, and \beta is the relative importance of precision vs. recall, usually set to 1. With \beta = 1, the F-measure becomes 2RP / (R + P).
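The following helper computes these quantities directly from the confusion matrix counts. Taking the false alarm rate as FP/(FP+TN) is one common convention and is our assumption here, since the paper does not spell the formula out.

```python
def detection_metrics(tp, fp, tn, fn, beta=1.0):
    """Recall, precision and F-measure from the confusion matrix counts."""
    recall = tp / (tp + fn)
    precision = tp / (tp + fp)
    f_measure = (1 + beta**2) * recall * precision / (beta**2 * recall + precision)
    false_alarm_rate = fp / (fp + tn)   # one common convention (assumption)
    return recall, precision, f_measure, false_alarm_rate
```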

4.2. Comparative results
Table 3: Comparative results of PCA, Chi-square distribution and cluster with Gaussian mixture distribution

Name of technique                              Detection rate    False alarm rate
Principal Component Analysis                   97.5%             2.5%
Chi-square distribution                        90%               10%
Cluster with Gaussian Mixture Distribution     97.5%             2.5%

Here, we have used three different techniques in our experiments. The PCA technique and the Gaussian mixture distribution each give a detection rate of 97.5%, and we also obtain a good result with the Chi-square distribution method. Using the above techniques, we can easily detect hardware based intrusions. All three techniques are implemented for a HIDS on the basis of performance logs, and two of them (principal component analysis and clustering with the Gaussian mixture distribution) give the best results. The PCA and the Gaussian mixture distribution each give a maximum detection rate of 97.5%, while the Chi-square distribution gives 90%. Generally, anomaly detection systems suffer from false alarms. Another problem with anomaly detection systems is that they are not able to classify boundary data, i.e. sometimes detection systems show normal data as intrusion data and vice versa. That is why we have used the confusion matrix, which gives accurate detection and false alarm rates. These techniques can be applied to any type of outlier detection and also to huge datasets.

V. FUTURE WORK

In future work, we will try to produce better results in terms of a higher detection rate and a lower false alarm rate. We will also explore other statistical techniques that can be used in our work to improve the results.

VI. CONCLUSIONS

In this paper, we have analyzed an intrusion detection system using the Chi-square distribution and the Gaussian mixture distribution, and we have shown comparative results for principal component analysis, the Chi-square distribution and the Gaussian mixture distribution. Our experimental results show that the PCA and the Gaussian mixture distribution each give a maximum detection rate of 97.5%, while the Chi-square distribution gives 90%. Generally, anomaly detection systems suffer from false alarms. These methods can be applied to any type of outlier detection and also to huge datasets.

ACKNOWLEDGEMENT
The authors express their sincere thanks to Prof. S. Chand for his invaluable comments and suggestions.

REFERENCES
[1] M.L. Shyu, S.C. Chen, K. Sarinnapakorn, and L. Chang, A novel anomaly detection scheme based on principal component classifier, IEEE Foundations and New Directions of Data Mining Workshop, pp. 172-179, Nov. 2003.
[2] D.E. Denning, An Intrusion Detection Model, IEEE Trans. on Software Engineering, SE-13, No. 2, pp. 222-232, 1987.
[3] D.M. Hawkins, The Detection of Errors in Multivariate Data Using Principal Components, Journal of the American Statistical Association, Vol. 69, No. 346, pp. 340-344, 1974.
[4] N. Ye and Q. Chen, An Anomaly Detection Technique Based on a Chi-Square Statistic for Detecting Intrusions into Information Systems, Quality and Reliability Eng. Intl, Vol. 17, No. 2, pp. 105-112, 2001.
[5] J. McHugh, Testing Intrusion Detection Systems: A Critique of the 1998 and 1999 DARPA Intrusion Detection System Evaluations as Performed by Lincoln Laboratory, ACM Trans. on Information and System Security, Vol. 3, No. 4, pp. 262-294, 2000.

[6] N.J. Puketza, K. Zhang, B. Mukherjee and R.A. Olsson, Testing Intrusion Detection Systems: Design Methodologies and Results from an Early Prototype, Proc. 17th National Computer Security Conference, Vol. 1, pp. 1-10, Oct. 1994.
[7] F.N.M. Sabri, Md. Norwawi, and K. Seman, Identifying False Alarm Rates for Intrusion Detection System with Data Mining, International Journal of Computer Science and Network Security, Vol. 11, No. 4, Apr. 2011.
[8] Z.K. Baker and V.K. Prasanna, Efficient Hardware Data Mining with the Apriori Algorithm on FPGAs, Proc. 13th Annual IEEE Symp. on Field Programmable Custom Computing Machines, 2005.
[9] J.D. Jobson, Applied Multivariate Data Analysis, Volume II: Categorical and Multivariate Methods, Springer-Verlag, NY, 1992.
[10] I.T. Jolliffe, Principal Component Analysis, Springer-Verlag, NY, 2002.
[11] J. Song, H. Takakura and Y. Okabe, A proposal of new benchmark data to evaluate mining algorithms for intrusion detection, 23rd Asia Pacific Advanced Networking Meeting, 2007.
[12] N. Athanasiades, R. Abler, J. Levine, O. Henry and G. Riley, Intrusion detection testing and benchmarking methodologies, IEEE International Information Assurance Workshop, 2003.
[13] H. Song and J.W. Lockwood, Efficient packet classification for network intrusion detection using FPGA, Intl. Symp. on Field Programmable Gate Arrays, Feb. 2005.
[14] N. Ye, S.M. Emran, Q. Chen, and S. Vilbert, Multivariate Statistical Analysis of Audit Trails for Host-Based Intrusion Detection, IEEE Trans. on Computers, Vol. 51, No. 7, 2002.
[15] R.A. Johnson and D.W. Wichern, Applied Multivariate Data Analysis, 3rd Edition, Prentice-Hall, Inc., Englewood Cliffs, NJ, USA, 1992.
[16] H. Om and T.K. Sarkar, Neural network based intrusion detection system for detecting changes in hardware profile, Journal of Discrete Mathematical Sciences & Cryptography, Vol. 12(4), pp. 451-466, 2009.
[17] P. Filzmoser, C. Reimann, and R.G. Garrett, Multivariate outlier detection in exploration geochemistry, Technical report TS 03-5, Department of Statistics, Vienna University of Technology, Austria, Dec. 2003.
[18] R.G. Garrett, The chi-square plot: A tool for multivariate outlier recognition, Journal of Geochemical Exploration, Vol. 32, pp. 319-341, 1989.
[19] D. Gervini, A robust and efficient adaptive reweighted estimator of multivariate location and scatter, Journal of Multivariate Analysis, Vol. 84, pp. 116-144, 2003.
[20] P.J. Rousseeuw and K. Van Driessen, A fast algorithm for the minimum covariance determinant estimator, Technometrics, Vol. 41, pp. 212-223, 1999.
[21] P.J. Rousseeuw and B.C. Van Zomeren, Unmasking multivariate outliers and leverage points, Journal of the American Statistical Association, Vol. 85(411), pp. 633-651, 1990.
[22] H. Gascon, A. Orfila, and J. Blasco, Analysis of update delays in signature-based network intrusion detection systems, Computers & Security, Vol. 30, pp. 613-624, 2011.
[23] J.J. Davis and A.J. Clark, Data preprocessing for anomaly based Network Intrusion Detection: A Review, Computers & Security, Vol. 30, pp. 353-375, 2011.
[24] S. Jin, D.S. Yeung, and X. Wang, Network intrusion detection in covariance feature space, Pattern Recognition, Vol. 40, pp. 2185-2197, 2007.
[25] D. Yeung and Y. Ding, Host-based intrusion detection using dynamic and static behavioral models, Pattern Recognition, Vol. 36, pp. 229-243, 2003.
[26] M. Hussein and M. Zulkernine, Intrusion detection aware component-based systems: A specification-based framework, Journal of Systems and Software, Vol. 80, pp. 700-710, 2007.
[27] W. Wang, X. Zhang, and S. Gombault, Constructing attribute weights from computer audit data for effective intrusion detection, Journal of Systems and Software, Vol. 82, pp. 1974-1981, 2009.
[28] C.M. Chen, Y.L. Chen, and H.C. Lin, An efficient network intrusion detection, Computer Communications, Vol. 33, pp. 477-484, 2010.
[29] P. Casas, J. Mazel, and P. Owezarski, Unsupervised Network Intrusion Detection Systems: Detecting the Unknown without Knowledge, Computer Communications, Vol. 35, pp. 772-783, 2012.
[30] K. Triantafyllopoulos, On the central moments of the multidimensional Gaussian distribution, The Mathematical Scientist, Vol. 28, pp. 125-128, 2003.
[31] L. Trailovic and L.Y. Pao, Variance Estimation and Ranking for Gaussian Mixture Distributions in Target Tracking Applications, Proc. Conf. Decision and Control, Vol. 2, pp. 2195-2201, Dec. 2002.
[32] S. Dasgupta, Learning Gaussian Mixtures, Proc. of IEEE Symp. on Foundations of Computer Science, p. 634, Oct. 17-18, 1999.

[33] A.P. Dempster, N.M. Laird, and D.B. Rubin, Maximum-Likelihood from Incomplete Data Via the EM Algorithm, J. Royal Stat. Soc. Ser. B, Vol. 39, No. 1, pp. 1-38, 1977.
[34] H. Om and T. Hazra, Design of Anomaly Detection System for Outlier Detection in Hardware Profile Using PCA, International Journal on Computer Science and Engineering, Vol. 4, Issue 9, pp. 1623-1632, Sept. 2012.
[35] V. Chandola, A. Banerjee, and V. Kumar, Anomaly detection: A survey, ACM Computing Surveys, Vol. 41, No. 3, 2009.
[36] P.G. Teodoro, J.D. Verdejo, G.M. Fernandez, and E. Vazquez, Anomaly-based network intrusion detection: Techniques, systems and challenges, Computers & Security, Vol. 28, Issues 1-2, pp. 18-28, Feb.-March 2009.

AUTHORS
Hari Om is presently working as Assistant Professor in the Department of Computer Science & Engineering at the Indian School of Mines, Dhanbad, India. He earned his Ph.D. in Computer Science from Jawaharlal Nehru University, New Delhi, India. He has published more than 50 research papers in international and national journals, including various transactions of IEEE, Springer, Elsevier, etc., and in international and national conferences of high repute. His research areas are Video-on-Demand, Cryptography, Network Security, Data Mining, and Image Processing.

Tanmoy Hazra completed his M.Tech in Computer Applications in the Department of Computer Science and Engineering at the Indian School of Mines, Dhanbad, in 2012. He is presently working as Assistant Professor in the Department of Computer Engineering at ISB & M School of Technology, Pune. His research interest is mainly in intrusion detection systems.


ABRASIVE WEAR BEHAVIOUR OF BAMBOO-GLASS FIBER REINFORCED EPOXY COMPOSITES USING TAGUCHI APPROACH
Raghavendra Yadav Eagala1, Allaka Gopichand1, Gujjala Raghavendra2, Sardar Ali S3
1 Mechanical Department, Swarnandra College of Engg., Narsapur, Andhra Pradesh, India
2 Department of Mechanical Engineering, NIT Rourkela, Rourkela, India
3 Mechanical Department, Intellectual Engg. College, Ananatapur, Andhra Pradesh, India

ABSTRACT
Hybrid composites are a newly emerging class of materials. A hybrid composite material contains two or more different types of fiber, in which one type of fiber can balance what is lacking in the other. Hybridization of a natural fiber with stronger, high corrosion resistance synthetic fibers like glass can improve various properties such as strength and stiffness. In this work, a set of epoxy based composites reinforced with both glass and bamboo fiber are fabricated. The goal of the present work is to study the physical and abrasive wear behaviour of the composites. To minimize the time and cost, and mainly for the parametric analysis of the abrasion wear characteristics, Taguchi's experimental design is selected.

KEYWORDS: Glass fiber, Bamboo, Abrasive wear, Taguchi

I. INTRODUCTION

Replacing heavy metal materials with polymer materials reinforced with synthetic fibers such as glass, carbon and aramid has been a great achievement in the field of materials. Present generation researchers are focusing on reducing the utilization of traditional fillers and increasing the utilization of bio-waste natural fiber materials. There is wide interest in developing new natural fibers instead of traditional fibers because of their combustibility, light weight, low density, high specific strength, renewability, non-abrasivity, non-toxicity, low cost and biodegradability. Despite these advantages, the widespread use of natural fiber-reinforced polymer composites has a tendency to decline because of their high initial costs, their use in non-efficient structural forms and, most importantly, their adverse environmental impact. There are still many challenges to overcome in order for them to become widely used as reliable engineering materials for structural elements. However, their use is steadily increasing, and many large industrial corporations are planning to use, or have already commenced using, these materials in their products [1]. Recently, a series of works have been done to replace traditional synthetic fiber composites with natural fiber composites [2-6]. At present, hemp, sisal, jute, cotton, flax and broom are the fibers most commonly used to reinforce polymers like polyolefins [7, 8], polystyrene [9] and epoxy resins. Furthermore, fibers like sisal, jute, coir, oil palm, bamboo, wheat and flax straw, waste silk and banana [4, 5, 10-12] have proved to be good and effective reinforcements in thermoset and thermoplastic matrices. The combination of traditional and natural fibers leads to hybrid composites.

Many researchers have studied the tribological and mechanical properties of hybrid composites; they have also studied the erosion rate for different parameters in erosion tests [13, 14]. Nayak et al. [15] studied the influence of short bamboo/glass fiber on the thermal, dynamic mechanical and rheological properties of polypropylene hybrid composites. Prasad et al. [16] studied the tensile properties of bamboo and glass fiber reinforced epoxy hybrid composites; it was found that the hybrid composites exhibit good tensile properties. To achieve accurate and consistent values of the abrasive wear rate, the parameters which influence the process have to be controlled accordingly. As the number of input parameters is large, statistical methods can be employed for precise identification of significant control parameters for optimization. The Taguchi method has become a widely accepted methodology for reducing the time, cost and number of experiments and improving efficiency [17-19]. In the present investigation, new hybrid composites with different volume fractions of glass and bamboo reinforced in an epoxy polymer matrix were prepared, followed by characterization and study of the abrasive wear behaviour of the composites using the Taguchi approach.

II. MATERIALS AND METHODS

2.1. Specimen preparation


New cylindrical discontinuous hybrid composite pins of 10 mm diameter are prepared using high strength E-glass fiber and natural bamboo fiber with epoxy as the matrix, using a metal mold. The E-glass fiber, bamboo fiber and epoxy possess densities of 2.56 gm/cm3, 0.72 gm/cm3 and 1.1 gm/cm3, respectively. The composites were prepared using a resin to hardener ratio of 10:1. The different composites tested are given in Table 1.

Table 1. Composite sequence and names.

Sl. no    Composite    Name
1         C1           Epoxy
2         C2           Epoxy + bamboo 10% + glass 10%
3         C3           Epoxy + bamboo 20% + glass 20%

2.2. Abrasive wear test


Two-body abrasive wear tests were performed using a single pin-on-disc wear tester. Cylindrical samples fabricated by the hand layup technique were tested under different testing conditions. The test samples were polished to dimensions of 10 mm diameter and 32 mm length. The composite sample was abraded against waterproof silicon carbide (SiC) abrasive papers of 320 grit size at different running speeds of 0.837, 1.256 and 1.6752 m/s in multipass condition. The loads applied in this test were 5, 10 and 15 N. The abrasive wear rate was calculated by equation (1):

W = \frac{w_b - w_a}{\rho\, S_d}    (1)

where W is the wear rate in cm3/m, w_a and w_b are the weights of the sample after and before the abrasion test in gm, \rho is the density of the composite and S_d is the sliding distance in m.
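Equation (1) translates directly into a small helper; the argument names are ours.

```python
def abrasive_wear_rate(w_before, w_after, density, sliding_distance):
    """Wear rate W (cm^3/m) from Eq. (1): mass loss over density x distance.

    w_before, w_after : sample weight before / after the test (g)
    density           : composite density (g/cm^3)
    sliding_distance  : total sliding distance (m)
    """
    return (w_before - w_after) / (density * sliding_distance)
```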

2.3. Taguchi method


Taguchi experimental design is an important tool for robust design. It offers a simple and systematic approach to optimizing the design parameters because it can significantly minimize the overall testing time and the experimental costs. In this robust design, two major tools are used: the signal to noise (S/N) ratio, which measures quality with emphasis on variation, and the orthogonal array. The number of experiments would be very high with all these parameters; in order to reduce the time and cost, a Taguchi L27 orthogonal array is used. The abrasive wear and S/N ratio results are shown in Table 2.
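For a smaller-is-better response such as wear rate, the Taguchi S/N ratio is -10 log10 of the mean squared response. The minimal sketch below reproduces the first S/N entry of Table 2 from its wear rate.

```python
import math

def sn_smaller_is_better(responses):
    """Taguchi S/N ratio (dB) for a smaller-is-better characteristic."""
    mean_sq = sum(y * y for y in responses) / len(responses)
    return -10.0 * math.log10(mean_sq)

print(round(sn_smaller_is_better([0.0000149]), 3))  # 96.536, as in run 1 of Table 2
```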

Table 2. Experimental layout and abrasive test results.

Sl. no   Fiber content (%)   Load (N)   Sliding velocity (m/s)   Wear rate (cm3/m)   S/N ratio
1        0                   5          0.837                    0.0000149           96.536
2        0                   5          1.256                    0.0000160           95.918
3        0                   5          1.675                    0.0000238           92.468
4        10                  10         0.837                    0.0000132           97.589
5        10                  10         1.256                    0.0000164           95.703
6        10                  10         1.675                    0.0000217           93.271
7        20                  15         0.837                    0.0000085           101.412
8        20                  15         1.256                    0.0000107           99.412
9        20                  15         1.675                    0.0000136           97.329
10       10                  15         0.837                    0.0000160           95.918
11       10                  15         1.256                    0.0000184           94.704
12       10                  15         1.675                    0.0000218           93.231
13       20                  5          0.837                    0.0000072           102.853
14       20                  5          1.256                    0.0000082           101.724
15       20                  5          1.675                    0.0000143           96.893
16       0                   10         0.837                    0.0000199           94.023
17       0                   10         1.256                    0.0000216           93.311
18       0                   10         1.675                    0.0000258           91.768
19       20                  10         0.837                    0.0000096           100.355
20       20                  10         1.256                    0.0000110           99.172
21       20                  10         1.675                    0.0000127           97.924
22       0                   15         0.837                    0.0000570           84.883
23       0                   15         1.256                    0.0000645           83.809
24       0                   15         1.675                    0.0000713           82.938
25       10                  5          0.837                    0.0000136           97.329
26       10                  5          1.256                    0.0000121           98.344
27       10                  5          1.675                    0.0000222           93.073

2.4. Density
The density of the composite materials in terms of volume fraction is found from the following equations:

S_{ct} = \frac{W_0}{W_0 + W_a - W_b}    (2)

where S_{ct} represents the specific gravity of the composite, W_0 the weight of the sample, W_a the weight of the bottle + kerosene, and W_b the weight of the bottle + kerosene + sample. Then

Density of composite = S_{ct} x density of kerosene    (3)

The density values are listed in Table 3.

Table 3. Density of different composite samples.

Sl. no    Composite    Density (gm/cm3)
1         C1           1.1
2         C2           1.336
3         C3           1.294
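Equations (2) and (3) amount to the following helper; the kerosene density value is an assumed typical figure, not one reported in the paper.

```python
def composite_density(w0, wa, wb, kerosene_density=0.80):
    """Density from Eqs. (2)-(3) by kerosene displacement.

    w0 : weight of the sample (g)
    wa : weight of the bottle + kerosene (g)
    wb : weight of the bottle + kerosene + sample (g)
    kerosene_density : assumed value in g/cm^3
    """
    s_ct = w0 / (w0 + wa - wb)      # specific gravity, Eq. (2)
    return s_ct * kerosene_density  # Eq. (3)
```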

2.5. Micro-hardness
Micro-hardness measurements of the composite specimens are made using a Leitz micro-hardness tester. A diamond indenter, in the form of a right pyramid with a square base and an angle of 136° between opposite faces, is forced into the specimen under a load F. After removal of the load, the two diagonals of the indentation (X and Y) left on the surface of the specimen are measured and their arithmetic mean L is calculated.

The load considered in the present study is 24.54 N and the Vickers hardness is calculated using the following equation:

H_v = 0.1889 \frac{F}{L^2},  \quad L = \frac{X + Y}{2}    (4)

where F is the applied load (N), L is the mean diagonal of the square impression (mm), X is the horizontal length (mm) and Y is the vertical length (mm). The hardness values are listed in Table 4.
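Equation (4) in code form; as a quick check, with F = 24.54 N a mean diagonal of about 0.51 mm would yield the C1 hardness value of Table 4.

```python
def vickers_hardness(force_newtons, x_mm, y_mm):
    """Vickers micro-hardness from Eq. (4); x, y are the indentation diagonals."""
    L = (x_mm + y_mm) / 2.0                 # arithmetic mean diagonal, in mm
    return 0.1889 * force_newtons / (L * L)
```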
Table 4. Hardness of different composite samples.

Sl. no    Composite    Hardness (Hv)
1         C1           18.094
2         C2           20.85
3         C3           22.67

III. RESULTS AND DISCUSSIONS


The density of the composites increases as the reinforcement increases, because the density of glass is higher than that of epoxy; still, it does not increase to a great extent because the density of bamboo is lower than that of the other two. The density of the 20% reinforced composite is lower than that of the 10% one, which may be caused by void content. The hardness values also increase with increasing fiber content.

Figure 1. Main effects plot of S/N ratios.

Table 5. Response table for S/N ratio - smaller is better.

Factor     Level 1    Level 2    Level 3    Delta    Rank
Fiber      90.63      95.46      99.67      9.05     1
Load       97.24      95.90      92.63      4.61     2
Velocity   96.77      95.79      93.21      3.56     3

The calculated S/N ratios of the three factors on the abrasive wear rate of the hybrid composite at each level are shown in Figure 1. As shown in Table 5 and Figure 1, fiber content is the dominant parameter for the abrasive wear rate. Based on the above discussion, it is also evident that the optimum conditions for abrasive wear resistance are (a) 20% fiber content, (b) 5 N load and (c) 0.837 m/s sliding velocity.
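Table 5 can be regenerated from the L27 layout of Table 2 with a few lines of pandas; the DataFrame column names ('fiber', 'load', 'velocity', 'sn') are assumptions for illustration.

```python
import pandas as pd

def response_table(df, factors=("fiber", "load", "velocity")):
    """Mean S/N per factor level; Delta = max - min ranks the factors."""
    means = {f: df.groupby(f)["sn"].mean() for f in factors}
    deltas = {f: means[f].max() - means[f].min() for f in factors}
    ranking = sorted(deltas, key=deltas.get, reverse=True)  # largest Delta first
    return means, deltas, ranking
```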


Figure 2. 3D surface graph of abrasive wear rate vs. fiber content and velocity.

Figure 2 shows the 3D graph drawn for wear rate vs. fiber content and sliding velocity. From the figure it is clear that the wear rate is higher for the polymer composite without any reinforcement, and it is also observed that as the reinforcement of glass and bamboo is increased, the wear rate decreases to a great extent. It is also noticed that as the velocity increases the wear rate increases, but this is most pronounced in the neat epoxy composite.

Figure 3. 3D surface graph of abrasive wear rate vs. fiber content and load.

IV. CONCLUSIONS
Experiments were carried out to study the abrasive wear behaviour of different bamboo and glass fiber reinforced hybrid composites against silicon carbide abrasive paper. Based on the studies, the following conclusions are made.
1. New hybrid cylindrical composites were successfully prepared.
2. The composite with 20% reinforcement gives the best wear resistance when compared to the other composites.
3. The density is highest for the composite prepared with 10% reinforcement, while the hardness is highest for the composite prepared with 20% reinforcement.
4. The wear increases with increasing load, and the maximum wear occurs at 15 N.
5. From the S/N ratio Taguchi analysis, the fiber content is the dominant factor for wear resistance.


REFERENCES
[1]. Schuh T & Gayer U. (1997) Automotive applications of natural fiber composites. Benefits for the environment and competitiveness with man-made materials. In: Leao AL, Carvalho FX, Frollini E, editors. Lignocellulosic-plastics composites. Botucatu, Brazil: Unesp Publishers. pp 181-195.
[2]. A.K. Bledzki & J. Gassan, (1999) Composites reinforced with cellulose based fibre, Prog. Polym. Sci 24, pp 221-274.
[3]. A.K. Mohanty, M. Misra & L.T. Drzal, (2002) Sustainable bio-composites from renewable resources: opportunities and challenges in the green materials world, J. Polym. Environ 10, pp 19-26.
[4]. S. Joseph, M.S. Sreekala, Z. Oommen, P. Koshy & S. Thomas, (2002) A comparison of the mechanical properties of phenol formaldehyde composites reinforced with banana fibres and glass fibres, Compos. Sci. Technol 62, pp 1857-1868.
[5]. P.J. Roe & M.P. Ansel, (1985) Jute reinforced polyester composites, J. Mater. Sci 20, pp 4015.
[6]. X. Lu, M. Qiu Zhang, M. Zhi Rong, G. Shi & G. Cheng Yang, (2003) Self reinforced melt processable composites of sisal, Compos. Sci. Technol 63, pp 177-186.
[7]. A. Valadez-Gonzales, J.M. Cetvantes-Uc, R. Olayo & P.J. Herrera Franco, (1999) Effect of fibre surface treatment on the fibre-matrix bond strength of natural fibre reinforced composites, Composites, Part B 30 (3), pp 309-320.
[8]. A.K. Rana, B.C. Mitra & A.N. Banerjee, (1999) Short jute fibre-reinforced polypropylene composites: dynamic mechanical study, J. Appl. Polym. Sci 71, pp 531-539.
[9]. K.C. Manikandan Nair, S.M. Diwan & S. Thomas, (1996) Tensile properties of short sisal fibre reinforced polystyrene composites, J. Appl. Polym. Sci 60, pp 1483-1497.
[10]. M. Jacob, S. Thomas & K.T. Varughese, (2004) Mechanical properties of sisal/oil palm hybrid fiber reinforced natural rubber composites, Compos. Sci. Technol 64, pp 955-965.
[11]. L.A. Pothan, Z. Oommen & S. Thomas, (2003) Dynamic mechanical analysis of banana fiber reinforced polyester composites, Compos. Sci. Technol 63(2), pp 283-293.
[12]. B.F. Yousif & N.S.M. EL-Tayeb, (2006) Mechanical and tribological characteristics of OPRP and CGRP composites, in: The Proceedings ICOMAST, GKH Press, Melaka, Malaysia, pp 384-387, ISBN 983-42051-1-2.
[13]. A.K. Sabeel & S. Vijayarangan, (2008) Tensile, flexural and inter laminar shear properties of woven jute and jute-glass fabric reinforced polyester composites, Journal of Materials Processing Technology 207(1-3), pp 330-335.
[14]. C. Santulli & A.P. Caruso, (2008) A Comparative Study on Falling Weight Impact Properties of Jute/Epoxy and Hemp/Epoxy Laminates, Malaysian Polymer Journal, 4(1), pp 19-29.
[15]. S.K. Nayak, S. Mohanty & S.K. Samal, (2009) Influence of short bamboo/glass fiber on the thermal, dynamic mechanical and rheological properties of polypropylene hybrid composites, Materials Science and Engineering A 523, pp 32-38.
[16]. V.V. Prasad & M.L. Kumar, (2011) Chemical resistance and tensile properties of bamboo and glass fibers reinforced epoxy hybrid composites, International Journal of Materials and Biomaterials Applications 1(1), pp 17-20.
[17]. B.K. Prasad, S. Das, A.K. Jha, O.P. Modi, R. Dasgupta & A.H. Yegneswaran, (1997) Factors Controlling the Abrasive Wear Response of a Zinc-based Alloy Silicon Carbide Particle Composite, Composites, Part A 28A, pp 301-308.
[18]. M.S. Chua, M. Rahman, Y.S. Wong & H.T. Loh, (1993) Determination of Optimal Cutting Conditions Using Design of Experiments and Optimization Techniques, Int. J. Mach Tools Manuf, 32(2), pp 297-305.
[19]. G. Taguchi, (1990) Introduction to Quality Engineering, Tokyo: Asian Productivity Organization.

AUTHORS
Raghavendra Yadav Eagala is an M-Tech student at Swarnandra College of Engineering, Narsapur.

Allaka Gopichand is an Associate Professor at Swarnandra College of Engineering, Narsapur.

Gujjala Raghavendra is a Ph.D. research scholar in the Department of Mechanical Engineering, National Institute of Technology Rourkela, India. He has more than one year of experience in teaching and research. His current areas of research include tribology, composite materials and nanotechnology. He has also presented more than 15 research articles at national and international conferences.

Sardar Ali S is an Assistant Professor at Intellectual Engineering College.


A MULTIPLE KERNEL FUZZY C-MEANS CLUSTERING ALGORITHM FOR BRAIN MR IMAGE SEGMENTATION
M. Ganesh1 and V. Palanisamy2
1 Assistant Professor, Department of ECE, Info Institute of Engineering, Coimbatore, India
2 Principal, Info Institute of Engineering, Coimbatore, India

ABSTRACT
In spite of its computational efficiency and widespread popularity, the FCM algorithm does not take the spatial information of pixels into consideration. In this paper, a multiple kernel fuzzy c-means clustering (MKFCM) algorithm is presented for fuzzy segmentation of magnetic resonance (MR) images. By introducing a novel adaptive method to compute the weights of local spatial values in the objective function, the new multiple kernel fuzzy clustering algorithm is capable of utilizing local contextual information to impose local spatial continuity, thus improving the classification accuracy and reducing the number of iterations. To estimate the intensity inhomogeneity, the global intensity is introduced into the coherent local intensity clustering algorithm. Our results show that the proposed MKFCM algorithm can effectively segment the test images and MR images. Comparisons with other FCM approaches based on the number of iterations and time complexity demonstrate the superior performance of the proposed algorithm.

KEYWORDS: Fuzzy C-means (FCM), Image segmentation, Kernel function, Multiple kernel fuzzy c-means clustering (MKFCM), Magnetic resonance (MR) imaging.

I. INTRODUCTION

Magnetic resonance (MR) imaging has several advantages over other medical imaging methods, including high contrast among different soft tissues, relatively high spatial resolution across the entire field of view and multi-spectral characteristics. Therefore, it has been widely used in quantitative brain imaging studies. Quantitative volumetric measurement and three-dimensional (3D) visualization of brain tissues are helpful for pathological evolution analyses, where image segmentation plays an important role. Size alterations in brain tissues often accompany various diseases, such as schizophrenia [1]. Thus, estimation of tissue sizes has become an extremely important aspect of treatment, which should be accomplished as precisely as possible. This creates the need to properly segment brain MR images into gray matter (GM), white matter (WM) and cerebrospinal fluid (CSF), and also to identify tumors or lesions, if present [2]. The main difficulties in brain segmentation are intensity inhomogeneities and noise. In fact, intensity inhomogeneity occurs in many real-world images from different modalities [3, 4]. In particular, it is often seen in medical images, such as X-ray radiography/tomography and MR images; for example, intensity variation across the image can arise from radio-frequency (RF) coils or acquisition sequences, so the resultant intensities of the same tissue vary with location in the image. The noise in MR images is Rician distributed and can significantly affect the performance of classification methods. The best solutions
consist of either filtering the image prior to classification or embedding spatial regularization inside the classifier itself. This paper is organized into seven sections. Section I gives an introduction to the proposed work, Section II reviews related work, Section III deals with the conventional FCM clustering algorithm, Section IV deals with the kernel fuzzy c-means algorithm, Section V describes the proposed method of multiple kernel fuzzy c-means clustering, Section VI presents the quantitative comparison of the accuracy of the segmentation results with other segmentation algorithms, and finally the conclusion is presented in Section VII.

II. RELATED WORK


Image segmentation is an important and challenging problem and a necessary first step in image analysis as well as in high-level image interpretation and understanding such as robot vision, object recognition, and medical imaging. The goal of image segmentation is to partition an image into a set of disjoint regions with uniform and homogeneous attributes such as intensity, colour, tone or texture. Many different segmentation techniques have been developed, and detailed surveys can be found in references [21-23]. Segmentation subdivides an image into different regions or objects based on the information found about objects in imaging data. In the segmentation of medical images, the objective is to identify different regions, organs and anatomical structures from data acquired via MRI or other medical imaging techniques. Initially, segmentation was done manually by human experts. But manual segmentation is a difficult and time consuming task, which makes automated methods, such as automated breast cancer segmentation [15], desirable. The automated segmentation [16] of MR images into anatomical tissues, fluids, and structures is an interesting field in medical image analysis. Automated segmentation methods based on artificial intelligence techniques were proposed in [17], [18], and a method that detects deviations from normal brains using a multilayer Markov random field framework was presented in [19].
In the last decades, fuzzy segmentation algorithms, especially the fuzzy c-means algorithm (FCM), have been broadly used in image segmentation [9], and such success is mostly attributed to the introduction of fuzziness for the belongingness of each image pixel. Fuzzy c-means [14] enables clustering methods to retain more information from the original image than crisp or hard segmentation methods [20]. Clustering is used to partition a set of given observed input data vectors or image pixels into clusters so that members of the same cluster are more similar to one another than to members of other clusters, where the number of clusters is usually predefined or set by some weight criterion or a priori knowledge. Fuzzy c-means segmentation methods have significant benefits in the segmentation of medical images [10], because they retain more information from the original image than hard c-means segmentation methods. The main advantage of the fuzzy c-means algorithm is that it allows pixels to belong to multiple clusters with reasonable degrees of membership. However, there are some disadvantages in using fuzzy c-means: when the membership of an object is not strong enough or significantly high for a particular cluster, the equation for calculating membership is not effective, and sometimes the equation for updating prototypes is incapable of working with data greatly affected by noise, so the clustering result may be incorrect. The main reason underlying these drawbacks is that fuzzy c-means relies on the Euclidean distance measure. Computer aided brain tumor segmentation is an important application in medical image analysis [27]. Developing a medical image analysis system can not only lighten the workload and decrease the errors of doctors, but also provide a quantitative measure of the variation of a brain tumor throughout its whole therapeutic treatment.
However, it is still a difficult problem to automatically segment brain tumor regions from MRI multi-sequences because of the many existing types of tumors with morphological variability, the variety of shapes and appearance properties among individuals, the deformation near structures in the brain which results in an abnormal geometry even for healthy tissues, and the lack of prior knowledge about them [26]. Therefore, it is practically meaningful to focus on semi-automatic or fully-automatic segmentation methods on multiple MRI scans for medical research, disease monitoring, therapeutic control and so on. Different MRI sequences from different excitations can respectively provide different and partly independent information about different tissues, and reflect pathologic information about the tumors in the brain.

As a tumor consists of different biologic tissues, one type of MRI cannot give complete information about abnormal tissues. Combining different complementary information can enhance the segmentation of the tumors. Therefore, radiology experts always combine the multi-spectral MRI information of one patient to make a decision on the location, extension, prognosis and treatment.
Clustering is a process of classifying objects or patterns in such a way that the samples in the same group are more similar than the samples in different groups. Based on fuzzy theory, the fuzzy clustering method [5] introduces the idea of partial membership of belonging. As a soft clustering method, fuzzy clustering has been extensively studied and successfully applied to image segmentation. One of the most important and widely used fuzzy clustering methods is the fuzzy c-means (FCM) algorithm [6], later promoted as the general FCM clustering algorithm [7]. The main purpose of the FCM algorithm is to divide the vector space of sample points into a number of sub-spaces in accordance with a distance measure [8]. However, the FCM algorithm does not take the local spatial property of images into consideration, and hence suffers from high sensitivity to noise. To improve its robustness, many modifications to the FCM algorithm that incorporate spatial information into clustering have been proposed; for example, FCM objective functions with a penalty term have resulted in spatially smoothed membership functions.

III. CONVENTIONAL FCM CLUSTERING ALGORITHM

Multiresolution segmentation is a bottom-up region merging technique starting with one-pixel objects. In numerous subsequent steps, smaller image objects are merged into bigger ones. Throughout this pairwise clustering process, the underlying optimization procedure minimizes the weighted heterogeneity of the resulting image objects, where n is the size of a segment and h an arbitrary definition of heterogeneity [3]. The algorithm is developed by modifying the objective function of the standard FCM algorithm with a penalty term that takes into account the influence of the neighboring pixels on the centre pixels [25]. In each step, the pair of adjacent image objects that represents the smallest growth of the defined heterogeneity is merged. If the smallest growth exceeds the threshold defined by the scale parameter, the process stops; in this sense, multiresolution segmentation is a local optimization procedure. The entropy based methodology for segmentation of satellite images is performed as follows: images are divided into square windows with a fixed size L, the entropy is calculated for each window, and then a classification methodology is applied for the identification of the category of the respective windows. The classification approach can be supervised or non-supervised; supervised classification needs a training set composed of windows whose classes are previously known (prototypes), such as rural and urban areas. Given a data set X = {x_1, x_2, ..., x_n}, where each data point x_j \in R^p (j = 1, ..., n), n is the number of data points, and p is the input dimension of a data point, traditional FCM [3] groups X into c clusters by minimizing the weighted sum of distances between the data and the cluster centers or prototypes, defined as
Q = \sum_{i=1}^{c} \sum_{j=1}^{n} u_{ij}^{m} \left\| x_j - o_i \right\|^2    (1)

Here, \|\cdot\| is the Euclidean distance and u_{ij} is the membership of data point x_j belonging to cluster i, which is represented by the prototype o_i. The constraint on u_{ij} is \sum_{i=1}^{c} u_{ij} = 1, and m is the fuzzification coefficient.
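For concreteness, a minimal NumPy sketch of the alternating prototype and membership updates that minimize (1) is given below; the function name, random initialization and convergence tolerance are illustrative assumptions rather than part of the original formulation.

import numpy as np

def fcm(X, c, m=2.0, n_iter=100, tol=1e-5, seed=0):
    """Minimal fuzzy c-means: X is (n, p) data, c clusters, fuzzifier m > 1."""
    rng = np.random.default_rng(seed)
    U = rng.random((c, X.shape[0]))
    U /= U.sum(axis=0)                                   # enforce sum_i u_ij = 1
    for _ in range(n_iter):
        Um = U ** m
        O = (Um @ X) / Um.sum(axis=1, keepdims=True)     # prototype update o_i
        d2 = ((X[None, :, :] - O[:, None, :]) ** 2).sum(axis=2)  # ||x_j - o_i||^2
        d2 = np.fmax(d2, 1e-12)
        U_new = d2 ** (-1.0 / (m - 1.0))
        U_new /= U_new.sum(axis=0)                       # membership update u_ij
        if np.abs(U_new - U).max() < tol:
            return U_new, O
        U = U_new
    return U, O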

IV. KERNEL FUZZY C-MEANS ALGORITHM

When applying the KFCM framework to image-segmentation problems, the multiresolution segmentation may end up as a local optimization procedure. Global mutual fitting is the strongest constraint for the optimization problem, and it reduces heterogeneity most over the scene following a purely quantitative criterion. Its main disadvantage is that it does not use the treatment order and builds the first segments in regions with a low spectral variance, leading to an uneven growth of the image objects over a scene. It also causes an imbalance between regions of high and regions of low spectral
variance. Comparison of global mutual fitting to local mutual fitting shows negligible quantitative differences; the former always performs the most homogeneous merge in the local vicinity following the gradient of the degree of fitting. The growth of image objects happens simultaneously in regions of low spectral variance as well as in regions of high spectral variance. KFCM constrains the prototypes in the kernel space to be actually mapped from the original data space or the feature space. That is, the objective function is defined as
Q = \sum_{i=1}^{c} \sum_{j=1}^{n} u_{ij}^{m} \left\| \phi(x_j) - \phi(o_i) \right\|^2    (2)

The objective function is then reformulated as


Q = \sum_{i=1}^{c} \sum_{j=1}^{n} u_{ij}^{m} \left( 1 - k(x_j, o_i) \right)    (3)

Here, (1 - k(x_j, o_i)) can be considered as a robust distance measurement derived in the kernel space.
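For the Gaussian kernel with k(x, x) = 1, the kernel-induced squared distance satisfies \|\phi(x_j) - \phi(o_i)\|^2 = k(x_j, x_j) + k(o_i, o_i) - 2k(x_j, o_i) = 2(1 - k(x_j, o_i)), which is why (2) reduces to (3) up to a constant factor. A minimal sketch of this robust distance follows; the bandwidth r is an assumed placeholder.

import numpy as np

def gaussian_kernel(x, o, r=1.0):
    """k(x, o) = exp(-||x - o||^2 / r^2), so k(x, x) = 1."""
    return np.exp(-np.sum((np.asarray(x) - np.asarray(o)) ** 2) / r ** 2)

def kernel_distance(x, o, r=1.0):
    """Robust distance 1 - k(x, o) appearing in the KFCM objective (3)."""
    return 1.0 - gaussian_kernel(x, o, r)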

V. PROPOSED ALGORITHM

The fuzzy c-means clustering method is largely limited to spherical clusters. The kernel fuzzy c-means algorithm attempts to solve this problem by mapping data with nonlinear relationships to appropriate feature spaces. Kernel combination, or selection, is crucial for effective kernel clustering, but for most applications it is not easy to find the right combination. This paper presents a multiple kernel fuzzy c-means (MKFCM) algorithm which extends the fuzzy c-means algorithm with a multiple kernel learning setting. By using multiple kernels and automatically adjusting the kernel weights, MKFCM is more immune to ineffective kernels and irrelevant features, which makes the choice of kernels less crucial. Experiments on both synthetic and real-world data demonstrate the effectiveness of the proposed MKFCM algorithm [24]. The application of multiple or composite kernels in the KFCM has its advantages. In addition to the flexibility in selecting kernel functions, it also offers a new approach to combining different information from multiple heterogeneous or homogeneous sources in the kernel space. Specifically, in image-segmentation problems, the input data involve properties of image pixels sometimes derived from very different sources. Therefore, we can define different kernel functions purposely for the intensity information and the texture information separately, and then combine these kernel functions and apply the composite kernel in MKFCM to obtain better image-segmentation results. More visible examples can be found in multitemporal remote sensing images, where the pixel information inherits from different temporal sensors; as a result, we can define different kernels for different temporal channels and apply the combined kernel in a multiple-kernel learning algorithm. The general framework of MKFCM aims to minimize the objective function
Q = \sum_{i=1}^{c} \sum_{j=1}^{n} u_{ij}^{m} \left\| \phi_{com}(x_j) - \phi_{com}(o_i) \right\|^2    (4)

The Gaussian-kernel-based KFCM-F is enhanced by adding a local information term to the objective function:
Q = \sum_{i=1}^{c} \sum_{j=1}^{n} u_{ij}^{m} \left( 1 - k(x_j, o_i) \right) + \alpha \sum_{i=1}^{c} \sum_{j=1}^{n} u_{ij}^{m} \left( 1 - k(\bar{x}_j, o_i) \right)    (5)

where x_j is the intensity of pixel j and \bar{x}_j its locally filtered value. In the new objective function, the additional term is the weighted sum of differences between the filtered intensities \bar{x}_j (the local spatial information) and the clustering prototypes, the differences again being measured using the kernel-induced distances. This kind of enhanced KFCM-based algorithm is denoted as AKFCM (with A standing for the additional term).
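A sketch of how objective (5) might be evaluated, assuming a 3 x 3 mean filter for the locally filtered intensities and a Gaussian kernel; the filter choice, the weight alpha and the function name are assumptions for illustration only.

import numpy as np
from scipy.ndimage import uniform_filter

def akfcm_objective(image, U, prototypes, m=2.0, alpha=0.5, r=1.0):
    """Evaluate (5): kernel distances on raw and mean-filtered pixel intensities.
    U is the (c, n) membership matrix; prototypes is a length-c vector."""
    x = image.ravel().astype(float)                              # intensities x_j
    x_bar = uniform_filter(image.astype(float), size=3).ravel()  # filtered x_j
    k = lambda a, o: np.exp(-(a[None, :] - np.asarray(o)[:, None]) ** 2 / r ** 2)
    Um = U ** m
    return (np.sum(Um * (1.0 - k(x, prototypes)))
            + alpha * np.sum(Um * (1.0 - k(x_bar, prototypes))))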

It is worth pointing out that k1 or k2 in the first variant of MKFCM-K-based image segmentation can be changed to any other Mercer kernel function for the information related to image pixels. This empowers the flexibility of the segmentation algorithm in kernel function selections and combinations. For example, a composite kernel that joins differently shaped kernels can be defined as

k_{com} = k_1 + k_2    (6)

where k_1 is still the Gaussian kernel for pixel intensities,

k_1(x_i, x_j) = \exp\left( -\| x_i - x_j \|^2 / r^2 \right)    (7)

and k_2 is a polynomial kernel for the spatial information,

k_2(x_i, x_j) = \left( x_i^{T} x_j + d \right)^2    (8)

If k_{com} = k_1 + k_2 is the composite kernel, the minimized objective function of the MKFCM is derived as

Q = \sum_{i=1}^{c} \sum_{j=1}^{n} u_{ij}^{m} \left\| \phi_{com}(x_j) - o_i \right\|^2    (9)

For example, if the input image data x_j is set to be x_j = [x_j, \bar{x}_j, s_j] \in R^3, the same as in the third variant of MKFCM, then the composite kernel is designed as

k_L = w_1^b k_1 + w_2^b k_2 + w_3^b k_3    (10)

The MKFCM algorithm evaluates the centroids so as to minimize the influence of outliers. Unlike FCM, it does not attempt fuzzification for elements having membership values above the calculated threshold. This reduces the computational burden compared to FCM; there is also no external user-defined parameter, and the removal of this initial trial-and-error factor makes MKFCM more robust, as well as insensitive to fluctuations in the incoming data. The elevation and reduction of the membership values to 1 and 0, respectively, results in contrast enhancement in the observability of the incoming data. This helps in focusing on the ambiguous boundary region, thereby gaining in terms of the quality of segmentation. To further improve the performance of segmentation, an MKFCM that linearly combines three kernels is used, where the first two kernels are the kernels for intensities and the local spatial information. To sum up, the merit of MKFCM-based image-segmentation algorithms is the flexibility in the selection and combination of kernel functions in different shapes and for different pieces of information. After combining the different kernels in the kernel space, there is no need to change the computation procedures of MKFCM. This is another advantage for reflecting and fusing image information from multiple heterogeneous or homogeneous sources, and it is why MKFCM-based image-segmentation algorithms are inherently better than other KFCM-based image segmentation methods. MKFCM's significant flexibility in kernel selections and combinations has great potential for image segmentation problems. In the MKFCM framework, we can easily fuse the texture information into segmentation algorithms by just adding a kernel designed for the texture information in the composite kernel. As in the satellite image-segmentation and two-texture image-segmentation problems, simply adding a Gaussian kernel function of the texture descriptor in the composite kernel of MKFCM leads to better segmentation results. A sketch of such a composite kernel is given below.
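A minimal sketch of a composite kernel in the form of equation (10), evaluated on pixel feature vectors [x_j, filtered x_j, s_j]; the choice of base kernels, the weights w and the exponent b are illustrative assumptions.

import numpy as np

def composite_kernel(f1, f2, w=(0.5, 0.3, 0.2), b=2, r=1.0, d=1.0):
    """k_L = w1^b k1 + w2^b k2 + w3^b k3 on features [intensity, filtered, texture].
    k1, k2 are Gaussian kernels and k3 a polynomial kernel (assumed choices)."""
    k1 = np.exp(-(f1[0] - f2[0]) ** 2 / r ** 2)   # raw intensity information
    k2 = np.exp(-(f1[1] - f2[1]) ** 2 / r ** 2)   # local spatial information
    k3 = (f1[2] * f2[2] + d) ** 2                 # texture descriptor
    return w[0] ** b * k1 + w[1] ** b * k2 + w[2] ** b * k3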

VI. EXPERIMENTAL RESULTS


The multiple kernel FCM based segmentation is applied to synthetic images and MR images. We test and compare the proposed method (MKFCM) with some other reported algorithms on several synthetic images and synthetic brain MR images from two aspects. Since the performance of FCM-type algorithms depends on the initialization, in this paper the initialization and iterations depend upon the input images, and the run with the best objective function value is chosen. This increases the reliability of
comparison results acquired in the simulations. The main goals of an image segmentation algorithm are optimization of segmentation accuracy and of its efficiency. Considering accuracy, the proposed method concentrates on obtaining a robust segmentation for noisy images and a correct detection of small regions. Generally, incorporating spatial information into the segmentation process will dramatically increase an algorithm's computational complexity. To compare the computational complexity of the FCM, KFCM, and MKFCM segmentation algorithms, each was applied to the 512 × 512 Lena image; each segmentation was performed 20 times, and the computational complexity of each algorithm was measured in terms of the average iteration number and average running time. The test images Lena and cameraman are segmented and the results are shown in Fig. 1.

Fig. 1: (a) Original Lena image (b) Original cameraman image (c) Segmented Lena image (d) Segmented cameraman image

In this paper, the weighting parameter is a constant which controls the influence of the global intensity force and the local intensity force [4]. When the intensity inhomogeneity is severe, the bias estimation relies on the local intensity force; in such a case, we should choose a small value as the weight of the global intensity force, otherwise the bias field estimation may perform poorly. For images with minor inhomogeneity, the accuracy of segmentation relies on the global intensity force; in this case, we can use a relatively larger value as the weight of the global intensity. Thus, the global intensity reduces the misclassification of the pixels around the edges. The multiple kernel FCM has been applied to segment the images shown in Fig. 2.

Fig. 2: (a) Original MR images (b) Segmented images

Table 1: No. of iterations and time complexity of the proposed algorithm

Image              Cluster Centre Value 1   Cluster Centre Value 2   No. of Iterations   Time Consumption (sec)
Lena Image         74.21                    159.04                   15                  32
Camera Man Image   21.23                    201.22                   10                  21
MR Image           51.24                    223.14                   9                   19

The quantitative comparison of those segmentation results is given in Table 1. It reveals that our MKFCM algorithm achieves not only the highest accuracy in all three cases, but also the best robustness to noise. This experiment demonstrates again that the proposed algorithm has a better ability to resist the influence of noise.
Table 2: Comparison between the proposed algorithm and other FCM algorithms

                    FCM                          KFCM                         MKFCM
Image               Iterations   Time (sec)      Iterations   Time (sec)      Iterations   Time (sec)
Lena Image          40           45              35           45              15           32
Camera Man Image    34           41              26           33              10           21
MR Image            22           35              15           29              9            19

The size of image patches is an important parameter in our MKFCM algorithm. It determines how much spatial information will be used, and hence represents a trade-off between the image information and the spatial smoothness constraint. Table 2 shows the segmentation performance of the MKFCM algorithm with image patches of different sizes. Comparisons between the proposed algorithm and other FCM algorithms, based on the number of iterations and on the time required for segmentation, are shown in Fig. 3 and Fig. 4, respectively. The results reveal that the accuracy of the algorithm decreases with increasing noise level for all sizes of image patches.

Fig. 3 Comparison based on number of iterations


Fig. 4 Comparison based on time required for segmentation

Generally, incorporating spatial information into the segmentation process will dramatically increase an algorithm's computational complexity. To compare the computational complexity of the FCM, KFCM and MKFCM algorithms, each segmentation was performed 20 times, and the computational complexity of each algorithm was measured in terms of the average iteration number and average running time.

VII. CONCLUSION
A modified adaptive fuzzy c-means clustering algorithm is presented for fuzzy segmentation of MR images that have been corrupted by intensity inhomogeneities and noise. We propose an adaptive method to compute the weights for the neighborhood of each pixel in the image. The proposed adaptive method can not only overcome the effect of noise effectively, but also prevent edges from blurring. To address intensity inhomogeneity, the proposed algorithm introduces the global intensity and takes the local and global intensity information into account to ensure the smoothness of the derived optimal bias field and improve the accuracy of the segmentations. The proposed model can segment a brain MR image in 9-10 iterations within 20 seconds. With good initialization, the model may need fewer iterations and can obtain results in less time. A variety of images, including synthetic images, synthetic brain MR images and real brain MR images, are used to compare the performance of the proposed algorithm.

REFERENCES
[1]. Ho BC., "MRI brain volume abnormalities in young, nonpsychotic relatives of schizophrenia probands are associated with subsequent prodromal symptoms", Schizophrenia Research, 2007; 96(1):1-13.
[2]. Sikka K, Sinha N, Singh PK, Mishra AK., "A fully automated algorithm under modified FCM framework for improved brain MR image segmentation", Magnetic Resonance Imaging, 2009; 27(7):994-1004.
[3]. Awate S, Tasdizen T, Foster N, Whitaker R., "Adaptive Markov modeling for mutual-information-based, unsupervised MRI brain-tissue classification", Medical Image Analysis, 2007; 10(5):726-739.
[4]. Wong W, Chung A., "Bayesian image segmentation using local iso-intensity structural orientation", IEEE Transactions on Image Processing, 2005; 14(10):1512-1523.
[5]. L.A. Zadeh, "Fuzzy sets", Information and Control 8 (1965) 338-353.
[6]. J.C. Dunn, "A fuzzy relative of the ISODATA process and its use in detecting compact well separated clusters", Journal of Cybernetics 3 (1974) 32-57.
[7]. J.C. Bezdek, Pattern Recognition with Fuzzy Objective Function Algorithms, Kluwer Academic Publishers, Norwell, MA, USA, 1981.
[8]. J.C. Bezdek, L.O. Hall, L.P. Clarke, "Review of MR image segmentation techniques using pattern recognition", Medical Physics 20(4) (1993) 1033-1048.
[9]. Xing Y, Ou Y, Englander S, Schnall M, Shen D. (2007), "Simultaneous estimation and segmentation of T1 map for breast parenchyma measurement", in Fourth IEEE International Symposium on Biomedical Imaging, pp. 332-335.
[10]. D.Q. Zhang and S.C. Chen, "A novel kernelized fuzzy C-means algorithm with application in medical image segmentation", Artif. Intell. Med., vol. 32, no. 1, pp. 37-50, Sep. 2004.
[11]. Krishnapuram R, Keller JM, "The possibilistic c-means algorithm: insights and recommendations", IEEE Transactions on Fuzzy Systems, 1996; 4(3):385-393.
[12]. Dunn JC, "A fuzzy relative of the ISODATA process and its use in detecting compact well separated clusters", Journal of Cybernetics, 1974; 3(3):32-57.
[13]. Bezdek JC, Pattern Recognition with Fuzzy Objective Function Algorithms, Norwell, MA, USA: Kluwer Academic Publishers; 1981.
[14]. Bezdek JC, Hall LO, Clarke LP, "Review of MR image segmentation techniques using pattern recognition", Medical Physics, 1993; 20(4):1033-1048.
[15]. Chen W, Giger ML, Bick U. (2006), "A fuzzy c-means (FCM)-based approach for computerized segmentation of breast lesions in dynamic contrast-enhanced MR images", Academic Radiology, 13(1), 63-72.
[16]. Ketsetzis G, Brady M. (2004), "Automatic segmentation of T1 parametric maps of breast MR images via a hidden Markov random field", in Proceedings of Medical Image Understanding and Analysis.
[17]. Clark M.C., L.O. Hall, D.B. Goldgof, R. Velthuizen, F.R. Murtagh, M.S. Silbiger, "Automatic tumor segmentation using knowledge-based techniques", IEEE Transactions on Medical Imaging, vol. 17, pp. 187-201, 1998.
[18]. Fletcher-Heath L.M., L.O. Hall, D.B. Goldgof, F.R. Murtagh, "Automatic segmentation of non-enhancing brain tumors in magnetic resonance images", Artificial Intelligence in Medicine, vol. 21, pp. 43-63, 2001.
[19]. Gering D.T., Grimson W.E.L., Kikinis R., "Recognizing deviations from normalcy for brain tumor segmentation", Medical Image Computing and Computer-Assisted Intervention (MICCAI), Springer, vol. 2488, pp. 388-395, 2002.
[20]. Pham D.L., C.Y. Xu, J.L. Prince, "A survey of current methods in medical image segmentation", Annual Review of Biomedical Engineering, vol. 2, pp. 315-337, 2000.
[21]. Fu K.S., Mui J.K., "A Survey on Image Segmentation", Pattern Recognition, Vol. 13, 1981, pp. 3-16.
[22]. Haralick R.M., Shapiro L.G., "Image Segmentation Techniques", Computer Vision, Graphics, and Image Processing, Vol. 29, 1985, pp. 100-132.
[23]. Pal N., Pal S., "A Review on Image Segmentation Techniques", Pattern Recognition, Vol. 26, 1993, pp. 1277-1294.
[24]. Hsin-Chien Huang, Yung-Yu Chuang, Chu-Song Chen, "Multiple Kernel Fuzzy Clustering", IEEE Transactions on Fuzzy Systems, 2011.
[25]. Yong Yang, Shuying Huang, "Image Segmentation by Fuzzy C-means Clustering Algorithm with a Novel Penalty Term", Computing and Informatics, Vol. 26, 2007, pp. 17-31.
[26]. Amini L, Soltanian-Zadeh H, Lucas C., "Automated Segmentation of Brain Structure from MRI", Proc. Intl. Soc. Mag. Reson. Med. 11, 2003.
[27]. K. Selvanayaki, M. Karnan, "CAD System for Automatic Detection of Brain Tumor through Magnetic Resonance Image - A Review", International Journal of Engineering Science and Technology, Vol. 2(10), 2010, pp. 5890-5901.

AUTHORS BIOGRAPHIES
Mani Ganesh obtained his Bachelor's degree in Electronics and Communication Engineering from Arunai Engineering College, Thiruvannamalai. He then obtained his Master's degree in Applied Electronics from Sathyabama University, Chennai, and is pursuing his Ph.D. degree in Digital Image Processing from Anna University, Coimbatore. Currently, he is an Assistant Professor at the Department of Electronics and Communication Engineering, INFO Institute of Engineering, Coimbatore. His specializations include image segmentation and enhancement for satellite images.

Veeraappa Gounder Palanisamy was born in a village near Namakkal in July 1949 and had his schooling at Namakkal. He completed his B.E. in Electronics & Communication Engineering in the year 1972 at P.S.G. College of Technology. He completed his M.Sc. (Engg) in the field of Communication Systems at College of Engineering, Guindy (presently Anna University, Chennai) in the year 1974. He was sponsored by the Government of Tamilnadu to do his Ph.D. in Communication (Antenna theory) at the Indian Institute of Technology, Kharagpur, West Bengal, in the year 1981 and successfully completed the same. He retired as Principal, Government College of Technology, Coimbatore, and is presently working as Principal at Info Institute of Engineering, Coimbatore. He is a member of a number of academic boards and AICTE & University inspection committees.


FEATURE EXTRACTION USING HISTOGRAM OF RADON TRANSFORM FOR PALMPRINT MATCHING


Jitendra Chaudhari1, Pradeep M. Patil2, Y. P. Kosta3
1 Charotar University of Science and Technology, Changa, Gujarat, India
2 RMD Sinhgad Technical Institute Campus, Pune, Maharashtra, India
3 Marwadi Education Foundation, Rajkot, Gujarat, India

ABSTRACT
This paper presents a principal line based palmprint matching model, as principal lines can be easily extracted even in low resolution images. The paper proposes the use of the histogram of Radon transform (HRT) to extract the features of the palm. However, the HRT is sensitive to rotation and scaling due to its normalization process. Therefore, the logarithm is employed here while discarding the normalization process, and its histogram is utilized as the feature vector. To compare palmprints, the calculation of the correlation coefficient of this logarithmic histogram is proposed in the paper. The proposed model is applied to the PolyU database and the results are analyzed in terms of the receiver operating characteristic.

KEYWORDS: Palmprint, Matching, Radon Transform, Histogram

I. INTRODUCTION

Biometrics plays a major role in establishing one's identity. One example of a biometric is the fingerprint [1], as it is hardly possible to find similar fingerprints for any two individuals. One of the crucial properties required for identification is inimitability. Another equally important aspect is that the trait is present in all individuals for a lifetime, and it must be easy to extract. Similar biometric characteristics are the iris, palm, retinal structure, face and handwriting. Out of these, palm based identification has been intensively developed because of its crucial advantages over other features. The palm region can be identified even in low resolution images; in such cases, the distinguishable features rely on palm lines and texture patterns. High resolution images also contain ridges and wrinkles which can be utilized as classification and matching features. The main objective of this paper is to propose an identification system based on palmprint feature matching.
Different palmprint methods can be classified according to the process they utilize. The preprocessing step involves cropping the region of interest (ROI) from the hand geometry. The second step involves the feature extraction method, the third step is feature reduction from the extracted features, and finally a classification step is involved for an individual's identity. A number of algorithms have been proposed using different combinations of each of the above defined stages; for example, in [2], wavelet based line orientation information is extracted, and along with orientation, the energy of sub-bands has also been utilized to describe the palm. A Zernike moments based feature extraction method was proposed by Pang et al. [3], where higher order moments were compared for identification. Feature extraction methods associated with spatial location, i.e. principal line based approaches, exhibit better performance. In most methods, the ROI is cropped [4, 5] and the corresponding feature extraction is
performed on the ROI. If the coordinate systems of different images are well aligned with respect to the palm area, comparison between the feature vectors is meaningful with regard to spatial information. The present approach utilizes the ROI area from palmprint images aligned along the same coordinate system. To extract the principal lines, a wavelet and directional context modeling based algorithm was proposed in [6], and an integration of kernel based edge detection and a morphological model has been proposed in [7]. Similarly, H. B. Kekre et al. [20] presented the efficiency of various wavelet transforms for palmprint recognition; other wavelet based models can be found in [17, 18]. In both of those models, the feature size is large and needs to be reduced. The Radon transform has been utilized in [8] to extract the principal lines, where a filter based approach based on the Radon transform is implemented to detect lines and superposition is used to match palmprints. In [22], Wei Jia et al. proposed a novel orientation based scheme, in which three strategies, the modified finite Radon transform, an enlarged training set and pixel-to-area matching, were designed to further improve its performance. In [21], the authors proposed a local binary pattern (LBP) based model where the LBP descriptor is applied to the energy or direction representation of the palmprint extracted by MFRAT. As principal lines are the most robust and unique features and are easy to extract from the ROI, this paper also presents a line extraction based model for palmprint matching. In this paper, the palmprint database from PolyU [16] is utilized, which provides the extracted palm ROI from the hand; thus the paper is more devoted to palmprint matching than to the extraction of the ROI. In all the cited literature, the Radon transform is combined with other models and then a similarity model is employed in terms of Euclidean distance, while this paper presents a novel approach to characterize the palmprint using the histogram of the Radon transform only. The histogram of Radon transform (HRT) is widely utilized to represent the shape of an object [10] and is robust toward the scaling and rotation of the object. Therefore, this paper proposes a model to extract rotation and scaling invariant features using the HRT.
The rest of the paper is organized as follows: Section II discusses the proposed model implementation, where the first part explains the histogram of Radon based feature extraction method and the second part explains the matching process. Section III presents the corresponding results, and the accuracy of the model is given in terms of the false acceptance and false rejection ratios. Finally, conclusions and future work related to the proposed model are presented.

II. PROPOSED MODEL

2.1 Radon transform based line feature extraction:


In the palm ROI, the principal lines can be considered as straight lines. To extract straight lines, the Radon transform [19] is widely utilized, as it is able to map two-dimensional lines into their possible line parameters. Thus each line in the 2-D domain generates a maximum value at the corresponding line parameters in the Radon domain. Another strong capability of the Radon transform is its ability to extract lines from a very noisy environment. A further useful property is that each peak in the Radon domain reflects an individual line; i.e., in the Radon transform shown in Figure 1(b), the crossing lines pose no problem for the separation of peaks.

Figure 1: (a) Image having crossing lines and (b) corresponding Radon transform.
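The behaviour illustrated in Figure 1 can be reproduced with an off-the-shelf Radon transform; the sketch below (using scikit-image, with an assumed suppression window) recovers one (angle, offset) peak per line even when the lines cross.

import numpy as np
from skimage.transform import radon

def line_peaks(binary_image, n_lines=2):
    """Return (angle in degrees, offset index) Radon-domain peaks of the strongest lines."""
    theta = np.arange(180.0)
    sinogram = radon(binary_image.astype(float), theta=theta, circle=False)
    peaks = []
    for _ in range(n_lines):
        rho_idx, th_idx = np.unravel_index(np.argmax(sinogram), sinogram.shape)
        peaks.append((theta[th_idx], rho_idx))
        # suppress a neighbourhood so the next maximum belongs to a different line
        sinogram[max(0, rho_idx - 5):rho_idx + 6, max(0, th_idx - 5):th_idx + 6] = 0.0
    return peaks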

Based on this, many algorithms [9-11] have been presented to extract palm features using the Radon transform. In [10], the finite Radon transform is applied to extract the line features and the wavelet transform is utilized to extract the corresponding points from the Radon domain. In [11], the authors modified the Radon transform with consideration of energy and direction to extract the palm lines. In [12], a modified Radon transform is utilized along with the iterative closest point method for line based feature extraction. Thus, in all Radon based approaches, the Radon domain is integrated with other models to extract the features. In this paper, the use of the histogram of the Radon domain is proposed to extract the line features.

2.2 Feature extraction using histogram of radon transform:


Presently, the histogram of Radon transform (HRT), as introduced by S. Tabbone [13], plays a vital role in shape analysis. The HRT represents the shape length at each orientation and is translation and rotation invariant; thus the HRT gives a similar response to a palm having either rotation or translation. This is its main advantage compared with the finite Radon or modified Radon transforms presented in [11, 12]. In [13], normalization of the Radon image and histogram is utilized to achieve scaling invariance, but this is highly sensitive to noise. Therefore, in [14] logarithm conversion and phase correlation are utilized while avoiding the normalization process. This LHRT (logarithmic HRT) is invariant to noise, and this paper utilizes this property to extract the features of the palm lines using the LHRT. Let I(x, y) be a binary image (after extraction of the palm ROI). Its Radon transform is defined as:

R(\theta, \rho) = \iint I(x, y) \, \delta(\rho - x \cos\theta - y \sin\theta) \, dx \, dy    (1)

where \delta(\cdot) is the Dirac delta function, \theta \in [0, 2\pi] and \rho \in [-A/2, A/2], with A the size of the image diagonal. Thus the Radon transform gives the summation over the line defined at angle \theta and offset \rho. The Radon transform is illustrated in Figure 1. The logarithm is then applied to equation (1):
R_f(\theta, \rho) = \ln R(\theta, \rho)    (2)

The corresponding HRT is calculated as

LHRT(\theta, y) = H(R_f(\theta, \cdot))(y)    (3)

where H is the histogram of the Radon coefficients in direction \theta. During the calculation of H, the normalization process is avoided to make the descriptor translation and rotation invariant. The features of the palm are obtained by calculating the phase correlation of the obtained LHRT: the Fourier transform of the LHRT is computed and the corresponding correlation function is calculated for the identification process. The details of the matching process are explained in the following section.
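A compact sketch of the feature computation in equations (1)-(3), using scikit-image's Radon transform; the number of histogram bins, the log offset guarding against zero coefficients, and the shared bin range are assumptions.

import numpy as np
from skimage.transform import radon

def lhrt_features(palm_roi, n_bins=32):
    """LHRT: one un-normalized histogram of the log Radon coefficients per direction."""
    theta = np.arange(180.0)
    R = radon(palm_roi.astype(float), theta=theta, circle=False)   # R(theta, rho)
    Rf = np.log(R + 1.0)                                           # eq. (2)
    lo, hi = float(Rf.min()), float(Rf.max()) + 1e-9
    return np.stack([np.histogram(Rf[:, i], bins=n_bins, range=(lo, hi))[0]
                     for i in range(Rf.shape[1])])                 # eq. (3)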

2.3 Palm Matching:


To match the two images, phase correlation [14] is utilized. The correlation function is defined as:
C = G_1(u, v) \, G_2(u, v)    (4)

where G(u, v) is the inverse Fourier transform of the LHRT, subscript 1 indicates the query palm image features and subscript 2 indicates the palm features from the database. To match two palms, this correlation function is compared with a specified threshold: if C is higher than the defined threshold, the two palms are declared the same.
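The sketch below follows the standard phase-correlation recipe for this matching step; since equation (4) is reconstructed loosely here, this should be read as one plausible realization, and the decision threshold is an assumed parameter.

import numpy as np

def match_score(lhrt_query, lhrt_db):
    """Peak of the phase correlation between two equally sized LHRT feature maps."""
    F1 = np.fft.fft2(lhrt_query)
    F2 = np.fft.fft2(lhrt_db)
    cross = F1 * np.conj(F2)
    cross /= np.abs(cross) + 1e-12        # retain phase information only
    return np.fft.ifft2(cross).real.max()

def is_same_palm(lhrt_query, lhrt_db, threshold=0.9):
    """Declare a match when the correlation peak exceeds the chosen threshold."""
    return match_score(lhrt_query, lhrt_db) >= threshold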

III. EXPERIMENTAL RESULTS AND DISCUSSION

To evaluate the performance, the PolyU [16] database, which provides palm images, is utilized. For locally acquired hand images, it would be required to extract the ROI (palm) and to correct the rotation. The proposed model has been implemented on a Pentium IV processor with 1 GB RAM, 2.8 GHz PC under the MATLAB
environment. Many algorithms have been suggested in various literature surveys [14, 15]. To extract the features, the following steps are performed:
Step 1: Calculate the Radon coefficients.
Step 2: Obtain the logarithm of the Radon coefficients.
Step 3: Calculate the histogram of the logarithmic Radon coefficients.
Step 4: For matching, calculate the correlation function using equation (4) and compare it with the defined threshold.
Figure 2 shows the palm images of two persons: the first row shows the palm images of a single person with variation in intensities, noise and small rotation, and the second row shows the palm images of another person.

Figure 2: Palm images from the utilized database

The proposed model is executed using MATLAB version 6.3 on a P-IV, 1.2 GHz computer. As said earlier, the Radon transform is obtained on the binary image of the palm, which yields the principal lines. As discussed in Section 1, most Radon based models combine the Radon coefficients with other features, while the proposed model requires the calculation of the histogram only, and the obtained histogram is utilized as the feature vector for comparison. The performance of the model is tested on 30 × 4 palm images, where 4 images belong to each person. The following table shows the comparison of the palm images with each other. In Table 1, it can be seen that when palm image 1 is compared with its sub-images, the correlation coefficient values are near 98, while the correlation coefficients of palm image 1 with the other palm images are less than 97.
Table 1: Palm image correlation coefficient values

Palm Image    1        2        3        4
1             100      95.24    96.38    96.30
2             98.24    94.86    95.77    95.91
3             98.43    95.27    95.25    95.85
4             98.29    95.26    95.28    95.85

This performance is evaluated using the receiver operating characteristic, which consists of the false rejection rate (FRR) and the false acceptance rate (FAR). These FAR and FRR values can be measured at different thresholds. The model has been tested for every possible combination to calculate the FAR and FRR, as shown in Figure 3.


Figure 3: Performance evaluation

IV. CONCLUSION AND FUTURE WORK

A novel approach for palmprint matching is proposed in this paper. The proposed model is based on principal lines, as they can be easily extracted even in low resolution images. These principal lines can be considered as short straight lines; therefore the Radon transform is utilized, which has the capability of extracting lines even from noisy images and images with overlapping lines. The histogram of the Radon transform has not been widely utilized for feature vector generation, having received attention mainly for shape identification. This paper proposes the use of the HRT to extract the features of palm lines. To make the histogram scale and rotation invariant, normalization is avoided and the logarithm is calculated before the histogram computation, as suggested in [14]. The performance evaluation in terms of the receiver operating characteristic is carried out to show the accuracy of the proposed model. Compared to other Radon based models, the proposed model is computationally lighter, as it requires only the calculation of the Radon coefficients and their histogram. The proposed model gives promising results, but its accuracy still needs to be benchmarked by comparing it with other Radon based models. Moreover, in the proposed model, the extraction of the Radon coefficients is highly sensitive to the principal lines identified in the spatial domain, so the proposed model could be integrated with a model that is able to extract the principal lines in a more robust way.

REFERENCES
[1] A. Jain, Lin Hong, and R. Bolle (1997), "On-line fingerprint verification", IEEE Transactions on Pattern Analysis and Machine Intelligence, 19(4):302-314.
[2] Lei Zhang and D. Zhang (2004), "Characterization of palmprints by wavelet signatures via directional context modeling", IEEE Transactions on Systems, Man, and Cybernetics, Part B: Cybernetics, 34(3):1335-1347.
[3] Ying-Han Pang, T. Connie, A. Jin, and D. Ling (2003), "Palmprint authentication with Zernike moment invariants", in Proceedings of the 3rd IEEE International Symposium on Signal Processing and Information Technology (ISSPIT 2003), pp. 199-202.
[4] Anil K. Jain, Jianjiang Feng (2009), "Latent Palmprint Matching", IEEE Transactions on Pattern Analysis and Machine Intelligence, Vol. 31, No. 6, pp. 1032-1047.
[5] Eisa Rezazadeh Ardabili, Keivan Maghooli, Emad Fatemizadeh (2011), "Contourlet Features Extraction and AdaBoost Classification for Palmprint Verification", Journal of American Science, 7(7), pp. 353-362.
[6] L. Zhang, D. Zhang (2004), "Characterization of palmprints by wavelet signatures via directional context modelling", IEEE Trans. Syst. Man Cybernet., Part B 34(3), pp. 1335-1347.
[7] C.C. Han, H.L. Cheng, C.L. Lin, K.C. Fan (2003), "Personal authentication using palmprint features", Pattern Recognition 36(2), pp. 371-381.
[8] D.S. Huang, W. Jia, D. Zhang (2008), "Palmprint verification based on principal lines", Pattern Recognition 41(4), pp. 1316-1328.
[9] Amel Bouchemha, Amine Nait-Ali and Nourredine Doghmane (2010), "A Robust Technique to Characterize the Palmprint using Radon Transform and Delaunay Triangulation", International Journal of Computer Applications 10(10), pp. 35-42.
[10] Wenjun Huai, Li Shang (2010), "Palm Line Extraction Using FRIT", Proceedings of the 6th International Conference on Intelligent Computing (ICIC 2010), Changsha, China, August 18-21.
[11] Wei Jia, Bin Ling, Kwok-Wing Chau, Laurent Heutte (2008), "Palmprint identification using restricted fusion", Applied Mathematics and Computation 205, pp. 927-934.
[12] Wei Li, Lei Zhang, D. Zhang, Jingqi Yan (2009), "Principal line based ICP alignment for palmprint verification", 16th IEEE International Conference on Image Processing (ICIP), pp. 1961-1964.
[13] S. Tabbone, O. Ramos Terrades, and S. Barrat (2008), "Histogram of Radon transform. A useful descriptor for shape retrieval", ICPR, pp. 1-4, Tampa.
[14] Ajay Kumar, Helen C. Shen (2005), "Palmprint Identification using PalmCodes", Proceedings of the Third International Conference on Image and Graphics, IEEE, pp. 258-261.
[15] A. Bouchemha, A. Nait-Ali and N. Doghmane (2010), "A Robust Technique to Characterize the Palmprint using Radon Transform and Delaunay Triangulation", International Journal of Computer Applications, vol. 10, no. 10, pp. 35-42.
[16] PolyU Palmprint database, Available: http://www.comp.polyu.edu.hk/~biometric/
[17] Guang-Ming Lu, Kuan-Quan Wang, and D. Zhang (2004), "Wavelet based independent component analysis for palmprint identification", in Proceedings of the 2004 International Conference on Machine Learning and Cybernetics, volume 6, pp. 3547-3550.
[18] Qingyun Dai, Ning Bi, Daren Huang, David Zhang, and Feng Li (2004), "M-band wavelets application to palmprint recognition based on texture features", in Proceedings of the 2004 International Conference on Image Processing (ICIP 2004), volume 2, pp. 893-896.
[19] Andrew Kingston and Imants Svalbe (2007), "Generalised finite Radon transform for N×N images", Image and Vision Computing, 25(10):1620-1630.
[20] Kekre H.B., Tanuja Sarode K., Tirodkar A.A. (2012), "A study of the efficacy of using Wavelet Transforms for Palm Print Recognition", 2012 International Conference on Computing, Communication and Applications (ICCCA), pp. 1-6.
[21] Yang Zhao, Wei Jia, RongXiang Hu, Jie Gui (2011), "Palmprint Identification Using LBP and Different Representations", 2011 International Conference on Hand-Based Biometrics (ICHB), pp. 1-5, 17-18 Nov.
[22] Wei Jia, De-Shuang Huang (2007), "Palmprint Verification Based on Robust Orientation Code", 2007 International Joint Conference on Neural Networks (IJCNN 2007), pp. 2510-2514.

AUTHORS
J. P. Chaudhari received the B.E. degree in Electronics Engineering from Nagpur University, Nagpur, and the M.E. degree in Industrial Electronics Engineering from M.S. University, Baroda, Gujarat, India, in 1996 and 2001 respectively. Presently he is working as an Associate Professor in the Department of Electronics and Communication Engineering at Charotar University of Science and Technology, Changa, Gujarat. He is a member of IE(I) and IETE. His research interests include image processing and power electronics.

Pradeep Mitharam Patil received his B.E. (Electronics) degree in 1988 from Amravati University, Amravati (India) and M.E. (Electronics) degree in 1992 from Marathwada University, Aurangabad (India). He received his Ph.D. degree in Electronics and Computer Engineering in 2004 at Swami Ramanand Teerth Marathwada University (India). Presently he is working as Director at R M D Sinhgad Technical Institute Campus, Warje, Pune (India). He is a member of various professional bodies like IE, ISTE and IEEE, and a Fellow of IETE. His research areas include pattern recognition, neural networks, fuzzy neural networks and power electronics. His work has been published in various international and national journals and conferences.
Yogesh P. Kosta is an SCPM from Stanford University, California, USA. He did his M.Tech. in Microwave Electronics from Delhi University, Delhi, and his Ph.D. in Electronics and Telecommunication. He is a member of IETE and IEEE. He worked as a scientist and designer at the Space Application Center, ISRO, Ahmedabad, and as a Sr. Designer at Teledyne, USA. Presently he is Director of Marwadi Group of Institutions. His research areas are RF, wireless satellite systems and information communications. He has guided several M.Tech. students; at present, six research scholars are pursuing their Ph.D. under his guidance. He has published many research papers and articles in refereed journals and international conference proceedings.


STUDY OF MOBILE NODE BASED COVERAGE RECOVERY PROCESS FOR WSN DEPLOYED IN LARGE FOOD GRAIN WAREHOUSE
Neha Deshpande1 & A. D. Shaligram2
1 A. G. College, Pune, Maharashtra, India
2 Dept. of Electronic Science, University of Pune, Pune, Maharashtra, India

ABSTRACT
As the demand for food quality, health benefits, and safety increases, more stringent scrutiny in the inspection of agro-food products has become mandatory. Traceability is also increasingly demanded, which requires not only rigorous inspections but also systematic detection and recording of quality and safety parameters. Wireless sensors allow otherwise impossible sensor applications, such as monitoring dangerous, hazardous, unwired or remote areas and locations. This technology provides nearly unlimited installation flexibility for sensors and increased network robustness. Furthermore, wireless technology reduces maintenance complexity and costs. This promising technology of wireless sensor networks (WSN) is anticipated to offer an extensive range of applications, such as environmental monitoring, smart buildings, military applications and so on. The coverage problem is an elementary issue in WSN, which is primarily concerned with a basic question: how well is the area under consideration covered by the deployed sensors? To achieve optimum network coverage, the traditional approach is to deploy a large number of stationary sensor nodes and then to schedule their sensing activities in an efficient manner. Recently, mobile nodes have proved to be very useful, as large coverage can be achieved using a few mobile nodes, and they can also be used for flexible extension of the network. When a large number of sensors is distributed in a food grain warehouse, there may be a problem of power distribution and maintenance of these sensors. In this case we can use mobile wireless sensor nodes, which can be placed in transporting vehicles to monitor the environment. If the base station is placed at the center or in some corner, there may be loss of signal due to the presence of grain and environmental factors. The network may lose connectivity, and hence there may be a reduction in coverage due to battery life, broken links or excessive attenuation. A network with a provision of a set of standby mobile nodes, which will reach the location where connectivity is affected and take over the job of the static nodes, is proposed in this paper. This team of mobile nodes can give the network controller some time to repair the network problems without loss of data.

KEYWORDS: WSN, Mobile sensor node, Food Grain Warehouse, coverage recovery, holes, voronoi cells

I. INTRODUCTION

As the demand for food quality, health benefits, and safety increases, more stringent scrutiny in the inspection of agri-food products has become mandatory, together with godowns that enable scientific storage of food grain. This can be achieved by maintaining the relevant parameters at predefined levels and monitoring the storage space. In this paper we present the concept of food grain warehouses monitored by hybrid wireless sensor networks, with emphasis on optimization and redundancy [1, 2, 3, 4]. Large food grain warehouses are distributed over an area of a few acres. The activity inside each godown is monitored by an ad hoc wireless
sensor network that reads the temperature, humidity and carbon dioxide levels at fixed time intervals. Rigorous inspections and systematic detection and recording of quality and safety parameters are also increasingly demanded. After producing huge quantities of food grains, as is the case with Indian agriculture, the next challenge is to provide an effective, safe and viable storage method. We need to protect these food grains from problems such as unpredictable weather conditions, high humidity and weed growth [5]. The mobile node will also work in the situation where any one of the fixed nodes is not working, possibly due to a low battery, excessive attenuation, etc. The battery operated nodes are connected as multihop ad hoc networks. Such a network may suffer loss of connectivity for several reasons such as battery life, broken linkages, and attenuation due to obstacles; there may be some acute places where it is not possible to achieve connectivity and hence coverage [4]. In this paper we propose the use of mobile wireless sensor nodes that will reach locations where static nodes are not sufficient to provide coverage for one reason or another. The mobile node will reach the location and continue data transfer to the base station while the network problem is being repaired; the mobile sensor node serves as a stopgap arrangement. The rest of the paper is organized as follows: Section II takes into account the related work done so far, Section III explains the need for this work, the reasons and mathematical modeling are covered in Section IV, the proposed model with a fleet of mobile nodes is discussed in Section V, and Section VI concludes the paper with a statement of the scope for future work. The references are given at the end of the paper.

II. RELATED WORK

It is observed that the widely used sensor coverage model is the circle model where a sensor can cover a circle centred at itself with a radius equal to a xed sensing range. In many cases, we may interpret the coverage concept as a positive mapping between the space points and the sensor nodes the deployed network. For example, given the sensing circular model, the area covered by a set of sensor nodes is the complete set of their sensing circles. According to the subject to be covered, three coverage types can be identied, namely, area coverage, target coverage and barrier coverage.[6] Area coverage addresses the problem of how the whole sensor eld is covered. Target coverage, on the other hand, mainly deals with how to cover a set of discrete targets with known locations. Barrier coverage concerns with nding a penetration path across the sensor eld with some desired property. All the three coverage types are important to the success of running a WSN and they have been intensively researched in the literature. Huang and Tseng [7] present a brief review on some barrier cover- age problems and area coverage problems. Cardei and Wu [8] survey some energy-efficient algorithms for improving area and target coverage. Both of them only address the coverage problems in the context of stationary WSN where all nodes are considered stationary once deployed. Wireless sensors can be either deterministic placed or randomly deployed in a sensor eld. Deterministic sensor placement can be applied to a small to medium sensor network in the field. When the network size is large or the sensor eld is remote and hostile, random sensor deployment might be the only choice, e.g., scattered from an aircraft. It has been shown that a critical sensor density exists beyond which a sensor eld can be completely covered almost surely in every random deployment [9,10]. To guarantee complete coverage in one random deployment, it is often assumed that the numbers of scattered sensors are more than that required by the critical sensor density. However, this normally requires a great number of sensor nodes to be deployed. Another way to improve network coverage is to leverage mobile sensor nodes. Mobile sensor nodes are equipped with locomotive platforms and can move around after initial deployment, for example, the mobile sensor nodes Robomote [11] and iMouse [12]. Force based, grid based and computational geometry based coverage solutions are discussed in [13]. Although in general a mobile sensor node is more expensive than its stationary compeer, it can serve many functionalities such as a data relay or collector, and can greatly improve many network performances such as enhancing timeliness of data report. In this paper the effort is to provide the standby arrangement of the mobile node is case of failure and avoid loss of data during the repair work of the network. A discussion on how the mobility has been used to get better area coverage is given in the next section. Also target coverage is discussed in the following sections. In different

network scenarios, the objectives of node mobility are different. In a hybrid network consisting of both stationary and mobile sensor nodes, the goal is mainly to provide a stopgap arrangement with mobile nodes that recovers the coverage holes caused by battery failure, broken linkages or obstacles. In a mobile network consisting of only mobile nodes, the objective is to maximize or optimize the coverage of these mobile nodes. And in event-monitoring applications, where some short-lived events may appear in different locations, the objective is to dispatch mobile nodes to monitor the event sources for better event coverage. We have discussed the latest trends in how mobility can be used to recover broken networks.

III. NEED ANALYSIS

Over the years, with the constant efforts of its scientists, India has been able to produce sufficient food grains to feed the people of the country. With its present agricultural produce of thousands of tonnes of food grains, India stands self-sufficient in the world. In India, food grain warehouses mainly belong to the Food Corporation of India, the Warehouse Corporation of India, the Public Distribution System, agriculture markets, the railways, sea ports and traders. These warehouses (normal and temperature-controlled) can be used for the storage of food grains, perishable fruits, perishable vegetables, perishable flowers, fish, meat products, dairy products and processed food. But in India the temperature and humidity levels vary with the seasons. Food grain storages can range from a small room to a very huge warehouse. We need to protect these food grains from problems such as unpredictable weather conditions, high humidity and weed growth [4]. This can be done by maintaining these parameters at predefined levels. Wireless sensor networks are application-specific, and therefore they have to involve both software and hardware. They also use protocols that relate both to the application and to the wireless network. The proprietary networks operate in the ISM (Industrial, Scientific and Medical) bands. Applications such as remote temperature monitoring, pressure sensing and actuation are often best handled via the ISM band. Users are demanding devices, appliances and systems with better capabilities and higher levels of functionality. Sensors in these devices and systems are used to provide information about the measured parameters or to identify control states, and these sensors are candidates for increased built-in intelligence.

Fig. 1: Wireless monitoring of food grain warehouse.

Fig. 2: Network of distributed warehouses

A wide range of electronic sensor-based gadgets have been developed, which are effectively used for periodic recording of environmental parameters. The technologies mentioned should be made available to Indian farmers in a form which suits their economic and environmental conditions. This would give better control over the parameters responsible for food grain damage and would maintain the storages at low cost in terms of energy and manpower requirements. Measurements of temperature and humidity at various locations, to ensure that they fall within a prescribed range, are performed either visually by inspectors or automatically at warehouses storing food. In order to implement a system that automatically verifies the temperature and humidity over wired links within a warehouse where inspectors visually read measurements, construction work needs to be performed for the power supply and communication facilities. Also, the measurement points become fixed, and the system will not be able to respond to changes of measurement points required by the replacement of items stored in the warehouse or changes to the layout. Human errors and delays are also possible in such manual systems. If wireless sensor terminals are used in such a case, a system can be built by simply placing

wireless sensor terminals at the measurement points. Furthermore, it is possible to respond flexibly to the replacement of items stored in the warehouse or to changes in the layout, as shown in figures 1 and 2.

IV. HOLES IN COVERAGE

A Voronoi diagram can be used to detect a coverage hole and calculate the size of a coverage hole [14,15]. A Voronoi diagram for N sensors s1, s2, ..., sN in a plane is defined as the subdivision of the plane into N cells, one for each sensor, such that any point in a cell is closer to the sensor of that cell than to any other sensor. Two Voronoi cells meet along a Voronoi edge, and a sensor is a Voronoi neighbour of another sensor if they share a Voronoi edge. For more discussion of Voronoi diagrams and their applications, please refer to [16].

Fig 3: Illustration of using Voronoi diagram to detect a coverage hole and decide the hole size.

A Voronoi diagram is initially constructed for all stationary sensor nodes, assuming that each node knows its own and its neighbours' coordinates. Wang et al. [15] propose a localized construction of a local Voronoi diagram: each node constructs its own Voronoi cell by considering only its 1-hop neighbours. After the local Voronoi diagram construction, the sensor field is divided into subregions of Voronoi cells, and each stationary node is within a Voronoi cell. A node is a Voronoi neighbour of another one if they share a Voronoi edge. Figure 3 illustrates a Voronoi diagram in a bounded sensor field, where the boundaries of the sensor field also contribute to a Voronoi cell. According to the property of a Voronoi diagram, all the points within a Voronoi cell are closest to the one node that lies within this cell. Therefore, if some points of a Voronoi cell are not covered by its generating node, these points will not be covered by any other sensor and contribute to coverage holes. If a sensor covers all of its Voronoi cell's vertices, then there are no uncovered points within its Voronoi cell; otherwise, uncovered points exist within its Voronoi cell. Ghosh [14] describes how to compute the uncovered area within a Voronoi cell. They call a triangle consisting of a node and its two adjacent Voronoi vertices a Voronoi triangle; one such triangle is shown in Figure 3. The area of the Voronoi cell of a node is the sum of the areas of the Voronoi triangles contained within the cell. However, the exact area of the uncovered portion of a Voronoi cell is not equal to the area of this Voronoi cell minus the area of the sensing circle. This is because the sensing circle of a sensor node may protrude beyond its Voronoi cell; for example, in Fig. 3 part of s1's sensing circle also lies in s4's Voronoi cell. The protrusion depends on the relation between the sensing range, the distance between two Voronoi neighbours, and the lengths of the Voronoi triangle sides. The above calculation of the exact uncovered area within a Voronoi cell is complicated. Wang et al. [13] therefore propose to use the distance between a node and its farthest Voronoi vertex to decide whether there is a coverage hole and how large it is: if this distance is larger than the node's sensing range, then a coverage hole exists, and the size of the hole is considered proportional to the distance.
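Since the farthest-vertex test above is purely geometric, it is straightforward to prototype. The sketch below is an illustrative implementation, not the authors' code: it uses scipy.spatial.Voronoi, the node coordinates and sensing range are made-up example values, and unbounded boundary cells are simply skipped rather than clipped against the field boundary.

```python
# Hedged sketch of the farthest-Voronoi-vertex hole test described above.
import numpy as np
from scipy.spatial import Voronoi

# Hypothetical node positions (a 3x3 grid) and sensing range.
nodes = np.array([[0, 0], [5, 0], [10, 0],
                  [0, 5], [5, 5], [10, 5],
                  [0, 10], [5, 10], [10, 10]], dtype=float)
SENSING_RANGE = 2.5

vor = Voronoi(nodes)
for i, p in enumerate(nodes):
    region = vor.regions[vor.point_region[i]]
    if -1 in region:
        # Unbounded cell on the field boundary: would need clipping to decide.
        continue
    verts = vor.vertices[region]
    d_max = np.linalg.norm(verts - p, axis=1).max()
    if d_max > SENSING_RANGE:
        # Hole size taken as proportional to the excess distance.
        print(f"node {i}: hole suspected, farthest vertex at {d_max:.2f}")
```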


V. PROPOSED NETWORK WITH MOBILE NODES

In this section, we discuss how mobility is used to improve area coverage in this kind of combined network. In a combined network consisting of both stationary and mobile sensor nodes, coverage holes may exist if the number of deployed stationary nodes is not large enough in one random deployment. The main objective of using mobile sensor nodes is to recover coverage holes after the initial network deployment, so that the area coverage can be maximized. The mobile node team can reach such hole locations and provide coverage support to recover the network; the concept is illustrated in figure 6. The main challenges to connected coverage after deployment arise from the limited battery life of nodes, attenuation caused by various obstacles in the signal path, broken linkages, and acute places where nodes cannot be placed. These situations are shown in figure 4.


Figure 4: (a) connected network with full coverage; (b) reduction in range of some nodes due to low battery; (c) excessive attenuation of a single node's range due to an obstacle

As shown in figure 4(a), the network is fully connected and there are no holes. Over time the battery of each node is depleted, and some nodes may end up with very low power, resulting in broken links. This can affect overall connectivity and coverage, as demonstrated in figure 4(b). Figure 4(c) shows a situation where the range is attenuated because of an obstacle in the path of the network. The network will try to overcome these problems by using routing techniques and establishing alternate paths, but a stage will be reached when these solutions no longer work because the links are totally broken and connectivity cannot be established. These challenges can be handled with the help of a team of mobile nodes that reach the problem locations one by one until connectivity is re-established and complete coverage is achieved. Coverage degradation is observed for two reasons: (a) holes generated due to node failure, as shown in figure 4(b); and (b) holes generated due to deployment issues, as discussed in section IV. The graph in figure 5 shows the effect of holes generated by node failure on the coverage area.
Figure 5: Graph of % node failure vs. % coverage area

As the percentage of failed nodes increases, initially there is little change in the covered area; but as the number of failures grows, there is a significant reduction in the coverage area.
Fig 6: Proposed Mobile sensor Nodes

In order to improve the coverage area, a fleet of mobile nodes is proposed. Figure 6 shows a large region divided into sections, with mobile nodes deployed in each section. They will be activated when a hole develops and the network may start losing data. This stopgap arrangement of mobile nodes will give some time to repair the network and avoid loss of data during this period.


Fig. 7: (a) network failed due to two node failures; (b) recovery with one mobile node; (c) recovery with two mobile nodes

The role of mobile nodes in network recovery is illustrated in figure 7. Initially the network is disconnected in a major region due to the failure of two nodes, as shown in figure 7(a). The first mobile node then enters the scenario and takes over the job of one failed node, and the network is recovered to some extent, as shown in figure 7(b). Further recovery is observed when the second mobile node comes into the picture; as shown in figure 7(c), the network is then sufficiently recovered. The graph in figure 5 shows the degradation of the network, which can be recovered with mobile nodes so that complete coverage is obtained. This provides some breathing time for the network control team, and the failed nodes can be repaired or replaced without losing data during the repair work.

VI. CONCLUSION AND FUTURE WORK

The team of mobile sensor nodes works as a reserve force in a wireless sensor network deployed in a large food grain warehouse. When the battery-operated nodes fail over time, broken links may appear in spite of routing, and coverage is lost. A mobile node will reach such a position, take over the job and re-establish connectivity. If required, more mobile nodes can reach the location and

continue until complete coverage is achieved. This provides a mobile standby arrangement for coverage recovery of the wireless sensor network. Future work is planned on determining the movement plan of the mobile nodes: it is necessary to decide where and how the mobile nodes should be moved. Designing the movement strategies of the mobile nodes towards the holes in the network will be the next step.

REFERENCES
[1]. Yu-Chee Tseng, "Efficient Placement and Dispatch of Sensors in a Wireless Sensor Network", IEEE Transactions on Mobile Computing, Vol. 7, No. 2, Feb 2008, pp. 262-274.
[2]. Rajagopal Iyengar, Koushik Kar, "Low-coordination wake-up algorithms for multiple connected-covered topologies in sensornets", Int. J. Sensor Networks, Vol. 5, No. 1, 2009.
[3]. Qingchun Ren, Qilian Liang, "Throughput and Energy-Efficiency Aware Protocol for Ultra-wideband Communication in Wireless Sensor Networks: A Cross-Layer Approach", IEEE Transactions on Mobile Communications, Vol. 7, No. 6, June 2008.
[4]. J. Kim, H. Byun, and C. Hong, "Mobile robot with artificial olfactory function", Transactions on Control, Automation and Systems Engineering, Vol. 3, No. 4, pp. 223-229, 2001.
[5]. "Role of moisture, temperature and humidity in safe storage of food grains", reference material, IGMRI, Hapur, India.
[6]. Bang Wang, Hock Beng Lim, Di Ma, "A survey of movement strategies for improving network coverage in wireless sensor networks", Computer Communications 32 (2009) 1427-1436.
[7]. C.-F. Huang, Y.-C. Tseng, "A survey of solutions to the coverage problems in wireless sensor networks", Journal of Internet Technology 6 (1) (2005) 1-8.
[8]. M. Cardei, J. Wu, "Energy-efficient coverage problems in wireless ad hoc sensor networks", Computer Communications 29 (4) (2006) 413-420.
[9]. H. Zhang, J. Hou, "On deriving the upper bound of a-lifetime for large sensor networks", in: ACM International Symposium on Mobile Ad Hoc Networking and Computing (MobiHoc), 2004, pp. 121-132.
[10]. S. Kumar, T.H. Lai, J. Balogh, "On k-coverage in a mostly sleeping sensor network", in: ACM International Conference on Mobile Computing and Networking (Mobicom), 2004, pp. 114-158.
[11]. G.T. Sibley, M.H. Rahimi, G.S. Sukhatme, "Robomote: a tiny mobile robot platform for large-scale ad-hoc sensor networks", in: IEEE International Conference on Robotics and Automation, 2002, pp. 1143-1148.
[12]. Y.-C. Tseng, Y.-C. Wang, K.-Y. Cheng, Y.-Y. Hsieh, "iMouse: an integrated mobile surveillance and wireless sensor system", IEEE Computer 40 (6) (2007) 76-82.
[13]. N.A. Ab. Aziz, K. Ab. Aziz, W.Z. Wan Ismail, "Coverage strategies for Wireless Sensor Networks", World Academy of Science, Engineering and Technology 26, 2009.
[14]. A. Ghosh, "Estimating coverage holes and enhancing coverage in mixed sensor networks", in: IEEE International Conference on Local Computer Networks, 2004, pp. 68-76.
[15]. G. Wang, G. Cao, P. Berman, T.F.L. Porta, "Bidding protocols for deploying mobile sensors", IEEE Transactions on Mobile Computing 6 (5) (2007) 515-528.
[16]. F. Aurenhammer, "Voronoi diagrams - a survey of a fundamental geometric data structure", ACM Computing Surveys 23 (4) (1991) 345-406.

AUTHORS
Neha Deshpande has 22 years of teaching experience with under-graduate and post-graduate students. She is currently working as Associate Professor at Abasaheb Garware College, Pune, and has been working towards a Ph.D. on applications of Wireless Sensor Networks for the last 5 years in the Department of Electronic Science, University of Pune. She has also contributed to the development of a software tool, WSN Planner Version 1, using Matlab. She has to her credit about 18 research papers presented at state-level, national and international conferences, two papers published in IEEE Xplore conference proceedings, and one paper in the ICSEM conference international journal proceedings. She is a life member of the Indian Association of Physics Teachers (IAPT) and of SPEED (Society for Promotion of Excellence in Electronics Discipline). Recently she was invited as a P.G. Research Intern at the NIMBUS centre of Cork Institute of Technology, Cork, Ireland.

A. D. Shaligram, Professor and Head, Department of Electronic Science at the University of Pune, has professional experience of more than 25 years. His main fields of research interest are embedded systems and VLSI design, optoelectronic sensors and systems, wireless sensor networks, simulation software development, biomedical instrumentation and sensors, PC/microcontroller-based instrumentation, and e-learning resource development. He has published more than 278 research papers in national/international journals and conference proceedings, and has guided 22 students for Ph.D., 15 students for M.Phil. and over 150 students for their M.Sc. theses. He is the founder chairman of the Society for Promotion of Excellence in Electronics Discipline (SPEED).


LBG ALGORITHM FOR FINGERPRINT CLASSIFICATION


Sudeep Thepade1, Dimple Parekh2, Unnati Thapar3, Vandana Tiwari3

1 Prof., Department of Computer Engineering, PCCOE, Pune, India
2 Asst. Prof., Department of Information Technology, MPSTME, NMIMS University, Mumbai, India
3 B.Tech (I.T.), 4th year, MPSTME, NMIMS University, Mumbai, India

ABSTRACT
Fingerprints are unique to each individual and can be used as a means to differentiate one individual from another; a fingerprint therefore serves as an identity of an individual. Fingerprint classification relates a given fingerprint to one of the existing classes. Fingerprints are classified into pre-defined classes such as left loop, right loop, arch, tented arch and whorl. Classifying fingerprint images is a very complex pattern recognition problem due to the minute interclass variability. The objective is to reduce the response time, computational complexity and search space while classifying an image. In this research paper, a novel technique based on vector quantization for fingerprint classification using Linde-Buzo-Gray (LBG), also called the Generalized Lloyd Algorithm (GLA), is proposed. Vector quantization is a lossy technique for data compression and is used in various applications; for vector quantization to be efficient, a good codebook is required. Classification is done on fingerprint images using LBG codebooks of size 4. The proposed technique takes fewer computations compared to usual fingerprint classification techniques. It is observed that the method provides an accuracy of 80%.

KEYWORDS: Vector Quantization, Linde-Buzo-Gray (LBG), Generalized Lloyd Algorithm (GLA), Fingerprint Classes.

I. INTRODUCTION

Biometrics refers to the identification of humans by their characteristics or traits. In information technology, biometrics refers to technologies that measure and analyze human body characteristics, such as DNA, fingerprints, eye retinas and irises, voice patterns, facial patterns and hand measurements, for authentication purposes. While designing a fingerprint identification system, the major challenge is to determine which features are to be extracted and how these features can be used to categorize fingerprints into their respective classes. Poor-quality and noisy fingerprint images quite often result in false or missing singular points, which decreases the overall efficiency of the identification system. Fingerprint classification not only reduces the number of fingerprint comparisons but also improves the overall efficiency of the fingerprint identification system [14, 15]. This paper proposes a scheme to classify fingerprint images into their respective classes without preprocessing the images or locating singular points. A technique called Vector Quantization (VQ) is used for classification. VQ techniques differ from one another in the method used to form the clusters [12, 13, 16]. LBG is the simplest vector quantization technique, and it involves the computation of the Euclidean distance to form clusters.

The paper is organized as follows. Section II gives a brief description of the various fingerprint types, section III explains the LBG algorithm, section IV describes the proposed classification technique, section V presents the results and discussion, and section VI concludes the paper.

II. FINGERPRINT TYPES

Fingerprints can be classified into the following types: the loop, which can be further sub-divided into right loop and left loop; the arch, which includes the tented arch and the plain arch; and the whorl [1, 3]. The majority of fingerprint images, about 60-65%, fall into the loop category, while arches and whorls comprise about 30-35% and 5% respectively. Figure 1 shows the different fingerprint categories.

Figure 1: Fingerprint Types. a) Loop b) Arch c) Whorl

2.1. Loop
Loops occur in about 60-70% of the fingerprint patterns encountered. One or more of the ridges enters on either side of the impression, re-curves, touches or crosses the line running from the delta to the core, and terminates on or in the direction of the side where the ridge or ridges entered. Loops can more specifically be classified as right loops and left loops by observing the left hand: if the ridges flow in the direction of the thumb, the print can be classified as a right loop, and if they flow in the direction of the little finger, it can be categorized as a left loop.

2.2. Arches
The arch pattern is made up of ridges lying one above the other. The ridges enter on one side and flow, or appear to flow, out from the other side. The tented arch consists of at least one protruding ridge which tends to bisect other ridges at right angles. The plain arch has a wave-like structure, as compared to the tented arch, which has a sharp rise at the centre.

2.3. Whorl
The whorl pattern consists of one or more free recurving ridges which could be spiral, oval or circular and two delta points. The line of the fingerprint disc will bisect at least one of the ridges if it is placed on the delta points.

III. LINDE BUZO GRAY (LBG)

There are several vector quantization algorithms, which differ from one another in the process used for cluster formation. The simplest VQ algorithm used to generate a codebook is the Linde-Buzo-Gray (LBG) algorithm, also called the Generalized Lloyd Algorithm (GLA). The image is divided into blocks of size 8x8, which form the training vectors. The centroid C1 is then calculated for the training vectors. A constant error is added to and subtracted from the code vector C1 to generate two vectors V1 and V2. The Euclidean distance is computed from all the training vectors to vectors V1 and V2. If the Euclidean distance of a training vector to vector V1 is smaller than its Euclidean distance to vector V2, the training vector is put into the V1 cluster; otherwise it is put into the V2 cluster. Thus we have two clusters at the end of the first iteration, as shown in Figure 2.a. The same procedure is repeated with each cluster: V1 is split into two clusters V11 and V12. Similarly, V2 is split into two

clusters V21 and V22 [4, 5, 6, 7, 8, 9]. Thus four clusters are obtained at the end of the second iteration as shown in Figure 2.b.
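The splitting procedure above maps directly to a few lines of numpy. The following is a hedged sketch, not the authors' implementation: the 8x8 block size and codebook size 4 follow the paper, while the split constant and the number of refinement iterations are assumed values.

```python
# Hedged numpy sketch of LBG codebook generation by splitting.
import numpy as np

def lbg_codebook(vectors, size=4, eps=1e-3, iters=10):
    codebook = vectors.mean(axis=0, keepdims=True)        # centroid C1
    while codebook.shape[0] < size:
        # split each codevector by adding/subtracting the constant error
        codebook = np.vstack([codebook + eps, codebook - eps])
        for _ in range(iters):
            # assign every training vector to its nearest codevector
            d = np.linalg.norm(vectors[:, None, :] - codebook[None, :, :], axis=2)
            labels = d.argmin(axis=1)
            # recompute centroids (empty clusters keep their old codevector)
            for k in range(codebook.shape[0]):
                members = vectors[labels == k]
                if len(members):
                    codebook[k] = members.mean(axis=0)
    return codebook

# 256x256 grayscale image cut into 8x8 blocks -> 1024 training vectors of dim 64
image = np.random.rand(256, 256)
blocks = image.reshape(32, 8, 32, 8).swapaxes(1, 2).reshape(-1, 64)
print(lbg_codebook(blocks).shape)    # (4, 64): codebook of size 4
```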

Figure 2: LBG Algorithm for the 2-dimensional case. a) First Iteration b) Second Iteration

IV. CLASSIFYING FINGERPRINTS USING LBG: PROPOSED TECHNIQUE

LBG is applied to input images from each class in the database. A codebook of size 4 was used for classification. Features are extracted, collected and stored; test image features are extracted and collected in the same way. The Euclidean distance is used to calculate the difference between the features of the stored images and the test images. The minimum distance is found, and the test image is assigned to the class of the nearest feature vector. The percentage classification accuracy is used to compare the performance of the proposed fingerprint classification method [2].
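To make the assignment step concrete, here is a hedged sketch of the nearest-class decision. It reuses the lbg_codebook sketch given above, and train_images (a class-name-to-image dictionary) is a hypothetical stand-in for the stored database of class representatives.

```python
# Hedged sketch of nearest-class assignment by Euclidean distance.
import numpy as np

def features(image):
    blocks = image.reshape(32, 8, 32, 8).swapaxes(1, 2).reshape(-1, 64)
    return lbg_codebook(blocks).ravel()    # size-4 codebook as one feature vector

# train_images: hypothetical {class name: 256x256 array} training set
stored = {cls: features(img) for cls, img in train_images.items()}

def classify(test_image):
    f = features(test_image)
    # assign the class whose stored features are nearest in Euclidean distance
    return min(stored, key=lambda cls: np.linalg.norm(stored[cls] - f))
```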

The LBG algorithm can be illustrated by the following diagrammatic representation as shown in Figure 3.

Figure 3: Block Diagram for LBG Algorithm


V. RESULTS AND DISCUSSION

The database on which LBG has been tested consists of 50 images, each of size 256x256. The images used can be classified as left loop, right loop, arch, tented arch and whorl. A codebook of size four has been used for classification. The overall accuracy of LBG is 80%, and from the graph shown in Figure 4 it can further be observed that LBG gives the best results for the left loop class and poor results for the arch class. In Figure 5 the LBG results are compared with the already existing KFCG results [1, 2, 10, 11]. For both vector quantization codebook generation methods, the codebook size is 4 and the window size is 8x8. It is observed that for each of the considered fingerprint classes except the left loop (LL) and tented arch (TA), the proposed LBG-based fingerprint classification gives superior percentage accuracy over the existing KFCG-based classification. The overall accuracy of the LBG-based method (80%) outshines that of the KFCG-based method (70%) by a large margin.

Figure 4: Results of LBG - 4

Figure 5: Performance Comparison of proposed fingerprint classification using LBG with the existing fingerprint classification using KFCG.

VI. CONCLUSIONS

Classification is a significant task for the successful realization of any fingerprint identification system. Linde-Buzo-Gray (LBG), also called the Generalized Lloyd Algorithm (GLA), is one of the vector quantization codebook generation techniques. It provides an overall accuracy of 80% for a codebook of size 4 and window size 8x8, which is 10% higher than the existing KFCG-based

classification (accuracy 70%). Vector Quantization proves to be an efficient technique for classification of fingerprint images as it provides fast and efficient results. Future work consists of testing the proposed approach on a larger database.

REFERENCES
[1] H. B. Kekre, Sudeep D. Thepade, Dimple A. Parekh, "Fingerprint Classification using KFCG Algorithm", International Journal of Computer Science and Information Security (IJCSIS), Vol. 9, No. 12, Dec 2011, pp. 78-81.
[2] H. B. Kekre, Sudeep D. Thepade, Dimple A. Parekh, "Comparison of Fingerprint Classification using KFCG Algorithm with various window sizes and codebook sizes", International Journal of Computer Applications (IJCA), Vol. 46, No. 17, May 2012.
[3] Dimple Parekh, Rekha Vig, "Review of Fingerprint Classification methods based on Algorithmic Flow", Journal of Biometrics, Vol. 2, Issue 1, 2011.
[4] H. B. Kekre, Tanuja K. Sarode, Saylee Gharge, "Detection and Demarcation of Tumor using Vector Quantization in MRI images", International Journal of Engineering Science and Technology (IJEST), Vol. 1, Issue 2, 2009, pp. 59-66.
[5] H. B. Kekre, Sudeep D. Thepade, Tanuja K. Sarode, Shrikant Sanas, "Image Retrieval using texture features extracted using LBG, KPE, KFCG, KMCG, KEVR with assorted color spaces", International Journal of Advances in Engineering & Technology (IJAET), Vol. 2, Issue 1, Jan 2012, pp. 520-531.
[6] H. B. Kekre, Tanuja Sarode, Sudeep D. Thepade, Supriya Kamoji, "Performance Comparison of Various Pixel Window Sizes for Colorization of Greyscale Images using LBG, KPE, KFCG and KEVR in Kekre's LUV Color Space", International Journal of Advances in Engineering & Technology (IJAET), Vol. 1, Issue 2, Dec 2011.
[7] H. B. Kekre, Sudeep D. Thepade, Tanuja K. Sarode, Vaishali Suryawanshi, "Image Retrieval using Texture Features extracted from GLCM, LBG and KPE", International Journal of Computer Theory and Engineering, Vol. 2, No. 5, October 2010.
[8] H. B. Kekre, Kamal Shah, Tanuja K. Sarode, Sudeep D. Thepade, "Performance Comparison of Vector Quantization Technique KFCG with LBG, Existing Transforms and PCA for Face Recognition", International Journal of Information Retrieval (IJIR), Vol. 02, Issue 1, pp. 64-71, 2009.
[9] H. B. Kekre, Tanuja K. Sarode, Sudeep D. Thepade, "Image Retrieval using Color-Texture Features from DCT on VQ Codevectors obtained by Kekre's Fast Codebook Generation", ICGST-International Journal on Graphics, Vision and Image Processing (GVIP), Vol. 9, Issue 5, pp. 1-8, 2009.
[10] H. B. Kekre, Tanuja K. Sarode, Sudeep D. Thepade, Vaishali Suryavanshi, "Improved Texture Feature Based Image Retrieval using Kekre's Fast Codebook Generation Algorithm", Springer International Conference on Contours of Computing Technology (Thinkquest-2010), Babasaheb Gawde Institute of Technology, Mumbai, 13-14 March 2010.
[11] H. B. Kekre, Tanuja K. Sarode, Sudeep D. Thepade, "Image Retrieval using Color-Texture Features from DCT on VQ Codevectors obtained by Kekre's Fast Codebook Generation", ICGST-Int. Journal GVIP, Vol. 9, Issue 5, pp. 1-8, Sept 2009.
[12] R. M. Gray, "Vector quantization", IEEE ASSP Mag., pp. 4-29, Apr. 1984.
[13] Y. Linde, A. Buzo, R. M. Gray, "An algorithm for vector quantizer design", IEEE Trans. Commun., Vol. COM-28, No. 1, pp. 84-95, 1980.
[14] H. B. Kekre, Tanuja K. Sarode, "New Fast Improved Codebook Generation Algorithm for Color Images using Vector Quantization", International Journal of Engineering and Technology, Vol. 1, No. 1, pp. 67-77, September 2008.
[15] Sir Edward R. Henry, "Classification and Uses of Finger Prints", London: George Routledge & Sons, Ltd., 1900. http://www.clpex.com/Information/Pioneers/henry-classification.pdf
[16] M. Chong, T. Ngee, L. Jun, R. Gay, "Geometric framework for fingerprint image classification", Pattern Recognition, Vol. 30, No. 9, pp. 1475-1488, 1997.

AUTHORS BIOGRAPHIES
Sudeep D. Thepade received his B.E. (Computer) degree from North Maharashtra University with Distinction in 2003, his M.E. in Computer Engineering from the University of Mumbai in 2008 with Distinction, and his Ph.D. from SVKM's NMIMS, Mumbai, in 2011. He has about 9 years of experience in teaching and industry. He was a Lecturer in the Department of Information Technology at Thadomal Shahani Engineering College, Bandra (W), Mumbai, for nearly 4 years. He is currently working as Associate Professor and HoD of Computer Engineering at Mukesh Patel School of Technology Management and Engineering, SVKM's NMIMS, Vile

Parle (W), Mumbai, India. He is a member of the International Advisory Committee for many international conferences and acts as a reviewer for many refereed international journals/transactions, including IEEE and IET. His areas of interest are image processing and biometric identification. He has guided five M.Tech. projects and several B.Tech. projects. He has more than 125 papers in national/international conferences/journals to his credit, with a Best Paper Award at the International Conference SSPCCIN-2008, a Second Best Paper Award at ThinkQuest-2009, a Second Best Research Project Award at Manshodhan 2010, a Best Paper Award for a paper published in the June 2011 issue of the international journal IJCSIS (USA), and Editor's Choice Awards for papers published in the international journal IJCA (USA) in 2010 and 2011.

Dimple A. Parekh, currently working as Asst. Professor in the IT Department, completed her M.Tech (I.T.) at Mukesh Patel School of Technology and Engineering, SVKM's NMIMS Deemed-to-be University, in 2011, and her B.Tech (I.T.) at Thakur College of Engineering and Technology in 2005. She has worked in the area of fingerprint classification. Her areas of interest are image processing, computer vision and data mining. She has 9 papers in international conferences/journals to her credit, with a Best Paper Award at IJCSIS, May 2011, and Best Research Project at Manshodhan 2011.

Unnati Thapar is currently pursuing her B.Tech degree in Information Technology from Mukesh Patel School of Technology and Engineering, SVKMs NMIMS Deemed to be University. Her areas of interest include operations research, computer networks and image processing. She has completed and presented various projects and seminars in the above mentioned domains.

Vandana Tiwari is currently pursuing her B.Tech degree in Information Technology at Mukesh Patel School of Technology and Engineering, SVKM's NMIMS Deemed-to-be University. Her areas of interest include computer networks and image processing. She has worked on projects and seminars in the above-mentioned domains.


OPTIMAL PLACEMENT OF SVC AND STATCOM FOR VOLTAGE STABILITY ENHANCEMENT UNDER CONTINGENCY USING CAT SWARM OPTIMIZATION
G. Naveen Kumar1, M. Surya Kalavathi2 and R. Harini Krishna3
1 Department of EEE, VNRVJIET, Hyderabad, India
2 Department of EEE, JNTUH, Hyderabad, India
3 MITS, Chittor, India

ABSTRACT
Due to the continuous expansion of power systems in accordance with growing demand, stability studies have become a fascinating area of research in the modern day. The aim of this paper is to identify the optimal location and size of shunt FACTS controllers in an interconnected power system under N-1 contingency for voltage stability analysis. As the size and the cost of FACTS devices are high, the optimal location of FACTS along with its size needs to be identified before they are actually installed. In this process, we try to improve the voltage profile and the maximum loading parameter while keeping the losses under control, using FACTS controllers placed by means of Cat Swarm Optimization (CSO).

KEYWORDS: SVC, STATCOM, voltage Stability, CSO, CPF.

I. INTRODUCTION

Power system stability [1] is a very complex subject that has been challenging power system engineers for the past two decades. Due to the continuous expansion of power systems to cater to the needs of a growing population, power system stability problems are also a continuous and fascinating area of study. When a bulk power transmission network is operated close to the voltage stability limit, it becomes difficult to control the reactive power demand for that system. Voltage stability is of major concern in power system stability [5]. The main reason for voltage instability is the sag in reactive power at various locations in an interconnected power system. Voltage stability is a problem in power systems which are heavily loaded, faulted or short of reactive power. The problem of voltage stability concerns the whole power system, although it usually has a large involvement in one critical area of the power system. Voltage stability is concerned with the ability of a power system to maintain steady voltages at all buses in the system under normal operating conditions, and after being subjected to a disturbance. Instability may occur in the form of a progressive fall or rise of the voltage at some buses. A possible outcome of voltage instability is loss of load in the area where voltages reach unacceptably low values, or a loss of integrity of the power system. A power system at a given operating state is small-disturbance voltage stable if, following any small disturbance such as unbalanced loads and load variations, the voltages near loads are identical or close to the pre-disturbance values. Large-disturbance voltage stability [2] refers to the system's ability to maintain steady voltages following large disturbances such as system faults, loss of generation, or circuit contingencies. The voltages at various points after such a disturbance may or may not reach the pre-disturbance values, leading to voltage sag at certain points.

Though power transmission and distribution systems in India have been centralized and the causes of power system instability are minimal, line outages caused by weather conditions are still considered a serious problem. Reactive power deficiency and voltage degradation are serious during such situations, and there is a need to assess the voltage stability of an interconnected power system affected by such a contingency. Using FACTS controllers [3, 4] one can control variables such as the voltage magnitude and phase angle at a chosen bus and the line impedance where a voltage collapse is observed. Introducing FACTS devices is the most effective way for utilities to improve the voltage profile and the voltage stability margin of the system. With the ongoing expansion and growth of the electric utility industry, including deregulation in many countries, numerous changes are continuously being introduced to a once predictable business. Although electricity is a highly engineered product, it is increasingly being considered and handled as a commodity. Flexible AC Transmission Systems (FACTS) provide proven technical solutions to address these new operating challenges. FACTS technologies allow for improved transmission system operation with minimal infrastructure investment, environmental impact and implementation time compared to the construction of new transmission lines. The potential benefits of FACTS equipment are now widely recognized by the power systems engineering and T&D communities. The aim of this paper is to identify the optimal location and size of SVC and STATCOM in an interconnected power system under N-1 contingency for voltage stability analysis. As the size and the cost of FACTS devices are high, an optimal location and size have to be identified before they are actually installed. We try to improve the voltage profile and the maximum loading parameter while keeping the losses under control using FACTS controllers. Optimization techniques find a variety of uses in many fields, and as artificial intelligence techniques improve, their use in power systems is playing an important role in the optimal location of FACTS devices. We use Cat Swarm Optimization (CSO) [15, 16, 17] to identify the optimal location and size of the FACTS controllers; this is the first paper to introduce CSO for voltage stability analysis under contingency for optimal placement of FACTS. The organization of this paper is as follows: section 2 states the problem; section 3 defines the objective function; section 4 gives the details of the test systems and the software used; section 5 details CSO; section 6 presents the results; and sections 7 and 8 give the conclusion and the future scope of the work.

II. PROBLEM STATEMENT

A contingency is the failure or loss of an element (e.g. generator, transformer, transmission line, etc.), or a change of state of a device (e.g. the unplanned opening of a circuit breaker in a transformer substation) in the power system. Contingency analysis (CA) is essentially a "preview" analysis: it simulates and quantifies the results of problems that could occur in the power system in the immediate future. CA is used for the off-line analysis of contingency events and shows operators what the effects of future outages would be. This allows operators to be better prepared to react to outages by using pre-planned recovery scenarios. An "outage" is the removal of equipment from service. A line contingency refers to the removal of a transmission line from the system, whereas in the case of a generator contingency we assume that the particular generator is no longer a part of the system, and usually the voltage variation is high. Both line and generator contingencies are large disturbances. In this paper we perform (N-1) line outage contingency analysis and try to improve the voltage profile and compensate the reactive power losses through the use of FACTS devices. An (N-1) contingency refers to the removal of transmission lines individually for (N-1) cases; at any instant only one particular line is removed. The Exhaustive Search [10] and PSO [11] techniques were investigated before the application of CSO to the present problem.

III. OBJECTIVE FUNCTION

The objective function we have assumed is F = {F1, F2, F3}. The functions F1, F2, F3 are defined below and combined in the optimization process as a weighted sum

F = w1 F1 + w2 F2 + ... + wn Fn (1)

where the wi are weighting factors.

In our study, the fitness function is defined as a sum of three terms with individual criteria. The first part of the objective function concerns the voltage levels: it is favourable for the bus voltages to be as close as possible to 1 p.u. Equation (2) gives the voltage deviation over all buses:

F1 = Fv = [ Σ (Vi - 1)² ]^(1/2) (2)

where the sum runs over the nb buses and Vi is the voltage of bus i. The second term is related to the total power system loss, which is to be minimized:

PLk = Psending - Preceiving
F2 = FL = Floss = Σ PLk (3)

where PLk indicates the loss in the line ending at buses l and k, and FL = Floss represents the total loss of the power network. The third function, F3, represents the size of the FACTS controller, which is to be minimized.
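A minimal sketch of the weighted fitness of Eqs. (1)-(3) follows, assuming equal unit weights; the bus voltages and line losses would come from the power-flow solution (PSAT in this paper) and are plain numpy arrays here.

```python
# Hedged sketch of the three-term fitness function.
import numpy as np

def fitness(V, line_losses, facts_size, w=(1.0, 1.0, 1.0)):
    F1 = np.sqrt(np.sum((V - 1.0) ** 2))    # voltage deviation, Eq. (2)
    F2 = np.sum(line_losses)                # total network loss, Eq. (3)
    F3 = facts_size                         # installed FACTS size (third term)
    return w[0] * F1 + w[1] * F2 + w[2] * F3
```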

IV. TEST SYSTEMS & SOFTWARE USED

We test our algorithm on two systems: a 3-bus system and the IEEE 14-bus system. The specifications of the 3-bus system are: 3 buses, 3 lines, 1 generator and 2 loads. The specifications of the IEEE 14-bus system are: 14 buses, 16 lines, 5 generators (including the slack bus) and 11 loads. An MVA base of 100 is assumed for both bus systems. All the analysis and testing is done in MATLAB [7].

Fig 1: 3-bus power system


Fig 2: Standard IEEE 14-bus system

V. CAT SWARM OPTIMIZATION AND FACTS

5.1 INTRODUCTION TO CSO:


In the field of optimization, many algorithms have been proposed in recent years, e.g. the Genetic Algorithm (GA), Ant Colony Optimization (ACO), Particle Swarm Optimization (PSO) and Simulated Annealing (SA). Some of these optimization algorithms were developed based on swarm intelligence. The Cat Swarm Optimization (CSO) algorithm is motivated by PSO and ACO. According to the literature, PSO with a weighting factor usually finds a better solution faster than pure PSO, but according to the experimental results, Cat Swarm Optimization (CSO) [15, 16, 17] presents even better performance.

5.2 PROPOSED ALGORITHM


In Cat Swarm Optimization, we first model the two major behaviors of cats as two sub-models, namely seeking mode and tracing mode [15].

5.2.1 THE SOLUTION SET IN THE MODEL -- CAT


The solution set must be represented in some manner. For example, GA uses a chromosome to represent the solution set; ACO uses the ant as the agent, and the paths made by the ants depict the solution sets; PSO uses the positions of particles to delineate the solution sets. We use cats and the model of the behaviors of cats to solve optimization problems, i.e. we use cats to portray the solution sets. In CSO, we first

decide how many cats we would like to use, and then apply the cats in CSO to solve the problem. Every cat has its own position composed of M dimensions, velocities for each dimension, and a flag to identify whether the cat is in seeking mode or tracing mode. The final solution is the best position of one of the cats, since CSO keeps the best solution until it reaches the end of the iterations.

5.2.2 SEEKING MODE


This sub-model is used to model the situation of the cat which is resting, looking around and seeking the next position to move to. In seeking mode, we define four essential factors: seeking memory pool (SMP), seeking range of the selected dimension (SRD), counts of dimension to change (CDC), and self-position considering (SPC). SMP defines the size of the seeking memory for each cat, i.e. the points sought by the cat; the cat picks a point from the memory pool according to the rules described. SRD declares the mutative ratio for the selected dimensions. These factors all play important roles in the seeking mode. SPC is a variable which decides whether the point where the cat is already standing will be one of the candidates to move to. How the seeking mode works can be described in 5 steps as follows:
Step 1: Select the total number of cats to be considered.
Step 2: For each cat, assume a fixed range of velocities.
Step 3: Calculate the fitness values (FS) of all candidate points.
Step 4: Select how many cats are to be in seeking mode.
Step 5: Randomly pick a cat from the total number of cats and apply it to seeking mode.

5.2.3 TRACING MODE


Tracing mode is the sub-model for the case of the cat tracing some target. Once a cat goes into tracing mode, it moves according to its own velocities in every dimension. The action of tracing mode can be described in 3 steps as follows:
Step 1: Update the velocities for every dimension (vk,d) according to the velocity-update equation.
Step 2: Check whether the velocities are within the range of the maximum velocity; in case a new velocity is out of range, set it equal to the limit.
Step 3: Update the position of catk and again calculate the fitness value.
Proceed until the best fitness value is obtained; the corresponding cat position and velocity are then the best values.

5.2.4 ALGORITHM FOR THE CAT SWARM OPTIMIZATION


As described in the above subsections, CSO includes two sub-models, the seeking mode and the tracing mode. To combine the two modes into the algorithm, we define a mixture ratio (MR) of joining seeking mode together with tracing mode. While cats are resting, they move their position carefully and slowly, sometimes even staying in the original position. The algorithmic flow of CSO is explained through the flow chart in figure 3, and a compact sketch combining both modes is given below.
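The following is an illustrative condensation of that flow, not the authors' implementation: the SMP, SRD and MR values, the treatment of CDC as a per-dimension probability, the omission of SPC, and the sphere test function are all assumptions made for brevity.

```python
# Hedged sketch of a CSO loop with seeking and tracing modes mixed by MR.
import numpy as np

def cso(obj, dim=2, n_cats=20, iters=100, MR=0.3, SMP=5, SRD=0.2, c=2.0, vmax=1.0):
    pos = np.random.uniform(-5, 5, (n_cats, dim))
    vel = np.random.uniform(-vmax, vmax, (n_cats, dim))
    best = min(pos, key=obj).copy()                # best cat so far
    for _ in range(iters):
        tracing = np.random.rand(n_cats) < MR      # flag: tracing vs seeking
        for k in range(n_cats):
            if tracing[k]:                         # tracing mode
                vel[k] += np.random.rand(dim) * c * (best - pos[k])
                vel[k] = np.clip(vel[k], -vmax, vmax)   # enforce velocity limit
                pos[k] += vel[k]
            else:                                  # seeking mode
                copies = np.repeat(pos[k][None], SMP, axis=0)
                mask = np.random.rand(SMP, dim) < 0.8   # CDC as a probability
                copies += mask * copies * SRD * np.random.uniform(-1, 1, (SMP, dim))
                pos[k] = min(copies, key=obj)      # move to the best candidate
        cand = min(pos, key=obj)
        if obj(cand) < obj(best):
            best = cand.copy()
    return best

print(cso(lambda x: np.sum(x ** 2)))   # converges near [0, 0] for the sphere
```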

5.3 FACTS
Flexible AC Transmission Systems (FACTS) [4] controllers have been used in power systems since the 1970s with the objective of improving system dynamic performance. Due to environmental, right-of-way and cost problems in both bundled and unbundled power systems, many transmission lines have been forced to operate at almost their full capacities worldwide. FACTS controllers enhance static performance, viz. increased loading, congestion management, reduced system loss, economic operation, etc., and dynamic performance, viz. increased stability limits, damping of power system oscillations, etc. The concept of FACTS involves a family of fast-acting, high-power electronic devices with advanced and reliable controls. By using FACTS controllers one can control variables such as the voltage magnitude and phase angle at a chosen bus and the line impedance. Flexible alternating-current transmission systems (FACTS) are defined as AC transmission systems incorporating power-electronics-based and other static controllers to enhance controllability and increase power transfer capability.


Fig 3: Flow chart for the CSO technique

5.3.1 SVC
A static VAR compensator [4] consists of a capacitor bank in parallel with a thyristor-controlled reactor, as shown in figure 4. It is used to stabilize a bus-bar voltage and improve the damping of the dynamic oscillations of power systems. In this model, a total reactance bSVC is assumed, and the following differential equation holds; the model is completed by the algebraic equation expressing the reactive power injected at the SVC node:

b'SVC = (Kr (Vref + vPOD - V) - bSVC) / Tr
Q = bSVC V² (4)
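As a quick numerical illustration of equation (4), the sketch below performs one forward-Euler step of the SVC state and evaluates the injected reactive power; the gain, time constant and step size are placeholder values, not PSAT defaults.

```python
# Hedged one-step integration of the SVC state equation (4).
def svc_step(b_svc, V, Vref=1.0, v_pod=0.0, Kr=100.0, Tr=0.01, dt=0.001):
    db = (Kr * (Vref + v_pod - V) - b_svc) / Tr   # state derivative, Eq. (4)
    b_svc = b_svc + dt * db                       # forward-Euler update
    Q = b_svc * V ** 2                            # injected reactive power
    return b_svc, Q
```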

Fig 4: Structure of SVC


5.3.2 STATCOM
A static synchronous compensator (STATCOM) [4], shown in figure 5, is a regulating device used on alternating-current electricity transmission networks. It is based on a power-electronic voltage source converter and can act as either a source or a sink of reactive AC power for an electricity network. If connected to a source of power, it can also provide active AC power. The differential equation of the model and the reactive power injected at the STATCOM node are given, respectively, as:

i'SH = (Kr (Vref + vPOD - V) - iSH) / Tr
Q = iSH V (5)

Fig 5: Structure of STATCOM

VI. IMPLEMENTATION, RESULTS AND DISCUSSIONS

The implementation of the present problem and its solution can be explained as follows. We run the CPF [14] for the base case, that is, the pre-disturbance case, and note the voltages at the various buses. We then introduce the contingency, i.e., the line outage, and rerun the CPF routine to observe the deterioration of the voltages. This is repeated for all N-1 cases, and the maximum loading parameter and the respective bus voltages are noted. From the N-1 line contingency analysis we identify the three critical cases with the maximum deviation in the voltages. After identifying the worst locations for line contingencies, the CSO technique described earlier is used to place the two shunt FACTS controllers at appropriate locations with chosen VAR ratings, to improve the maximum loading limit of the system and to bring the system voltages back to the pre-disturbance values (or near them).
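The N-1 screening just described can be outlined as follows; run_cpf is a hypothetical placeholder for the CPF routine (PSAT, in this paper) that returns the bus voltages and the maximum loading parameter for a given outage, and the ranking criterion (largest bus-voltage deviation) mirrors the text.

```python
# Illustrative outline of the N-1 contingency screening.
import numpy as np

def rank_contingencies(run_cpf, lines, n_worst=3):
    base_V, base_mlp = run_cpf(outage=None)        # pre-disturbance base case
    ranked = []
    for line in lines:                             # one line out at a time
        V, mlp = run_cpf(outage=line)              # rerun CPF for this outage
        ranked.append((np.max(np.abs(base_V - V)), line))
    ranked.sort(key=lambda t: t[0], reverse=True)  # largest voltage deviation
    return ranked[:n_worst]                        # the critical contingencies
```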

6.1 RESULTS FOR 3-BUS SYSTEM


Three-bus system: the theoretical and practical results for the 3-bus system without considering a line outage are shown below. The theoretical values were found using the Newton-Raphson (N-R) method, in which the active and reactive power injections are given by

Pi = Σk Vi Vk Yik cos(δi - δk - θik) (6)
Qi = Σk Vi Vk Yik sin(δi - δk - θik) (7)

Using the above equations, the voltages at the respective buses are obtained as follows.
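Equations (6)-(7) are equivalent to taking S_i = V_i conj((Ybus V)_i); the small numpy check below uses that identity, with Ybus and the voltage phasors as placeholders rather than the 3-bus system data of Table 1.

```python
# Hedged numpy check of the N-R power injection equations (6)-(7).
import numpy as np

def injections(Ybus, V, delta):
    Vc = V * np.exp(1j * delta)        # complex bus voltages from |V| and delta
    S = Vc * np.conj(Ybus @ Vc)        # S_i = P_i + j Q_i
    return S.real, S.imag              # P_i of Eq. (6), Q_i of Eq. (7)
```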
Table 1: 3-bus system values without line outage

Bus | Theoretical V (p.u.) | Theoretical δ | Practical V (p.u.) | Practical δ
1 | 1.05 | 0 | 1.05 | 0
2 | 1.03 | -0.2298 | 1.03 | -0.23305
3 | 0.9475 | -0.3202 | 0.93896 | -0.3301


Table 2: 3-bus system values with line outage

Bus | Theoretical V (p.u.) | Theoretical δ | Practical V (p.u.) | Practical δ
1 | 1.05 | 0 | 1.05 | 0
2 | 1.03 | -2.77916 | 1.03 | -2.8517
3 | 1.0248 | -1.9259 | 1.0248 | -1.946

For the line outage contingency case, we find the theoretical values using the Newton-Raphson method, assuming that the transmission line between bus 1 and bus 3 is removed. We then place the FACTS controller between bus 3 and bus 2 and note the respective voltages and line flows between the buses. The following tables show the voltage profile and power flow when the SVC is incorporated between bus 3 and bus 2.

Fig 6: 3-bus system with SVC placed at bus 3

Table 3: 3-bus system values with SVC placed at bus-3

Bus | V (p.u.) | δ
1 | 1.05 | 0
2 | 1.03 | -0.17476
3 | 1.0256 | -0.25122

Similarly the reactive power flow between bus-3 and bus-2 for SVC and STATCOM is tabulated as shown below.
Table 4: Reactive power flow between bus-3 & bus-2 (SVC)

 | Q (p.u.)
Theoretical | -1.65
Practical | -1.0037

Table 5: Reactive power flow between bus-3 & bus-2 (STATCOM)

 | Q (p.u.)
Theoretical | 1
Practical | 0.91941


6.2 RESULTS FOR IEEE 14-BUS SYSTEM FOR LINE 10 OUTAGE CONTINGENCY

6.2.1 FACTS USED: SVC (FOR LINE 10 OUTAGE)
Using the CPF routine [14], based on the same N-R method, the voltage stability of the IEEE 14-bus test system is investigated. The behavior of the test system with and without FACTS devices under different loading conditions is studied. By performing a line outage contingency (line 10 going out of service), the critical buses are identified as buses 12, 13 and 14. Bus 13 has the weakest voltage profile, and hence its profile needs to be improved using FACTS devices. The best locations and optimal sizes of the SVCs for the line 10 outage are identified using CSO as buses 12, 13 and 14, with sizes of 0.01 kVAr, 0.01 kVAr and 0.05 kVAr respectively. The following table shows the improvement in the voltage profile and maximum loading parameter for the line 10 contingency, before and after the contingency, when 3 SVCs are placed at buses 12, 13 and 14 using the CSO technique.
Table 6: Voltage profile before and after line 10 contingency

BUS NO | V (p.u.) BEFORE CONTINGENCY (WITHOUT FACTS) | V (p.u.) AFTER CONTINGENCY (line 10) | V (p.u.) AFTER CONTINGENCY (WITH 3 FACTS)
01 | 1.0572 | 1.0577 | 1.0564
02 | 0.93179 | 0.94915 | 0.88829
03 | 0.85811 | 0.88577 | 0.77827
04 | 0.77903 | 0.80139 | 0.75435
05 | 0.79614 | 0.82107 | 0.77839
06 | 0.82196 | 0.8462 | 0.97179
07 | 0.79451 | 0.79031 | 0.8646
08 | 0.93818 | 0.93788 | 0.98203
09 | 0.72039 | 0.70069 | 0.86105
10 | 0.71231 | 0.70156 | 0.85587
11 | 0.75452 | 0.76107 | 0.90063
12 | 0.7663 | 0.66922 | 1.045
13 | 0.74451 | 0.50774 | 1.045
14 | 0.66134 | 0.53445 | 1.045
M.L.P (max) | 2.375 | 2.1732 | 2.5799
QGEN | 7.1003 | 6.463 | 7.2481
QLOSS | 5.167 | 4.6955 | 7.2043
%QLOSS | 72.77 | 72.65 | 99.39

Fig 7: Voltage magnitude profile before and after placement of SVCS for line 10 contingency


Fig 8: P-V curves before and after placement of SVCS for line 10 contingency.

6.2.2 FACTS USED: STATCOM (FOR LINE 10 OUTAGE)


The best locations and optimal sizes of the STATCOMs for the line 10 contingency, as derived from CSO, are between buses 12-06, 13-12 and 14-13, with sizes of 0.01 kVAr, 0.01 kVAr and 0.01 kVAr respectively.

Fig 9: Voltage magnitude profile before and after placement of STATCOMS for line 10 contingency

Fig 10: P-V curves before and after placement of STATCOMS for line 10 contingency

Table 7: Voltage profile before and after line 10 contingency

BUS NO | V (p.u.) BEFORE CONTINGENCY (WITHOUT FACTS) | V (p.u.) AFTER CONTINGENCY (line 10) | V (p.u.) AFTER CONTINGENCY (WITH 3 FACTS)
01 | 1.0572 | 1.0577 | 1.0564
02 | 0.93179 | 0.94915 | 0.8891
03 | 0.85811 | 0.88577 | 0.77948
04 | 0.77903 | 0.80139 | 0.75471
05 | 0.79614 | 0.82107 | 0.77881
06 | 0.82196 | 0.8462 | 0.96988
07 | 0.79451 | 0.79031 | 0.86258
08 | 0.93818 | 0.93788 | 0.98081
09 | 0.72039 | 0.70069 | 0.8572
10 | 0.71231 | 0.70156 | 0.85229
11 | 0.75452 | 0.76107 | 0.89786
12 | 0.7663 | 0.66922 | 1.0399
13 | 0.74451 | 0.50774 | 1.0375
14 | 0.66134 | 0.53445 | 1.0339
M.L.P (max) | 2.375 | 2.1732 | 2.578
QGEN | 7.1003 | 6.463 | 7.247
QLOSS | 5.167 | 4.6955 | 7.1477
%QLOSS | 72.77 | 72.65 | 98.63

VII. CONCLUSION

CSO was successfully implemented for a sample 3-bus system and the IEEE 14-bus test system. This recent AI technique, which had not been applied to voltage stability problems before, produced the best results compared to the Exhaustive Search technique and PSO, as can be seen from the tables above. From the results it is clear that the voltage magnitude profile and the MLP have been improved compared with the Exhaustive Search technique [10] and PSO [11], while keeping the losses under control.

VIII. FUTURE SCOPE

The future scope of this work includes testing this algorithm on the IEEE 30-bus and IEEE 118-bus test systems to find the optimal locations for SVC and STATCOM, and comparing it with PSO and the Exhaustive Search technique.

REFERENCES
[1] Operation and Control in Power systems by P S R Murty, BS Publications. [2] C. W. Taylor, Power System Voltage Stability. New York, Mc Graw-Hill, 1994. [3] How FACTS Controllers Benefit AC Transmission Systems, John J. Paserba. [4] Hingorani NG, Gyugyi L (2000) Understanding FACTS: concepts and technology of flexible AC transmission systems. IEEE Press, New York. [5] Power Systems dynamics and stability by Prabha kundur. [6] Modern Power System Analysis, I. J. Nagrath. [7] F. Milano, "Power System Analysis Toolbox," Version 1.3.4, Software and Documentation, July 14, 2005.PSAT manual written by Federico Milano. [8] Proposed terms and definitions for flexible AC transmission system (FACTS), IEEE Transactions on Power Delivery, Volume 12. [9] Particle Swarm Optimization Algorithm for Voltage Stability Enhancement by Optimal Reactive Power Reserve Management with Multiple TCSCs, S.Sakthivel. [10] CPF, TDS based Voltage Stability Analysis using Series, Shunt and SeriesShunt FACTS Controllers for Line Outage Contingency, G. Naveen Kumar, Dr. M Surya kalavathi, ICPS 2011, IIT Madras. [11] Optimal Placement of Static VAr Compensators (SVCs) Using Particle Swarm Optimization, K Sundareswaran, Hariharan B, Fawas Palasseri Parasseri, Daniel Sanju Antony, and Binyamin Subair,IEEE,2010. [12] Optimal Placement of Static VAR Compensators (SVCs) Using Particle Swarm Optimization, Power, Control and Embedded Systems (ICPCES), 2010 International Conference, Page(s): 1 4. [13] Comparison of STATCOM, SVC, TCSC, and SSSC Performance in Steady State Voltage Stability Improvement, NAPS, 2010. [14] Comparison of SVC, STATCOM, TCSC, and UPFC Controllers for Static Voltage Stability Evaluated by Continuation Power Flow Method, Mehrdad Ahmadi Kamarposhti, Mostafa Alinezhad, Hamid Lesani, Nemat Talebi, 2008 IEEE Electrical Power & Energy Conference. [15] Enhancing the Performance of Watermarking Based on Cat Swarm Optimization Method, IEEEInternational Conference on Recent Trends in Information Technology, ICRTIT 2011, IEEE, MIT, Anna University, Chennai. June 3-5, 2011. [16] CSO and PSO to Solve Optimal Contract Capacity for High Tension Customers, IEEE, PEDS- 2009.

[17] "Cat Swarm Optimization for Clustering," 2009 International Conference of Soft Computing and Pattern Recognition.

BIOGRAPHY OF AUTHORS
G Naveen Kumar is an Assistant Professor in the EEE Department at VNRVJIET, Hyderabad, India. He received his B.Tech and M.Tech degrees from J.N.T. University, Hyderabad, and is currently working towards his Ph.D. at J.N.T. University.

M Surya Kalavathi is a Professor in the EEE Department at Jawaharlal Nehru Technological University, Hyderabad, India. She received her Ph.D. from J.N.T. University, Hyderabad, and a Post Doctorate from the prestigious Carnegie Mellon University, USA.

R Harini Krishna is presently working towards her Master's degree at MITS, Chittoor.


AUTONOMIC TRAFFIC LIGHTS CONTROL USING ANT COLONY ALGORITHM


Wadhah Z. Tareq, Rabah N. Farhan
Department of Computer science, Computer College, Anbar University, Iraq

ABSTRACT
The increase in population, especially in large cities, has led to the problem of traffic jams, and these jams require qualified control systems, particularly systems based on artificial intelligence. This research proposes a control method that manages intersections automatically using the concepts of autonomic systems and the concept of self-organization in the ant colony algorithm, in order to increase the flexibility of the system and its reliance on variables sensed directly from the environment. An adaptive algorithm and a fixed-time algorithm were also implemented for comparison with the ant colony algorithm.

KEYWORDS: Autonomic system, traffic light, Ant colony, self-organization.

I. INTRODUCTION

The delay time that occurs at traffic intersections and its negative effects have, over the past few years, led to thinking about new control methods that reduce this delay, especially at times of high traffic flow such as in the morning. Traffic light systems differ from other systems in that they cannot be fixed systems depending on a few static data items; on the contrary, they depend on highly adaptive values that cannot be predicted, so they need intelligent systems able to respond to any change that occurs. The problems of traffic lights have been studied in many researches using different search algorithms and intelligent systems, such as genetic algorithms, fuzzy systems, self-organization and swarm intelligence. The ant colony algorithm is one of the important swarm intelligence algorithms found in the literature, and it has all the characteristics of an adaptive system, like traffic control systems. One of the most surprising behavioral patterns exhibited by ants is the ability of certain ant species to find what computer scientists call shortest paths. Biologists have shown experimentally that this is possible by exploiting communication based only on pheromones, an odorous chemical substance that ants may deposit and smell. It is this behavioral pattern that inspired computer scientists to develop algorithms for the solution of optimization problems [1]. This research therefore uses the ant colony algorithm to find the optimal green time and reduce the delay that occurs at the intersection. Computer simulation shows that this approach performs well and gives good results. The rest of the paper is structured in the following manner: Section 2 provides a brief background on related work; Section 3 explains the model of the traffic intersection; Section 4 introduces swarm intelligence with examples; Section 5 introduces the ant colony algorithm; Section 6 presents the t-test function; Section 7 explains applying the ant algorithm to the traffic light signal; Section 8 summarizes the simulation results. The conclusion and future work appear in the final section of the paper.


II. BACKGROUND

Several systems for managing the traffic lights problem have been implemented during previous years. In 1996, Kok Khiang Tan et al. discussed the implementation of an intelligent traffic lights control system using fuzzy logic technology, which had the capability of mimicking human intelligence for controlling traffic lights; the fuzzy logic traffic lights controller performed better than the fixed-time controller due to its flexibility [2]. In 2004, Marco Wiering et al. used reinforcement learning with road-user-based value functions to determine optimal decisions for each traffic light. The decision was based on a cumulative vote of all road users standing at a traffic junction, where each car votes using its estimated advantage (or gain) of setting its light to green; three series of experiments were performed using the Green Light District traffic simulator [3]. In 2008, R. Foroughi et al. proposed a new ant colony based optimizer to improve the traffic flow in a city. They used an ant colony optimizer as its main part to select the optimum path from origin to destination. To apply ACO to this problem they changed the original version of ACO, and the modified algorithm can be used for other applications such as designing intelligent data routers, intelligent data mining, etc. [4]. In 2009, David Renfrew and Xiao-Hua Yu investigated a new approach to finding the optimal signal timing plan for a traffic intersection using the ant colony optimization algorithm. They considered two different ACO algorithms, namely the Ant System (AS) and the Elitist Ant System (EAS), applied both to control signals at a traffic intersection to reduce vehicle waiting time, and also employed a rolling horizon algorithm to achieve real-time adaptive control [5]. In 2011, Carlos Gershenson and David A. Rosenblueth used an elementary cellular automaton following rule 184 to mimic particles moving in one direction at a constant speed. They studied and evaluated the behavior of different kinds of traffic light controllers for a grid of six-way streets allowing for either two- or three-street intersections. They implemented three different types of traffic light control methods: a green-wave method, which has advantages, e.g., when most of the traffic flows in the direction of the green wave at low densities; a self-organizing method, in which each intersection independently follows the same set of rules based only on local traffic information; and a random method, which simply changes the lights with fixed periods but random phases. The simulations presented showed that the self-organizing method is highly scalable and degrades gracefully in performance as density increases [6].

III. MODEL OF TRAFFIC INTERSECTION

The computer simulation models a single intersection corresponding to a traffic intersection in real life. It contains four streets, each street with two directions, and each direction with a width of two cars. All cars in any street are able to turn in any direction, as at a real intersection. One street has the green light and the other three have red lights.

Fig. 1 Traffic intersection

It is assumed that the intersection is clear when the simulation starts (i.e., zero initial conditions, or no queue at the beginning). It is also assumed that the number of vehicles at the intersection is known, i.e., sensor-type detectors are available at the intersection. We choose the maximum and minimum green times to be 30 seconds and 0 seconds, respectively. Both arrival and departure headways are 0 seconds. Loss time (human reaction time) is 0 seconds. Time is the most important quantity in the system and is the measure of efficiency. In the non-adaptive traffic system each street has a fixed green time equal to 30 seconds and there is no sensor; the green light moves from street to street in order. In the intelligent (adaptive) traffic system each street has a count of cars and a wait time, and from these two values the green time is calculated. Each street works as a queue, and the sensor counts the cars in the queue at a given time as $L_i(t)$, where $i$ is the street number and $t$ the time. The total number of cars in the intersection is the sum of the four street counts:

$$L_{total} = L_1 + L_2 + L_3 + L_4 \quad (1)$$

When the system gives one street the green signal and the other three red, the queues of the three red streets can only grow, so the queue length becomes

$$L_i^{red}(t_2 - t_1) = L_i(t_1) + ac \quad (2)$$

where $ac$ is the number of arriving cars: in each second $ac$ is 0 or 1 (1 if the sensor senses a car, 0 otherwise), under the hypothesis that all cars move at a constant speed. The queue length of the street with the green light can be denoted as

$$L^{green}(t_2 - t_1) = L^{green} + ac - 2 \quad (3)$$

where 2 is the number of departing cars; it is set to two because the street is two cars wide and all cars move at a constant speed, so the green-light queue grows by a random value in {0, 1} and shrinks by two cars each second. At any time the traffic intersection has four possible states for the four streets, each with one green light and three red lights:

$$S_1 = [L_1 + ac - 2,\; L_2 + ac,\; L_3 + ac,\; L_4 + ac]$$
$$S_2 = [L_1 + ac,\; L_2 + ac - 2,\; L_3 + ac,\; L_4 + ac]$$
$$S_3 = [L_1 + ac,\; L_2 + ac,\; L_3 + ac - 2,\; L_4 + ac]$$
$$S_4 = [L_1 + ac,\; L_2 + ac,\; L_3 + ac,\; L_4 + ac - 2]$$

A special case arises in the adaptive traffic system when one or more streets carry a high load while another street carries a low load: the lightly loaded street can never become the maximum street and so never gets green time. In this case the wait time is used to determine when this street receives the green signal, namely when its wait time is equal to or greater than 50 seconds. The system checks the wait time in each loop and thereby ensures that a street with a few cars does not wait indefinitely. The wait time in each loop is calculated as

$$L_i^{WAIT}(t) = \max_i FST_i + gt(t-1) \quad (4)$$

where $FST$ is the first sense time (the arrival time of the oldest car), and $gt$ is the green time of the previous loop. From the above, the dynamic equation of the traffic system is

$$L(t) = \sum_{i=1}^{4} L_i^C + \sum_{i=1}^{4} \left( L_i^{IN} - L_i^{OUT} \right) \quad (5)$$

where $L_i^C$ is the number of current cars in the intersection, $L_i^{IN}$ is the number of cars that arrived at the intersection at time $t$, and $L_i^{OUT}$ is the number of departures on a green street.
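For illustration, the queue dynamics of equations (1)-(5) can be sketched in a few lines of Python. This is a minimal sketch rather than the authors' simulator: the arrival probability `p_arrival` and the function name `step` are assumptions made for exposition; the text only states that $ac$ is 0 or 1 each second.

```python
import random

def step(queues, green, p_arrival=0.5):
    """Advance the four street queues by one second.

    queues : list of the four queue lengths L_i (eq. (1) sums them)
    green  : index of the street currently holding the green signal
    Red streets grow by the arrival indicator ac in {0, 1} (eq. (2));
    the green street also releases two cars per second (eq. (3)).
    """
    for i in range(4):
        ac = 1 if random.random() < p_arrival else 0  # sensor reading
        queues[i] += ac
        if i == green:
            queues[i] = max(0, queues[i] - 2)  # street is two cars wide
    return queues

# Example: 60 seconds with street 0 held green; the states S1..S4 above
# correspond to green = 0..3.
q = [0, 0, 0, 0]
for t in range(60):
    q = step(q, green=0)
print(q, "L_total =", sum(q))  # eq. (1): L_total is the sum of the queues
```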


IV. SWARM INTELLIGENCE

The expression "swarm intelligence" was first used by Beni, Hackwood, and Wang in the context of cellular robotic systems, where many simple agents occupy one- or two-dimensional environments to generate patterns and self-organize through nearest-neighbor interactions [7]. The mechanisms identifying swarm behavior are:
1. Multiple interactions among the individuals;
2. Retroactive positive feedback (increase of pheromone when food is detected);
3. Retroactive negative feedback (pheromone evaporation);
4. Increase of behavior modification (increase of pheromone when a new path is found) [8].
There are several examples of swarm intelligence, and nature is the most important source of such examples: fish schooling, bees, termite nests, wasp nests, ant colonies, etc., which have inspired the construction of many algorithms.

V. ANT COLONY ALGORITHM (AC)

The ant colony algorithm is a metaheuristic approach used to solve complex computing problems and find optimal solutions. The algorithm is inspired by the behavior of ants in the real world. The AC algorithm is a multi-agent system in which each agent is called an artificial ant; it is one of the best-known examples of intelligent swarm systems and is used to solve several types of problems, such as the TSP and routing problems in networks [5]. The ant colony algorithm was proposed by M. Dorigo in 1991 in his Ph.D. dissertation; the approach then became widespread and has seen several successful developments, such as Elitist AS in 1992, Ant-Q in 1995, Ant Colony System in 1996, MAX-MIN AS in 1996, Rank-Based AS in 1997, ANTS in 1999, BWAS in 2000 and Hyper-Cube AS in 2001 [9], but all ant colony optimization algorithms share the same idea, which can be summarized in four steps [1][5][10]:
1. Initialization: the pheromone is the most important element of the ant algorithm and must be set to a constant value on each node.
2. Solution construction: the ant location and the probability of ants moving from one node to another are central to the ant algorithm. The probability of an ant moving from node $i$ to node $j$ can be written

$$P_{ij} = \begin{cases} \dfrac{\tau_{ij}\,\eta_{ij}}{\sum_{l \in N_i} \tau_{il}\,\eta_{il}} & \text{if } j \in N_i \\ 0 & \text{otherwise} \end{cases} \quad (6)$$

where $N_i$ is the set of neighborhood nodes of $i$, $\tau_{ij}$ is the pheromone value between nodes $i$ and $j$, and $\eta_{ij}$ represents the heuristic information (which is already available).
3. Pheromone update: the algorithm updates the pheromone on each path using

$$\tau_{ij} = \tau_{ij} + \Delta\tau_{ij} \quad (7)$$

where $\Delta\tau_{ij}$ is the change in the pheromone value in each loop. This is the standard pheromone update, and it differs between the various types and developments of ant colony algorithms.
4. The above solution construction and pheromone update procedures (steps 2 and 3) are repeated until a stop criterion is met.
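As a concrete illustration of steps 1-4, the following minimal Python sketch implements the transition rule (6) and the update rule (7). It is a generic sketch rather than any particular AS variant: `tau` and `eta` are dictionaries keyed by edge, and the exponents `alpha` and `beta` (set to 1 here, reducing eq. (6) to a plain product) are conventional ACO parameters that are not defined in the text above.

```python
import random

def choose_next(i, unvisited, tau, eta, alpha=1.0, beta=1.0):
    """Step 2 (eq. (6)): pick the next node j from the unvisited
    neighbours N_i with probability proportional to
    tau_ij^alpha * eta_ij^beta."""
    weights = [tau[(i, j)] ** alpha * eta[(i, j)] ** beta for j in unvisited]
    total = sum(weights)
    r, acc = random.random() * total, 0.0
    for j, w in zip(unvisited, weights):
        acc += w
        if r <= acc:
            return j
    return unvisited[-1]

def update_pheromone(tau, deposits):
    """Step 3 (eq. (7)): tau_ij <- tau_ij + delta_tau_ij; how delta_tau is
    computed is what distinguishes the AS variants listed above."""
    for edge, delta in deposits.items():
        tau[edge] += delta
```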

VI. T-TEST STATISTICS FUNCTION

The t-test was developed by W. S. Gosset, a statistician employed in a factory. However, because the factory did not allow employees to publish their research, Gosset's work on the t-test appears under the name "Student" (and the t-test is sometimes referred to as "Student's t-test") [11]. There are two types of t-test: the one-sample t-test and the two-sample t-test. In this research the two-sample t-test is used, and the purpose is to determine whether there is a difference between the values produced by the two methods.
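As an illustration of how such a two-sample test is applied to the simulation output, the sketch below runs it with SciPy on the first ten wait-time rows of Table 1. The choice of SciPy is an assumption about tooling; the paper does not state which software computed its p values.

```python
from scipy import stats

# Wait times from the first ten rows of Table 1 (adaptive vs. ant method).
adaptive = [0, 1, 3, 6, 9, 14, 19, 20, 24, 33]
ant      = [0, 0, 1, 3, 7, 11, 17, 22, 28, 30]

t_stat, p = stats.ttest_ind(adaptive, ant)
# A difference is declared only when p < 0.05; on its full data the paper
# reports p = 0.47 (no difference) and p = 0.00013 for the special case.
print(f"t = {t_stat:.3f}, p = {p:.3f}")
```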


VII. APPLYING ANT ALGORITHM IN TRAFFIC LIGHT SIGNAL

At the intersection there are four streets, with a number of cars and a wait time on each one. The goal is to find an optimal green time for each loop that reduces the wait time and the number of cars in the intersection. In the ant colony algorithm a number of artificial ants represent the problem: in this model there are $m$ ants distributed over the streets randomly, and one ant $k$ works as an observer ant that moves from one street to another according to the movement probability of the ant algorithm. Each of the $m$ ants reads the car count and wait time coming from the sensor and, depending on them, deposits a pheromone representing the green time for its street. The pheromone value of an ant can be written

$$gt_i^m = L_i^{count} / 2 \quad (8)$$

where $gt$ is the green time assigned by ant $m$ to street $i$. The division by two follows from the computer simulation model: each street is two cars wide and all cars move at a constant speed, so during each green second the green street loses two cars. The observer ant $k$, as mentioned, moves from street to street and gives the green signal to the maximum street, as in the adaptive method, but under the probability of moving from street $i$ to street $j$:

$$P_{ij} = \begin{cases} \dfrac{\tau_{ij}}{\sum_{l \in N_i} \tau_{il}} & \text{if } j \in N_i \\ 0 & \text{otherwise} \end{cases} \quad (9)$$

where $N_i$ is the set of neighborhood streets of $i$ not yet visited, and $\tau_{ij}$ is the pheromone value of a neighborhood street divided by the sum of the pheromone values of the other streets. This movement probability means that in each loop the observer ant $k$ does not revisit any street until it has visited all the other streets. The two methods give convergent results, but in the special case discussed earlier the ant colony algorithm gives good results and is more efficient than the adaptive method. The proposed method is tested and compared using the t-test function. The simulation results are explained in the next section.
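A minimal sketch of this scheme, under the paper's assumptions (four streets, two departures per green second, one observer ant per loop); the helper names and the small epsilon guarding empty streets are illustrative additions:

```python
import random

def green_time(car_count):
    """Eq. (8): a street ant deposits a pheromone equal to the green time
    needed to clear its queue at two departing cars per second."""
    return car_count / 2

def next_street(current, visited, pheromone):
    """Eq. (9): the observer ant k moves to a not-yet-visited street with
    probability proportional to its pheromone (the deposited green time),
    so every street is served once per loop."""
    candidates = [s for s in range(4) if s != current and s not in visited]
    weights = [pheromone[s] + 1e-9 for s in candidates]  # epsilon: empty street
    return random.choices(candidates, weights=weights, k=1)[0]
```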

VIII. RESULTS

The two proposed methods were tested in the simulation and their results compared. Table 1 shows the number of cars and the wait time in the intersection when running the two methods. Both methods gave good results and reduced the waiting time, but when the two are compared, the ant method's results are approximately similar to the adaptive method's. This is confirmed by the t-test: comparing the two sets of values gave p = 0.47 against the threshold of p = 0.05, so there is no difference, or only a very small one. When applying and observing the two methods, we found a special case occurring in the system in which one street has less load than the other three streets. When the results of this special case were calculated, they showed that the ant colony algorithm is better than the adaptive method and reduces the wait time considerably. The ant algorithm's results for the special case were also compared and tested using the t-test: comparing the two sets of values gave p = 0.00013 against the threshold of p = 0.05, so there is a difference between the two methods. Table 2 shows the number of cars and the wait time in the intersection when running the two methods in the special case.
Table 1: Adaptive and ant colony method results

Adaptive traffic          Ant traffic
Car num   Wait time       Car num   Wait time
4         0               4         0
8         1               9         0
13        3               13        1
24        6               22        3
35        9               33        7
48        14              45        11
59        19              64        17
69        20              77        22
80        24              89        28
108       33              93        30
122       38              104       33
158       50              120       36
197       66              142       47
226       81              158       53
251       94              189       65
319       120             215       78
336       128             319       117
403       156             336       122
462       184             411       147
530       211             504       190
618       244             620       237
645       256             653       248
711       287             713       271

Table 2: Adaptive and ant colony method results for the special case

Adaptive traffic          Ant traffic
Car num   Wait time       Car num   Wait time
2         0               2         0
6         0               4         0
11        1               7         0
16        2               10        0
25        4               15        1
32        7               21        2
37        9               28        4
45        12              34        6
52        15              39        8
58        17              47        10
64        19              58        14
69        22              67        17
80        26              82        24
108       43              135       30

The array of green traffic lights also shows the difference between the methods:
Table 3: Adaptive signals array (street indices, 0-3, given the green signal in successive loops)
0 2 2 1 3 0 3 3 0 2 0 3 2 0 3 2 0 2 0 1 0 3 0 3 1 2 0 0 0 2 2 3 3 2 1 0 0 0 0 0 1 2 1 2 0 3 0 1 2

Table 4: Ant signals array (street indices, 0-3, given the green signal in successive loops)
1 0 3 2 1 2 2 3 3 3 0 1 1 2 0 0 1 2 2 0 3 3 3 1 2

Street number 1 is the street with fewer cars in this special case, and the difference between the two methods is shown by how street 1 obtains the green signal.

IX. CONCLUSION AND FUTURE WORK

Traffic lights are one of the most important problems in daily life. In this research two methods were proposed to solve this problem: the adaptive method and the ant algorithm method. Applying the t-test to the computer simulation results shows that the two methods are efficient and there is not much difference between them; but when the two algorithms are applied to the special case in which one street has a lower traffic ratio than the other streets, there were differences in the obtained results, and the t-test showed that the ant algorithm is more efficient and reduces the delay time when compared with the adaptive method. There are several suggestions, given below, that could be implemented in the future to make the project more complete:
- There are no amber lights in the current model. The behavior behind amber lights is equivalent to that behind red lights, i.e., vehicles should stop.
- Pedestrians were not considered in our simulation.
- The simulation implemented one intersection only; future work could extend it to several intersections.
- Applying other metaheuristic algorithms and comparing them with the proposed ant colony algorithm.
- Allowing variable vehicle speeds, where a vehicle can take several speeds instead of one fixed speed, and taking human response time into consideration.

ACKNOWLEDGMENTS
We would like to express our thanks to Dr. Belal Al-Khateeb for his guidance and for the useful and profound discussions during the period of this research.

REFERENCES
[1] Marco Dorigo and Thomas Stützle, "Ant Colony Optimization", Massachusetts Institute of Technology, 2004.
[2] Kok Khiang Tan, Marzuki Khalid and Rubiyah Yusof, "Intelligent traffic lights control by fuzzy logic", Malaysian Journal of Computer Science, Vol. 9, No. 2, December 1996, pp. 29-35.
[3] Marco Wiering, Jelle van Veenen, Jilles Vreeken, and Arne Koopman, "Intelligent Traffic Light Control", Technical Report UU-CS-2004-029.
[4] R. Foroughi, Gh. A. Montazer and R. Sabzevari, "Design of a new urban traffic control system using modified ant colony optimization approach", Iranian Journal of Science & Technology, Transaction B, Engineering, Vol. 32, No. B2, pp. 167-173, 2008.
[5] David Renfrew and Xiao-Hua Yu, "Traffic Signal Control with Swarm Intelligence", IEEE, 2009.
[6] Carlos Gershenson and David A. Rosenblueth, "Self-organizing traffic lights at multiple-street intersections", Cornell University Library, 2011.
[7] E. Bonabeau, M. Dorigo and G. Theraulaz, "Swarm Intelligence: From Natural to Artificial Systems", Oxford University Press, 1999.
[8] Giovanna Di Marzo Serugendo, Marie-Pierre Gleizes and Anthony Karageorgos, "Self-Organisation and Emergence in MAS: An Overview", Informatica (Slovenia), 2006.
[9] Marco Dorigo, Mauro Birattari, and Thomas Stützle, "Ant Colony Optimization: Artificial Ants as a Computational Intelligence Technique", IEEE, 2006.
[10] Li Xin, Yu Datai and Qin Jin, "An Improved Ant Colony Algorithm and Simulation", IEEE, 2009.
[11] The TTEST Procedure, SAS Institute Inc., Cary, NC, USA, 2008.

AUTHORS
Rabah N. Farhan received the Bachelor's degree in Computer Science from Almustanseria University in 1993, a Higher Diploma in Data Security/Computer Science from the University of Technology in 1998, the Master's degree in Computer Science from the University of Technology in 2000, and the Ph.D. degree in Computer Science from the University of Technology in 2006. He was an undergraduate Computer Science lecturer at the University of Technology from 2002 to 2006, and has been an undergraduate and postgraduate Computer Science lecturer and graduate advisor at the Computer College, University of Al-Anbar, from 2006 till now.

Wadhah Z. Tareq received the B.Sc. in Computer Science from Al-Anbar University, Iraq (2006-2010), and has been an M.Sc. student in the Computer Science Department, Al-Anbar University, since 2011. His fields of interest are autonomic systems, search algorithms and related fields. Wadhah has taught several subjects, including cryptography, operating systems, computer vision and image processing.


CPW FED SLOT COUPLED WIDEBAND AND MULTIBAND ANTENNAS FOR WIRELESS APPLICATIONS
Mahesh A. Maindarkar and Veeresh G. Kasabegoudar
P. G. Department, MBES College of Engineering, Ambajogai, India

ABSTRACT
A circular shaped CPW fed capacitive coupled monopole antenna is presented. The ground dimensions of the CPW feed are used to tune the proposed antenna's input impedance (bandwidth). These dimensions can also be used to make the antenna operate in either ultra-wideband or multiband mode. The capacitive gap introduced on the circular stub likewise determines whether the antenna operates in wideband or multiband mode. The capacitive gap may be placed at any point on the circular stub; in this work we investigated its effect at three different places, i.e., the lower end, the center, and the upper end of the geometry. More than 100% (2-12GHz band) impedance bandwidth was achieved for the UWB antenna design, whereas the multiband antenna design presented produced triple bands with impedance bandwidths of 76.58, 35.73, and 23.11% respectively. Similar results were obtained for all the cases studied. The simulation studies presented here indicate wideband and multiband operation with good radiation characteristics.

KEYWORDS: Microstrip Antennas, Capacitive Coupling, and Wideband Antennas.

I. INTRODUCTION

In recent years, research in the area of ultra wideband (UWB) systems has generated a lot of interest in wireless applications [1]. It may be recalled that the FCC's definition of the UWB bandwidth (3.1GHz-10.6GHz) has led to rapid growth in wireless applications in this range of frequencies [2]. To meet these demands, CPW fed monopole antennas are the right candidates and are most popular in integrated circuit applications because of their ease of integration into system-on-chip (SOC) applications. Several UWB antennas are reported in the literature [1-13]. For example, [1] explains a bandwidth enhancement technique using a modified ground plane with diagonal edges; however, the antenna uses a finite ground on the back side, and hence its bandwidth is limited to 18.3%. In another work [3], an asymmetric ground plane is used to obtain multiband operation covering various commercial wireless applications. On the other hand, the antenna reported in [4] is a CPW fed slot antenna that yields an impedance bandwidth of 52% and bidirectional radiation patterns. The antenna presented in [5] is suitable for dual band applications, whereas the antenna reported in [8] uses meta-materials, which yield multiband characteristics. In yet another effort, A. K. Panda et al. [9] demonstrated multiband operation using a fractal geometry. In most of the cases reported in the literature, the antennas either yield less bandwidth or are difficult to fabricate and assemble. In this paper, we propose a CPW fed capacitive gap coupled wideband antenna which is suitable for ultra wideband and multiband applications. The capacitive gap is used to tune the antenna for either UWB or multiband operation. Section 2 presents the basic geometry and its working. Simulation results and the antenna geometry with the capacitive gap are presented in Section 3, followed by conclusions in Section 4.


II. BASIC GEOMETRY

Figure 1 shows the basic geometry of the antenna. The geometry is basically a CPW fed monopole antenna. The substrate used for design and analysis is a glass epoxy material whose properties are as listed in Table 1. The effective dielectric constant can be calculated from the design expressions listed in [6, 7]. The antenna was optimized using Ansoft HFSS [14], a commercially available electromagnetic solver. The physical dimensions of the antenna are listed in Table 1. From Figure 2 it can be noted that the basic geometry offers ultra wideband operation (2-12GHz). This corresponds to more than 100% impedance bandwidth with good gain and radiation characteristics throughout the band of operation. In order to obtain ultra wideband operation, the ground dimensions of the antenna were varied and optimized. It may also be noted that these dimensions (L and W) can be varied and optimized for multiband operation. The capacitive gap introduced will also help in tuning the ultra wideband operation into multiband operation, as explained in the next section (Section 3).

(a) Basic geometry (b) Simulation setup in HFSS
Figure 1: Basic geometry of the CPW fed monopole antenna and its simulation setup in HFSS.

Table 1: Optimized dimensions of the proposed antenna
Parameter                    Value
Length of ground (L)         17.0mm
Width of ground (W)          15.5mm
Radius of circle (r)         16.2mm
CPW gap (g)                  0.5mm
Slot gap width (d)           0.1mm
Dielectric constant (εr)     4.4
Loss tangent (tan δ)         0.001
Height of substrate (h)      1.6mm
Figure 2: Simulated return loss characteristics of the geometry shown in Figure 1 (return loss in dB versus frequency, 2-12 GHz).


(a) E and H plane patterns at 2.4 GHz

(b) E and H plane patterns at 3.5 GHz

(c) E and H plane patterns at 6.5 GHz
Figure 3: Radiation patterns of the antenna shown in Figure 1 at various frequencies.

III. SIMULATION SETUP AND DISCUSSIONS

As stated in Section 2, the geometry shown in Figure 1 was simulated using Ansoft HFSS software. All key design parameters (ground width (W), ground length (L), and capacitive slot (d)) have been investigated to analyze the effect on antenna performance and are discussed in the following subsections.

3.1. Effect of Ground Dimensions on Antenna Geometry


In order to merge all the individual bands and obtain wideband operation suitable for FCC defined applications, the ground dimensions were varied. In the first step the ground length was varied in steps of 2mm as shown in Figure 4. From Figure 4 it may be observed that L=31.8mm proves to be the optimum. In the next step the width of the ground plane was varied from 25.4mm to 33.4mm, keeping the optimum value of L=31.8mm obtained in the first step. Return loss characteristics for this case are presented in Figure 5. From these two steps the optimum values of the ground dimensions are L=31.8mm and W=25.4mm. An effort was also made to vary the circular stub radius from its current value of 16mm; however, no significant changes were obtained for the cases studied (14mm to 18mm in steps of 1mm).

Figure 4: Return loss characteristics for different values of L (23.8, 25.8, 27.8, 29.8 and 31.8mm) keeping W=25.4mm constant. (All other dimensions are as listed in Table 1.)
Figure 5: Return loss characteristics for different values of W (25.4, 27.4, 29.4, 31.4 and 33.4mm) keeping L=31.8mm constant. (All other dimensions are as listed in Table 1.)

3.2. Effect of Capacitive Gap


The geometries of the basic antenna with a capacitive slot are presented in Figure 6. Slots were introduced at three different positions: the center, 5mm above the center (upper slot), and 5mm below the center (lower slot) of the circular geometry. All the cases were investigated with one slot at a time. The slot width (d) was varied in steps of 0.1mm, keeping all other dimensions constant. All the results are presented in Table 2, and the return loss characteristic for the optimum case is depicted in Figure 7. From Table 2 it may be noted that the d=0.1mm case proves to be the best one, as it offers three frequency bands with optimum bandwidth values. The gap width was not reduced below 0.1mm because it would be difficult to realize the geometry during fabrication. Similar results were obtained for the other two cases (Figures 6(b) and 6(c)); however, it was observed that for the gap placed at the lower side, the gain was not uniform throughout the bands of operation.


(a) (b) (c)
Figure 6: Gap coupled wideband antenna geometries. A capacitive slot is placed on (a) the lower side, (b) the middle, (c) the upper side. All other dimensions are as listed in Table 1.

Table 2: Effect of the capacitive slot width (d) on the VSWR bandwidth of the antenna
Slot width (mm)   Frequency range (GHz)                  Bandwidth (%)
                  1st        2nd        3rd              1st     2nd     3rd
0.1               1.95-4.37  5.86-8.41  9.22-11.63       76.58   35.73   23.11
0.2               2.06-4.47  6.06-8.34  9.13-12.0        73.81   31.66   27.16
0.3               2.00-4.19  6.23-8.12  9.44-12.0        70.76   26.34   23.88
0.4               1.93-4.41  6.08-8.56  9.37-11.63       78.23   33.87   21.52
0.5               2.04-4.37  6.19-8.63  9.19-11.95       72.69   32.92   26.11
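The quoted percentages are the fractional bandwidth about the band centre, $2(f_h - f_l)/(f_h + f_l) \times 100$. A short check of the first row of Table 2 (the function name is illustrative):

```python
def percent_bandwidth(f_low, f_high):
    """Fractional impedance bandwidth about the band centre (in %)."""
    return 200.0 * (f_high - f_low) / (f_high + f_low)

# First row of Table 2 (d = 0.1 mm): the three bands in GHz.
for band in [(1.95, 4.37), (5.86, 8.41), (9.22, 11.63)]:
    print(band, f"{percent_bandwidth(*band):.2f} %")
# Prints 76.58 %, 35.74 % and 23.12 %, matching Table 2 to within rounding.
```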
Figure 7: Return loss characteristics for different values of the capacitive gap (d = 0.1 to 0.5mm).

IV. CONCLUSIONS & FUTURE SCOPE

A circular shaped CPW fed capacitive coupled monopole antenna has been presented. The ground dimensions of the CPW feed were varied to obtain UWB operation, and a capacitive gap was introduced on the circular stub to obtain multiband operation. The capacitive gap may be placed at any point on the circular stub, i.e., the lower end, the center, or the upper end of the geometry, and similar results were obtained for all the cases studied. The simulation studies indicate wideband and multiband operation with good radiation characteristics. For wideband operation, more than 100% (2-12GHz) impedance bandwidth with good gain throughout the band of operation was obtained. Triple bands with impedance bandwidths of 76.58, 35.73, and 23.11% were obtained when a capacitive gap was introduced at the middle and optimized. In future work it is planned to fabricate the proposed antenna and test it for practical validation, characterizing its

performance in terms of impedance bandwidth, gain, and radiation efficiency. The antenna presented here proves to be a strong candidate for FCC defined UWB operation.

REFERENCES
[1]. N. Prombutr, P. Kirawanich, and P. Akkaraekthalin, "Bandwidth increasing technique using modified ground plane with diagonal edges," IETE J. of Research, vol. 55, no. 5, pp. 196-200, 2009.
[2]. V. G. Kasabegoudar, D. Upadhyay, and K. J. Vinoy, "Design studies of ultra wideband microstrip antennas with a small capacitive feed," Int. J. Antennas and Propagat., pp. 1-8, vol. 2007, 2007.
[3]. V. Deepu, R. K. Raj, M. Joseph, M. N. Suma, and P. Mohanan, "Compact asymmetric coplanar strip fed monopole antenna for multiband applications," IEEE Trans. Antennas Propagat., vol. 55, no. 8, pp. 2351-2357, 2007.
[4]. T. Shanmuganatham, K. Balamaniknandan, and S. Raghavan, "CPW fed slot antenna for wideband applications," Int. J. Antennas and Propagat., pp. 1-4, vol. 2008.
[5]. D. Prakash, R. Khanna, V. Kumar, and A. Chaudhary, "Novel dual-band CPW-fed monopole slot antenna for WLAN/WiMax applications," Int. J. Comp. Sci. & Tech. (IJCST), vol. 1, no. 1, pp. 21-24, 2010.
[6]. R. Garg, P. Bhartia, I. Bahl, and A. Ittipiboon, Microstrip Antenna Design Handbook, Artech House, Norwood, Mass, USA, 2001.
[7]. G. Kumar and K. P. Ray, Broadband Microstrip Antennas, Artech House, Norwood, Mass, USA, 2003.
[8]. L. M. Si and X. Lv, "CPW fed multiband omni-directional planar microstrip antenna using composite meta-material resonators for wireless communications," Progress In Electromagnetics Research, vol. 83, pp. 133-146, 2008.
[9]. A. K. Panda and Asit K. Panda, "A novel design of multiband square patch antenna embedded with gasket fractal slot for WLAN & Wi-Max communication," Int. J. Advances in Engg. & Tech. (IJAET), vol. 3, no. 1, pp. 111-116, 2012.
[10]. W. C. Liu, "Optimal design of dual band CPW-fed G-shaped monopole antenna for WLAN application," Progress In Electromagnetics Research, vol. 74, pp. 21-38, 2007.
[11]. V. G. Kasabegoudar, "Dual frequency ring antennas with coplanar capacitive feed," Progress In Electromagnetics Research C, vol. 23, pp. 27-39, 2011.
[12]. V. G. Kasabegoudar, "Low profile suspended microstrip antennas for wideband applications," Journal of Electromagnetic Waves and Applications, vol. 25, pp. 1795-1806, 2011.
[13]. H. Zhang, H. Y. Xu, B. Tian, and X. F. Zeng, "CPW-fed fractal slot antenna for UWB application," Int. J. Antennas and Propagat., pp. 1-4, vol. 2012 (Article ID 129852).
[14]. Ansoft's HFSS Software v.11, Ansys Corporation, USA.

AUTHORS
Mahesh A. Mahendrakar received the bachelor's degree from Shivaji University, Kolhapur, in 2009, and is currently pursuing his Master's degree at the College of Engineering, Ambajogai. Since September 2010 he has been working as a Lecturer at J. S. P. M. Imperial College of Engineering and Research, Pune, India. His research interests include signal processing, microwaves and antennas.

Veeresh G. Kasabegoudar received the Bachelor's degree from Karnataka University, Dharwad, India, the Master's degree from the Indian Institute of Technology (IIT) Bombay, India, and the Ph.D. degree from the Indian Institute of Science (IISc), Bangalore, in 1996, 2002, and 2009, respectively. From 1996 to 2000 he worked as a Lecturer in the Electronics and Telecommunication Engineering Department, College of Engineering, Ambajogai, India, where, from 2002 to 2006, he worked as an Assistant Professor and, since 2009, he has been a Professor and Dean of the PG Department. He has published over 20 papers in technical journals and conferences. His research interests include microstrip and CPW fed antennas, microwave filters, and image/signal processing.


DESIGN AND IMPLEMENTATION OF IEEE 802.16 MAC LAYER SIMULATOR


H. M. Shamitha 1, H. M. Guruprasad 1, Kishore M. 2, Ramesh K. 3
1 Department of Electronics and Communication, Proudadhevaraya Institute of Technology, Hospet, India
2 Green Revolution, Hospet, India
3 Department of Computer Science, Karnataka State Womens University, Bijapur, India

ABSTRACT
The IEEE 802.16 Wireless MAN is a broadband wireless access network which provides high-rate network connections to stationary sites, operates over greater distances, provides more bandwidth, takes advantage of a broader range of frequencies, and supports a greater variety of deployment architectures, including non-line-of-sight operation. The medium access control layer protocol includes an initialization procedure designed to eliminate the need for manual configuration. Upon installation, a subscriber station begins scanning its frequency list to find an operating channel; it may be programmed to register with a specified base station. Systems shall support the applicable procedures for entering and registering a new subscriber station or a new node into the network. This project concentrates on the network initialization procedure that brings up the subscriber and base stations in the 802.16 network. It also provides the dynamic service management procedure for transport connections. Socket programming has been used to perform the simulations.

KEYWORDS: MAC, IEEE 802.16, SAP

I. INTRODUCTION

The IEEE 802.16 medium access control layer (MAC) protocol is designed for point-to-multipoint broadband wireless access applications. It addresses the need for very high bit rates, both uplink (to the base station) and downlink (from the base station). The medium access control layer is capable of supporting multiple physical layer specifications optimized for the frequency bands of the application. This paper deals with the various steps of initialization between the BS and SS. The 802.16 specification accommodates MAC management messages that allow the base station to query the subscriber station. The objective of this paper is to design an IEEE 802.16 MAC layer for broadband wireless access; this is a complex and efficient protocol. Access and bandwidth allocation algorithms must accommodate hundreds of terminals per channel, and terminals may be shared by multiple end users. The services required by these end users include time-division multiplex (TDM) voice and data, Internet Protocol (IP) connectivity, and packetized voice over IP (VoIP). The 802.16 MAC accommodates both continuous and bursty traffic. Additionally, these services expect to be assigned quality of service in keeping with the traffic types. IEEE 802.16 offers an alternative to cabled access networks, such as fiber optic links, coaxial systems using cable modems, and digital subscriber line (DSL) links. Wireless systems have the capacity to address broad geographic areas without the costly infrastructure development required in deploying cable links to individual sites; hence the technology

may prove less expensive and may lead to more ubiquitous broadband access. The IEEE 802.16 MAC provides both broadband access and good quality of service (QoS). With the technology expanding in this direction, it is likely that the standard will evolve to support nomadic and increasingly mobile users. The paper analyzes the initialization procedure between the base station (BS) and subscriber station (SS) and discusses the design of the steps involved in the initialization procedure.

II. RELATED RESEARCH WORK

The 802.16 medium access control (MAC) layer [1, 2] supports many different physical layer specifications, both licensed and unlicensed. Through the 802.16 MAC, every base station dynamically distributes uplink and downlink bandwidth to subscriber stations using time-division multiple access (TDMA). This is the basic difference from the earlier 802.11 MAC: the 802.11 MAC, operating through carrier sensing mechanisms, does not provide effective bandwidth control over the radio link. Figure 1 depicts the reference model [1, 2] of the IEEE 802.16 MAC. The MAC comprises three sublayers: the service specific convergence sublayer, the MAC common part sublayer (MAC CPS), and the privacy sublayer. The service specific convergence sublayer (CS) provides transformation or mapping of external network data, received through the CS service access point (SAP), into MAC SDUs received by the MAC common part sublayer (MAC CPS) through the MAC SAP. This includes classifying external network service data units (SDUs) and associating them with the proper MAC service flow and connection identifier (CID). The MAC CPS provides the core MAC functionality of system access, bandwidth allocation, connection establishment, and connection maintenance. The MAC also contains a separate privacy sublayer providing authentication, secure key exchange, and encryption. Data, physical layer (PHY) control, and statistics are transferred between the MAC CPS and the PHY via the PHY SAP. The PHY may include multiple specifications, each appropriate to a particular frequency range and application.

III. DESIGN AND IMPLEMENTATION

As the MAC clearly cycles through a set of well-defined states, the complete MAC solution is divided into a few state machines, namely the network entry state machine and the dynamic service flow transition state machine. This section provides different views of the system being designed, with many sequence diagrams showing how messages are passed between different entities at runtime, in accordance with UML based design principles. The features considered for design and implementation are listed below.
i) Network entry and initialization entity:
a) Downlink synchronization
b) Uplink parameter acquisition
c) Initial ranging
d) Capability negotiation
e) Registration
f) Establish IP connectivity
g) Establish time of the day
h) Transfer operational parameters
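Before detailing each step, the entry sequence can be summarized as a simple state machine. This is an illustrative Python sketch only, not the simulator's actual code; the enum and its state names are assumptions made for exposition.

```python
from enum import Enum, auto

class EntryState(Enum):
    """States of the SS network-entry sequence (steps a-h above)."""
    DL_SYNC = auto()          # scan frequency list, lock to DCD / DLMAP
    UL_PARAMS = auto()        # wait for UCD / ULMAP
    INITIAL_RANGING = auto()  # ranging request/response, power and timing
    CAPABILITIES = auto()     # SBCREQ / SBCRSP negotiation
    AUTHENTICATION = auto()   # X.509 certificate, key exchange
    REGISTRATION = auto()     # REGREQ / REGRSP
    IP_CONNECTIVITY = auto()  # DHCP address and configuration file name
    TIME_OF_DAY = auto()      # time offset from the time server
    CONFIG_TRANSFER = auto()  # TFTP download, then TFTPCPLT / TFTPRSP
    OPERATIONAL = auto()

# The SS advances strictly in this order; a failure at any state (e.g. no
# ranging response) sends it back to scanning the downlink channel.
```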

3.1 Downlink channel synchronization


When an SS wishes to enter the network, it scans for a channel in the defined frequency list. Normally an SS is configured to use a specific BS with a given set of operational parameters when operating in a licensed band. If the SS finds a downlink channel and is able to synchronize at the physical layer (PHY) level (it detects the periodic frame preamble), then the MAC layer looks for the DCD and UCD to get information on modulation and other DL and UL parameters. The BS sends downlink channel descriptor (DCD) and DLMAP messages periodically for downlink synchronization with the SS. Once the SS synchronizes with the BS, it waits for the uplink channel descriptor (UCD) message describing the uplink channel characteristics for uplink transmission. Once the SS gets the UCD, it waits for the uplink map (ULMAP) message giving the initial maintenance interval for initial ranging.

3.2 Initial ranging
When the SS has synchronized with the DL channel and received the downlink and UL maps for a frame, it begins the initial ranging process by sending a ranging request MAC message in the initial ranging interval using the minimum transmission power. If it does not receive a response, the SS sends the ranging request again in a subsequent frame, using higher transmission power. Eventually the SS receives a ranging response. The response either indicates power and timing corrections that the SS must make, or it indicates success. If the response indicates corrections, the SS makes these corrections and sends another ranging request; if the response indicates success, the SS is ready to send data on the uplink.
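Since the simulation uses Berkeley sockets, the SS side of this retry loop might look like the following minimal UDP sketch. This is an assumption about structure, not the paper's code; the address, port, message format and power scale are all illustrative.

```python
import socket

def initial_ranging(bs_addr=("127.0.0.1", 5016), max_power=10):
    """Send a ranging request at minimum power and re-send at higher power
    each frame until the BS answers with corrections or success."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.settimeout(1.0)  # roughly one frame of waiting per attempt
    for power in range(max_power + 1):
        sock.sendto(f"RNGREQ power={power}".encode(), bs_addr)
        try:
            rsp, _ = sock.recvfrom(1024)
        except socket.timeout:
            continue  # no response: try again at higher power
        if b"SUCCESS" in rsp:
            return True   # the SS may now send data on the uplink
        # otherwise apply the indicated power/timing corrections and retry
    return False
```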

3.3 Capabilities negotiation


After successful completion of initial ranging, the SS sends a capability request message to the BS describing its capabilities in terms of supported modulation levels, coding schemes and rates, and duplexing methods. The BS accepts or denies the SS based on its capabilities. The SS sends the SBCREQ message to negotiate its basic capabilities and waits for the SBCRSP. Once the BS receives and processes the SBCREQ message, it determines the enabled SS capabilities and sends the SBCRSP message to the SS.

3.4 Authentication
After capability negotiation, the BS authenticates the SS and provides key material to enable the ciphering of data. The SS sends the X.509 certificate of the SS manufacturer and a description of the supported cryptographic algorithms to its BS. The BS validates the identity of the SS, determines the cipher algorithm and protocol that is used, and sends an authentication response to the SS. The response contains the key material to be used by the SS. The SS is required to periodically perform the authentication and key exchange procedures to refresh its key material.

3.5 Registration
After successful completion of authentication, the SS registers with the network. The SS sends a registration request message to the BS, and the BS sends a registration response to the SS. The registration exchange includes IP version support, SS managed or non-managed support, classification option support, cyclic redundancy check (CRC) support, and flow control. Once the BS authorizes the SS, the SS sends the registration request (REGREQ) message to the BS and waits for the REGRSP message. The BS processes the REGREQ message, which includes calculating a hashed message authentication code (HMAC) over the REGREQ message, and sets the status of the SS as supported in the REGRSP.

3.6 IP Connectivity
The SS attains an IP address via the dynamic host configuration protocol (DHCP) and establishes the time of the day via the internet time protocol. The DHCP server also provides the address of the TFTP server from which the SS can request a configuration file; this file provides a standard interface for providing vendor-specific configuration information. At this point, the SS invokes a DHCP discover message in order to obtain an IP address and any other parameters needed to establish IP connectivity. If the SS has a configuration file, the DHCP response will contain the name of a file which gives further configuration parameters. Establishment of IP connectivity is performed on the SS's secondary management connection.

3.7 Establishing time of the day


The SS and BS need to have the current date and time. This is required for time-stamping logged events for retrieval by the management system. After DHCP succeeds, the SS sends a time of the day request to the time server. The time server processes the request and sends the response with the correct

time of the day offset used to create the local time. Establishment of the time of the day is performed on the SS's secondary management connection.

3.8 Transfer operational parameters


After DHCP is successful, the SS downloads the SS configuration file using the trivial file transfer protocol (TFTP) on the SS's secondary management connection. When the configuration file download has completed successfully, the SS notifies the BS by transmitting a trivial file transfer protocol complete (TFTPCPLT) message on the SS's primary management connection. Transmissions continue periodically until a TFTP response (TFTPRSP) message with an OK response is received from the BS. Once the download of the configuration file from the TFTP server to the SS is over, the SS sends the TFTPCPLT message to the BS and waits for the TFTP response.

IV. EXPERIMENTAL RESULTS

The network entry process for the SS was simulated using Linux Berkeley socket interfaces and designed using the Enterprise Architect UML tool.

Figure 1: Downlink synchronization (BS)

Figure 2: Downlink synchronization (SS)

Figure 3: Initial ranging (BS)


Figure 4: Initial ranging (SS)

Figure 5: Capabilities negotiation (BS)

Figure 6: Capabilities negotiation (SS)

Figure 7: Authentication (BS)

Figure 8: Authentication (SS)


Figure 9: Registration (BS)

Figure 10: Registration (SS)

Figure 11: IP connectivity (BS)

Figure 12: IP connectivity (SS)

Figure 13: Connection setup using DSA (BS)


Figure 14: Connection setup using DSA (SS)

Figure 15: Connection modification and termination (BS)

Figure 16: Connection modification and termination (SS)

V. CONCLUSION

The paper provides a design for the IEEE 802.16 protocol in an efficient manner using object oriented design principles. IEEE 802.16 is a very complicated standard featuring high adaptiveness to maximize air-link usage; therefore it requires sophisticated algorithms. At the same time, its implementation must be easy for users and must provide adequate quality of service. The message post mechanism and the packet queuing mechanisms prove to be valuable additions to the way data is passed between the upper and lower layers of the stack; this also helps the stack handle inter-module interactions in a clear manner. The simulation studies show that the proposed solution provides quality of service support in terms of bandwidth and delay bounds for all types of traffic classes as defined by the standard. We are currently working on the connection admission control and classifier modules, which are part of the convergence layer of the standard and contribute greatly to quality of service provisioning. The key contribution of this research paper is the development of network entry and dynamic service management. The above discussion makes it easy to see why so much anticipation surrounds IEEE's 802.16 standard. Service providers will be freed from the substantial upfront costs and risks associated with network build-out, allowing them to provide cheaper broadband access to more consumers. Finally, the interoperability and variety of services supported by Wireless MAN ensure rapid adoption and deployment, justifying the praise of 802.16 as the next wireless revolution.


REFERENCES
[1] IEEE 802.16-2001, "IEEE Standard for Local and Metropolitan Area Networks Part 16: Air Interface for Fixed Broadband Wireless Access Systems," Apr. 8, 2002.
[2] IEEE P802.16-REVd/D5-2004, "Air Interface for Fixed Broadband Wireless Access Systems," available on www.ieee802.org/16.
[3] G. Nair, J. Chou, T. Madejski, K. Perycz, D. Putzolu and J. Sydir, "IEEE 802.16 medium access control and service provisioning," Intel Technology Journal, vol. 8, no. 3, pp. 213-28, Aug. 2004.
[4] Stanley Wang, Ken Stanwood, Yair Bourlas, and Robert Johnson, "IEEE 802.16.1 Convergence Sublayer for ATM."
[5] J. Chou, Russ Reynolds, Vladimir Yanover, Shlomi Eini and Radu Selea, "MAC and PHY MIB for Wireless MAN BS and SS," available on www.ieee802.org/16.

AUTHORS
H M Shamitha is currently working at Proudadhevaraya Institute of Technology, Hospet, India. Her areas of interest include computer networks, digital circuits, and computer organisation. She is an active member of the Indian Society for Technical Education, New Delhi.

H M Guruprasad is currently working at Proudadhevaraya Institute of Technology, Hospet, India. His areas of interest include analog communication, optical fiber communication, VLSI, and VHDL. He is an active member of the Indian Society for Technical Education, New Delhi.

Kishore M: his areas of interest include wireless communication and networking, antenna theory and design, and smart antennas and their applications. He has various research publications to his credit.

Ramesh K, currently working at Karnataka State Womens University, Bijapur, India. Department of Computer Science. His area of interest includes wavelength division multiplexing, computer networks.


TOPOLOGY OPTIMIZATION OF CONTINUUM STRUCTURES USING OPTIMALITY CRITERION APPROACH IN ANSYS


Dheeraj Gunwant & Anadi Misra
Department of Mechanical Engineering, G. B. Pant University of Agriculture and Technology, Pantnagar, India

ABSTRACT
Topology optimization is an important category of structural optimization which is employed when the design is at the conceptual stage. Generally, topology optimization deals with finding the optimal material distribution in a design domain while minimizing the compliance of the structure. In this work, the focus is on the topology optimization of five benchmark plane stress models through the commercially available finite element software ANSYS. ANSYS performs topology optimization using the Solid Isotropic Material with Penalization (SIMP) scheme for penalizing intermediate design variables and the Optimality Criterion for updating the design variables. The results of the ANSYS based Optimality Criterion are validated and compared with the results obtained by the Element Exchange Method.

KEYWORDS: Topology Optimization, Pseudo-densities, Compliance minimization, Optimality Criterion, SIMP.

I. INTRODUCTION

Designers are often faced with the problem of deciding the optimal layout (distribution of material), or topology, of a design. They have to make trade-offs among various factors to achieve a sensible design that satisfies the performance criteria imposed on it. In doing so, the designer has to examine a large number of candidate solutions and find a globally optimal solution that satisfies the imposed boundary conditions. The task of searching for globally optimal solutions is more cumbersome when the design is at the conceptual stage. Therefore, in an optimisation problem, different candidate solutions are compared with each other, and then the best or optimal solution is obtained, which means that solution quality is fundamental. In engineering, the optimisation of an objective function is basically the maximisation or minimisation of a problem subject to constraints. Optimisation can basically be categorised into three types, namely: a) sizing (mass), b) shape and c) topology (layout). Refer to the figure below.

Fig. 1: (a) Sizing, (b) Shape and (c) Topology optimization

This paper basically focuses on topology or layout optimisation, so that will be discussed in detail.

1.1. A word about topology optimization


Topology optimization is perhaps the most difficult of the three types of structural optimization. The optimization is performed by determining the optimal topology of the structure; hence, the design variables control the topology of the design, and optimization occurs through the determination of design variable values corresponding to the component topology with optimal structural behaviour. While it is easy to control a structure's shape and size, since the design variables are the coordinates of the boundary (shape optimization) or the physical dimensions (size optimization), it is difficult to control the topology of the structure. In this problem, the design domain is created by assembling a large number of basic elements or building blocks. Beginning with a set of building blocks representing the maximum allowable region (the region in space which the structure may occupy), each block is allowed either to exist or to vanish from the design domain, and thus a unique design evolves. For example, in the topology optimization of a cantilever plate, the plate is discretized into small rectangular elements (building blocks), where each element is controlled by a design variable which can vary continuously between 0 and 1. When a particular design variable has a value of 0, the element is considered a hole; likewise, when a design variable has a value of 1, it is considered fully material. Elements with intermediate values are considered materials of intermediate density. The development of topological optimization can be attributed to Bendsøe and Kikuchi [1988], who presented a homogenization based approach to topology optimization. They assumed that the structure is formed by a set of non-homogeneous elements composed of solid and void regions and obtained the optimal design under a volume constraint through the optimization process. In their method, regions with dense cells define the structural shape, and those with void cells are areas of unnecessary material. The maximization of the integral stiffness of a structure composed of one or two isotropic materials of large stiffness using the homogenization technique was discussed by Thomsen [1992], with numerical results presented at the end of the paper. An application of the genetic algorithm to topology optimization was made by Chapman [1994]: given a structure's boundary conditions and allowable design domain, a discretized design domain is created, and the genetic algorithm then generates an optimal structure topology by evolving a population of chromosomes, where each chromosome, after mapping into the design domain, creates a potentially optimal structure topology. Diaz and Sigmund [1995] computed the effective properties of strong and weak materials. It is shown that when 4-noded quadrilateral elements are used, the resulting topology contains artificially stiff material that is difficult to manufacture; this material appears in a characteristic arrangement of alternating solid and void elements known as the checkerboard pattern. Swan and Kosaka [1997] investigated a continuous topology optimization framework based on hybrid combinations of the classical Reuss (compliant) and Voigt (stiff) mixing rules. To avoid checkerboarding instabilities, the continuous topology optimization formulation is coupled with a novel spatial filtering procedure.
Sigmund and Petersson [1998] summarized the then-current knowledge about numerical instabilities such as checkerboards, mesh-dependence and local minima occurring in applications of the topology optimization method. The checkerboard problem refers to the formation of regions of alternating solid and void elements ordered in a checkerboard-like fashion. The mesh-dependence problem refers to obtaining qualitatively different solutions for different mesh sizes or discretizations. A local minimum refers to the problem of obtaining different solutions to the same discretized problem when choosing different algorithmic parameters. A web-based interface for a topology optimization program was presented by Tcherniak and Sigmund [2001]; the program is available over the World Wide Web, and the paper discusses implementation issues and educational aspects as well as statistics and experience with the program. Allaire et al. [2002] studied a level-set method for numerical shape optimization of elastic structures. The approach combines the level-set algorithm of Osher and Sethian with the classical shape gradient; although this method is not specifically designed for topology optimization, it can easily handle topology changes for a very large class of objective functions. Rahmatalla and Swan [2004] presented a node-based design variable implementation for continuum structural topology optimization in a finite element framework and explored its properties in the context of solving a number of different design

examples. Since the implementation ensures C0 continuity of the design variables, it is immune to the element-wise checkerboarding instabilities that are a concern with element-based design variables. The objective of maximizing the eigenfrequency of vibrating structures in order to avoid resonance was considered by Du and Olhoff [2005]; this can also be achieved by maximizing the gap between two consecutive frequencies of a given order. Different approaches are considered and discussed for topology optimization involving simple and multiple eigenfrequencies of linearly elastic structures without damping, and the mathematical formulations of these topology optimization problems and several illustrative results are presented. Sigmund and Clausen [2007] suggested a new way to solve pressure-load problems in topology optimization: using a mixed displacement-pressure formulation for the underlying finite element problem, they define the void phase to be an incompressible hydrostatic fluid. Rozvany [2008] evaluated and compared the established numerical methods of structural topology optimization that have reached the stage of application in industrial software. Dadalau et al. [2008] presented a new penalization scheme for the SIMP method; one advantage of their method is the linear density-stiffness relationship, which is advantageous for self-weight or eigenfrequency problems. The topology optimization problem is solved through a derived optimality criterion (OC) method, which is also introduced in their paper. Rouhi et al. [2010] presented a stochastic direct search method for topology optimization of continuum structures. In a systematic approach requiring repeated evaluations of the objective function, the element exchange method (EEM) eliminates the less influential solid elements by switching them into void elements and converts the more influential void elements into solid elements, resulting in an optimal 0-1 topology as the solution converges. For compliance minimization problems, the element strain energy is used as the principal criterion for the element exchange operation. Gunwant and Misra [16] obtained topologically optimal configurations of sheet metal brackets using the optimality criterion approach through the commercially available finite element solver ANSYS and obtained compliance-versus-iterations plots for structures (brackets) of various aspect ratios under different boundary conditions.

1.2. Topology optimization using ANSYS


The goal of topological optimization is to find the best use of material for a body such that an objective criterion (e.g. global stiffness or natural frequency) attains a maximum or minimum value subject to given constraints (e.g. volume reduction). In this work, maximization of static stiffness has been considered. This can also be stated as the problem of minimization of the compliance of the structure. Compliance is a form of the work done on the structure by the applied load: lower compliance means less work is done by the load on the structure, so less energy is stored in the structure, which in turn means that the structure is stiffer. Mathematically, the compliance is

c = ∫V f·u dV + ∫S t·u dS + Σi Fi ui

Where, u = displacement field, f = distributed body force (gravity load etc.), Fi = point load on the ith node, ui = ith displacement degree of freedom, t = traction force, S = surface area of the continuum, V = volume of the continuum. ANSYS employs gradient-based methods of topology optimization, in which the design variables are continuous in nature, not discrete. These methods require a penalization scheme for evolving true material-and-void topologies. SIMP (Solid Isotropic Material with Penalization) is the most commonly used penalization scheme and is explained in the next section.

1.3. The SIMP method


SIMP stands for the Solid Isotropic Material with Penalization method. It is the penalization scheme, or power law, that forms the basis for the evolution of a 0-1 topology in gradient-based methods.

In the SIMP method, each finite element (formed due to meshing in ANSYS) is given an additional property of pseudo-density xj, where 0 < xmin ≤ xj ≤ 1, which alters the stiffness properties of the material:

ρj = xj ρ0    (3.2)

Where, ρj = density of the jth element, ρ0 = density of the base material, xj = pseudo-density of the jth element. The pseudo-density of each finite element serves as a design variable for the topology optimization problem. The stiffness of the jth element depends on its pseudo-density in such a way that

Ej = (xj)^p E0    (3.3)

Where, E0 = stiffness of the base material, p = penalization power. As is clear from equation 3.3, for xj = 0 no material exists, and for xj = 1 the material fully exists. In SIMP, p is taken to be greater than 1 so that intermediate densities are unfavourable, in the sense that the stiffness obtained is small compared with the volume of the material. In other words, specifying a value of p higher than 1 makes it uneconomical to have intermediate densities in the optimal design. As a matter of fact, for problems where the volume constraint is active, experience shows that optimization does actually result in such designs if one chooses p sufficiently large (in order to achieve complete 0-1 designs, p ≥ 3 is usually required). In ANSYS, the standard formulation of the topology optimisation problem is to minimise the compliance of the structure (i.e. maximise its stiffness) while satisfying a constraint on the volume of the structure. Another problem is the maximisation of the natural frequency of a structure subjected to dynamic loading, again while satisfying a constraint on the volume of the structure. The objective function (the function to be minimized in topology optimization) is generally the compliance of the structure, and a constraint on the usable volume is applied. As the volume reduces, the structure's stiffness also reduces, so the volume constraint is of an opposing nature. The compliance of a discretized finite element model is given by

c(x) = F^T U    (3.4)

The force vector (which is a function of the design variables x) is given by

F = K(x) U    (3.5)

Therefore, c(x) can be written as

c(x) = U^T K(x) U    (3.6)

A lower bound xmin on the design variables is applied to avoid singularity of the stiffness matrix. We have used a gradient-based heuristic approach, the Optimality Criterion (OC) approach, in this work. The Optimality Criterion method is described in the next section.
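To make the SIMP relations above concrete, the following minimal Python sketch (our own illustration, not the ANSYS implementation; `k0`, `elem_dofs` and `U` are hypothetical placeholders for a precomputed element stiffness matrix, an element-to-DOF map and a solved displacement vector) evaluates the penalized compliance c(x) = Σj (xj)^p uj^T k0 uj:

```python
import numpy as np

def simp_compliance(x, k0, elem_dofs, U, p=3.0):
    """Penalized compliance c(x) = sum_j x_j^p * u_j^T k0 u_j (SIMP).

    x         : pseudo-densities, one per element (x_min <= x_j <= 1)
    k0        : element stiffness matrix of the solid base material
    elem_dofs : per-element list of global displacement DOF indices
    U         : global displacement vector from the FE solve
    p         : penalization power (p >= 3 favours 0-1 designs)
    """
    c = 0.0
    for xj, dofs in zip(x, elem_dofs):
        ue = U[dofs]                      # element displacement vector u_j
        c += (xj ** p) * (ue @ k0 @ ue)   # penalized element strain energy
    return c
```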


II.

MATERIALS AND METHODS

2.1. The Optimality Criterion approach


The discrete topology optimization problem is characterized by a large number of design variables, N in this case. It is therefore common to use iterative optimization techniques to solve this problem, e.g. the method of moving asymptotes (MMA) or the optimality criteria (OC) method, to name two. Here we choose the latter. At each iteration of the OC method, the design variables are updated using a heuristic scheme. The Lagrangian for the optimization problem is defined as

L(x) = c(x) + Λ(V(x) - V*) + Σj λ1j(xmin - xj) + Σj λ2j(xj - 1)    (3.7)

Where Λ, λ1j and λ2j are Lagrange multipliers for the volume and side constraints. The optimality condition is given by

∂L/∂xj = 0, for j = 1, 2, ..., N.

Now, the compliance is

c(x) = U^T K(x) U = Σj (xj)^p uj^T k0 uj    (3.8)

Differentiating eq. 3.8 w.r.t. xj, the compliance sensitivity can be evaluated as

∂c/∂xj = -p (xj)^(p-1) uj^T k0 uj

Based on these expressions, the design variables are updated as follows:

xj(new) = max(xmin, xj - m), if xj Bj^η ≤ max(xmin, xj - m)
xj(new) = min(1, xj + m), if xj Bj^η ≥ min(1, xj + m)
xj(new) = xj Bj^η, otherwise    (3.13)

with Bj = -(∂c/∂xj) / (Λ ∂V/∂xj).

Where m is called the move limit and represents the maximum allowable change in xj in a single OC iteration, and η is a numerical damping coefficient, usually taken to be 0.5. The Lagrange multiplier Λ for the volume constraint is determined at each OC iteration using a bisection algorithm. xj is the value of the density variable at each iteration step, and uj is the element displacement field at each iteration step, determined from the equilibrium equations. The optimization algorithm is structured in the following steps (a sketch of the update step follows this list):
- Make an initial design, e.g. a homogeneous distribution of material.
- For this distribution of density, compute the resulting displacements and strains by the finite element method.
- Compute the compliance of the design. If there is only marginal improvement in compliance over the last design, stop the iterations; else, continue.
- Compute the update of the design variables based on the scheme shown in eq. 3.13. This step also contains an inner iteration loop for finding the value of the Lagrange multiplier for the volume constraint.
- Repeat the iteration loop.
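As a concrete illustration of the update scheme of eq. 3.13 together with the inner bisection loop for the volume-constraint Lagrange multiplier, consider the following Python sketch (our own illustrative code, not the ANSYS implementation; variable names are hypothetical, and equal element volumes are assumed in the volume check):

```python
import numpy as np

def oc_update(x, dc, dv, volfrac, move=0.2, eta=0.5, x_min=0.001):
    """One Optimality Criterion update of the pseudo-densities (eq. 3.13).

    dc : compliance sensitivities dc/dx_j (negative quantities)
    dv : volume sensitivities dV/dx_j (element volumes)
    The volume-constraint Lagrange multiplier is found by bisection;
    the mean-density check assumes equal element volumes.
    """
    lo, hi = 1e-9, 1e9
    while (hi - lo) / (hi + lo) > 1e-4:
        lam = 0.5 * (lo + hi)
        B = np.maximum(-dc / (lam * dv), 0.0)        # optimality ratio B_j
        x_new = np.clip(x * B ** eta,                # damped fixed-point step
                        np.maximum(x_min, x - move), # lower move limit
                        np.minimum(1.0, x + move))   # upper move limit
        if x_new.mean() > volfrac:                   # too much material:
            lo = lam                                 #   raise the multiplier
        else:
            hi = lam
    return x_new
```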

This paper considers the maximization of static stiffness through the inbuilt topological optimisation capabilities of commercially available FEA software, searching for the optimum material distribution in five plane-stress structures as used by [15]. The optimum material distribution depends upon the configuration of the initial design space and the boundary conditions (loads and constraints). The goal of the paper is to minimize the compliance of the brackets while satisfying the constraint on the volume of the material. Minimizing the compliance means a proportional increase in the stiffness of the structure. A volume constraint is applied to the optimisation problem, and it acts as an opposing constraint: the greater the volume of material, the lower the compliance of the structure and the higher its structural stiffness. For implementation, APDL codes for modelling and topological optimisation of the various brackets were written and run in ANSYS.

2.2. Specimen Geometry and Boundary Conditions


In the present investigation, five specimen geometries and the boundary conditions applied to them have been used, as shown in the figures below. The specimens are taken from the work of Rouhi et al. [2010]. All the models are in a state of plane stress.

Model 1: Messerschmitt-Bölkow-Blohm (MBB) beam:


This is a simply supported beam of dimensions 6 mm × 1 mm × 1 mm. The beam is supported by a roller on the right-hand side and by a fixed support on the other end, and is acted upon by a central load of 1 N. Due to the symmetry of the model, only the right half of the model has been used in this study.

Fig 2: Geometry and boundary conditions for Model 1 (Symmetric model)

Model 2: A cantilever with load at bottom tip


In this case a cantilever of dimensions 8 mm × 5 mm × 1 mm, loaded with a load of 1 N at the bottom tip and in a state of plane stress, is considered. The left-hand edge is fixed, as shown in figure 3 below.

Fig 3: Geometry and boundary conditions for Model 2

Model 3: A cantilever with load at the centre of right edge


Figure 4 below shows the geometry and boundary conditions for model 3. The model is a 2 mm × 1 mm × 1 mm structure loaded with a unit load at the middle of the right-hand edge. The left-hand edge is fixed.


Fig 4: Geometry and boundary conditions for Model 3

Model 4: A cantilever with load at the centre of the right edge
Figure 5 shows a short cantilever of dimensions 1 mm × 2 mm × 1 mm. It is centrally loaded with a unit load at the middle of the right-hand edge and is under a state of plane stress. The left-hand edge is fixed.

Fig 5: Geometry and boundary conditions for Model 4

Model 5: A cantilever with loads at the upper and bottom tips of the right edge


Figure 6 below shows the geometry and boundary conditions for model 5. The structure is of dimensions 1 mm × 1 mm × 1 mm. It is subjected to a unit load at each of the upper and bottom tips of the right edge and is also under a state of plane stress. The left edge is fixed.

Fig 6: Geometry and boundary conditions for Model 5



III.

RESULTS

In this section, the final compliance and optimal shape of the models obtained with the gradient-based, ANSYS based Optimality Criterion are compared with the non-gradient-based Element Exchange Method (EEM) from the work of Rouhi et al. [2010]. This is important in the sense that gradient-based methods are prone to entrapment in a local optimum instead of the global optimum, whereas non-gradient-based methods do not suffer from this problem; comparison and validation are therefore necessary to show that this method is not giving sub-optimal results. Each of the five models is characterized by the finite element discretization in the x and y directions and the volume usage fraction used, quoted below as a triplet (nx, ny, f). The material properties used are given in the table below:
Table 1: Material properties used

Young's Modulus (E) | Poisson's ratio (ν)
1                   | 0.3

Model 1: The beam is in a state of plane stress with a thickness of 1 mm and is optimized for minimum compliance. Due to the symmetry of the model about the vertical axis, only half of the model is considered, with symmetry boundary conditions: the beam is supported by a roller support at the bottom right corner and symmetric boundary conditions are applied on the right edge (fig. 2). Table 2 shows the final compliance obtained in the case of ANSYS based OC and EEM.
Table 2: Comparison between OC and EEM for model 1 (60, 20, 0.5)

Method         | Compliance (Nmm) | Iterations
ANSYS based OC | 182.20           | 46
EEM            | 187.00           | 210

Percentage difference in compliance values: 2.56

Fig 7: Optimal shapes obtained by (a) ANSYS based OC and (b) EEM for (60, 20, 0.5) for model 1

Fig 8: Convergence plot for ANSYS based Optimality Criterion for model 1 (60, 20, 0.5)

The initial compliance obtained in the first iteration is 427.93 Nmm, which drops to 350.86 Nmm in the second iteration and 263.00 Nmm in the third. The final compliance obtained with the ANSYS based OC is 182.20 Nmm after 46 iterations. On the other hand, EEM gives a final compliance of 187.00 Nmm after 210 iterations. ANSYS based OC thus reaches a more optimal solution in fewer iterations. Model 2: A cantilever beam of dimensions 8 mm × 5 mm and thickness 1 mm is considered in this case. The cantilever is in a state of plane stress and supports a concentrated load of magnitude 1 N at the bottom right corner. The left-hand edge is fixed (fig 3). Table 3 shows the final compliance obtained with ANSYS based OC and EEM for mesh densities of 32, 20 and 64, 40 and a volume usage fraction of 40%.
Table 3: Comparison between OC and EEM for model 2

Mesh                 | Method         | Compliance (Nmm) | Iterations | % difference in compliance
Coarse (32, 20, 0.4) | ANSYS based OC | 52.22            | 39         | 2.57
Coarse (32, 20, 0.4) | EEM            | 53.60            | 178        |
Fine (64, 40, 0.4)   | ANSYS based OC | 53.41            | 30         | 6.3
Fine (64, 40, 0.4)   | EEM            | 57.00            | 174        |

Fig 9: Optimal shapes obtained by (a) ANSYS based OC and (b) EEM for (32, 20, 0.4) for model 2


Fig 10: Optimal shapes obtained by (a) ANSYS based OC and (b) EEM for (64, 40, 0.4) for model 2

Fig 11: Convergence plot for ANSYS based OC for model 2 ((32, 20, 0.4) and (64, 40, 0.4))

For a mesh size of 32, 20 and a volume usage fraction of 40%, the initial compliance obtained is 142.43 Nmm in the first iteration, which drops to 118.68 Nmm and 104.43 Nmm in the second and third iterations respectively. The final compliance obtained is 52.22 Nmm after 39 iterations; EEM gives a minimum compliance of 53.60 Nmm after 178 iterations. The optimal shapes, as shown in Fig. 9(a) and (b), are almost the same. When the mesh size is changed to 64, 40 with a volume usage fraction of 40%, the initial compliance obtained is 150.39 Nmm in the first iteration, which drops to 122.86 Nmm and 106.97 Nmm after the second and third iterations. The final compliance obtained is 53.41 Nmm after 30 iterations in the case of ANSYS based OC, and in the case of EEM it is 57.00 Nmm after 174 iterations. The optimal shapes shown in Fig. 10(a) and (b) are different. Model 3: A cantilever beam of dimensions 2 mm × 1 mm and thickness 1 mm is considered in this case. The cantilever beam is in a state of plane stress and is subjected to a load of 1 N at the centre of the right edge, as shown in Fig 4. The final compliance obtained using ANSYS based OC has been compared with those reported for EEM and the genetic algorithm, given in table 4 below.
Table 4: Comparison between ANSYS based OC, EEM and GA for model 3 for different mesh densities

Mesh                 | Method         | Compliance (Nmm) | Iterations | % difference in compliance
Coarse (24, 12, 0.5) | ANSYS based OC | 63.20            | 99         | 4.4
Coarse (24, 12, 0.5) | EEM            | 66.10            | 150        |
Coarse (24, 12, 0.5) | GA             | 64.40            | 4×10⁴      |
Fine (48, 24, 0.5)   | ANSYS based OC | 62.73            | 33         | 1.2
Fine (48, 24, 0.5)   | EEM            | 63.50            | 250        |


Fig 12: Optimal shapes obtained by (a) ANSYS based OC, (b) EEM and (c) Genetic Algorithm for (24, 12, 0.5) for model 3

Fig 13: Optimal shapes obtained by (a) ANSYS based OC and (b) EEM for (48, 24, 0.5) for model 3

Fig 14: Convergence plot for ANSYS based OC for model 3 ((24, 12, 0.5) and (48, 24, 0.5))

In the case when mesh density is 24, 12, the initial compliance is 140.39 Nmm in the first iteration, which drops to 124.38 Nmm in the second and 100.45 Nmm in the third iteration. The final

compliance is 63.20 Nmm in this case, after 99 iterations. When the mesh density is 50, 50, the final compliance is 65.89 Nmm after 28 iterations. In the case of EEM, the mesh densities of 24, 12 and 48, 24 yield final compliances of 66.1 Nmm and 63.50 Nmm after 150 and 250 iterations respectively. GA yields a final compliance of 64.40 Nmm for a mesh density of 24, 12 after 4×10⁴ iterations. Model 4: A short cantilever with dimensions 1 mm × 2 mm and a thickness of 1 mm is considered in this case. The cantilever is in a state of plane stress and is acted upon by a unit load at the centre of the right edge, as shown in Fig 5 above. Table 5 below lists the final compliances of the model obtained by ANSYS based OC, EEM and Particle Swarm Optimization (PSO) for mesh densities of 20, 47 and 40, 94.
Table 5: Comparison between OC, EEM and PSO for model 4

Mesh                 | Method         | Compliance (Nmm) | Iterations | % difference in compliance
Coarse (20, 47, 0.5) | ANSYS based OC | 4.77             | 12         | 37.9
Coarse (20, 47, 0.5) | EEM            | 2.96             | 100        |
Coarse (20, 47, 0.5) | PSO            | Not reported     | 105        |
Fine (40, 94, 0.5)   | ANSYS based OC | 5.40             | 15         | 5.55
Fine (40, 94, 0.5)   | EEM            | 5.10             | 103        |
Fine (40, 94, 0.5)   | PSO            | Not reported     | 103        |

Fig 15: Optimal shapes obtained by (a) ANSYS based OC, (b) EEM and (c) PSO for (20, 47, 0.5) for model 4

Fig 16: Convergence plot for ANSYS based OC for model 4 ((20, 47, 0.5) and (40, 94, 0.5))

It is clear from the figure that ANSYS based OC converges very fast, within about 20 iterations for both mesh sizes. For mesh density 20, 47, the initial compliance is 15.94 Nmm, which drops to a final compliance of 4.77 Nmm. For a mesh density of 40, 94, the ANSYS based OC reached the optimal solution in 23 iterations, the initial compliance being 15 Nmm and the final compliance 5.38 Nmm. Although there is a difference in the optimal structures at the mesh density of 20, 47, the optimal shapes obtained for the 40, 94 mesh density are almost the same. It is also clear that ANSYS based OC reaches the optimal solution in fewer iterations. Model 5: In this case a doubly loaded cantilever of dimensions 1 mm × 1 mm and plane thickness 1 mm is considered. The cantilever is loaded with a unit load at each of the bottom and top corner points, as shown in Fig 6. The compliance values obtained by ANSYS based OC at different mesh densities are compared with EEM. Table 6 shows the final compliance obtained for each mesh density and the number of iterations taken by ANSYS based OC and EEM.
Table 6: Comparison between OC and EEM for model 5

Mesh                 | Method         | Compliance (Nmm) | Iterations | % difference in compliance
Coarse (32, 20, 0.4) | ANSYS based OC | 15.23            | 15         | 12.8
Coarse (32, 20, 0.4) | EEM            | 17.48            | 73         |
Fine (50, 50, 0.4)   | ANSYS based OC | 17.29            | 17         | 12.22
Fine (50, 50, 0.4)   | EEM            | 19.70            | 37         |

Fig 17: Optimal shapes obtained by (a) ANSYS based OC and (b) EEM for (32, 20, 0.4) for model 5


Fig 18: Optimal shapes obtained by (a) ANSYS based OC and (b) EEM for (50, 50, 0.4) for model 5

Fig 19: Convergence plot for ANSYS based OC for model 5 ((32, 20, 0.4) and (50, 50, 0.4))

The initial compliance obtained in this case for a mesh density of 32, 20 is 67.85 Nmm, which eventually drops to 15.23 Nmm after 15 iterations. For a mesh density of 50, 50, the initial and final compliances are 77.93 Nmm and 17.30 Nmm; the final compliance is attained after 17 iterations. EEM yields 17.48 Nmm and 19.70 Nmm after 73 and 37 iterations for mesh densities of 32, 20 and 50, 50 respectively.

IV.

DISCUSSION

Of all the stages of the design process, the conceptual design (topological optimization) phase is considered to be the most critical. It is an early stage of the design process, yet it decides much of the structure's final design. Because design revisions are expensive at the later stages of the design, design decisions made in the conceptual phase must be planned and executed thoroughly. Unfortunately, to date comparatively little attention has been given to the conceptual design phase, and most decisions about the form of the design are left to the designer's intuition. The shape and pattern of the holes (locations from which material is to be removed) are usually left to the intuition of the design engineer, and time constraints typically do not permit multiple iterations, which can result in non-optimized designs. In an attempt to aid the designer in the conceptual design stage, this investigation uses the commercially available finite element solver ANSYS for the form-finding of some benchmark structures from the literature. Shape and topology optimization by the ANSYS based Optimality Criterion was reviewed as a tool to converge on the ideal hole configuration. Through this paper, we emphasize that topology optimization is a very important and relatively the toughest part of

the design optimization studies. Therefore, there appears to be a need to study topology optimization separately. No amount of sizing and shape optimization can rectify mistakes committed in finding the optimal distribution of material in the design domain (topology optimization).

V.

CONCLUSIONS

The following conclusions can be drawn from the above studies:
1. The results of the ANSYS based Optimality Criterion, which is a gradient-based method, are compared with those obtained by the Element Exchange Method, which is a non-gradient-based method. The non-gradient-based methods guarantee a globally optimal solution, which implies that the results obtained by the ANSYS based Optimality Criterion are also global.
2. Compliance values obtained by the ANSYS based Optimality Criterion are lower by 1.5 to 13% than those of the Element Exchange Method, Genetic Algorithm and Particle Swarm Optimization in the work of Rouhi et al. [2010], except in the case of model 4 with mesh size 20, 47. Moreover, it takes fewer iterations to reach optimal results than the Element Exchange Method, Genetic Algorithm and Particle Swarm Optimization.
3. As the mesh density is increased, there is a 2-3% decrease in the compliance values for every model.
4. As compared to the EEM, the Optimality Criterion approach provides symmetrical results for model 5.

VI.

FUTURE SCOPE

Topology optimization being the primary stage of structural optimization, the above plane-stress structures can further be considered for shape optimization and sizing optimization. In shape optimization the design variables can be the coordinates of the nodes, and in sizing optimization any physical dimension, such as the thickness, can be considered. The objective variable in both cases can be the volume of the structure.

REFERENCES
[1] M. P. Bendsøe and N. Kikuchi, (1988) "Generating optimal topologies in structural design using a homogenization method", Comput. Meth. Appl. Mech. Eng., Vol. 71: 197-224.
[2] J. Thomsen, (1992) "Topology optimization of structures composed of one or two materials", Struct. Multidisc. Optim., Vol. 5: 108-115.
[3] C. D. Chapman, (1994) "Structural topology optimization via the genetic algorithm", M.S. Thesis, Massachusetts Institute of Technology, USA.
[4] A. Diaz and O. Sigmund, (1995) "Checkerboard patterns in layout optimization", Struct. Optim., Vol. 10: 40-45.
[5] C. C. Swan and I. Kosaka, (1997) "Voigt-Reuss topology optimization for structures with linear elastic material behaviors", Int. J. Numer. Meth. Eng., Vol. 40: 3033-3057.
[6] O. Sigmund and J. Petersson, (1998) "Numerical instabilities in topology optimization: A survey on procedures dealing with checkerboards, mesh-dependencies and local minima", Struct. Optim., Vol. 16: 68-75.
[7] D. Tcherniak and O. Sigmund, (2001) "A web-based topology optimization program", Struct. Multidisc. Optim., Vol. 22: 179-187.
[8] G. Allaire, F. Jouve and A. M. Toader, (2002) "A level set method for shape optimization", C. R. Acad. Sci. Paris.
[9] S. F. Rahmatalla and C. C. Swan, (2004) "A Q4/Q4 continuum structural topology optimization implementation", Struct. Multidisc. Optim., Vol. 27: 130-135.
[10] J. Du and N. Olhoff, (2005) "Topology optimization of continuum structures with respect to simple and multiple Eigen-frequencies", 6th World Congr. Struct. Multidisc. Optim., Brazil.
[11] O. Sigmund and P. M. Clausen, (2007) "Topology optimization using a mixed formulation: An alternative way to solve pressure load problems", Comput. Meth. Appl. Mech. Eng., Vol. 196: 1874-1889.
[12] G. I. N. Rozvany, (2008) "A critical review of established methods of structural topology optimization", Struct. Multidisc. Optim.
[13] A. Dadalau, A. Hafla and A. Verl, (2008) "A new adaptive penalization scheme for topology optimization", Stuttgart Research Centre for Simulation Technology (SRC SimTech), Stuttgart University.
[14] Thomas R. Michael, (2010) "Shape and topology optimization of brackets using level set method", M.Eng. project, Rensselaer Polytechnic Institute, Hartford, Connecticut.
[15] M. Rouhi, R. R. Masood and T. N. Williams, (2010) "Element exchange method for topology optimization", Struct. Multidisc. Optim.
[16] Dheeraj Gunwant and Anadi Misra, (2012) "Topology optimization of sheet metal brackets using ANSYS", MIT Int. J. Mech. Eng., Vol. 2, No. 2, Aug. 2012, pp. 119-125.

AUTHORS
Dheeraj Gunwant obtained his bachelor's degree (B.Tech.) in Mechanical Engineering from Graphic Era Institute of Technology, Dehradun, Uttarakhand, in 2008 and his M.Tech. in Design and Production Engineering from G. B. Pant University of Agriculture and Technology, Pantnagar, Uttarakhand, in 2012. He is currently working as an Assistant Professor in the Mechanical Engineering department of Apex Institute of Technology, Rampur, U.P. His areas of interest are optimization and finite element analysis.

Anadi Misra obtained his bachelor's, master's and doctoral degrees in Mechanical Engineering from G. B. Pant University of Agriculture and Technology, Pantnagar, Uttarakhand, with a specialization in Design and Production Engineering. He has a total research and teaching experience of 25 years. He is currently working as a professor in the Mechanical Engineering department of the College of Technology, G. B. Pant University of Agriculture and Technology, Pantnagar, and has vast experience in guiding M.Tech. and Ph.D. students.


A DESIGN OF ROBUST PID CONTROLLER FOR A NON-MINIMUM PHASE NETWORKED CONTROL SYSTEM


Dewashri Pansari, Balram Timande, Deepali Chandrakar
Department of Electrical and Electronics Engineering Chhattisgarh Swami Vivekananda University Raipur (C.G.), India

ABSTRACT
We have designed a robust PID controller for handling the delay induced in a networked control system (NCS). A robust PID controller for a non-minimum phase system subjected to an uncertain delay is presented here. Previous achievements are extended to a non-minimum phase plant containing an uncertain delay time, with specifications in terms of gain and phase. Controller designs that meet gain and phase margin specifications have been demonstrated in the literature [4, 6]. The synthesis and analysis presented in this paper extend the procedure in [1, 3]. The paper presents optimized tuning of controllers for varying time-delay systems using simulation. The simulation results show the effectiveness of this compensation method.

KEYWORDS: Delays, Networked control system, Non-minimum phase system, Compensation, Gain margin and Phase margin.

I.

INTRODUCTION

The robust PID controller is a modified form of the PID controller in which the parameters of the system are tuned to compensate for the instability induced by time delays in a non-minimum phase system, and it endows the system with robust safety margins in terms of gain and phase. In classical control theory, signals are assumed to be transmitted along perfect communication channels; in an NCS they pass over a communication network. New controllers, algorithms and demonstrations must be developed in which the basic inputs/outputs are data packets that may arrive at variable times, not necessarily in order and sometimes not at all. When a PID controller receives its sensor information or transmits its output through a communication network, its parameters are difficult to tune using classical tuning methods; this is due to the delays introduced by the network. This paper uses the Ziegler-Nichols closed-loop cycling method for tuning the various parameters of the system. Gain and phase margin design is one of the most suitable methods for making the system stable. In this paper, previous achievements are extended to a plant containing uncertain delays [1, 3]. Controller designs that meet gain and phase margin specifications have been demonstrated in the literature [4-6].

II.

NETWORK CONTROL SYSTEM

In a point-to-point control system, a centralized computer is connected to each sensor and actuator for the control-signal calculation, sensing and actuation required for closed-loop control, as shown in fig 1. Such a scheme has the drawback that it requires extensive wiring from the sensors to the computer and from the computer to the actuators, and it moreover becomes complicated on any requirement of

reconfiguring the physical setup and functionality. Further, diagnosis and maintenance are also difficult in such systems.

Figure 1. Point to Point Control Configuration

To overcome the above-mentioned difficulties posed by the centralized system, the Networked Control System (NCS) has received considerable attention, aided by advances in control and communication technologies. When sensor and actuator data are transmitted over a network and the network nodes work in tandem to complete the control task, we call such a system a Networked Control System (NCS). An NCS uses a common bus for information exchange. The sensors, actuators, estimator units and control units are connected through communication networks, as shown in fig 2. This type of system provides several advantages such as modular and flexible system design, simple and fast implementation, and powerful system diagnosis and maintenance utilities.

Figure 2. Network Control System

Fig 3 shows the timing diagram of delays in a networked control system. The delay in an NCS can be divided into different types on the basis of data transfer: (i) sensor-to-controller delay and (ii) controller-to-actuator delay.

Figure 3. Timing diagram of Network delay propagations


III.

PID CONTROLLER IN NCS

The PD controller can add damping to a system, but the steady-state response is not affected. The PI controller can improve the relative stability and the steady-state error at the same time, but the rise time is increased. This leads to the motivation of using a PID controller, so that the best features of the PI and PD controllers are combined. PID control is one of the most popular control strategies for process control because of its simple control structure and easy tuning. The transfer function of the PID controller is

GC(s) = KP + KD s + KI/s

Where KP = proportional gain constant, KI = integral gain constant, KD = derivative gain constant. The PID controller is traditionally suitable for second- and lower-order systems. It can also be used for higher-order plants with dominant second-order behaviour [6]. In this paper we used the Ziegler-Nichols closed-loop cycling method and the gain margin/phase margin tester method for PID controller tuning.

Ziegler-Nichols closed-loop cycling method - procedure for tuning:
1. Select proportional control alone.
2. Increase the value of the proportional gain until the point of instability is reached, at which the critical value of gain KC is obtained.
3. Measure the period of oscillation to obtain the critical time constant TC.
Once the values for KC and TC are obtained, the PID parameters can be calculated according to the design specification given in Table 1.

Table 1: Design specification

Control | KP      | KI        | KD
P       | 0.5 KC  | -         | -
PI      | 0.45 KC | 1.2 KP/TC | -
PID     | 0.33 KC | 2 KP/TC   | 0.33 KP TC
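As a small worked example, the sketch below (Python, our own illustration; it encodes the Table 1 PID row as reconstructed here, i.e. KI = 2 KP/TC, equivalently Ti = TC/2, and KD = 0.33 KP TC, equivalently Td = TC/3) converts a measured critical gain and oscillation period into controller gains:

```python
def ziegler_nichols_pid(Kc, Tc):
    """PID gains from the closed-loop cycling test (Table 1, PID row).

    Kc : critical proportional gain at sustained oscillation
    Tc : period of the sustained oscillation (s)
    Returns (Kp, Ki, Kd) for Gc(s) = Kp + Ki/s + Kd*s.
    """
    Kp = 0.33 * Kc        # proportional gain
    Ki = 2.0 * Kp / Tc    # integral gain  (Ti = Tc/2)
    Kd = 0.33 * Kp * Tc   # derivative gain (Td = Tc/3)
    return Kp, Ki, Kd

# Example: Kc = 10, Tc = 2 s  ->  Kp = 3.3, Ki = 3.3, Kd = 2.178
```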
Control P PI PID KP 0.5 KC 0.45 KC 0.33 KC 1.2TC 2 TC 0.33 TC KI KD

Figure 4. Simulink model for Z-N tuning of the PID controller


Figure 5. PID controller response with Z-N tuning and no delay

The PID controller is suitable for second- and lower-order systems. When delay is introduced in the system, the performance of the system is degraded, and the delay can also destabilize the system by reducing its stability margins. A robust PID controller design is therefore introduced in this paper for a higher-order non-minimum phase system which contains a time delay element.

IV.

A ROBUST PID CONTROLLER DESIGN

Whenever there is a delay between the commanded response and the start of the output response, a time delay occurs in the control system, which decreases the phase margin and lowers the damping ratio, and hence increases the oscillatory response of the closed-loop system [12]. Time delay also decreases the gain margin, thus moving the system closer to instability. In this paper, suitable algorithms are introduced to counter the instability induced by the time delays. Consider a high-order non-minimum phase system which contains a time delay element, whose transfer function is of the form [10]

GP(s) = N(s) e^(-Ts) / D(s)    (1)

Where T is the delay time of the system, and N(s) and D(s) are the numerator and denominator polynomials. Fig. 6 shows the block diagram of a typical PID control system with reference input R(s) and disturbance D(s).

Figure 6. Block diagram of a typical PID control system

An error-actuated PID controller has the general transfer function

GC(s) = KP + KD s + KI/s    (2)

The forward open-loop transfer function of the control system shown in Fig. 6 is

G0(s) = GC(s) GP(s) = (KP + KD s + KI/s) N(s) e^(-Ts) / D(s)    (3)
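Before turning to the analytical tester, the gain and phase margins of G0(jω) for a candidate gain set can be checked numerically. The following Python sketch (our own rough illustration, not the paper's procedure; the plant polynomials, delay and gains passed in are placeholders) scans the frequency response for the gain and phase crossover points:

```python
import numpy as np

def margins(num, den, T, Kp, Ki, Kd, w=None):
    """Rough numerical gain/phase margins of G0(jw) = Gc(jw)*Gp(jw).

    Gp(s) = N(s)/D(s) * exp(-T s), with num, den the polynomial
    coefficients of N and D (numpy.polyval ordering);
    Gc(s) = Kp + Ki/s + Kd*s.
    """
    if w is None:
        w = np.logspace(-2, 2, 20000)
    jw = 1j * w
    G0 = ((Kp + Ki / jw + Kd * jw)
          * np.polyval(num, jw) / np.polyval(den, jw) * np.exp(-T * jw))
    mag, ph = np.abs(G0), np.unwrap(np.angle(G0))
    i_g = np.argmin(np.abs(mag - 1.0))    # gain-crossover frequency index
    pm = 180.0 + np.rad2deg(ph[i_g])      # phase margin (A = 1 case)
    i_p = np.argmin(np.abs(ph + np.pi))   # phase crossover at -180 degrees
    gm = 1.0 / mag[i_p]                   # gain margin (theta = 0 case)
    return gm, pm
```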

By letting s = jω, and letting Re[G0(jω)] and Im[G0(jω)] be the real part and the imaginary part of G0(jω), respectively, one has

|G0(jω)| = √( Re[G0(jω)]² + Im[G0(jω)]² )    (4)

∠G0(jω) = tan⁻¹( Im[G0(jω)] / Re[G0(jω)] )    (5)

e^(-jωT) = cos(ωT) - j sin(ωT)    (6)

Substituting (4) into (3), the characteristic equation of the closed-loop system with a gain-phase margin tester A e^(-jθ) added in series with the open loop becomes

D(jω) + A e^(-jθ) (KP + KD jω + KI/(jω)) N(jω) e^(-jωT) = 0    (7)

Let

A = 1 / |G0(jω)|    (8)

θ = ∠G0(jω) + 180°    (9)

When θ = 0, A is the gain margin of the system, and when A = 1, θ is the corresponding phase margin. Now we define the gain-phase margin tester function as

F(jω) = D(jω) + A e^(-jθ) N′(jω)    (10)

where N′(jω) = (KP + KD jω + KI/(jω)) N(jω) e^(-jωT). Equations (7)-(10) imply that the function F(jω) should always be equal to zero; this indicates that the gain margin and the phase margin of the PID control system can be determined from the characteristic equation. Expanding N(jω), D(jω) and e^(-jωT) via (6) and collecting terms, the real and imaginary parts of F(jω) are linear in the controller gains KD and KI for a fixed KP.


Setting the real part of F(jω) to zero gives

B1 KD + C1 KI + D1 = 0    (22)

and setting the imaginary part of F(jω) to zero gives

B2 KD + C2 KI + D2 = 0    (27)

where B1, C1, D1, B2, C2 and D2 are frequency-dependent coefficients determined by N(jω), D(jω), A, θ and ωT. Solving equations (22) and (27) simultaneously, we find

KD = (C1 D2 - C2 D1) / (B1 C2 - B2 C1)  and  KI = (B2 D1 - B1 D2) / (B1 C2 - B2 C1)    (28)
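Equations (22), (27) and (28) suggest a simple computational recipe: fix KP, A and θ, sweep ω, and solve the 2×2 linear system for (KD, KI) at each frequency to trace the boundary curve in the parameter plane. A Python sketch of this idea is given below (our own illustration under the formulation stated above; function and variable names are hypothetical):

```python
import numpy as np

def ki_kd_boundary(num, den, T, Kp, A=1.0, theta_deg=45.0,
                   omegas=np.linspace(0.01, 10.0, 500)):
    """Trace the (KI, KD) boundary curve in the parameter plane for fixed KP.

    With the tester A*exp(-j*theta) in series with the loop,
    F(jw) = D(jw) + A e^{-j theta} e^{-j w T} (KP + KD jw + KI/(jw)) N(jw) = 0
    is linear in (KD, KI) at each frequency w (eqs. (22) and (27)).
    """
    theta = np.deg2rad(theta_deg)
    points = []
    for w in omegas:
        jw = 1j * w
        N, D = np.polyval(num, jw), np.polyval(den, jw)
        E = A * np.exp(-1j * theta) * np.exp(-jw * T) * N
        a, b, c = E * jw, E / jw, D + E * Kp   # F = a*KD + b*KI + c
        M = np.array([[a.real, b.real],
                      [a.imag, b.imag]])
        if abs(np.linalg.det(M)) > 1e-12:
            Kd, Ki = np.linalg.solve(M, -np.array([c.real, c.imag]))
            points.append((w, Ki, Kd))         # one boundary point per frequency
    return points
```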

V.

SIMULATION RESULTS

The simulation is carried out in MATLAB and SIMULINK. With the help of the robust PID controller, the system with delay becomes stable and gives a high degree of performance, as shown in fig 7.

Figure 7. Frequency and phase response of the system


VI.

CONCLUSION AND FUTURE WORK

The robust PID controller designed in this paper presents a straightforward technique for characterizing all admissible PID controllers in the parameter plane for a system with uncertain time delays. The advantage of this method is the guaranteed robustness with respect to plant variations and external disturbances. It promises a control system with good tracking and disturbance-rejection behaviour. This method of selecting PID controller settings can be applied to a wide range of industrial applications. Further work will be the implementation of hardware-in-the-loop simulation followed by real-time implementation.

REFERENCES
[1] Ying J. Huang and Yuan-Jay Wang, "Robust PID controller design for non-minimum phase time delay systems", ISA Transactions 40 (2001), pp. 31-39.
[2] A. M. De Paor and M. O'Malley, "Controllers of Ziegler-Nichols type for unstable process with time delay", Int. J. of Control 49(4) (1989), pp. 1273-1284.
[3] A. T. Shenton and Z. Shafiei, "Relative stability for control systems with adjustable parameters", J. of Guidance, Control and Dynamics 17 (1994), pp. 304-310.
[4] W. K. Ho and W. Xu, "PID tuning for unstable processes based on gain and phase-margin specifications", IEE Proc. - Control Theory and App. 145(5) (1998), pp. 392-396.
[5] C. H. Chang and K. W. Han, "Gain margins and phase margins for control systems with adjustable parameters", J. of Guidance, Control, and Dynamics 13(3) (1990), pp. 404-408.
[6] K. W. Han, C. C. Liu and Y. T. Wu, "Design of controllers by parameter-space method and gain-phase margin tester method", Proc. of 1999 ROC Auto. Control Conf., Yunlin, 1999, pp. 145-150.
[7] K. W. Han and G. J. Thaler, "Control system analysis and design using a parameter space method", IEEE Trans. on Automatic Control, AC-11(3) (1966), pp. 560-563.
[8] D. D. Šiljak, "Parameter space methods for robust control design: a guided tour", IEEE Trans. on Automatic Control 34(7) (1989), pp. 674-688.
[9] D. D. Šiljak, "Generalization of the parameter plane method", IEEE Trans. on Automatic Control 11(7), pp. 674-688.
[10] C. T. Huang, M. Y. Lin and M. C. Huang, "Tuning PID controllers for processes with inverse response using artificial neural networks", J. Chin. Inst. Chem. Eng. 30(3) (1999), pp. 223-232.
[11] D. D. Šiljak, Nonlinear Systems: The Parameter Analysis and Design, John Wiley & Sons Inc, New York (1969).
[12] N. S. Nise, Control Systems Engineering (2nd Ed.), Addison-Wesley Publishing Company.
[13] Jianying Liu, Pengju Zhang and Fei Wang, "Real-time DC servo motor position control by PID controller using LabVIEW", Intelligent Human-Machine Systems and Cybernetics, IHMSC '09, Vol. 1, pp. 206-209, 2009.
[14] Guoshing Huang and Shuocheng Lee, "PC-based PID speed control in DC motor", Audio, Language and Image Processing, ICALIP 2008, pp. 400-407, 2008.
[15] Zhang Wenan, Yu Li and Song Hongbo, "A switched system approach to networked control systems with time-varying delays", 27th Chinese Control Conference, pp. 424-427, 2008.
[16] S. Longo, G. Herrmann and P. Barber, "Stabilisability and detectability in networked control", IET Control Theory Appl., 4(9):1612-1626, 2010.
[17] M. G. B. Cloosterman et al., "Controller synthesis for networked control systems", Automatica, 2010.
[18] Hehua Yan, Jiafu Wan, Di Li, Yuqing Tu and Ping Zhang, "Codesign of networked control systems - a review from different perspectives", IEEE International Conference on Cyber Technology in Automation, Control, and Intelligent Systems, Kunming, March 2011.
[19] Ahmad T. Al-Hammouri, Michael S. Branicky and Vincenzo Liberatore, "Co-simulation tools for networked control systems", 2009.
[20] B. Subudhi, S. Ghosh, S. Bhuyan, B. Raju and M. M. Gupta, "Smith predictor based delay compensation in networked control of digital servo motor", in Innovations and Advances in Communications, Information and Network Security, Macmillan Publishers India, 2010.
[21] B. Subudhi, S. Ghosh, S. Bhuyan, B. Raju and M. M. Gupta, "Smith predictor based delay compensation in networked control of digital servo motor", International Conference on Data Management, Ghaziabad, pp. 123-134, March 2010.


AUTHORS
Dewashri Pansari was born in Raipur, Chhattisgarh on 9th June 1988. She received her B.E. in Electrical and Electronics Engineering from Government Engineering College Raipur, Chhattisgarh, India, in 2010 and is currently an M.Tech. student at Disha Institute of Management and Technology, Raipur, Chhattisgarh. Her special fields of interest include control systems and power systems.

Balram Timande did his B.E. from B.D.C.O.E. Sewagram, Nagpur University, and his M.Tech. from B.I.T. Durg, CSVT University Bhilai. He has industrial experience of nine and a half years and teaching experience of eight years. His areas of research are embedded system design and image processing.

Deepali Chandrakar was born in Raipur, Chhattisgarh on 28th October 1988. She received her B.E. in Electrical and Electronics Engineering from Government Engineering College Raipur, Chhattisgarh, India, in 2010 and is currently an M.Tech. student at Disha Institute of Management and Technology, Raipur, Chhattisgarh. Her special fields of interest include control systems and power electronics.


STRUCTURAL AND MAGNETIC PROPERTIES OF Cu SUBSTITUTED Ni-Zn NANOCRYSTALLINE FERRITE SYNTHESIZED BY SOL-GEL AUTO-COMBUSTION TECHNIQUE
Vidyadhar V. Awati1, Maheshkumar L. Mane2, Sopan M. Rathod*

1 Department of Physics, C.T. Bora College, Shirur, Pune, India; Research Scholar, JJT University, Jhunjhunu, Rajasthan, India
2 Department of Physics, S. G. R. G. Shinde Mahavidyalaya, Paranda, India
* P.G. Department of Physics, Abasaheb Garware College, Pune, India

ABSTRACT
Cu2+ substituted Ni-Zn nano-sized ferrites of composition Ni0.8-xCuxZn0.2Fe2O4 (where x = 0.0, 0.2, 0.4, 0.6) were synthesized through the nitrate-citrate auto-combustion method at relatively low temperature and were characterized for structural and magnetic properties using X-ray diffraction (XRD), scanning electron microscopy (SEM) and vibrating sample magnetometry (VSM). Detailed studies of the crystal structural stability, surface morphology and magnetic properties as a function of Cu2+ ion concentration were performed. XRD revealed the formation of nano-sized ferrite particles with cubic spinel structure. The annealing treatment does not alter the crystal structure but increases the crystallinity of the samples. The IR spectra confirmed that the synthesized material is a ferrite. SEM revealed the microstructure and surface morphology of the obtained ferrite. The saturation magnetization at room temperature with a field of 10 kOe exhibits a strong influence of Cu2+ ion content and annealing temperature. These nanoferrites may have applications in core materials and in electronic device technology.

KEYWORDS: Nanocrystalline NiCuZnFe2O4, Auto-combustion, XRD, IR, SEM and VSM.

I.

INTRODUCTION

Ferrite materials have been under intense research for many years due to their useful electromagnetic characteristics for a large number of applications [1]. The Ni-Cu-Zn spinel ferrites are soft magnetic materials which have wide application in advanced technologies such as multi-layer chip inductors (MLCIs) [2], multi-layer LC filters [3], magnetic temperature sensors [4] and humidity sensors [5]. Soft magnetic materials with initial particle size in the nanometer scale are now of interest because of their unique magnetic properties, which differ considerably from those of bulk materials and have become technologically very important. In order to develop the multilayer ferrite chip inductor (MLFCI), NiCuZn ferrite has been intensively studied over the last ten years. It is one of the widely used electronic components for electronic products such as cellular phones, notebook computers and video cameras [6, 7]. The multilayer chip inductor (MLCI) has recently been developed as one of the key surface-mounting devices [8, 9]. The low-temperature sintered NiCuZn ferrite is one of the most important magnetic materials for multi-layer chip inductor (MLCI) applications because of its relatively low sintering temperature, high permeability in the high-frequency region, high electrical resistivity and chemical stability [10, 11]. The chip inductor is fabricated by layering alternate layers of ferrite and silver electrodes. This multi-layer ceramic-metal composite should be co-fired below

950 °C to suppress the interfacial diffusion of Ag metal into the ceramic, as the melting point of Ag is 961 °C. Ni-Cu-Zn ferrite is used in this application because it can be sintered below 950 °C. Ni-Cu-Zn ferrite powder is usually synthesized by the conventional solid-state reaction (oxide) method with calcination at higher temperature. The oxide method has some inherent disadvantages such as chemical inhomogeneity, coarser particle size, etc. [12, 13]. The aim of the present work is to prepare Ni-Cu-Zn ferrites using the sol-gel auto-combustion method, which helps to control microstructural properties such as grain size, distribution and orientation easily. We describe the effect of thermal treatment on the structure and micro-texture of the synthesized material. Chemical and structural changes that take place during combustion can be monitored by spectroscopic analysis, which helps in understanding the combustion reaction mechanism. The magnetic behaviour has been investigated with a vibrating sample magnetometer. Various compositions of the system Ni1-x-yCuxZnyFe2O4 have been investigated and reported in the literature [14, 15]. However, to the knowledge of the authors, reports on the specific composition Ni0.8-xCuxZn0.2Fe2O4 with x = 0.0 to 0.6 are not available in the literature. In this paper, we report on the synthesis process, the characterization and the magnetic properties of the as-burnt and sintered ferrites from this nano-sized powder.

II.

EXPERIMENTAL PROCEDURE

The powders were synthesized by the sol-gel auto-combustion method. Analytical grade zinc nitrate (Zn(NO3)2·6H2O), nickel nitrate (Ni(NO3)2·6H2O), copper nitrate (Cu(NO3)2·3H2O) and ferric nitrate (Fe(NO3)3·9H2O) were dissolved in distilled water to obtain a mixed solution. The reaction procedure was carried out in an air atmosphere without the protection of inert gases. An aqueous solution of citric acid (C6H8O7·H2O) was mixed with the metal nitrate solution, and then ammonia solution was slowly added to adjust the pH to 7. The mixed solution was kept on a hot plate with continuous stirring at 100 °C. During evaporation the solution became viscous and finally formed a very viscous brown gel. When finally all water molecules were removed from the mixture, the viscous gel began frothing. After a few minutes the gel automatically ignited and burnt with glowing flints; the decomposition reaction did not stop before the whole citrate complex was consumed. The auto-combustion was completed within a minute, yielding brown-coloured ashes termed the precursor. The as-burnt powders of all the samples were sintered at 400 °C and 700 °C for 2 h to get the final product.

III.

RESULTS AND DISCUSSION

3.1 Structural Aspects:


Powder X-ray diffraction (XRD) studies have been carried out on the samples sintered at 400 °C and 700 °C for Ni0.6Cu0.2Zn0.2Fe2O4, Ni0.4Cu0.4Zn0.2Fe2O4 and Ni0.2Cu0.6Zn0.2Fe2O4 using an X-ray diffractometer (Bruker AXS D8 Advance) with Cu Kα radiation (λ = 1.5406 Å). Figures 1(a) and 1(b) show the XRD patterns of the Ni0.8-xCuxZn0.2Fe2O4 ferrite samples with various x values for the as-burnt powders and the powders sintered at 700 °C, respectively. For samples with x = 0.0 to 0.6 in steps of 0.2, the crystalline structure remained a cubic spinel structure with no other phases observed, in agreement with JCPDS card no. 48-0489. The average crystallite size of the sintered samples was found to be between 25 and 40 nm, showing that the synthesized powder has nano-sized crystallites. At higher substitution of Cu2+ ions, weak diffraction peaks attributed to the presence of CuFe2O4 in a tetragonal crystal structure (JCPDS card no. 34-425) are observed. This tetragonal distortion is due to the Jahn-Teller effect of the Cu2+ ions located in the octahedral sites of the spinel at large concentration [16, 17]. Further, it is observed from Figure 1(a) that for the as-burnt samples the reflections are relatively weak, indicating low crystallinity and small particle size.


Figure 1(a): XRD pattern of the as-burnt samples of Ni0.8-xCuxZn0.2Fe2O4

Figure 1(b): XRD pattern of the samples sintered at 700 °C of Ni0.8-xCuxZn0.2Fe2O4

The lattice constant a was investigated by using the following relationship [18]:

a = d_hkl √(h² + k² + l²)    (1)

where d_hkl is the observed interplanar spacing for the (hkl) planes. The d-spacing values were calculated for the recorded peaks using Bragg's law, and the lattice constant a was calculated for each plane. The dependence of the lattice constant a on the concentration of Cu2+ substituted into the Ni-Cu-Zn ferrite is shown in Figure 2.

Figure 2: Lattice constant (a) of the as-burnt samples and the samples sintered at 700 °C.
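For reference, eq. (1) together with Bragg's law reduces to a two-line computation; the following Python sketch (our own illustration; the default wavelength assumes the Cu Kα radiation quoted above) recovers d and then a from a measured peak position:

```python
import numpy as np

def d_spacing(two_theta_deg, wavelength=1.5406):
    """Interplanar spacing d from a peak position via Bragg's law (angstrom)."""
    theta = np.deg2rad(two_theta_deg / 2.0)
    return wavelength / (2.0 * np.sin(theta))

def lattice_constant(d_hkl, h, k, l):
    """Cubic lattice constant a = d_hkl * sqrt(h^2 + k^2 + l^2), eq. (1)."""
    return d_hkl * np.sqrt(h**2 + k**2 + l**2)

# Example for a spinel (311) peak observed near 2-theta = 35.5 degrees:
# a = lattice_constant(d_spacing(35.5), 3, 1, 1)
```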

The values of the lattice constant increase as the Cu2+ concentration x increases in the Ni-Cu-Zn spinel ferrite system. This increase can be attributed to the difference in ionic radii of Ni2+ (0.72 Å) and Cu2+ (0.74 Å). However, from Fig. 2 it is observed that the lattice constant at any Cu concentration in the as-burnt nanoparticles is less than that of the sintered samples of the corresponding composition. The lattice contraction in the nanoparticles may be accounted for by the partial oxidation of Cu2+ to Cu3+ and Ni2+ to Ni3+, and by zinc loss. The enhancement in the lattice constant is normally attributed to the interface structure with a large volume fraction [19]. The enhanced role of the surface is clearly visible in the variation in the lattice constant, which becomes more important as the size is reduced. The broadness of the diffraction peaks indicates the small size of the ferrite crystals. The average crystallite diameter of the powder was estimated from the most intense (311) peak of the XRD pattern using the Scherrer method:

D = kλ / (β cos θ)    (2)

Where D is the average crystallite dimension perpendicular to the reflecting planes, λ is the X-ray wavelength, θ is the Bragg angle, and β is the finite-size broadening; k is a constant close to unity that is related both to the crystallite shape and to the way β is defined, i.e. either as the full width at half maximum (FWHM) or as the integral breadth (the ratio of the peak area to the peak maximum). The analysis revealed that the crystallite sizes are in the nanometer range and exhibit a gradual increase with increasing Cu2+ content (Figure 3).

Figure 3: Variation of crystallite size with Cu content x for the as-burnt samples and the samples sintered at 700 °C.
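A minimal Python sketch of the Scherrer estimate of eq. (2) is given below (our own illustration; k = 0.9 is a commonly assumed shape constant, and β is taken as the FWHM measured in degrees of 2θ):

```python
import numpy as np

def scherrer_size(two_theta_deg, fwhm_deg, wavelength=1.5406, k=0.9):
    """Crystallite size D = k*lambda/(beta*cos(theta)), eq. (2).

    two_theta_deg : peak position in degrees of 2-theta
    fwhm_deg      : broadening beta as FWHM in degrees of 2-theta
    Returns D in the same unit as the wavelength (here angstrom).
    """
    theta = np.deg2rad(two_theta_deg / 2.0)
    beta = np.deg2rad(fwhm_deg)
    return k * wavelength / (beta * np.cos(theta))
```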

The size of the particles is observed to increase with increasing sintering temperature. While sintering generally decreases the lattice defects and strains, it can also cause coalescence of crystallites, which results in an increase in the average size of the nanoparticles [20]. Figure 3 shows the dependence of particle size on sintering temperature and Cu2+ substitution; thus it appears that the particle size may be controlled by varying the sintering temperature.

3.2 Scanning Electron Microscopy (SEM):


The microstructure and morphology of the powder sintered at 700 °C were characterized at room temperature by field-emission scanning electron microscopy (FESEM) (HITACHI S4800, Japan). Figure 4 presents the SEM micrographs of the Ni-Cu-Zn ferrite powder as-burnt and sintered at 700 °C for 2 hours. The surface morphology of the powder reveals an aggregation of particles ranging in size from less than 1 micrometer to 10 micrometers. The average grain size is calculated by using the following equation:

Ga = 1.5 L / (M N)    (3)

where L is the total test line length in cm, M is the magnification and N is the total number of intercepts. The grain size increases gradually with increasing Cu composition. This may be due to the fact that the melting point of copper (1357 K) is lower than that of nickel (1726 K). Grains of uniform size are distributed throughout the surface for the higher Cu2+ compositions; at lower compositions, the formation of large, exaggerated grains of non-uniform size is seen to occur. The driving force for grain growth is the surface tension of the grain boundary [21].
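For concreteness, here is a small sketch of the line-intercept evaluation of Eq. (3) (the measurement values are assumed, not taken from the paper):

def average_grain_size_um(line_length_cm, magnification, intercepts):
    """Mean linear-intercept grain size, Eq. (3): Ga = 1.5*L / (M*N), with
    L the total test line length (cm), M the magnification and N the number
    of grain-boundary intercepts; the result is converted to micrometers."""
    ga_cm = 1.5 * line_length_cm / (magnification * intercepts)
    return ga_cm * 1e4   # cm -> micrometers

# Hypothetical SEM measurement
print(f"Ga = {average_grain_size_um(10.0, 20000, 25):.2f} um")   # 0.30 um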

Figure 4: SEM micrographs of typical (Ni0.8-xCuxZn0.2)Fe2O4 ferrite samples (panels for x = 0.2 and x = 0.4), as-burnt and sintered at 700 °C.

Figure 4 also shows the difference in grain size associated with the different sintering temperatures: the grain size is observed to increase with sintering temperature. While sintering generally reduces lattice defects and strains, it can also cause coalescence of crystallites, which increases the average size of the nanoparticles [22].

3.3 Fourier Transform Infrared Spectroscopy Analysis (FT-IR):


Fourier transform infrared (FT-IR) transmittance spectra of the as-burnt ferrite nanoparticles, measured in the frequency range of 400 to 4000 cm-1, are shown in Figure 5. The spectrum shows absorption bands in the region 1200 to 1700 cm-1 corresponding to NO3- ions and to the carboxyl group (COO-). Bands at 3600 to 3800 cm-1 correspond to hydrogen-bonded O-H groups. The two strong bands that appear around 605 cm-1 and 410 cm-1 are the characteristic bands of spinel ferrite [23]. The difference between ν1 and ν2 is owing to the changes in the bond length (Fe3+-O2-) at the octahedral and tetrahedral sites. As the Cu2+ substitution increases, the characteristic band ν1 shifts to the higher-frequency region while ν2 shifts to the lower-frequency region. The stretching vibrations at both sites show that both bands are disturbed by the incorporation of Cu2+ ions into the Ni-Zn matrix. The tetrahedral-site bands are shifted from lower to higher values, i.e., from 407 to 430 cm-1, which is attributed to the stretching of the Fe3+-O2- bonds on the substitution of Cu2+ ions. The octahedral-site bands, on the contrary, shift towards the

lower-frequency region, from 605 to 590 cm-1, with Cu2+ addition, which is attributed to the shifting of Fe3+ towards the oxygen ion on occupation of the octahedral site by Cu2+ ions.

Figure 5: Infrared spectra in the range 400-4000 cm-1 for the as-burnt nanocrystalline (Ni0.8-xCuxZn0.2)Fe2O4 ferrite.

3.4 Magnetization:
The hysteresis loops (Figure 6) of the investigated samples were measured to determine magnetic parameters such as the saturation magnetization (Ms), coercivity (Hc) and magneton number (nB). It is observed that the material has negligible coercivity and remanence, which indicates that the studied samples are soft ferrites. The measurement results are presented in Figures 6-8, indicating that Ms decreases with increasing Cu2+ substitution. This may be attributed to the weakening of the exchange interactions due to the Cu2+ ions. The saturation magnetization of Ni-Cu-Zn ferrite materials is defined by their molecular magnetic moments. When Cu2+ ions are introduced into the Ni-Zn ferrite, they replace some of the Ni2+ on the A site. Moreover, Cu2+ ions have a magnetic moment of 1 μB, less than the 2 μB of Ni2+ ions. The magnetic moment in a ferrite is mainly due to the uncompensated electron spins of the individual ions and the spin alignments in the two sub-lattices, which are arranged in an antiparallel fashion [24]. According to Néel's molecular-field model [25], the A-B super-exchange interaction predominates over the intra-sublattice A-A and B-B interactions. Therefore, the net magnetic moment is given by the difference of the magnetic moments of the B and A sub-lattices. The magnetic moment per formula unit (nB) was calculated from Néel's two-sublattice model using the relation

nB(cal) = MB(x) − MA(x)    (4)

where MB and MA are the B and A sub-lattice magnetic moments in μB. The observed magneton number (nB obs.) per formula unit in Bohr magnetons (μB) was calculated using the following relation [26]:

nB(obs) = (MW × MS) / 5585    (5)
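As an illustration of Eq. (5) (assumed composition and magnetization values, not the paper's measured data):

def bohr_magneton_number(mol_weight_g, ms_emu_per_g):
    """Observed magneton number per formula unit, Eq. (5):
    nB(obs) = MW * Ms / 5585, with MW in g/mol and Ms in emu/g."""
    return mol_weight_g * ms_emu_per_g / 5585.0

# Assumed molecular weight (~237 g/mol for a Ni-Cu-Zn ferrite) and Ms value
print(f"nB(obs) = {bohr_magneton_number(237.0, 45.0):.2f} Bohr magnetons")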


Figure 6: Magnetic hysteresis curves for (Ni0.8-xCuxZn0.2)Fe2O4 ferrite samples sintered at 700 °C with different Cu contents.

Figure 7: Variation of the observed (nB obs.) and calculated (nB cal.) magneton numbers for the samples sintered at 700 °C.

Figure 8: Variation of saturation magnetization (Ms) and coercivity (Hc) for the samples sintered at 700 °C.

It is obvious from Figure 7 that the calculated and observed values of the magneton number decrease with increasing Cu2+ substitution. The substitution of Cu2+ leads to a decrease of the magnetic moments of the A and B sites, and thus the magneton number nB decreases. This is due to the

fact that copper has a lower magnetic moment than nickel [27]. It is clearly observed from Figures 6 and 8 that the coercivity increases as the Cu2+ substitution increases. The saturation magnetization is related to HC through Brown's relation [28]:

Hc = 2 K1 / (μ0 Ms)    (6)

According to this relation, HC is inversely proportional to MS, which is consistent with our experimental results. An increase in coercivity can also be correlated with an increase in agglomeration [29]: with agglomeration, domains having different alignments come closer to each other, causing an increase in the magnetocrystalline anisotropy.

IV. CONCLUSIONS

Nanocrystalline Ni-Cu-Zn spinel ferrite was successfully fabricated via the solution combustion route. The substitution of Cu in Ni0.8-xCuxZn0.2Fe2O4 ferrites causes appreciable changes in their structural and magnetic properties. The formation of a single-phase cubic spinel structure was confirmed by X-ray diffraction analysis. The lattice constant and crystallite size show increasing trends with both composition and sintering temperature. The SEM results show the morphological development after heat treatment. The FT-IR spectra show the characteristic features of the spinel structure and a strong influence of composition. The magnetic measurements indicate that the magnetization decreases and the coercivity increases with increasing copper concentration. The studied materials may be suitable for MLCIs operable in the radio-frequency range and even in high-frequency devices and components. These investigations clearly point towards the merit of sol-gel methods for preparing nanocrystalline Ni-Cu-Zn ferrites with improved properties. The results show that sol-gel auto-combustion is a novel synthesis technique with several advantages, such as low cost, a simple preparation process and nano-sized active powder. The synthesis methodology reported in this paper is technically simple and cost-effective, and it does not require any complex equipment or complicated operations. The method is versatile and can be used to synthesize different multi-component ferrite nano-powders for various high-frequency applications. With the advent of nanotechnology, a tremendous surge in research on miniaturization and high-efficiency electronic devices is on the rise. Soft ferrite magnetic materials form a basic requirement in high-technology areas, and Ni-Cu-Zn ferrites adequately suit these demands and are considered to shape the future of advanced technology. Thus magnetic and semiconducting ferrospinels have a wide range of potential applications, owing to their interesting physical, structural, magnetic and electrical behavior.

ACKNOWLEDGEMENT
The authors would like to thank the Department of Physics, University of Pune, for the XRD facility, and SAIF, IIT, Powai, Mumbai, for the SEM and IR spectroscopy facilities provided. One of the authors, V. V. Awati, wishes to thank BCUD, University of Pune, for the funding provided under a Research Project (Proposal No. 11SCI00265).

REFERENCES
[1] T. Yamauchi, M. Abe, Ferrites: Proceedings of the Sixth International Conference on Ferrites (ICF6), The Japan Society of Powder and Powder Metallurgy, Japan, 1992.
[2] H. Su, H.W. Zhang, X.L. Tang, L.J. Jia, Q.Y. Wen, Mater. Sci. Eng. B 129 (2006) 172-175.
[3] C.L. Miao, J. Zhou, X.M. Cui, X.H. Wang, Z.X. Yue, L.T. Li, Mater. Sci. Eng. B 127 (2006) 1-5.
[4] C. Miclea, C. Tanasoiu, C.F. Miclea, A. Gheorghiu, V. Tanasoiu, J. Magn. Magn. Mater. 290-291 (2005) 1506-1509.
[5] N. Rezlescu, C. Doroftei, P.D. Popa, Rom. J. Phys. 52 (3-4) (2007) 353-360.
[6] H. Watanabe, Y. Kanagawa, T. Suzuki, T. Nomura, U.S. Patent No. 4956114, 1990.
[7] T. Nakamura, J. Magn. Magn. Mater. 168 (1997) 285.
[8] J.H. Jean, C.H. Lee, W.S. Kou, J. Am. Ceram. Soc. 82(2) (1999) 343.

[9] R.J. Charles, A.R. Achuta, U.S. Patent No. 4966625, 1990.
[10] J. Murbe, J. Topfer, J. Electroceramics 15 (2005) 215.
[11] H. Su, H. Zhang, X. Tang, L. Jia, Q. Wen, Mater. Sci. Eng. B 129 (2006) 172.
[12] P.S. Anil Kumar, J.J. Shrotri, S.D. Kulkarni, C.E. Deshpande, S.K. Date, Mater. Lett. 27 (1996) 293.
[13] N.J. Chu, X.Q. Wang, Y.P. Liu, H.X. Jin, Q. Wu, L. Li, Z.S. Wang, H.L. Ge, J. Alloys Compd. 470 (2009) 438-442.
[14] C. Liu, B. Zou, A.J. Rondinone, Z.J. Zhang, J. Phys. Chem. B 104 (2000) 1141-1145.
[15] D.K. Kim, Y. Zhang, W. Voit, K.V. Rao, J. Magn. Magn. Mater. 225 (2001) 30-36.
[16] A. Gabal, Y.M. Al Angari, M.W. Kadi, Polyhedron 30 (2011) 1185.
[17] J. Balavijayalakshmi, N. Suriyanarayanan, R. Jayapraksah, Mater. Lett. 81 (2012) 52.
[18] B.D. Cullity, Elements of X-ray Diffraction, Addison-Wesley Publishing Co., England (1967) 42.
[19] Chandan Upadhyay, H.C. Verma, S. Anand, J. Appl. Phys. 95 (2004) 5746.
[20] T.P. Raming, A.J.A. Winnubst, C.M. van Kats, A. Philipse, J. Colloid Interface Sci. 249 (2002) 346.
[21] R.L. Coble, J.E. Burke, in: J.E. Burke (Ed.), Sintering in Ceramics, Progress in Ceramic Science, Vol. 3, p. 197.
[22] T.P. Raming, A.J.A. Winnubst, C.M. van Kats, A.P. Philipse, J. Colloid Interface Sci. 249 (2002) 346-350.
[23] R.D. Waldron, Phys. Rev. 99 (1955) 1727.
[24] A.A. Birajdar, Sagar E. Shirsath, R.H. Kadam, S.M. Patange, K.S. Lohar, D.R. Mane, A.R. Shitre, J. Alloys Compd. 512 (2012) 316.
[25] L. Néel, Ann. Phys. 3 (1948) 137.
[26] J. Smit, H.P.J. Wijn, Ferrites, Philips Technical Library, Netherlands, 1959.
[27] J.J. Shrotri, S.D. Kulkarni, C.E. Deshpande, A. Mitra, S.R. Kulkarni, P.S.A. Kumar, S.K. Date, Mater. Chem. Phys. 59 (1999) 1-5.
[28] J.M.D. Coey, Rare Earth Permanent Magnetism, John Wiley and Sons, New York, 1996.
[29] M. Srivastava, A.K. Ojha, S. Chaubey, P.K. Sharma, A.C. Pandey, Mater. Sci. Eng. B 175 (2010) 14.

AUTHORS
Vidyadhar Vasant Awati is currently working as an Associate Professor and Head, Department of Physics, C. T. Bora College, Shirur, District Pune, India. He is presently pursuing his Ph.D. in Physics. He has completed 3 minor research projects funded by the UGC and the University of Pune, and one minor research project is ongoing. He has presented 4 research papers at national and international conferences. His research interests are thin films and their applications, nano-ferrites and materials science.

Sopan Mansing Rathod is currently working as an Associate Professor at the P.G. and Research Department of Physics, Abasaheb Garware College, Karve Road, Pune, India. He was awarded a Ph.D. (Physics) in laser physics by Dr. B. A. M. University, Aurangabad, in 2003, under the guidance of Dr. B. H. Pawar, HOD, Department of Physics, Amravati University. He is presently a recognized Ph.D. research guide of the University of Pune, and is also recognized for M.Phil. and P.G. supervision by the University of Pune. 8 students have been awarded the M.Phil. degree under his guidance and 4 students are pursuing their Ph.D. He organized one international conference, working as its convener. He has published 13 papers in international journals, presented 4 papers at international conferences and 15 papers at national conferences. His research interests are lasers and their applications, nano-ferrites and materials science.


COMPARATIVE PARAMETRIC ANALYSIS FOR STABILITY OF 6T AND 8T SRAM CELL


Manpreet Kaur1, Ravi Kumar Sharma2

1 Lovely Professional University, Jalandhar, Punjab, India
2 Vivekanand Institute of Technology, Jaipur, Rajasthan, India

ABSTRACT
As technology improves, the channel length of the MOSFET scales down. In this environment, the stability of SRAM becomes a major concern for future technologies. The static noise margin (SNM) [1] plays a vital role in the stability of SRAM [2]. This paper gives an introduction to the 8T SRAM cell [3]. It covers the implementation, characterization and analysis of the 8T SRAM cell and its comparison with the conventional 6T SRAM cell [4] for various parameters such as the read margin, write margin, data retention voltage, and temperature and power-supply fluctuations; based on these analyses we find the SNM of the 6T and 8T SRAM cells. The tool used for simulation is IC Station by Mentor Graphics, using 350 nm technology at a supply voltage of 2.5 V.

KEYWORDS: SNM, 6T SRAM, DRV, CR, 8TSRAM

I. INTRODUCTION
As device size is scaled down, random process variations significantly degrade the noise margin. With SRAM sizing in the nanometer scale, variations in electrical parameters (e.g., threshold voltage, sheet resistance) reduce stability due to fluctuations in process parameters, i.e., the density of impurity concentration, oxide thickness and diffusion depths [6]. Considering all these effects, the bit yield of SRAM is strongly influenced by VDD, the threshold voltage (Vth) and the transistor-sizing ratios. So there are a number of design criteria that must be taken into consideration. The two basic criteria considered here are that the data read operation should not be destructive and that the static noise margin should be in the acceptable range. Thus, to achieve higher stability, an 8T SRAM with improved SNM has been used, but at a penalty in area and power as it uses three extra transistors. In this cell structure, the read operation can be performed without altering the cell stability. This makes it possible to lower the threshold voltage Vth of the MOSFETs in the SRAM cell by the same proportion as in CMOS logic transistors. The rest of the paper is organized as follows: Section II introduces the cell structure of the 8T SRAM cell and describes its read and write operations. In Section III, various parameters are analyzed. Section IV shows the comparison with the 6T SRAM cell. Finally, Section V offers a brief conclusion.

II. CELL STRUCTURE OF 8T SRAM


During the read operation of the 6T SRAM cell, a fundamental stability problem occurs. In order to reduce leakage power consumption, the pre-charge voltage of the bit-lines is kept much lower than the cell supply voltage [7]. When the access transistors are turned on, the '0' logic node is pulled up to a poor '0'

level and the '1' logic node is pulled down to a poor '1' level; this may flip the cell data. In the novel 8T SRAM cell, three MOSFETs are introduced to separate the read and write current paths and to avoid accidental flipping of the cell during the read operation, as shown in Figure 1. This results in a significantly larger SNM during the read operation.

Figure 1. Cell structure of 8T SRAM cell

2.1 READ OPERATION


The read operation of the 8T cell is performed using MOSFETs M6, M7 and M8, as shown in Figure 1. The bit-line is pre-charged to logic '1' for a successful read operation. Node Qbar is connected to the gates of the M7 and M8 transistors. When transistor M6 is turned on via the Read Word Line, current starts flowing in and out of the read circuit. A sense amplifier is used to read the cell data by sensing the bit-line voltage fluctuation; the amplifier detects the voltage difference at its inputs. For a read-'1' operation, the initial states of Q and Qbar are assumed to be '1' and '0' respectively. As node Qbar stores logic '0', turning on the M6 transistor enables the M8 (PMOS) transistor, which in turn charges the bit-line through M8 and M6. The sense amplifier detects the bit swing and output '1' is obtained [8]. During a read-'0' operation, the Read Word Line enables M6, and as node Qbar stores logic '1', it turns on M7 (NMOS), which discharges the bit-line through M7 and M6, and logic '0' is obtained at the output.

2.2 WRITE OPERATION


In this 8T SRAM cell structure, only a single bit-line is used for the read and write operations, in contrast to the conventional 6T SRAM cell, as shown in Figure 1. During a write-'1' operation, enabling the Write Word Line turns on the M5 transistor. As the bit-line is charged to logic '1' for the write-'1' operation, the Q node starts charging and turns on M1, which flips the Qbar node to logic '0'. The Qbar node then helps enable M4, which facilitates writing logic '1' at the Q node. On the other hand, during a write-'0' operation, the bit-line is charged to logic '0' and M5 is turned on by enabling the Write Word Line signal. The Q node starts discharging and turns on M2, which in turn flips the Qbar node to logic '1'. Qbar then helps turn on M3, which facilitates discharging the Q node properly, and consequently logic '0' is obtained at the Q node.

III. PARAMETERS ANALYSED


In this section, different parameters such as the static noise margin (SNM), data retention voltage (DRV), read margin (RM) and write margin (WM) of the 8T SRAM cell are analyzed. The cell ratio (CR) and pull-up ratio (PR) are also calculated and the corresponding static noise margin is analyzed.

3.1. STATIC NOISE MARGIN (SNM)

In this section, the butterfly method for measuring the static noise margin is introduced. The SNM is the maximum amount of noise voltage that can be tolerated by the 8T SRAM cell while still maintaining correct

operation. The static noise margin of an SRAM cell depends on the cell ratio (CR) [9], the supply voltage [10] and the pull-up ratio [11]. A high value of SNM is required for high stability of the SRAM cell. Both the read margin and the write margin are also affected by the static noise margin of the SRAM cell, and the SNM is in turn affected by the threshold voltages of the NMOS and PMOS transistors. To obtain the butterfly curve, Adobe Photoshop CS7 is used: the two output curves are rotated according to the X-Y coordinates, which results in the butterfly structure shown in Figure 2. The static noise margin is the side of the maximum square drawn between the inverter characteristics.

Figure 2. Calculation of SNM after rotation
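As a hedged illustration of the maximum-square construction (a brute-force numeric sketch with idealized tanh-shaped VTCs; it is not the graphical procedure used by the authors):

import numpy as np

def hold_snm(vin, f1, f2, n=100):
    """Side of the largest axis-aligned square inside the upper butterfly
    lobe, i.e. inside {y < f1(x)} and {x > f2(y)}; f1, f2 are the inverter
    VTCs, assumed monotone decreasing.  For an asymmetric cell, repeat for
    the lower lobe and take the minimum.  Coarse grid search, for
    illustration only."""
    F1 = lambda x: np.interp(x, vin, f1)    # curve 1: y = f1(x)
    F2 = lambda y: np.interp(y, vin, f2)    # curve 2 mirrored: x = f2(y)
    best = 0.0
    grid = np.linspace(vin[0], vin[-1], n)
    for x in grid:                          # candidate bottom-left corners
        for y in grid:
            if x < F2(y):                   # corner must lie right of curve 2
                continue
            s_max = min(vin[-1] - x, vin[-1] - y)
            sides = np.linspace(0.0, s_max, 100)
            ok = sides[(y + sides) <= F1(x + sides)]  # top-right vs curve 1
            if ok.size:
                best = max(best, ok[-1])
    return best

# Idealized symmetric inverter VTCs at VDD = 2.5 V (illustrative only)
vin = np.linspace(0.0, 2.5, 1000)
vtc = 1.25 * (1.0 - np.tanh(8.0 * (vin - 1.25)))
print(f"SNM ~ {hold_snm(vin, vtc, vtc) * 1000:.0f} mV")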

3.2. CELL RATIO (CR)


In this section, the static noise margin is calculated by varying the cell ratio of the transistors. The cell ratio is the ratio of the size of the driver transistor to that of the access transistor. As the cell ratio increases, by increasing the size of the driver transistor, the static noise margin of the memory cell also increases, which results in an increase of the current in the memory cell.

Cell Ratio (CR) = (W1/L1) / (W6/L6)    (1)

3.3. PULL-UP RATIO (PR)

In this section, the static noise margin is calculated by varying the pull-up ratio of the transistors. The pull-up ratio is the ratio of the size of the load transistor to that of the access transistor. As the pull-up ratio increases, by increasing the size of the load transistor, the static noise margin of the memory cell also increases, which results in an increase of the current in the memory cell.

Pull-up Ratio (PR) = (W4/L4) / (W5/L5)    (2)

3.4. DATA RETENTION VOLTAGE (DRV)


In this section, the static noise margin is calculated by varying the supply voltage of the memory cell. The data retention voltage [12] is the minimum power-supply voltage required to retain the data at a high node in the standby mode of the SRAM cell. To calculate the DRV, the power-supply voltage is reduced continuously as long as the state of the SRAM cell does not flip, i.e., as long as the contents of the SRAM cell remain constant, and the corresponding SNM is calculated. As the supply voltage VDD decreases down to the DRV, the voltage transfer curves (VTC) of the cross-coupled inverters degrade to such a level that the static noise margin (SNM) of the SRAM cell reduces to zero, as shown in Figure 3.
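The sweep itself is straightforward to script; a minimal sketch, assuming a hypothetical snm_at_vdd hook that wraps the circuit simulation:

def data_retention_voltage(snm_at_vdd, vdd_start=2.5, step=0.01):
    """Lower VDD until the hold SNM collapses to zero; the last supply value
    with positive SNM is the DRV.  snm_at_vdd(vdd) is an assumed callback
    returning the simulated SNM (in volts) at the given supply voltage."""
    vdd = vdd_start
    while vdd > 0.0 and snm_at_vdd(vdd) > 0.0:
        vdd -= step
    return vdd + step   # last supply at which the cell still held its state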


Figure 3. VTC during calculation of DRV

3.5. READ MARGIN (RM)
In this section, the read margin (RM) is calculated based on a transistor current model. The read margin defines the read stability of the SRAM cell based on the VTC obtained. The read margin of the SRAM cell is directly proportional to the cell ratio of the cell: the value of the read margin increases with increasing cell ratio. The butterfly method is used for the read-margin analysis, as shown in Figure 4.

Figure 4. Calculation of Read Margin

3.6. WRITE MARGIN (WM)

The write margin of an SRAM cell is the minimum bit-line voltage required to flip the state of the cell. Its value depends on the cell design, the SRAM array size and process variation. The existing bit-line (BL) sweep method is used for the calculation of the write margin, as shown in Figure 5. The value of the write margin increases with increasing pull-up ratio (PR).

Figure 5. Calculation of Write Margin


IV. COMPARATIVE ANALYSIS


The simulation results of the 8T SRAM cell are analyzed and compared with the conventional 6T SRAM cell for cell ratio, pull-up ratio, data retention voltage, read margin, write margin, threshold-voltage variations and temperature variations, and the corresponding SNM and power dissipation are calculated in 350 nm technology with a supply voltage of 2.5 V.

4.1 CELL RATIO VS SNM


Table 1 presents the static noise margin of the 8T SRAM cell calculated by varying the cell ratio, together with the comparison with 6T. The results show that the SNM, i.e. the stability, of the 8T SRAM cell is higher than that of the 6T SRAM cell, as it uses a separate word line for the read operation.

Cell Ratio (CR) = (W1/L1) / (W6/L6) = 1    (3)

In Figure 6, the profiles of the static noise margin of both the 8T and 6T SRAM cells are shown for different cell ratios. Here the 8T SRAM cell has a better SNM than the 6T cell.

4.2 PULL UP RATIO VS SNM


Table 2 presents the static noise margin of the 8T SRAM cell calculated by varying the pull-up ratio, together with the comparison with 6T. The results show that the SNM, i.e. the stability, of the 8T SRAM cell is higher than that of the 6T SRAM cell, as it uses separate word lines for the read and write operations.

Pull-up Ratio (PR) = (W4/L4) / (W5/L5) = 2.1    (4)

In Figure 7, the profiles of the static noise margin of both the 8T and 6T SRAM cells are shown for different pull-up ratios. Here the 8T SRAM cell has a better SNM than the 6T cell.
Table 1. Cell Ratio vs SNM

CR  | SNM (mV), 8T SRAM | SNM (mV), 6T SRAM
0.6 | 120 | 100
0.8 | 125 | 102.5
1.0 | 128 | 104.6
1.2 | 130 | 107
1.4 | 139 | 108
1.6 | 148 | 112
1.8 | 152 | 116.7
2.0 | 163 | 123
2.8 | 190 | 130

Figure 6. Calculated SNM for both cells.

Table 2. Pull-up Ratio vs SNM

PR  | SNM (mV), 8T cell | SNM (mV), 6T cell
2.4 | 164 | 101.2
2.8 | 167 | 105
3.0 | 170 | 108
3.2 | 172 | 113
3.4 | 175 | 117
3.6 | 179 | 120
4.0 | 182 | 131

Figure 7. SNM obtained by varying the pull-up ratio for both cells.

4.3 DATA RETENTION VOLTAGE VS SNM


Table 3 presents the static noise margin of the 8T SRAM cell, calculated by reducing the power-supply voltage until just before the state of the cell flips, together with the comparison with 6T. The results show that the static noise margin decreases as the supply voltage decreases, but the SNM of the 8T SRAM cell remains better than that of the 6T cell. The value of the DRV for 350 nm technology is 0.5 V.
Table 3. Data Retention Voltage vs SNM

Supply voltage (V) | SNM (mV), 8T SRAM cell | SNM (mV), 6T cell
2.5 | 140 | 100
2.0 | 138 | 96
1.6 | 134 | 95
1.2 | 129 | 94
1.0 | 124 | 92.9
0.8 | 122 | 89
0.4 | 120 | 86

In Figure 8, the SNM profiles of both cells are shown for different power-supply voltages at 350 nm technology. Here the 8T SRAM cell has a better SNM than the 6T cell.


Figure 8. Calculated SNM by varying supply voltage for both cells

4.4 READ MARGIN VS CELL RATIO


Table 4 presents the read margin of the 8T SRAM cell calculated by varying the cell ratio, together with the comparison with 6T. The read margin of the 8T SRAM cell is directly proportional to the cell ratio, i.e., the read margin increases as the cell ratio increases.

Table 4. Read Margin vs Cell Ratio

CR  | Read Margin, 8T SRAM cell | Read Margin, 6T SRAM
0.8 | 0.449 | 0.194
1.0 | 0.455 | 0.196
1.2 | 0.461 | 0.200
1.4 | 0.469 | 0.209
1.6 | 0.476 | 0.211
1.8 | 0.489 | 0.215
2.0 | 0.498 | 0.221

In Figure 9, the read-margin profiles of both cells are shown for different values of the cell ratio at 350 nm technology. Here the 8T SRAM cell has a better read margin than the 6T cell.

Figure 9. Calculation of RM by varying CR

4.5 WRITE MARGIN VS PULL UP RATIO
Table 5 presents the write margin of the 8T SRAM cell calculated by varying the pull-up ratio, together with the comparison with 6T. The write margin of the 8T SRAM cell is directly proportional to the pull-up ratio, i.e., the write margin increases as the pull-up ratio increases. In Figure 10, the write-margin profiles of both cells are shown for different values of the pull-up ratio at 350 nm technology. Here the 8T SRAM cell has a better write margin than the 6T cell.
Table 5. Write Margin vs Pull-up Ratio

PR  | Write Margin, 8T SRAM cell | Write Margin, 6T SRAM
3.0 | 0.50 | 0.246
3.2 | 0.52 | 0.249
3.4 | 0.55 | 0.253
3.6 | 0.57 | 0.256
3.8 | 0.61 | 0.261
4.0 | 0.65 | 0.269

Figure 10. Calculation of WM by varying PR

4.6 THRESHOLD VOLTAGE VS SNM


Table 6 presents the static noise margin of the 8T SRAM cell calculated by varying the threshold voltages of both the NMOS and PMOS transistors, together with the comparison with 6T. The static noise margin decreases as the threshold voltages of the NMOS and PMOS transistors decrease. In Figure 11, the SNM profiles of both cells are shown for different threshold voltages of the NMOS and PMOS at 350 nm technology. Here the 8T SRAM cell has better noise tolerance at low threshold voltages than the 6T SRAM cell.
Table 6. Threshold voltage Vs SNM
Threshold voltage (V) | SNM (mV), 8T SRAM | SNM (mV), 6T SRAM
PMOS: -0.2, NMOS: 0.2 | 132 | 103.09
PMOS: -0.3, NMOS: 0.3 | 138 | 105.67
PMOS: -0.5490813, NMOS: 0.6807607 | 140 | 109.34
PMOS: -1.0, NMOS: 1.0 | 158 | 112.37
PMOS: -1.5, NMOS: 1.5 | 164 | 116.567
PMOS: -1.8, NMOS: 1.8 | 170 | 120.89

Figure 11. Calculation of SNM by varying the threshold voltage

4.7 TEMPERATURE VS POWER DISSIPATION


For the calculation of power, the temperature is varied from 0 °C to 50 °C as shown in Table 7, and the corresponding power dissipation is noted. As the temperature increases, the power dissipation also increases. The comparison with the conventional 6T SRAM is also shown. In Figure 12, the power-dissipation profiles of both cells are shown for different temperatures at 350 nm technology. Here the 8T SRAM cell has a higher power dissipation than the 6T cell.
Table 7. Temperature vs Total Power Dissipation

Temperature (°C) | 6T cell power dissipation (pW) | 8T cell power dissipation (pW)
0  | 66.5688 | 43.7301
5  | 66.9243 | 49.1746
10 | 67.4591 | 54.9793
15 | 68.0234 | 63.4268
20 | 68.8615 | 74.4341
27 | 70.5036 | 95.4047
30 | 72.9018 | 112.8935
35 | 76.0813 | 130.0293
40 | 80.9964 | 159.3468
45 | 84.2878 | 178.2163
50 | 89.2715 | 248.5440


Figure 12. Calculation of power dissipation by varying temperature

4.8. SUPPLY VOLTAGE VS POWER DISSIPATION


For the calculation of power, the supply voltage VDD is varied from 5.0 V down to 0.5 V, and the corresponding power dissipation is noted. As the supply voltage increases, the power dissipation also increases. The comparison with 6T is also shown in Table 8. In Figure 13, the power-dissipation profiles of both cells are shown for different supply voltages at 350 nm technology. Here the 8T SRAM cell has a higher power dissipation than the 6T cell.
Table 8. Supply Voltage vs Total Power Dissipation

Supply voltage (VDD, V) | 6T cell power dissipation (pW) | 8T cell power dissipation (pW)
5.0 | 396.5366 | 478.3682
4.5 | 340.8164 | 387.2569
4.0 | 274.1283 | 313.3189
3.5 | 196.2356 | 208.8437
3.0 | 129.9901 | 161.4528
2.5 | 70.5036 | 95.4047
2.0 | 67.3492 | 89.3591
1.5 | 59.5460 | 82.5396
1.0 | 51.4837 | 77.2124
0.5 | 46.8218 | 71.8397

Figure 13. Calculation of power dissipation by varying supply voltage


V. CONCLUSION
In this paper, a stability analysis of the 8T SRAM cell for various parameters such as the read margin, write margin and data retention voltage has been carried out, and the results are compared with the conventional 6T SRAM cell. From the above analysis it can be concluded that the static noise margin (SNM) of the 8T SRAM cell is better than that of the conventional 6T SRAM cell. A power analysis of the 8T SRAM cell under varying temperature and supply voltage has also been done, and the results are compared with those of the conventional 6T SRAM cell. These analyses are very useful for future research work on 6T and 8T SRAM cells, since they characterize the stability of the memory cells. Using the noise margin of a memory cell, it is easier to configure a new SRAM cell with higher stability and lower noise.

REFERENCES
[1] Andrei Pavlov, Manoj Sachdev, CMOS SRAM Circuit Design and Parametric Test in Nano-Scaled Technologies, Intel Corporation, University of Waterloo, Springer Science and Business Media B.V., 2008, pp. 1-202.
[2] Abhijit Sil, Soumik Ghosh, Magdy Bayoumi, "A Novel 8T SRAM Cell With Improved Read-SNM", The Center for Advanced Computer Studies, vol. 12, 2008.
[3] Ken Mai, "Design and Analysis of Reconfigurable Memories", Dissertation, Stanford University, 2005.
[4] Jan M. Rabaey, Anantha Chandrakasan, Borivoje Nikolic, Digital Integrated Circuits, Second Edition.
[5] Paridhi Athe, S. Dasgupta, "A Comparative Study of 6T, 8T and 9T Deca-nano SRAM Cell", ISIEA, pp. 889-894, 2009.
[6] Debasis Mukherjee, Hemanta Kr. Mondal, B.V.R. Reddy, "Static Noise Margin Analysis of SRAM Cell for High Speed Application", IJCSI International Journal of Computer Science Issues, Vol. 7, 2010.
[7] Qiaoyan Yu, Paul Ampadu, "Cell Ratio Bounds for Reliable SRAM Operation", IEEE, pp. 1192-1195, 2006.
[8] Sung-Mo Kang, Yusuf Leblebici, CMOS Digital Integrated Circuits: Analysis and Design, McGraw-Hill International Editions, Boston, 2nd Edition, 1999.
[9] E. Seevinck, F. List, "Static Noise Margin Analysis of MOS SRAM Cells", IEEE Journal of Solid-State Circuits, 1987, 5, 748-754.
[10] Mahmut E. Sinangil, Naveen Verma, Anantha P. Chandrakasan, "A Reconfigurable 8T Ultra-Dynamic Voltage Scalable (U-DVS) SRAM in 65 nm CMOS", IEEE Journal of Solid-State Circuits, Vol. 44, No. 11, November 2009.
[11] Luigi Dilillo, Patrick Girard, Serge Pravossoudovitch, Arnaud Virazel, "Analysis and Test of Resistive-Open Defects in SRAM Pre-Charge Circuits", Proceedings of the European Test Symposium, pp. 1-14, 2005.
[12] A. Kumar, H. Qin, P. Ishwar, J. Rabaey, K. Ramchandran, "Fundamental Bounds on Power Reduction during Data-Retention in Standby SRAM", IEEE International Symposium, 27-30 May 2007, pp. 1867-1870.
[13] K. Mai et al., "Smart Memories: A Modular Reconfigurable Architecture", Proceedings, International Symposium on Computer Architecture, pp. 161-171, June 2000.
[14] B. Amrutur, M. Horowitz, "Speed and Power Scaling of SRAMs", IEEE Journal of Solid-State Circuits, Feb. 2000.
[15] Tegze P. Haraszti, Microcirc Associates, CMOS Memory Circuits, Kluwer Academic Publishers, New York, Boston, Dordrecht, London, Moscow, pp. 238-239.
[16] A.P. Chandrakasan et al., "Low-power CMOS digital design", IEEE Journal of Solid-State Circuits, Vol. 27, pp. 473-484, Apr. 1992.
[17] L. Chang, D. Fried, J. Hergenrother, "Stable SRAM cell design for the 32 nm node and beyond", VLSI Technology, 2005, Digest of Technical Papers, 2005 Symposium on, pp. 128-129.
[18] S. Birla, N. Kr. Shukla, M. Pattanaik, R.K. Singh, "Device and Circuit Design Challenges for Low Leakage SRAM for Ultra Low Power Applications", Canadian Journal on Electrical & Electronics Engineering, Vol. 1, No. 7, 2010, pp. 156-167.
[19] B.H. Calhoun, A.P. Chandrakasan, "Static Noise Margin Variation for Sub-Threshold SRAM in 65 nm CMOS", IEEE Journal of Solid-State Circuits, Vol. 41, No. 7, 2006, pp. 1673-1679.
[20] Y. Chung, S.-H. Song, "Implementation of Low-Voltage Static RAM with Enhanced Data Stability and Circuit Speed", Microelectronics Journal, Vol. 40, No. 6, 2009, pp. 944-951.
[21] B.H. Calhoun, A.P. Chandrakasan, "A 256-kb 65-nm Sub-Threshold SRAM Design for Ultra-Low-Voltage Operation", IEEE Journal of Solid-State Circuits, Vol. 42, No. 3, 2007, pp. 680-688.


ABOUT AUTHORS
Manpreet Kaur is working as an Assistant Professor in the Department of Electronics and Communication Engineering, Lovely Professional University, Jalandhar (Punjab), India. She received her M.Tech. in VLSI Design from Guru Gobind Singh Indraprastha University, Delhi, and her B.E. in Electronics and Communication Engineering, in 2011 and 2008 respectively. Her main research interest is reconfigurable memory design for low power.

Ravi Kumar Sharma is working as an Assistant Professor in the Department of Electronics and Communication Engineering, Vivekanand Institute of Technology, Jaipur (Rajasthan), India. He completed his M.Tech. in VLSI Design at Guru Gobind Singh Indraprastha University, Delhi, and his B.E. in Electronics and Communication Engineering at the University of Rajasthan, in 2011 and 2009 respectively. His main research interest is low-power VLSI design.


A SURVEY ON ENERGY EFFICIENT SERVER CONSOLIDATION THROUGH VM LIVE MIGRATION


Jyothi Sekhar, Getzi Jeba, S. Durga
Department of Information Technology, Karunya University, Coimbatore, India

ABSTRACT
Virtualization technologies, which are heavily relied on by cloud computing environments, provide the ability to transfer virtual machines (VMs) between physical systems using the technique of live migration, mainly for improving energy efficiency. Dynamic server consolidation through live migration is an efficient way towards energy conservation in cloud data centers. The main objective is to keep the number of powered-on systems as low as possible and thus reduce the excessive power used to run idle servers. The technique of VM live migration is widely used for various system-related issues like load balancing, online system maintenance, fault tolerance and resource distribution. Energy-efficient VM migration becomes a main concern as data centers try to reduce power consumption. Aggressive consolidation, however, may lead to performance degradation and hence can result in Service Level Agreement (SLA) violations; thus there is a trade-off between energy and performance. Various protocols, heuristics and architectures have been proposed for energy-aware server consolidation via live migration of VMs, and these are the main focus of this survey.

KEYWORDS: Cloud Computing, Data Center, Energy Efficiency, Live Migration, Virtual Machine, VM Consolidation

I. INTRODUCTION

Computing resources have become cheaper, more powerful and more ubiquitously available than ever before due to rapid developments in processing and storage technologies and the success of the Internet. People in businesses are trying to figure out methods to cut costs while maintaining the same performance standards. Their aspiration to grow even under the pressure of limited resources has invited them to try new ideas and methods. This realization, along with current technological advancements, has enabled the actualization of a new computing model called cloud computing, in which resources (e.g., CPU, storage) are provided as general utilities that can be leased and released by users through the Internet in a pay-as-you-go, on-demand manner. Therefore we can also call it utility computing.

NIST Definition of Cloud Computing
"Cloud computing is a model for enabling convenient, on-demand network access to a shared pool of configurable computing resources (e.g., networks, servers, storage, applications, and services) that can be rapidly provisioned and released with minimal management effort or service provider interaction." [19]

Challenges in cloud
- Automated service provisioning

- Virtual machine migration
- Server consolidation
- Traffic management and analysis
- Data security
- Storage technologies and data management

Cloud Data Center
Cloud data centers are attractive because their cost is likely to be much lower than that of traditional data centers for the same set of services. Cloud data centers [18] are data centers with 10,000 or more servers on site, all devoted to running very few applications that are built with consistent infrastructure components. One of the most important factors is that cloud data centers aren't remodeled traditional data centers. The employee-to-server ratio is about 1:1000, which means there is much greater automation in cloud data centers. Estimates of the cost of running a cloud data center put labor at 6 percent of the total operating cost, power distribution and cooling at 20 percent, and computing at 48 percent. Of course, the cloud data center has some costs the traditional data center does not (such as buying land and construction).

Traditional Data Center
Although each data center is a little different, the average cost per year to operate a large data center is usually between $10 million and $25 million.
- 42 percent: hardware, software, disaster recovery arrangements, uninterrupted power supplies, and networking.
- 58 percent: heating, air conditioning, property and sales taxes, and labor costs.
The reality of the traditional data center [18] is further complicated because most of the cost goes to maintaining existing applications and infrastructure; some estimates show 80 percent of spending on maintenance. The employee-to-server ratio is about 1:100 in traditional data centers.
Table 1: Comparison [18] of Traditional and Cloud Data Centers

Traditional Corporate Data Center | Cloud Data Center
Thousands of different applications | Few applications
Mixed hardware environment | Homogeneous hardware environment
Frequent application patching and updating | Minimal application patching and updating
Complex workloads | Simple workloads
Multiple software architectures | Single standard software architecture

II. ENERGY MANAGEMENT IN CLOUD DATA CENTERS

Power is the rate at which a system performs work, while energy is the amount of work done over a period of time. This difference should be understood clearly, because a reduction in power consumption does not always reduce the energy consumed. A reduction in power consumption lowers the cost of infrastructure provisioning. Improving energy efficiency is a major issue in clouds: it has been estimated that powering and cooling account for 53% of the total operational expenditure of data centers. In 2006, data centers in the US consumed more than 1.5% of the total energy generated that year, and the percentage is projected to grow 18% annually [19]. Thus there is huge pressure on infrastructure providers to reduce energy consumption. The goal is not only to cut energy costs but also to comply with government regulations and SLA policies. Designing energy-efficient data centers has recently received considerable attention, and the problem has been approached from various perspectives:
- Energy-efficient hardware architecture
- Virtualization of computing resources
- Energy-aware job scheduling
- Dynamic Voltage and Frequency Scaling (DVFS)

- Server consolidation
- Switching off unused nodes

A key issue in all the above methods is to achieve a trade-off between application performance and energy efficiency. Some service providers' power-consumption statistics are as follows: Facebook 10.52%, Google 7.74% and YouTube 3.27% of the total power generated for their data centers. According to Gartner, cloud market opportunities in 2013 will be worth $150 billion [6]. Data centers are not only expensive to maintain but also unfriendly to the environment: high energy costs and huge carbon footprints result from the massive amounts of electricity consumed to power and cool the numerous servers hosted in these data centers. Cloud service providers need to adopt measures to ensure that their profit margin is not reduced too much by high energy costs.

Virtualization: the way to Live Migration
Virtualization technology abstracts away the details of physical hardware and provides virtualized resources to high-level applications. An essential characteristic of a virtual machine is that the software running inside it is limited to the resources and abstractions provided by the VM. The software layer that provides the virtualization is called a Virtual Machine Monitor (VMM) or hypervisor. It virtualizes all of the resources of a physical machine, thereby defining and supporting the execution of multiple virtual machines. Virtualization can provide significant benefits in cloud computing by enabling virtual machine migration to balance load across the data center. Virtual machine consolidation is an approach that minimizes the energy consumption of a virtualized environment by maximizing the number of inactive physical servers. Live migration is an essential feature of virtualization that allows the transfer of a virtual machine from one physical server to another without interrupting the services running in the virtual machine.

Advantages of live migration:
- Workload balancing
- Maximizing resource utilization
- Fault tolerance
- Online system maintenance

Due to the variability of workloads experienced by modern applications, VM placement should be optimized continuously in an online manner. The reason for high power consumption does not lie only in the quantity of computing resources or the power inefficiency of hardware, but rather in the inefficient usage of these resources. Even completely idle servers consume about 70% of their peak power, and for each watt of power consumed by computing resources, an additional 0.5-1 W is required for the cooling system.

III. RELATED WORKS ON ENERGY-AWARE PROTOCOLS


Table 2: Energy-efficiency related papers

Paper | System Resources | Virtualization | Goals | Technique | Platform
EnaCloud: An Energy-saving Application Live Placement Approach for Cloud Computing Environments [1] | Memory, storage | Yes | Minimize energy consumption; application scheduling | Energy-aware heuristic algorithm for load aggregation | Xen VMM
Energy-Aware Virtual Machine Dynamic Provision and Scheduling for Cloud Computing [7] | CPU | Yes | Power saving; scheduling; consolidation | VM consolidation | Eucalyptus
Power-aware Provisioning of Cloud Resources for Real-time Services [2] | CPU | Yes | Power saving in terms of VM provisioning | DVFS | CloudSim
Adaptive Threshold Based Approach for Energy Efficient Consolidation of Virtual Machines in Cloud Data Centers [3] | CPU | Yes | Dynamic consolidation of VMs with minimum SLA violations and number of VM migrations | Dynamic consolidation of VMs based on adaptive utilization thresholds | CloudSim
Reducing Energy Consumption by Load Aggregation with an Optimized Dynamic Live Migration of Virtual Machines [5] | Memory, storage | Yes | Minimized energy consumption; minimum running physical machines | Clustering algorithm | Not yet implemented
Energy-Efficient Virtual Machine Consolidation for Cloud Computing [6] | Storage | Yes | Energy-efficient storage migration and live migration of VMs | Distributed Replicated Block Devices for high-availability data storage in a distributed system | Eucalyptus
Sercon: Server Consolidation Algorithm using Live Migration of Virtual Machines for Green Computing [9] | CPU | Yes | Minimized energy consumption; minimum servers; minimum migrations | Sercon algorithm | (not stated)
MiyakoDori: A Memory Reusing Mechanism for Dynamic VM Consolidation [10] | Memory | Yes | Minimized energy consumption; reduced amount of transferred data in a live migration | Memory reuse | KVM
A Novel Energy Optimized and Workload Adaptive Modeling for Live Migration [11] | CPU, I/O, disk utilization, network | Yes | Minimized energy consumption; avoid unnecessary live migrations | Workload adaptive model | Xen platform

3.1 ENACLOUD: AN ENERGY-SAVING APPLICATION


EnaCloud [1], proposed by Bo Li et al., supports application scheduling and live migration to minimize the number of running machines in order to save energy. It also aims to reduce the number of virtual machine migrations.

EnaCloud Architecture
In EnaCloud, a central Global Controller runs the Concentration Manager and the Job Scheduler. The Job Scheduler receives workload arrival, departure and resizing events and delivers them to the Concentration Manager. The Concentration Manager then generates a series of insertion and migration operations for the application placement scheme, which are passed back to the Job Scheduler; it decomposes the schemes and dispatches them to the Virtual Machine Controllers. Each resource node consists of a Virtual Machine Controller, a Resource Provision Manager and a Performance Monitor. The Virtual Machine Controller invokes the hypervisor to execute commands such as VM start, stop or migrate. The Resource Provision Manager performs the VM resizing based on the performance statistics collected by the Performance Monitor.
Fig 1: EnaCloud Architecture

Here, two types of nodes are considered: computing nodes and storage nodes. Storage nodes store the data and files, while the computing nodes are considered homogeneous and each hosts one or more VMs. The application in a VM together with the underlying operating system is termed here a workload. A server node running VMs is called an open box and an idle server node is called a close box. In EnaCloud, the workloads are aggregated tightly to reduce the number of open boxes. The applications have varying resource demands, so workload resizing is included; resizing involves workload inflation and deflation. EnaCloud seeks a solution for remapping workloads to the resource nodes through migration whenever a workload arrives, departs or resizes. The migration has two main goals: first, to minimize the number of open boxes, and second, to minimize the migration times. Given a resource pool and a sequence of workloads, there are three types of events that trigger the migration of applications (a sketch of the arrival heuristic follows the list):

Workload Arrival Event - This does not simply put the incoming workload into an already existing gap; rather, it tries to displace packed workloads smaller than the newcomer with the newcomer. The smaller workloads extruded from the node are then reinserted into the resource pool in the same manner.

Workload Departure Event - It tries to create a close box by popping the workload on the node back into the resource pool if it is the only remaining workload.
Workload Resizing Event - This is equivalent to the combination of a workload departure event and a workload arrival event.
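A minimal sketch of the arrival heuristic (our simplified reading of the displacement idea; node capacity and workload sizes are abstracted to single numbers, which is an assumption):

def place_workload(nodes, capacity, w):
    """Insert workload w: first try an existing gap; failing that, displace
    smaller packed workloads and re-insert them; finally open a new box.
    nodes is a list of open boxes, each a list of workload sizes."""
    for node in nodes:                          # 1) plain insertion into a gap
        if sum(node) + w <= capacity:
            node.append(w)
            return nodes
    for node in nodes:                          # 2) displace smaller workloads
        smaller = [x for x in node if x < w]
        if smaller and sum(node) - sum(smaller) + w <= capacity:
            for x in smaller:
                node.remove(x)
            node.append(w)
            for x in sorted(smaller, reverse=True):   # re-insert evicted ones
                nodes = place_workload(nodes, capacity, x)
            return nodes
    nodes.append([w])                           # 3) otherwise open a new box
    return nodes

The recursion terminates because evicted workloads are strictly smaller than the newcomer, so each re-insertion operates on strictly decreasing sizes.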

3.2 POWER AWARE VM PROVISIONING METHODS


3.2.1 DYNAMIC ROUND ROBIN TECHNIQUE
Dynamic Round Robin [7] extends the existing Round Robin scheme to the consolidation of virtual machines. The objective of this technique is to minimize the number of physical machines used to run all the virtual machines; this objective is very important, since the number of powered-on physical machines causes a large share of the total power consumed. The technique follows two rules. The first rule states that if a virtual machine has finished running on a physical machine on which other virtual machines are still running, the machine will accept no new VMs; this state is called the retirement state. Once all its VMs finish their work, the physical machine is shut down. The second rule is that a physical machine may remain in its retirement state only for a particular period of time, called the retirement threshold. Once this time is exceeded, the remaining VMs are forced to migrate to other physical machines and the former machine is shut down or taken down for maintenance. This technique was compared with three other scheduling algorithms in Eucalyptus, namely Round Robin, Greedy and Power Save, and was found to be more energy efficient than all of them.

3.2.2 DYNAMIC VOLTAGE SCALING (DVS) ENABLED REAL TIME (RT) VM PROVISIONING
In data centers, the most power-consuming parts include processing, disk storage, the network and the cooling systems. Data centers can reduce their dynamic power consumption to increase their profit. It was found that energy consumption can be reduced by combining DVS with proportional-share scheduling: when a user launches a service on a VM, the resource provider provisions the VM using DVS [2] schemes in order to reduce the power consumption, while proportional-share scheduling is used for scheduling multiple virtual machines on a processor. The power-aware VM provisioning schemes are (a sketch of how each scheme could set the operating point follows the list):
- Lowest-DVS: The processor speed is adjusted to the lowest level at which the RT-VMs execute their services at the required MIPS rate. If the arrival rate of RT-VMs is low, this scheme consumes the least energy and can accept all the requests.
- δ-Advanced-DVS: This scheme over-scales by up to δ% of the required MIPS rate for the RT-VMs. The processor speed is thus δ% faster, to increase the possibility of accepting incoming VM requests in real time. The value of δ% is predefined according to the load on the system.
- Adaptive-DVS: Adaptive DVS tracks the average arrival rate, the service rate and the deadlines of the recent service requests. When the RT-VM arrival rate and service time are known in advance, the optimal scale can be computed analytically, which helps reduce the power consumption by adjusting the processor speed.
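A hedged sketch of how the three provisioning rules could set the DVS operating point (the function, its parameters and the prediction hook are our assumptions, not the authors' code):

def provisioned_speed(scheme, required_mips, max_mips, delta=0.10,
                      predicted_optimal_mips=None):
    """Return the processor speed (MIPS) a host would be scaled to under
    each power-aware provisioning scheme described above."""
    if scheme == "lowest":
        return required_mips                  # just meet the RT-VM demand
    if scheme == "delta-advanced":
        # over-scale by delta% to raise the chance of accepting new RT-VMs
        return min(max_mips, (1.0 + delta) * required_mips)
    if scheme == "adaptive":
        # use the analytically derived optimum from arrival/service statistics
        return min(max_mips, predicted_optimal_mips or required_mips)
    return max_mips                           # static: no DVS, full speed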
Table 3: Comparison of energy-aware DVS schemes

 | Static (no DVS) | Lowest DVS | δ-Advanced DVS | Adaptive DVS
Processor Speed | Maximum | Required MIPS | δ% increase over required | Greater than lowest, min. required
Profit | Same as Adaptive DVS | More than static at lower arrival rates | More than static at lower arrival rates | Same as static at high arrival rate
Acceptance Rate | Same as adaptive | - | More than Lowest DVS | Close to static
Energy Consumption | - | - | Same as Lowest DVS | Lesser than static at lower arrival rate
Performance | - | - | - | Best in terms of profit per consumed power; limited by the simplified queueing model

3.3 ADAPTIVE THRESHOLD-BASED APPROACH


The obligation to provide high-quality service to customers leads to the necessity of dealing with the energy-performance trade-off; thus a novel technique for dynamic VM consolidation based on adaptive utilization thresholds [3] was introduced, which ensures a high level of adherence to Service Level Agreements (SLAs). Fixed threshold values are unsuitable in environments with dynamic and unpredictable workloads: the system should automatically adjust itself to the workload pattern exhibited by the application. The utilization thresholds must therefore be auto-adjusted based on a statistical analysis of the historical data collected during the VMs' lifetimes. The CPU utilization of a host is the sum of the utilizations of the VMs allocated to that host and can be modeled by a t-distribution; the CPU-utilization data of each VM are collected separately. This, along with the inverse cumulative probability function of the t-distribution, enables setting the interval of CPU utilization. The lower threshold is the same for all hosts. The complexity of the algorithm is proportional to the number of non-over-utilized hosts plus the product of the number of over-utilized hosts and the number of VMs allocated to the over-utilized hosts.
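A minimal sketch of this idea (our simplified reading: fit a t-distribution to a VM's utilization history and tighten the host's upper threshold for volatile VMs; the threshold formula and parameters are assumptions, not the paper's exact method):

import numpy as np
from scipy import stats

def upper_utilization_threshold(cpu_history, confidence=0.95, cap=1.0):
    """Adaptive upper CPU-utilization threshold: the wider the t-distribution
    confidence interval of the observed utilization, the lower the threshold,
    leaving headroom for unpredictable workloads."""
    u = np.asarray(cpu_history, dtype=float)
    mean, sem = u.mean(), stats.sem(u)
    _, upper = stats.t.interval(confidence, len(u) - 1, loc=mean, scale=sem)
    return min(cap, 1.0 - (upper - mean))  # tighter threshold for volatile VMs

# Fabricated utilization history, for illustration only
history = [0.42, 0.55, 0.48, 0.61, 0.52, 0.57, 0.44, 0.50]
print(f"upper threshold = {upper_utilization_threshold(history):.2f}")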

3.4 A LOAD AGGREGATION METHOD


One important method for reducing the energy consumption of data centers is to aggregate the server load on a small number of computers while switching off the rest of the servers; usually this is done through virtualization. The load-aggregation method proposed by Daniel Versick et al. [5] uses ideas from the K-means partitioning clustering algorithm, which can compute results very fast. K-means chooses cluster centers within an n-dimensional space randomly and calculates the distances between cluster centers and vertices. The proposed algorithm has three phases: initialization, iteration and termination. It works as follows (a sketch appears after the list):
- First, the number of clusters is calculated based on resource needs.
- Some physical machines are defined as cluster centers; each cluster center represents one cluster. A cluster consists of a physical machine hosting various virtual machines.
- Each virtual machine is added to the cluster VM list of the nearest cluster center that can fulfill the necessary requirements. If no cluster can fulfill the requirements, the virtual machine is added to a new cluster whose center is a still-unused physical machine. Every VM is thus assigned to a cluster center.
- A new cluster center, the nearest physical machine, is calculated for every cluster.
- If the cluster centers changed during the last iteration and the maximum number of iterations has not been reached, empty clusters are reused for adding virtual machines; otherwise, the VMs of each cluster are migrated to the physical machine representing its cluster center.
- Finally, the physical machines that are not cluster centers are shut down.
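The sketch below captures the three phases under stated assumptions: distance(vm, pm) and fits(assigned, vm, pm) are hypothetical callbacks (a resource-space metric and a capacity check), k must not exceed the number of physical machines, and total capacity is assumed sufficient:

import random

def aggregate_load(pms, vms, k, distance, fits, max_iter=20):
    """K-means-style load aggregation: assign VMs to the nearest feasible
    cluster center (a physical machine), recompute centers, and repeat until
    the centers are stable or max_iter is reached.  Machines that end up
    outside the final set of centers can then be switched off."""
    centers = random.sample(pms, k)             # initialization phase
    clusters = {}
    for _ in range(max_iter):                   # iteration phase
        clusters = {pm: [] for pm in centers}
        for vm in vms:
            feasible = [pm for pm in centers if fits(clusters[pm], vm, pm)]
            if not feasible:                    # open a new cluster if needed
                spare = next(pm for pm in pms if pm not in centers)
                centers.append(spare)
                clusters[spare] = []
                feasible = [spare]
            target = min(feasible, key=lambda pm: distance(vm, pm))
            clusters[target].append(vm)
        # recompute each cluster center as the "nearest" physical machine
        new_centers = list(dict.fromkeys(
            min(pms, key=lambda pm: sum(distance(vm, pm) for vm in assigned))
            for assigned in clusters.values() if assigned))
        if set(new_centers) == set(centers):    # termination phase
            break
        centers = new_centers
    return clusters   # {cluster-center machine: VMs to migrate onto it}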

3.5 ENERGY EFFICIENT VM CONSOLIDATION IN EUCALYPTUS


This approach [6] is based on storage synchronization: explicit storage-synchronization and VM live-migration phases are introduced instead of permanent synchronization of the disk image over the network. The technique leverages the Distributed Replicated Block Device (DRBD), which is typically used for high-availability data storage in distributed systems. The DRBD module works in two modes: stand-alone and synchronized. In stand-alone mode, all disk accesses are passed to the underlying disk driver. In synchronized mode, disk writes are both passed to the underlying disk driver and sent to a backup machine through the network, while disk reads are served locally. A multi-layered root file system (MLRFS) is used for the virtual machine's root image.

The base image is cached on a local disk. A separate layer stores the local modifications, which are transparently overlaid on the base image to form a single logical file system using the Copy-On-Write (COW) mechanism. Thus only the local modifications are sent during the disk-synchronization phase.

3.6 SERCON: SERVER CONSOLIDATION ALGORITHM


This server consolidation algorithm, proposed by Murtazaev et al. [9], aims to consolidate the virtual machines (VMs) so that a minimum number of nodes is used, with the secondary objective of reducing the number of VM migrations. Sercon assumes certain constraints:
- compatible virtualization software on the systems;
- comparable types of CPUs;
- similar network connectivity and shared storage;
- a well-chosen CPU threshold to prevent performance degradation.
A VM is migrated only if doing so results in releasing a node. The procedure is as follows. The nodes are sorted in decreasing order of VM load. The VMs on the least-loaded node are chosen as candidates for migration and are themselves sorted by weight. They are allocated to the most-loaded nodes first, compacting the load so that the least-loaded nodes can be released; this avoids many migrations that would otherwise be necessary while lightly loaded nodes remain. These steps are repeated until no more migrations are possible. CPU and memory are considered when representing the load of a VM.
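A minimal sketch of one Sercon-style pass, with VM loads reduced to a single normalized CPU dimension; the threshold value and the data shapes are illustrative assumptions.

    def sercon_step(nodes, threshold=0.9):
        """One consolidation pass in the spirit of Sercon (single CPU dimension).

        nodes: {node_name: [vm_loads in [0, 1]]}. Tries to release the least-loaded
        node by moving its VMs (heaviest first) onto the most-loaded nodes while
        staying under the CPU threshold. Returns True if the node was released.
        """
        load = {n: sum(v) for n, v in nodes.items()}
        order = sorted(nodes, key=load.get, reverse=True)   # most-loaded first
        donor = order[-1]                                   # least-loaded node
        plan = []
        for vm in sorted(nodes[donor], reverse=True):       # heaviest VMs first
            for target in order[:-1]:
                if load[target] + vm <= threshold:
                    plan.append((vm, target))
                    load[target] += vm
                    break
            else:
                return False    # some VM fits nowhere: migrate nothing, no release
        for vm, target in plan:                             # commit the migrations
            nodes[target].append(vm)
            nodes[donor].remove(vm)
        return True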

3.7 MIYAKODORI: MEMORY REUSING MECHANISM


MiyakoDori [10] proposes a memory-reuse mechanism to reduce the amount of data transferred during live migration. Under dynamic VM consolidation, a VM may migrate back to a host on which it was once executed; the memory image left on that host can then be reused, yielding a shorter migration time and giving VM placement algorithms more room for optimization. This technique reduces the total migration time even further, since only the pages dirtied since the VM left need to be transferred to the former host.
Figure 2. Basic idea of memory reusing: (1) the VM migrates out of Host A to Host B; (2) it migrates back to Host A (memory reusing), copying only the updated pages.
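The core of the memory-reuse idea can be sketched as a dirty-page diff against the cached image; the page-level dictionaries below are an illustrative abstraction, not the real hypervisor mechanism.

    def pages_to_transfer(current_memory, cached_image):
        """When a VM migrates back to a host that kept its old memory image,
        only the pages that changed since it left (dirty pages) are sent.

        current_memory, cached_image: {page_number: page_contents} (illustrative).
        """
        return {p: data for p, data in current_memory.items()
                if cached_image.get(p) != data}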

3.8 ENERGY OPTIMIZED AND WORKLOAD ADAPTIVE MODELING


By taking workload characteristics into account, workload-adaptive modeling avoids a considerable number of unnecessary live migrations, achieves stable live migration under various workloads, and thus reduces energy usage. To handle changing workloads while minimizing energy cost, two models are established: an energy-guided migration model and a workload-adaptive model.

In the energy-guided migration model [11], the VM with the smallest cost is selected; it should satisfy the minimal energy consumption while maintaining certain performance levels. Minimizing the migration cost means selecting the VM with the smaller memory footprint and the larger CPU reservation. This model solves the problem of selecting one VM to migrate at minimum energy cost: when the workload on a physical node becomes heavy, the node evaluates the energy-cost function and decides on the optimal VM. If two workloads could be migrated, it is preferable to migrate a single VM with a heavy workload; suspending one application is always considered better than suspending two. In the workload-adaptive model [11], three correlations are investigated:
- Workload and performance: using a subtractive clustering algorithm, the workloads executed in parallel by the VMs of each physical server are characterized; given the current workload, performance can be estimated from the performance of the cluster containing that workload.
- Workload and migration moment: after categorizing the current workload into a cluster, the center and performance of the cluster are recalculated; if a performance threshold is exceeded, live migration is requested.
- Workload and energy consumption: the cluster with minimal energy per workload is selected as the optimal cluster, and the physical server whose workload is nearest to the optimal cluster is selected as the best candidate.
This method minimizes VM migration oscillation and selects the physical server that optimizes energy efficiency.
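A minimal sketch of the energy-guided selection rule quoted above (smaller memory, larger CPU reservation); the cost form memory/CPU is an assumption for illustration, as [11] defines its own cost function.

    def select_vm_to_migrate(vms):
        """vms: list of (name, memory_mb, cpu_reservation) tuples (illustrative).

        Prefers the VM with smaller memory and larger CPU reservation by
        minimizing the assumed cost memory / cpu_reservation.
        """
        return min(vms, key=lambda v: v[1] / v[2])

    # Example: the 512 MB VM with the larger CPU reservation is picked.
    print(select_vm_to_migrate([("vm1", 2048, 1.0), ("vm2", 512, 2.0)]))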

3.9 ENERGY AWARE RESOURCE ALLOCATION HEURISTICS


Currently, resource allocation in a cloud data center aims to provide high performance while meeting SLAs; it does not consider allocating VMs so as to minimize energy consumption. To explore the performance-energy trade-off, three main issues must be dealt with: excessive power consumption by a server compromises its reliability; turning resources off in a dynamic environment is risky from the QoS perspective; and ensuring SLAs is quite challenging in such a virtualized environment. Addressing these issues requires effective consolidation policies that are both energy efficient and maintain strict SLAs. The problem of VM allocation thus has two parts: first, provisioning VMs and placing them on hosts; second, optimizing the current VM allocation. The optimization step again has two parts: selecting the VMs to be migrated and deciding where the chosen VMs should be placed.
Non Power Aware (NPA) [4] policy: this policy applies no power-aware optimizations; all host machines may run at 100% CPU utilization and consume maximum power.
Single Threshold (ST) [4] policy: only an upper utilization threshold is set, and VMs are placed so that the total CPU utilization stays below this threshold. The main aim is to preserve free resources and prevent SLA violations caused by consolidation.
Other policies use both upper and lower utilization thresholds. They include:
Minimization of Migrations (MM) [12] policy: selects the minimum number of VMs to migrate from a host in order to keep its CPU utilization below the predefined upper threshold. The best VM to migrate satisfies two conditions: first, its utilization is higher than the difference between the host's overall utilization and the upper threshold; second, migrating it leaves the difference between the upper threshold and the new host utilization minimal across all the host's VMs. The algorithm stops when the host's new utilization is below the upper threshold.
Highest Potential Growth (HPG) [12] policy: whenever the upper threshold is violated, HPG migrates the VMs having the lowest CPU usage relative to their CPU capacity, in order to minimize the potential increase of the host's utilization and thus prevent an SLA violation.
Random Choice (RC) [12] policy: relies on random selection of the number of VMs needed to decrease the host's CPU utilization below the upper threshold.
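A sketch of the MM selection loop described above; vm_utils, host_util and upper_threshold are illustrative parameters, and the fallback when no single VM covers the overload is an assumption.

    def mm_select(vm_utils, host_util, upper_threshold):
        """Return the VM utilizations chosen for migration (MM policy sketch)."""
        vms = sorted(vm_utils)              # ascending CPU utilizations
        selected = []
        while host_util > upper_threshold and vms:
            overload = host_util - upper_threshold
            # condition 1: candidates whose utilization exceeds the overload;
            # condition 2: among them, the one leaving utilization closest to
            # (but below) the threshold, i.e. the smallest such candidate
            fitting = [u for u in vms if u > overload]
            choice = min(fitting) if fitting else max(vms)   # fallback: biggest VM
            vms.remove(choice)
            selected.append(choice)
            host_util -= choice
        return selected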

IV. CONCLUSION AND FUTURE WORK

Energy management in data centers is one of the most challenging issues faced by infrastructure providers. Because data volumes and processing demands grow daily, power consumption cannot be controlled as tightly as desired without primarily affecting performance. Some of the existing techniques for energy-efficient VM live migration were surveyed. All of them focus mainly on reducing energy consumption to a minimum, through means including application scheduling, DVFS, adaptive utilization thresholds and storage synchronization; workload-adaptive models, memory-reuse techniques and various resource allocation techniques were also discussed. Overcoming every barrier to energy efficiency is not possible, as each technique throws light on different parameters and carries certain disadvantages of its own. Techniques can be proposed to reduce energy consumption and to overcome the energy-performance trade-off; various scheduling and consolidation techniques can be applied to reduce CPU utilization and bring about drastic improvements in energy efficiency.

REFERENCES
[1] Bo Li, Jianxin Li, Jinpeng Huai, Tianyu Wo, Qin Li, Liang Zhong, EnaCloud: An Energy-saving Application Live Placement Approach for Cloud Computing Environments, International Conference on Cloud Computing, IEEE, 2009.
[2] Anton Beloglazov et al., Power-aware Provisioning of Cloud Resources for Real-time Services, Proceedings of the 7th International Workshop on Middleware for Grids, Clouds and e-Science, ACM, 2009.
[3] Anton Beloglazov, R. Buyya, Adaptive Threshold-Based Approach for Energy-Efficient Consolidation of Virtual Machines in Cloud Data Centers, Proceedings of the 8th International Workshop on Middleware for Grids, Clouds and e-Science, ACM, 2010.
[4] R. Buyya, A. Beloglazov, Jemal Abawajy, Energy-Efficient Management of Data Center Resources for Cloud Computing: A Vision, Architectural Elements, and Open Challenges, Proceedings of the 2010 International Conference on Parallel and Distributed Processing Techniques and Applications, 2010.
[5] Daniel Versick, Djamshid Tavangarian, Reducing Energy Consumption by Load Aggregation with an Optimized Dynamic Live Migration of Virtual Machines, International Conference on P2P, Parallel, Grid, Cloud and Internet Computing, IEEE, 2010.
[6] Pablo Graubner, Matthias Schmidt, Bernd Freisleben, Energy-efficient Management of Virtual Machines in Eucalyptus, 4th International Conference on Cloud Computing, IEEE, 2011.
[7] Ching-Chi Lin, Pangfeng Liu, Jan-Jan Wu, Energy-Aware Virtual Machine Dynamic Provision and Scheduling for Cloud Computing, 4th International Conference on Cloud Computing, IEEE, 2011.
[8] Haikun Liu, Cheng-Zhong Xu, Jiayu Gong, Xiaofei Liao, Performance and Energy Modeling for Live Migration of Virtual Machines, ACM, 2011.
[9] Aziz Murtazaev, Sangyoon Oh, Sercon: Server Consolidation Algorithm using Live Migration of Virtual Machines for Green Computing, IETE Technical Review, Vol. 28, Issue 3, 2011.
[10] Soramichi Akiyama, Takahiro Hirofuchi, Ryousei Takano, Shinichi Honiden, MiyakoDori: A Memory Reusing Mechanism for Dynamic VM Consolidation, Fifth International Conference on Cloud Computing, IEEE, 2012.
[11] Bing Wei, A Novel Energy Optimized and Workload Adaptive Modeling for Live Migration, International Journal of Machine Learning and Computing, Vol. 2, No. 2, April 2012.
[12] Anton Beloglazov, Jemal Abawajy, Rajkumar Buyya, Energy-aware Resource Allocation Heuristics for Efficient Management of Data Centers for Cloud Computing, Future Generation Computer Systems, Volume 28, Issue 5, 2012.
[13] Jing Xu et al., Multi-objective Virtual Machine Placement in Virtualized Data Center Environments, IEEE/ACM International Conference on Green Computing and Communications, 2010.
[14] Anton Beloglazov et al., Optimal Online Deterministic Algorithms and Adaptive Heuristics for Energy and Performance Efficient Dynamic Consolidation of Virtual Machines in Cloud Data Centers.
[15] Hady S. Abdelsalam et al., Analysis of Energy Efficiency in Clouds, In Proc. of the Computation World: Future Computing, Service Computation, Cognitive, Adaptive, Content, Patterns, 2009, pp. 416-421.
[16] H. N. Van, F. D. Tran, J. M. Menaud, Performance and Power Management for Cloud Infrastructures, 3rd International Conference on Cloud Computing, IEEE Press, Jul. 2010.

[17] Daniel Gmach, Jerry Rolia, Ludmila Cherkasova, Guillaume Belrose, Tom Turicchi, Alfons Kemper, An Integrated Approach to Resource Pool Management: Policies, Efficiency and Quality Metrics, 38th Annual IEEE/IFIP International Conference on Dependable Systems and Networks (DSN 2008), June 24-27, 2008.
[18] Christopher Clark et al., Live Migration of Virtual Machines, Proceedings of the 2nd Conference on Symposium on Networked Systems Design & Implementation, Volume 2, pp. 273-286, 2005.
[19] Cloud Computing for Dummies, Wiley Publishing, Inc.
[20] Qi Zhang et al., Cloud Computing: State-of-the-art and Research Challenges, Springer, 2010.
[21] Anton Beloglazov, Energy Efficient Resource Management in Virtualized Cloud Data Centers, In Proc. of the 10th IEEE/ACM International Conference on Cluster, Cloud and Grid Computing, IEEE Press, 2010, pp. 826-831.
[22] Virtualization Techniques & Technologies: State-of-the-art, Journal of Global Research in Computer Science, Dec 2011.

AUTHORS
Jyothi Sekhar received her B.Tech in Information Technology from the Institute of Engineering and Technology, Kerala, in 2010 and is now in the final year of her M.Tech in Network and Internet Engineering at Karunya University, Coimbatore. Her main areas of interest include energy efficiency in cloud data centers and virtual machine live migration.

Getzi Jeba is currently working as an Assistant Professor at Karunya University, Coimbatore. She completed her M.Tech in Network and Internet Engineering at the same university in 2009. Her areas of interest are routing in computer networks and cloud computing, and she is pursuing her PhD in energy-efficient live migration techniques. She is a member of the Computer Society of India.

S Durga is currently working as an Assistant Professor in the Information Technology department of Karunya University. Her areas of interest include internetworking, security and pervasive computing. She is currently pursuing her PhD in mobile cloud computing.


DESIGN AN ENERGY EFFICIENT DSDV ROUTING PROTOCOL FOR MOBILE AD HOC NETWORK
Dheeraj Kumar Anand, Shiva Prakash
Department of Computer Science & Engineering, Madan Mohan Malaviya Engineering College, Gorakhpur, India

ABSTRACT
Nodes in an ad hoc network rely on batteries or other exhaustible energy sources for power and drop out of the network when these sources are depleted, since energy sources have a limited lifetime. Power availability is therefore one of the most important constraints on the operation of an ad hoc network. In this paper, we present a performance evaluation and verification of the energy-efficient DSDV (EEDSDV) [1] routing protocol for MANETs and simulate the results with the help of performance metrics such as end-to-end delay and packet delivery ratio. EEDSDV uses a variant transmission energy approach to overcome the problem of the higher energy consumption of DSDV when transmitting and receiving messages. Our simulation results show that EEDSDV consumes less energy than the DSDV routing protocol.

KEYWORDS: Ad hoc Network, DSDV, EEDSDV Routing, Transmission Energy.

I. INTRODUCTION

Wireless networks can be broadly classified into two parts: wireless mobile networks and wireless ad hoc networks. An ad hoc network is a self-configuring, infrastructure-less network of mobile devices connected wirelessly; it offers advantages in various applications, such as disaster recovery, battlefield communication and law-enforcement operations, which demand setting up a network in no time. Mobile ad hoc networks can operate in a standalone fashion or can be connected to a larger network such as the Internet [2]. Source and destination nodes communicate with each other over multiple hops when no other networking infrastructure is available. For routing data packets in a MANET there are proactive routing protocols, e.g. DSDV [3] and FSR [4]; reactive protocols, e.g. DSR [5] and TORA; and hybrid protocols, e.g. ZRP [6]. There are also various energy-constrained protocols, such as Flow Augmentation Routing (FAR) [7], Online Max-Min Routing (OMM) [8] and Power-aware Localized Routing (PLR) [9], and survey papers such as [10, 11], based on the transmission-power-control approach, whose main aim is to minimize the total transmission energy while avoiding low-energy nodes. In this paper, continuing the work on our proposed energy-efficient DSDV routing protocol (EEDSDV) [1], we implement and simulate EEDSDV with the help of NS2 and verify the results. DSDV is modified into the energy-efficient routing protocol EEDSDV by controlling the transmission energy of each node, providing energy-efficient routing that consumes less energy. We consider the DSDV protocol developed by C. Perkins and P. Bhagwat in 1994 [3]: a table-driven routing scheme for ad hoc mobile networks based on the Bellman-Ford algorithm, which helps solve the routing-loop problem. Each routing table entry contains a sequence number; if a link is present, the

sequence numbers are generally even; otherwise, an odd number is used. The number is generated by the destination, and the intermediate hops send out the next update with this number. Routing information is distributed between nodes by sending full dumps infrequently and smaller incremental updates more frequently. On receiving these update messages, the neighboring nodes check the sequence number: if the new sequence number is greater than or equal to the old one and the hop count is smaller, the routing table is updated; otherwise, the update message is ignored. The rest of the paper is organized as follows: Section 2 presents a literature survey, Section 3 describes our proposed work, Section 4 presents the simulation results, Section 5 gives the conclusion, and Section 6 presents the future scope.

II. LITERATURE REVIEW

The DSDV protocol, being table-driven, is not much concerned with energy consumption. A great deal of research has been done on improving both proactive and reactive protocols, but comparatively little has focused on modifying DSDV. In [12] the authors propose a novel routing tree, motivated by the fact that quality-of-service routing is challenged under normal routing strategies by link breakage, congestion and energy consumption. The proposed Energy-Efficient Traffic-Aware Detour Tree is constructed completely bottom-up, taking into account the traffic pattern and residual energy, and shows higher throughput than other detour trees, leading to better routing performance. A mechanism combining both top-down and bottom-up tree construction while considering the traffic pattern and residual energy of the network could make routing faster and keep the topology more stable, leading to still better performance. In [13], Eff-DSDV overcomes the problem of stale routes in larger networks with many nodes; the authors devise an algorithm whose performance is shown to be superior to original DSDV under the chosen evaluation parameters. The protocol is still challenged by communication-time failures and security issues: receiving a packet late is worse than not receiving it at all, and late packets lead to higher energy consumption and a larger number of control packets, causing congestion and degrading quality of service. In [14] the authors focus on conserving energy and reducing packet storming within the routing protocol of an ad hoc network by using sleep and wake-up messages for the sending and receiving sleep modes. Many improved algorithms have been suggested because of limitations of DSDV such as:
1. It does not support multipath routing.
2. It wastes bandwidth.
3. It is difficult to maintain the routing table advertisements for a larger network: every host in the network must maintain and advertise a routing table, which for a large network leads to overhead and consumes more bandwidth.
4. It is difficult to determine a suitable time delay for the advertisement of routes.
From the literature survey on the energy consumption of nodes in a network, we find that the DSDV protocol has the following problems:
- The large number of route replies in DSDV, because a route reply is sent through all available routes, leads to unnecessary congestion and waste of energy (battery power).
- Each node uses constant power to forward or transmit packets: irrespective of the distance between adjacent nodes, each node transmits with a constant power, which consumes more battery power.

III. PROPOSED WORK

We consider the energy constraint in MANETs. Using the variant transmission energy approach, we implement efficient energy management in DSDV and modify DSDV into EEDSDV. From the

baseline evaluation of most MANET protocols we conclude that DSDV has the following problems: the large number of route replies (a route reply is sent through all available routes) leads to unnecessary congestion and waste of energy (battery power); and each node transmits with a constant power irrespective of the distance between adjacent nodes, which consumes more battery power.

3.1. Energy Efficient DSDV (EEDSDV) Routing

In wireless network architectures, energy saving can be done at different layers [15], and different methods are used to save power at each layer. To deal with the problems stated above, we propose the Energy Efficient DSDV (EEDSDV) routing protocol. At the network layer, our protocol uses unicasting instead of multicasting or broadcasting for route replies, and data transmission from source to destination is performed with per-node transmission energy that is tuned according to distance. In EEDSDV, before the actual route request, all available routes are provided by the network nodes by taking the route labeled with the most recent sequence number. The Route Request is sent to the destination via the multihop network with the specified network size and number of nodes, whereas the Route Reply is sent via the route through which the destination received the first Route Request, because that is the most active route for the particular source-destination pair at the moment the request is sent. At the time of the Route Reply, EEDSDV applies one more strategy: the calculation of the distance between two nodes on the selected path by means of the time taken by the Route Reply message, known as the Trip Time (TT). The calculation is performed during the route reply: the destination node inserts the current time in the route reply message header and transmits it to the immediately previous node on the selected path. That node takes the header time, subtracts it from its own (necessarily later) current time, and stores the result in its cache. It then inserts its own current time and transmits the message to the next node back. Following the same procedure, the time is calculated along the selected path from destination to source, so the per-hop time is available at every node on the path. The distance can then be estimated as follows. The return time measured by sending a packet from the local node to the remote node is known as the Round Trip Time (RTT), and for a wireless LAN it is estimated, as also presented in [16], by
RTT = 2 x distance / c, where c is approximately 3 x 10^8 m/s (the speed of light)
and the distance is the length from the local node to the remote node. From this equation, the one-way distance is
Distance (d) = TT x c, where c is approximately 3 x 10^8 m/s.
Here d is the distance between neighboring nodes on the selected path from source to destination. In this way, every link distance is available at each node on the selected path.
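A small sketch of the per-hop distance estimate described above; the timestamp names are illustrative, and processing delay at the nodes is ignored, as in the formula itself.

    C = 3e8  # propagation speed c (speed of light), in m/s

    def hop_distance(header_time, arrival_time):
        """Estimate the length of one hop from the route-reply trip time.

        header_time: time the downstream node wrote into the route-reply header
        arrival_time: local time when the route reply arrived (illustrative names)
        """
        trip_time = arrival_time - header_time   # TT, in seconds
        return trip_time * C                     # Distance d = TT * c, in metres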
After the distances have been calculated at each node, the appropriate data transmission energy is calculated at each node, as discussed in Section 3.2. When a link failure happens, due to mobility or because a node has died from low energy, a single-hop route request and acknowledgement protocol creates a temporary link through a neighbor that has a valid route to the desired destination. The temporary link is created by sending one-hop RREQ and ROUTE-ACK messages: the intermediate node, upon finding the next-hop link broken, broadcasts a one-hop RREQ packet to all its neighbors, and a neighboring node returns a ROUTE-ACK if it has a valid route to the destination. Each routing table entry additionally records a route update time; this update time is embedded in the ROUTE-ACK packet and is used in selecting a temporary route. If multiple ROUTE-ACKs arrive with the same minimum number of hops, the ad hoc host chooses the route with the latest update time.

3.2. Energy Consumption Reduction Mechanism in DSDV
The mechanism determines the appropriate data transmission energy at each node on a path, through which the communication data can suitably be transmitted. Before estimating the transmission energy, the distances between each hop from source to destination must be determined, as discussed in Section 3.1; from these distances, the energy required to transmit a data packet is calculated and tuned according to the distance. Before the Route Request starts, a specified threshold energy is assumed: the route request considers only those nodes whose energy is greater than this threshold, and we assume each node has a large enough energy source to transmit the data packet. The EEDSDV routing strategy determines the variant transmission energy once the distance between two nodes has been determined and stored in the corresponding node's cache. The Variant Transmission Energy (VTE) with which a node can suitably transmit over each hop is determined by the formula
VTE = a x d^4 + c
where d is the distance between two adjacent nodes and a and c are constants with c = 30e-13 mW and a = 6.35e-9 mW. Here a = Pr x k, where Pr is the minimum signal received energy (channel sensing energy), equal to 7.94e-10 mW, and k = 8. Transmission energy thus grows steeply with distance (as d^4); each packet is transmitted with exactly the energy needed to reach the next hop, tuned to the distance between the two nodes. In this way, data transmission from source to destination is performed with per-node transmission energy tuned according to distance.

3.2.1. Estimation of energy consumption in DSDV

A mobile node consumes battery energy not only when it actively sends or receives packets but also when it stays idle, listening to the wireless medium for possible communication requests from other nodes. Energy-based routing protocols therefore target either the active communication energy required to transmit and receive data packets or the energy consumed during inactive periods. Here we consider the active communication energy of the DSDV protocol, i.e. the total transmission energy required to deliver data packets from source to destination. The DSDV routing protocol is based on uncontrolled transmission energy, i.e. Constant Transmission Energy (CTE): a minimum constant transmission energy is used by all nodes of the network to transmit and receive packets. Let the CTE at each node be 3.97e-6 mW.

Figure 1. Energy Consumption in DSDV

Let us take a topology of six nodes A, B, C, D, E and F, as shown in Figure 1, where data transmission follows the route A-B-E-F. The total Constant Transmission Energy (CTE) required to transmit a data packet from source to destination equals the energy consumed by the nodes involved in the communication from source to destination, i.e. the total CTE consumed during data transmission = 1.19e-5 mW.

3.2.2. Estimation of energy consumption in EEDSDV

If the transmission energy is controllable (i.e. Variant Transmission Energy), it is more energy-efficient to transmit packets via intermediate nodes [1]. In our proposed protocol, the variant transmission energy (VTE) required for communication between two nodes has a superlinear dependence on the distance d, growing as d^4.

Figure 2. Distance information collected during Route Reply

According to the proposed mechanism, the distance d is calculated at each node, as shown in Figure 2. The Variant Transmission Energy (VTE) between nodes A and B, whose distance is 4.5 m, is calculated at node A as:
VTE_AB = 6.35e-9 mW x (4.5)^4 + 30e-13 mW = 2.6e-6 mW
The VTE between nodes B and E, whose distance is 5.0 m, is:
VTE_BE = 6.35e-9 mW x (5)^4 + 30e-13 mW = 3.97e-6 mW
Similarly,
VTE_EF = 6.35e-9 mW x (3)^4 + 30e-13 mW = 5.14e-7 mW

Figure 3. Data transfer in EEDSDV

The total energy consumption during data transmission from A to F in EEDSDV, as shown in Figure 3, is 1.175e-5 mW. The percentage decrease in overall energy consumption between DSDV and EEDSDV is therefore:
(total energy consumption in DSDV - total energy consumption in EEDSDV) x 100 / total energy consumption in DSDV
= (1.19e-5 - 1.17e-5) x 100 / 1.19e-5
= 1.68 %
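The worked per-hop values above can be reproduced directly from the VTE formula; this small sketch only restates VTE = a x d^4 + c with the paper's constants.

    A = 6.35e-9       # mW; a = Pr * k with Pr = 7.94e-10 mW and k = 8
    C_CONST = 30e-13  # mW

    def variant_transmission_energy(d):
        """VTE = a * d**4 + c, with the hop length d in metres."""
        return A * d ** 4 + C_CONST

    # Reproduces the worked hop energies: ~2.60e-6 mW (4.5 m),
    # ~3.97e-6 mW (5 m), ~5.14e-7 mW (3 m)
    for d in (4.5, 5.0, 3.0):
        print(d, variant_transmission_energy(d))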

IV. SIMULATION RESULTS AND ANALYSIS

To simulate our proposed routing protocol we used NS2 [17, 18], version 2.31. The primary reason for choosing NS2 is its free availability for research and its support for multi-hop wireless environments; a second reason is its use as the simulation environment in most studies in the literature. The simulation parameters are stated in Table 1. We consider the performance metrics packet delivery ratio and end-to-end delay, and finally compare the energy consumption of DSDV with that of EEDSDV.
Table 1. Simulation parameters
Channel type: Wireless channel
Area dimension (meters): 400*400
Antenna type: Omni antenna
No. of nodes: 15
Minimum transmission energy: 3.97e-6 mW
Minimum signal received energy: 7.94e-10 mW
Data rate: 30 kbps
Periodic update: 10-12 sec

4.1. End-to-End Delay

The end-to-end delay is the delay experienced by a packet from the time it is sent by a source until the time it is received at the destination. It includes all possible delays caused by buffering at the MAC layer, queuing delays, and the propagation and transfer times of data packets. The average end-to-end delay of a routing protocol is the average overall delay for a packet to traverse from a source node to a destination node, calculated as:
Avg. End-to-End Delay = Sum(Tt) / P
where Tt = (Td - Ts), Td = time when the packet is received at the destination, Ts = time when the packet is created by the source, and P = total number of packets.
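A direct transcription of the formula into code; the per-packet (Ts, Td) pairs are an illustrative input format.

    def average_end_to_end_delay(packets):
        """packets: iterable of (ts, td) pairs, i.e. creation time at the source
        and arrival time at the destination for each delivered packet."""
        delays = [td - ts for ts, td in packets]
        return sum(delays) / len(delays)   # Sum(Tt) / P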

Figure 4. End to End Delay Vs Number of Nodes.

Figure 4 shows the end-to-end delay of the two protocols as a function of the number of nodes; the x-axis shows the number of nodes used for simulation and the y-axis the end-to-end delay. EEDSDV performs better than DSDV for varying numbers of nodes, especially between 3 and 9 nodes; for 3 nodes the end-to-end delay of both protocols is the same, and for 5 nodes there is only a slight difference. Beyond that, EEDSDV performs better than DSDV because, with the temporary routes used in EEDSDV, packet latency is naturally bound to decrease.

4.2. Packet Delivery Ratio

The packet delivery ratio is the percentage ratio between the number of packets received at the sinks (destinations) and the number of packets sent by the sources:
PDR = [Sum_i (no. of packets received at sink i) / Sum_i (no. of packets sent by source i)] x 100
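Likewise for the packet delivery ratio; per-flow counts are an illustrative input format.

    def packet_delivery_ratio(received, sent):
        """received[i], sent[i]: packets received at sink i and sent by source i."""
        return 100.0 * sum(received) / sum(sent)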

Figure 5. Packet Delivery Ratio Vs Number of Nodes.

Figure 5 shows the packet delivery ratio of the two protocols as a function of the number of nodes; the x-axis shows the number of nodes used for simulation and the y-axis the packet delivery ratio. It can be seen from the graph that EEDSDV performs better than regular DSDV. For 15 nodes, the packet delivery ratio of DSDV is up to almost 70%, decreasing between 8 and 15 nodes, whereas for EEDSDV the packet delivery ratio at 15 nodes is more than 75% in the same scenario.

4.2.1. Energy Consumption in DSDV and EEDSDV
Figure 6. Energy consumption vs. distance for DSDV and EEDSDV (x-axis: distance in meters; y-axis: energy in 10^-10 mW)

The comparison of DSDV and our proposed routing protocol EEDSDV is shown in Figure 6, with distance (in meters) on the x-axis and energy consumption on the y-axis. In the baseline evaluation of the regular DSDV protocol, Figure 6 shows the same level of energy consumption at every distance: the consumed energy is unaffected by distance because each node in the network transmits with constant power. According to the DSDV draft, each node uses 3.97e-6 mW of energy to forward or transmit a packet; irrespective of the distance between adjacent nodes, each node transmits with constant power, so for distances of 2, 3, 4.5 and 5 meters the energy consumption is the same, 3.97e-6 mW.
Figure 7. Energy consumption vs. distance for DSDV and EEDSDV (x-axis: distance in meters; y-axis: energy in 10^-10 mW)

For our proposed routing protocol EEDSDV, by contrast, the energy used for data transmission is controlled (VTE) according to distance: the transmission energy is tuned to the distance between the transmitting and receiving nodes, so different distances consume different amounts of energy. For distances of 4.5, 5, 3 and 2 meters, the energy consumption is 2.603e-6 mW, 3.97e-6 mW, 5.143e-7 mW and 3.21e-8 mW respectively; energy consumption thus varies with distance. The same behavior can be seen in Figure 7, where DSDV shows a constant level of energy consumption across distances whereas the energy consumption of EEDSDV varies with distance.
Figure 8. Efficiency vs. packet size (bytes) for DSDV and EEDSDV

Figure 8 shows the efficiency of DSDV and EEDSDV for different packet sizes. For a 64-byte packet, the efficiency of DSDV is 2 whereas that of EEDSDV is considerably higher at 33. For sizes greater than 1024 B there is no considerable difference between the two protocols, although EEDSDV operates slightly better. In this section we discussed the metrics used for our proposed EEDSDV approach, which is based mainly on the avoidance of unwanted route replies and the variant transmission energy approach, leading to decreased end-to-end delay. The number of control packets generated in the network is also reduced and the packet delivery ratio increases. Our modifications improve data transmission, and the average percentage of energy saved per node is found to be better than with the original consumption.

V. CONCLUSION

In this paper, we discussed and verified an energy-efficient DSDV routing protocol with the help of performance metrics. DSDV uses constant transmission power to transmit and receive messages, whereas our proposed routing protocol uses variant transmission power. Our results show that there is less energy consumption when the EEDSDV routing protocol is used, as compared with the DSDV routing protocol. Based mainly on the variant transmission energy approach and the avoidance of unwanted route replies, end-to-end delay decreases, the packet delivery ratio increases, and the number of control packets generated in the network is reduced. The average percentage of energy saved is found to be better than the original consumption.

VI. FUTURE WORK

Providing a Quality of Service guarantee is one obvious offshoot of our proposed algorithm. Scalability could be an issue, as discussed earlier, due to the growth of the history table and of the routing updates as the number of nodes in the network increases; this is another area that can be worked on. In implementing the routing protocol, we fixed the periodic update intervals at 10 and 12 seconds for the constant and variant energy control approaches. It would be interesting to see at which periodic intervals the energy-aware routing algorithm gives the best performance results.

REFERENCES
[1] Dheeraj Anand, Shiva Prakash, Energy Efficient DSDV (EEDSDV) Routing Protocol for Mobile Ad hoc Wireless Network, International Journal of Advanced Research in Computer Science and Electronics Engineering, ISSN: 2277-9043, Volume 1, Issue 3, May 2012.
[2] Rodoplu V., Meng T. H., Minimum Energy Mobile Wireless Networks, IEEE Journal on Selected Areas in Communications, Vol. 17, 1999.
[3] C. E. Perkins and P. Bhagwat, Highly Dynamic Destination-Sequenced Distance-Vector Routing (DSDV) for Mobile Computers, in Proceedings of the ACM Conference on Communications Architectures, Protocols and Applications (SIGCOMM), pp. 234-244, 1994.
[4] Pei G., Gerla M., Chen T., Fisheye State Routing: A Routing Scheme for Ad Hoc Wireless Networks, Proceedings of the IEEE International Conference on Communications, 2000.
[5] D. Johnson, The Dynamic Source Routing Protocol (DSR), RFC 4728, Feb 2007.
[6] S. Giuannoulis, C. Antonopoulos, E. Topalis, S. Koubias, ZRP versus DSR and TORA: A Comprehensive Survey on ZRP Performance, Springer Netherlands, ISSN 1876-1100, Volume 33, Online ISBN 978-1-4020-9532-0, 2009.
[7] Chang J.-H., Tassiulas L., Energy Conserving Routing in Wireless Ad Hoc Networks, Proceedings of the Int'l Conf. on Computer Communications (IEEE INFOCOM 2000), 2000, pp. 22-23.
[8] Li Q., Aslam J., Rus D., Online Power Aware Routing in Wireless Ad Hoc Networks, Proceedings of the Int'l Conf. on Mobile Computing and Networking (MobiCom '98), 1998.
[9] Stojmenovic I., Lin X., Power Aware Localized Routing in Wireless Ad Hoc Networks, IEEE Trans. Parallel and Distributed Systems, 2001, 12(11), pp. 1122-1133.
[10] Tanu Preet Singh, Shivani Dua, Vikrant Das, Energy-Efficient Routing Protocols in Mobile Ad-Hoc Networks, Volume 2, Issue 1, January 2012, ISSN: 2277-128X, Department of CSE, ACET.
[11] V. Kanakaris, D. Ndzi and D. Azzi, Ad-hoc Networks Energy Consumption: A Review of the Ad-Hoc Routing Protocols, Journal of Engineering Science and Technology Review 3, pp. 162-167, 2010.
[12] Lei Zhang, Deying Li and Alvin Lim, Energy-Efficient Traffic-Aware Detour Trees for Geographical Routing, International Journal of Computer Networks & Communications (IJCNC), Vol. 2, No. 1, January 2010.

[13] Khalil Ur Rahman Khan, K. A. Reddy, T. S. Harsha, R. U. Zaman, An Efficient DSDV Routing Protocol for MANET and its Usefulness for Providing Internet Access to Ad Hoc Hosts, Proceedings of the Int'l Conf. of IEEE, ISBN: 978-0-7695-3325-4, 2008.
[14] Nayan Ranjan Paul, Laxminath Tripathy and Pradipta Kumar Mishra, Analysis and Improvement of DSDV Protocol, IJCSI International Journal of Computer Science Issues, Vol. 8, Issue 5, No. 1, ISSN (Online): 1694-0814, September 2011.
[15] Matthew B. Shoemake, Wi-Fi (IEEE 802.11b) and Bluetooth Coexistence Issues and Solutions for the 2.4 GHz ISM Band, PhD Thesis, University of California.
[16] A. Günther and C. Hoene, Measuring Round Trip Times to Determine the Distance Between WLAN Nodes, in Proceedings of the International IFIP-TC6 Networking Conference '05, LNCS 3462, pp. 768-779, May 2005.
[17] NS-2 Manual, 2009, http://www.isi.edu/nsnam/ns/nsd.
[18] Marc Greis' tutorial on NS2, www.isi.edu/nsnam/ns/tutorial.

AUTHORS
Dheeraj Kumar Anand submitted his dissertation for the award of M.Tech in Computer Science and Engineering at Madan Mohan Malaviya Engineering College, Gorakhpur, UP, affiliated to GB Technical University, Lucknow, UP. He has published papers in national and international journals and conferences in the field of wireless ad hoc networks. He completed his B.Tech in Computer Science & Engineering in 2007 from UP Technical University, Lucknow, UP, India.

Shiva Prakash received his M.Tech in Computer Science & Engineering from Motilal Nehru National Institute of Technology, Allahabad, India, in 2006 and his B.Tech in Computer Science & Engineering from K.E.C., Dwarahat, India, in 1997. He started his career in the teaching profession as a Lecturer in 1998 at K.E.C., Dwarahat, India, then joined M.M.M. Engineering College, Gorakhpur, India, as an Assistant Professor in 2009, where he continues to date. He is a Life Member of ISTE (LM 57242) and a member of CSI, IJCI, Packeteer Bitpipe, and the IETF (online member). He was Convener of the Board of Studies, Faculty of Engineering & Technology, Kumaun University, Nainital, from August 2004 to September 2009, and also worked for more than three years as Head of Department. He has supervised more than 30 projects/dissertations of M.Tech and B.Tech students and has published eighteen research papers in national and international journals and conferences.


SEMANTIC WEB IN MEDICAL INFORMATION SYSTEMS


Prashish Rajbhandari, Rishi Gosai, Rabi C Shah, Pramod KC
Department of Information Technology, Indian Institute of Information Technology, Allahabad, India

ABSTRACT
The complex and heterogeneous storage architecture of hospitals makes it taxing to extract and collect useful information from their data repositories. In this paper we propose the application of an ontology-based method using Semantic Web concepts for obtaining interoperability between hospitals in the medical information system domain. We aim to annotate data in Resource Description Framework (RDF)/Extensible Markup Language (XML) via a common ontology, then interlink the available data from various medical institutions and consequently apply data mining to find meaningful results. The unified data is stored at a central server accessible by all hospitals.

KEYWORDS: Semantic Web, RDF, XML, Medical Systems, Ontology, Client Server.

I. INTRODUCTION

Information technology has grown drastically over the last decade due to advances in networking technologies; along with this growth in size, the underlying architecture and structural complexity have also changed. Health Information Systems form a coalescent field at the intersection of information science and health care. They are a rich source of medical information, which is of primary importance for doctors, nurses and other health care experts when treating or diagnosing a patient. Health information work includes the storage and maintenance of health records and the protection (by law) of patient information. The systems developed and implemented are proprietary and hence have custom architectures. The usage of electronic health records is increasing rapidly, producing a huge amount of data in diverse digital formats. Comprehending this mixed data is becoming increasingly complex for physicians, leading to slower decision making. Integrating the data is a tedious and difficult process: it would require considerable programming effort to interlink the data across different interfaces and architectures. Medical data calls for a unified approach because of the promising opportunities in biomedical research by world health organizations and agencies. Semantic Web technology provides a common framework that makes information easy for machines to access and process, and the heterogeneous nature of health care data makes it a very suitable candidate for a Semantic Web application. Our approach analyzes the data by comparing it against a common ontology and then interlinks the data as a whole, allowing easier querying of the information repository by the end user. There are already some efforts in this field.

In this paper, we aim to develop a medical information system based on Semantic Web technology for sharing data between different hospitals. The proposed system can understand data from all the hospitals because of the common ontology and semantic interoperability. The data is stored in a common repository which can be used for machine learning or data mining, and the clients can then access the results and data from the server. This paper describes related work in Section II, followed by an overview and the details of the proposed approach of SMIS (Semantic Medical Information System) in Section III. The results of implementing the system are presented in the following section, and Section V presents the conclusion and future scope of the paper.

II. RELATED WORK

In the past few years the scientific community, especially in the medical domain, has invested considerable effort and resources in the development of e-learning technologies and materials. The Semantic Web is considered an extension of the World Wide Web in which the available data can be integrated and evaluated easily. Although gaps in implementations still exist and adoption is hampered by the lack of a pre-installed base, there has been significant research and development in the field of the Semantic Web. Web services like MedCIRCLE [4] and MedCERTAIN [1] are Semantic Web projects aiming to guide users to health information on the internet and to filter the quality health information available on the net. Related work has focused on developing multi-ontology structures with peer-to-peer networking; the P2P style avoids physical and semantic bottlenecks in information exchange. PeerDB [2], an object management system, provides detailed search services over a network of local databases on peers; data is shared without a global schema by using metadata for each attribute. P2P Information Exchange and Retrieval (PIER) is a commonly used query engine [5][6] for processing queries in a distributed network. For ubiquitous computing environments, [7] provides a context-aware framework ontology in which rules convert low-level context to high-level context for the purpose of providing personalized healthcare services to users; for organizations with less formal training, however, the proposed approach is too complex. Another approach [8] proposes merging the two healthcare ontologies SNOMED CT and ICNP; the merge maintains the different perspectives of the two ontologies without losing important information, but no efficient technique is proposed for merging large ontologies. E-neuroscience work [9] addresses the integration problem using the Oracle RDF data model, with data extracted from BrainPlus and SWAN; the proposed system provides conversion from a relational to an RDF architecture. The paper [10] presents organized medical advice for patients suffering from diabetes with the help of a food ontology, which is tested for knowledge sharing between the different stakeholders in the PIPS project for diabetes control. The approach proposed in [11] states an ontology-based knowledge framework capable of providing personalized healthcare by retrieving all required knowledge, such as patient care and drug prescriptions, but the system does not use any semantic rule engines in its implementation. Apart from ontology-based medical systems, researchers have proposed rule-based healthcare systems. The paper [12] presents rule-based information extraction from medical documents; the extracted information is grouped and classified into the required complex templates, and the method can also be used to select the most prominent diagnoses and the patients that require special attention, although developing such a system is time-consuming. In [13] dynamic modification of a workflow is described: if an exception occurs, the system identifies the affected workflow region and dynamically corrects it, with rules used to detect semantic exceptions and decide which activities are dropped or added. The paper [14] presents a protocol that can handle exceptions and correct the workflow dynamically using a set of predefined guidelines, giving a rule-based exception detection and correction method.


III. SEMANTIC MEDICAL INFORMATION SYSTEM (SMIS)

3.1 An Overview
SMIS is an ontology-based medical information system that captures data from various heterogeneous systems and stores it at a central server. The underlying principle is the same as that of a client-server model. Since the database architecture of every client (hospital) is specific and incompatible with other clients' databases, there is a need to unite them under a common structure. The server, in our case the SMIS server, merges these databases and follows a common ontology to unite the different relational databases; the data is transformed according to the information it contains and is stored in a unified, understandable manner. The SMIS server is accessible by all clients, and since it uses one common ontology throughout, it is easy to query for information or mine for research. For example, consider the disease yellow fever. All the clients would contain relevant information about the symptoms of the disease; a researcher or doctor needing this data would have to study the individual database structure of each client, a strenuous and tedious overhead. The same information can instead be combined and stored in the central server under a node titled yellow fever, and all the required data can then be queried by considering only the database layout of the central server, because the SMIS server holds information from all the clients (hospitals) connected to it.

Figure 1. Basic Architecture of Proposed Approach

When a new client registers with the server, it is authenticated before it can use the services provided by the Semantic Medical Information System; the services include requesting information from the central server and updating the data on the server. During registration, a mapping process uses the client's ontology to create an RDF file containing the information the client needs to send to the server. This file is transferred to the server, where an RDF graph for each client is created and merged into a unified RDF graph containing the data from all clients. The client application can be a web browser or a desktop application connected to the network containing the SMIS server. Each client is provided with a username and password with which it can access the SMIS server and request information; this data can be used individually by each client for diagnostic or research purposes.

3.2 Details of Proposed Approach

3.2.1. Mapping Phase:


This is the most important phase of the medical information system. Since every hospital has its own database schema, the central server needs to map each hospital's database schema into common ontology classes. This is done using a standardized ontology common to all clients; the primary function of the medical ontology is to map data from each hospital to the classes

defined in the ontology. The extracted data, mapped into known classes, is understandable to the central server, which can therefore incorporate it in its own RDF model. The mapping application that we built for hospitals initializes itself during setup and asks for the mapping schema of the hospital's database; it is assumed that the database isn't distributed and that data is available to the application from a single database server. The application can be triggered again if there is a change to the overall database schema, in which case the changes and new data are transmitted to the central server so that it can update its model. The application requires the client's database privileges during setup. It then lists all tables and their corresponding columns so that the administrator can map each table-column pair to one of our defined common ontology classes. After the administrator maps table-columns to classes, the D2RQ tool runs in the background to create an RDF model of the relational database's data. The generated RDF file is transmitted to the central server and afterwards deleted or kept as a backup. The mapping phase is vital because it is where the administrator maps the hospital's data to ontology classes. The algorithm for conversion of an RDBMS to an RDF model:

BEGIN CONVERSION
  If Connected
    Start Mapping Interface
    For Each Class Map
      If Map present
        Get Database Maps with columns
        Store Database Map
      Else
        Continue
    End For
    If Map present
      Start D2RQ
      Convert into RDF
      Store RDF
  Else
    Start Setup
END CONVERSION
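For illustration only, the sketch below mimics what the D2RQ step produces, using Python's rdflib instead of D2RQ/Java: rows of an assumed disease(name, symptom) table are turned into triples under a hypothetical SMIS namespace and serialized as RDF/XML for transfer. The table schema, namespace URI and file names are all assumptions, not part of the paper's implementation.

    import sqlite3
    from rdflib import Graph, Literal, Namespace, URIRef
    from rdflib.namespace import RDF

    ONT = Namespace("http://example.org/smis-ontology#")  # hypothetical ontology URI

    def export_diseases_to_rdf(db_path, out_path):
        g = Graph()
        g.bind("smis", ONT)
        conn = sqlite3.connect(db_path)
        # assumed local schema: disease(name, symptom); each client maps its own tables
        for name, symptom in conn.execute("SELECT name, symptom FROM disease"):
            node = URIRef(ONT + name.replace(" ", "_"))
            g.add((node, RDF.type, ONT.Disease))
            g.add((node, ONT.hasSymptoms, Literal(symptom)))
        # RDF/XML file that would be transferred to the SMIS server
        g.serialize(destination=out_path, format="xml")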

3.2.2. Connection Phase:


The mapping phase is the database administrator's job, but the connection phase is where normal users (doctors) can use the system to get data from various hospitals. After the RDF model has been generated on the client's machine, the client connects to the central server, which must provide authentication for each hospital before file transfers. After authentication, the hospital can send its RDF model to the server. The central server stores the RDF models received from all hospitals and merges them into a common RDF model; nodes having the same URI are merged, so the model is abstracted as if generated from common node classes. The algorithm to establish a connection with the central server:

START CONNECT CENTRALSERVER
  If Maps complete
    If login successful
      Connect to Server
      Transfer RDF
    Else
      Login Fail
  Else
    Start Mapping
END CONNECT CENTRALSERVER

3.2.3. Communication Phase:
After the connection phase is successful, hospitals can query the SMIS server for information regarding diseases. Since all the data are merged into a common RDF model, querying about a disease gives results from all hospitals. Since many results can be identical, we use duplicate-content filtering to remove duplicates across hospitals. The SMIS server provides specific services to the clients to query the RDF model. User forms are created in the application for basic querying of the RDF model, such as getting the symptoms of a particular disease from all hospitals or from selected hospitals. A single RDF model is created in the central server, but if a hospital requests results from a selected list of hospitals, the server temporarily builds a model merging the models of those hospitals and returns the result. This facility helps hospitals get specific results from registered hospitals.

START CLIENT TO SERVER
  If Connected
    While !Logout
      Query to Central Server
      Transfer Updated RDF
    End While
  Else
    Terminate
END CLIENT TO SERVER
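A minimal sketch of the kind of SPARQL query service described above, using Apache Jena; the namespace and property names mirror the prototype ontology given in Section IV, but the exact URIs are assumptions:

import org.apache.jena.query.*;
import org.apache.jena.rdf.model.Model;
import org.apache.jena.rdf.model.ModelFactory;

public class SymptomQuery {
    public static void main(String[] args) {
        Model unified = ModelFactory.createDefaultModel(); // the merged server-side model
        String q = "PREFIX med: <http://example.org/medical-ontology#> " +
                   "SELECT ?disease ?symptom WHERE { " +
                   "  ?disease a med:Disease ; med:hasSymptoms ?symptom }";
        try (QueryExecution qe = QueryExecutionFactory.create(QueryFactory.create(q), unified)) {
            ResultSet results = qe.execSelect();
            while (results.hasNext()) {
                QuerySolution row = results.next();
                System.out.println(row.get("disease") + " -> " + row.get("symptom"));
            }
        }
    }
}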

Figure 2. Dataflow for Mapping

IV. RESULTS AND DISCUSSIONS

The implementation of our SMIS was done in the Java language using RMI (Remote Method Invocation), D2RQ and JDBC. We used the Stanford Protégé tool to create our standard ontology. The prototype ontology that we used:

Disease Class
  hasSymptoms: Symptom Class
  hasDefinition: Definition Class
  hasCure: Cure Class


Table in RDF

Figure 3: Simulation of the proposed model

Clients were set up on different systems and were connected to the central server over a network. The clients communicated with the server using sockets for file transfer; clients could not communicate with each other directly. After a client authenticated with the central server, it could send its RDF model and also query the central server remotely. Clients could query using Java RMI, which is built into the client application; the services provided to the clients are all remotely invoked on the central server. No data is stored on the client machine, to reduce load. Querying the server RDF model was possible using SPARQL, a query language that can retrieve data/resources stored in RDF. Clients could query for data from any other client or from all clients. The central server used Apache Jena to combine the RDF models of the clients, depending on the clients from which data is to be received.
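For example, the remotely invoked services could be declared as a Java RMI interface along the following lines; the method names and signatures here are illustrative assumptions, not the paper's actual API:

import java.rmi.Remote;
import java.rmi.RemoteException;
import java.util.List;

public interface SmisService extends Remote {
    // Authenticate a registered hospital before any other call is allowed.
    boolean authenticate(String username, String password) throws RemoteException;

    // Run a SPARQL SELECT against the unified model, or against a temporary
    // merge of the models of the named hospitals, returning serialized rows.
    List<String> query(String sparql, List<String> hospitals) throws RemoteException;
}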

Figure 4: The visual layout of SMIS


Figure 5: Result of a query

As shown in Figure 5, the SMIS system proposed in this paper successfully integrated the required information from all the client hospitals and answered the SPARQL query successfully. Most of the existing semantic web approaches do not follow the ACID properties of databases. Our SMIS approach overcomes this shortcoming by incorporating the atomicity, consistency, isolation and durability of database transactions. It also provides the reliability of a client-server model. Our approach emphasizes the unification of the heterogeneous clients (hospitals) in a complete manner while preserving the integrity of the database systems.

V. CONCLUSION AND FUTURE WORK

Due to the availability of ready-to-use, mature domain ontologies in medicine, we have worked on an ontology-based data integration system architecture to combine the client hospitals' information and store it at a central server. Using semantic technology we were able to create a system which is able to store the data from various heterogeneous sources in a unified manner. The unified RDF graph was successfully created at the server end using Apache Jena, and the SPARQL query language was used to get the required results at the server. In the near future, we will investigate the scalability and efficiency of the SMIS approach on relational databases storing very large amounts of data. We also plan to improve the SMIS approach by adding support for distributed client databases, and to expand the scope of the project from integrating client hospitals to mining the unified data at the central server to help in medical research.

ACKNOWLEDGEMENT
This research paper was made possible through the wonderful working environment and infrastructure provided to us by our honourable Director (IIITA), Dr. M.D. Tiwari. We would also like to thank our parents for providing us with encouragement and financial support, without which this research paper would not have been successful.

REFERENCES
[1]. S. Gnanambal & M. Thangaraj, Research Directions in Semantic Web on Healthcare, International Journal of Computer Science and Information Technologies, Vol. 1(5), pp. 449-453, 2010.
[2]. V.S. Agneeswaran, A Survey of Semantic Based Peer-to-Peer Systems, Technical Report, Distributed Information Systems Lab (LSIR), Ecole Polytechnique Federale de Lausanne, 2007.
[3]. B. McBride, Jena: A Semantic Web Toolkit, IEEE Internet Computing, 6(6), pp. 55-59, 2002.
[4]. Mayer, M. A., Darmoni, S. J., Fiene, M., Eysenbach, G., Köhler, C. & Roth-Berghofer, T., MedCIRCLE: modeling a collaboration for Internet rating, certification, labeling and evaluation of health information on the semantic world wide web, Proceedings of MIE 2003, Eighteenth International Congress of the European Federation for Medical Informatics, 95, pp. 66-77, 2003.
[5]. N. Khozoie, Health Information Management on Semantic Web: (Semantic HIM), International Journal of Web and Semantic Technology (IJWesT), Vol. 3, No. 1, 2012.

[6]. D. Dou, Integrating Databases into the Semantic Web through an Ontology-Based Framework, Data Engineering Workshops, 2006.
[7]. Eunjung Ko, Hyungjik Lee, Jeunwoo Lee, Ontology based context modeling and reasoning for U-Healthcare, Web Information Systems Lab, Chungnam National University, 2007. http://winslab.cnu.ac.kr/resource/labseminar/Seminar2007-1/IEICE_kej_final_paper.pdf
[8]. Fahim Imam, Wendy MacCaull, Integrating Healthcare Ontologies: An Inconsistency Tolerant Approach and Case Study, 2008. http://www.logic.stfx.ca/docs/prohealthPaper_July_3_Imam_McCaull.pdf
[9]. Hugo Y.K. Lam, Luis Marenco, Tim Clark, Semantic web meets e-neuroscience: An RDF use case. http://www.oracle.com/technology/industries/life_sciences/press/semantic_web_meets_eneuroscience.pdf
[10]. Jaime Cantais, David Dominguez, Valeria Gigante, Loredana Laera, An example of food ontology for diabetes control, 2007. http://www.csc.liv.ac.uk/~floriana/PIPS/papers/FoodOntology.pdf
[11]. Jiangbo Dang, Amir Hedayati, An ontological knowledge framework for adaptive medical workflow, Journal of Biomedical Informatics 41, pp. 829-836, 2008.
[12]. Agnieszka Mykowiecka, Malgorzata Marciniak, Rule based information extraction from patients' clinical data, Journal of Biomedical Informatics 42, pp. 923-936, 2009.
[13]. R. Muller, E. Rahm, Rule based dynamic modification of workflows in a medical domain, Springer, pp. 429-448, 1999.
[14]. Yan Ye, Zhibin Jiang, A semantic based clinical pathway workflow and variance management framework, IEEE 978-1-4244-2013, 2009.

AUTHORS
Prashish Rajbhandari is currently pursuing his B.Tech (IT) degree at the Indian Institute of Information Technology, Allahabad. He is the brain of the Forty Stones team. He has a keen interest in web development and is fluent in various scripting languages such as HTML/CSS, PHP, JavaScript and jQuery, as well as various CMS (CodeIgniter, RoR).

Rishi Gosai is an undergraduate student majoring in Information Technology at the Indian Institute of Information Technology Allahabad (IIIT-A). His fields of interest include Database Systems, Operating Systems and Artificial Intelligence.

Rabi C Shah is currently pursuing his undergraduate program with a major in Information Technology at the Indian Institute of Information Technology Allahabad (IIIT-A). His fields of interest include C, C++, Linux and Semantic Web Technology. He is the founder of fortystones.com and has completed many projects in Java and PHP.

Pramod K.C. is an ebullient and persistent student pursuing his undergraduate degree (IT major) at the Indian Institute of Information Technology, Allahabad. His interests lie in Networking, Semantic Web Technology, Database Systems and Java, and he also has good management skills.


MAGNETIC AND DIELECTRIC PROPERTIES OF CoxZn1-xFe2O4 SYNTHESIZED BY METALLO-ORGANIC DECOMPOSITION TECHNIQUE


Anshu Sharma1, Kusum Parmar1, R.K. Kotnala2 and N.S. Negi1
1 Department of Physics, Himachal Pradesh University, Shimla, India
2 National Physical Laboratory, New Delhi, India

ABSTRACT
Cobalt-doped zinc ferrites, CoxZn1-xFe2O4 (x = 0, 0.2, 0.4 and 0.6), were prepared by a chemical solution method using metallo-organic precursors. The crystalline structure, microstructure, magnetic and dielectric properties of the samples were investigated by X-ray diffraction (XRD), scanning electron microscopy (SEM), vibrating sample magnetometry (VSM) and impedance analysis, respectively. All samples show the cubic spinel structure with a minor Fe2O3 phase, and the lattice parameter decreases from 8.419 Å to 8.308 Å as the cobalt content increases from 0 to 60%. Magnetic measurements show that the CoxZn1-xFe2O4 samples exhibit ferromagnetic behavior at room temperature. The saturation magnetization initially increases from 2.17 emu/g to 71.2 emu/g as the Co content increases to 40%, and then decreases to 55.9 emu/g at a Co concentration of 60%. The coercivity increases to Hc ~ 330 Oe for the sample with Co = 20%, decreases to a minimum of Hc ~ 32 Oe for the sample with 40% Co content, and then increases again to Hc ~ 154 Oe for the sample with 60% Co content. The magnetic properties are explained in terms of cation distribution and grain size effects. The dielectric properties of the samples were studied with varying temperature and frequency; they are governed by the electron hopping mechanism between Fe2+ and Fe3+ ions.

KEYWORDS: Co-Zn Ferrite, MOD Technique, XRD, Magnetic properties, Dielectric properties

I. INTRODUCTION

Ferrites continue to be fascinating magnetic materials because of their potential applications in high-density information storage, ferrofluids, magnetic resonance imaging, biomedical diagnostics, drug delivery, high-frequency electronic devices, sensors, permanent magnets, magnetic refrigeration systems, etc. [1-5]. Among the various ferrites, ZnFe2O4 and CoFe2O4 are the most extensively studied systems, because they exhibit the typical normal and inverse spinel structures, respectively [6,7]. Bulk zinc ferrite is antiferromagnetic below the Néel temperature (TN = 10 K) and turns ferromagnetic or superparamagnetic when the particle size is reduced to the nanoscale. In ZnFe2O4, zinc ions occupy the tetrahedral sites and all Fe3+ ions occupy the octahedral sites. In contrast, CoFe2O4 exhibits ferromagnetism, with cobalt ions occupying the octahedral sites and Fe3+ ions equally distributed between tetrahedral and octahedral sites. Therefore, the mixed Co-Zn ferrite has attracted considerable attention due to the completely different and interesting properties of ZnFe2O4 and CoFe2O4. Ferrites are usually prepared by autocatalytic decomposition [8], the hydrothermal method [9-11], the reverse micelles method [12], co-precipitation [13], the microwave combustion method [14] and the sol-gel route [15]. The properties of the ferrites are highly sensitive to the ferrite composition and synthesis technique. S.B. Waje et al. [16]
prepared Co0.5Zn0.5Fe2O4 by a mechanical alloying and sintering method. Dielectric studies of their specimens showed that the permittivity remains constant with frequency and varies with the sintering temperature. Lopez et al. [17] recently synthesized Zn-doped CoFe2O4 magnetic nanoparticles by a co-precipitation method. They observed a decrease in coercive field and particle size with increasing Zn concentration. Hassadee et al. [18] reported a decrease in the magnetization and the coercivity of Co-Zn ferrite with increasing zinc content due to the magnetic behavior and the anisotropic nature of cobalt. Gul et al. [19] investigated the magnetic and electrical properties of Co1-xZnxFe2O4, varying x from 0.0 to 0.6. A decrease in Curie temperature with increasing zinc doping concentration was observed, and temperature-dependent dc resistivity measurements indicated semiconducting behaviour in Co-Zn ferrites. Veverka et al. [20] studied Co1-xZnxFe2O4 around x = 0.6 because of potential applications of this composition in magnetic fluid hyperthermia. They reported that the cationic distribution in the ferrite is more complex and random in nature. They also observed that the presence of vacancies in the octahedral sites changed the cobalt ions partially or completely to the Co3+ state, and a neutron diffraction study showed differences between the cationic distributions in nanoparticles and bulk ferrite samples. On the other hand, Ghasemi et al. [21] prepared Zn1-xCoxFe2O4 powder by a sol-gel process and showed a transition from a paramagnetic to an ordered ferromagnetic state with increasing cobalt concentration. They also observed increasing saturation magnetization and coercivity of Co-substituted ZnFe2O4 with increasing cobalt content. Although there has been considerable work, mostly on the CoFe2O4 side, systematic investigations of the CoxZn1-xFe2O4 system towards ZnFe2O4 are few, and it is therefore an interesting subject to study. In this work, we report the synthesis of the CoxZn1-xFe2O4 system with x = 0, 0.2, 0.4 and 0.6 by a metallo-organic decomposition (MOD) chemical route. To the best of our knowledge, no reports are available in the literature on MOD-processed Co-Zn ferrite. The advantages of this method include high solution stability, low processing temperature and easily controllable composition. The structural, electrical and magnetic properties of CoxZn1-xFe2O4 are systematically investigated.

II. EXPERIMENTAL

Cobalt-zinc mixed ferrites CoxZn1-xFe2O4 with x = 0, 0.2, 0.4 and 0.6 were synthesized by a chemical solution technique using metallo-organic precursors. In this method, cobalt-2-ethyl hexanoate (C7H15COO)2Co, zinc-2-ethyl hexanoate (C7H15COO)2Zn and iron-3-ethyl hexanoate (C7H15COO)3Fe were synthesized from the starting materials Co(NO3)2.6H2O, Zn(NO3)2.H2O and Fe(NO3)3.9H2O, respectively. The Co(NO3)2.6H2O was first dissolved in distilled water to prepare solution A. In a separate flask, stoichiometric 2-ethyl hexanoic acid was neutralized by adding KOH solution under constant stirring to prepare solution B. Solution A was then poured into solution B under constant vigorous stirring. The cobalt-2-ethyl hexanoate soap so formed was extracted with xylene; a clear solution of cobalt-2-ethyl hexanoate in xylene was obtained and kept as the cobalt stock solution. The chemical reactions during the synthesis can be represented as

1. KOH + C7H15COOH → C7H15COOK + H2O
2. 2C7H15COOK + Co(NO3)2 → 2KNO3 + (C7H15COO)2Co

Similarly, zinc-2-ethyl hexanoate and iron-3-ethyl hexanoate soaps were synthesized, extracted with xylene and kept as stock solutions for Zn and Fe, respectively. The chemical reactions are given below:

3. 2C7H15COOK + Zn(NO3)2 → 2KNO3 + (C7H15COO)2Zn
4. 3C7H15COOK + Fe(NO3)3 → 3KNO3 + (C7H15COO)3Fe

These stock solutions were mixed in the required molar ratio and magnetically stirred for 1 hr at a temperature of ~70°C. Polyethylene glycol (PEG) was added as a surfactant. The resultant solution was subsequently dried at 300°C. The dried powder was sintered at 700°C for 3 hr for crystallization. The powder was pressed into pellets by applying a load of about 5 tons, and the pellets were finally sintered at 1000°C for 3 hr in air.

The crystallographic and microstructural properties of the ferrites were studied by X-ray diffraction (PANalytical X'Pert PRO diffractometer) with Cu Kα radiation and by scanning electron microscopy (SEM), respectively. The compositional analysis of the Co-Zn ferrites was performed using energy dispersive X-ray spectroscopy (EDXS). The dielectric measurements were performed using a Wayne Kerr 6520 impedance analyser in the frequency range of 20 Hz to 1 MHz. The magnetic properties were measured using a vibrating sample magnetometer (VSM, Microsense, USA) at room temperature.

III. RESULTS AND DISCUSSION

Fig. 1 shows the XRD patterns of the MOD-synthesized CoxZn1-xFe2O4 (x = 0, 0.2, 0.4 and 0.6) powders sintered at 700°C for 3 hrs. All XRD peaks are indexed with the JCPDS cards (#22-1086 for CoFe2O4 and #89-1012 for ZnFe2O4). The patterns show the characteristic diffraction lines (220), (311), (222), (400), (422) and (511) of the spinel cubic structure. Extra reflection peaks of Fe2O3 are also observed in the XRD patterns of the doped samples. This observation is not unusual, particularly at a low sintering temperature; Tomar et al. [22] also observed an Fe2O3 impurity phase in Co-Zn ferrite powders synthesized by a sol-gel process.

Fig.1 X-ray diffraction patterns for CoxZn1-xFe2O4 (x = 0, 0.2, 0.4 and 0.6) samples sintered at 700°C for 3 hrs

Fig. 2(a) shows the XRD patterns of the Co-Zn powders sintered at 1000°C for 3 hr. The absence of extra Fe2O3 reflection peaks in the diffraction patterns of the Co-Zn ferrites with x = 0 and 0.4 indicates the formation of a single phase with the spinel crystal structure at a sintering temperature of ~1000°C. The Fe2O3 phase in the other two samples, with x = 0.2 and 0.6, is also significantly reduced on increasing the sintering temperature. It can be seen from Figs. 1 and 2(a) that the degree of crystallinity of the samples increases on increasing the sintering temperature from 700°C to 1000°C, without any change in the basic crystal structure of the Co-Zn ferrite. The lattice constant a of each sample was calculated using the relation a = d(h² + k² + l²)^1/2 [23], where h, k, l are the Miller indices of the crystal planes. The dependence of the lattice parameter a on the Co2+ content is shown in Fig. 2(b). It can be seen that the lattice constant a of the samples decreases on increasing the sintering temperature from 700°C to 1000°C, which may be attributed to improved crystallization of the samples. On the other hand, the decrease in the lattice parameter a with increasing Co concentration can be correlated with the difference in the ionic radii of Co2+ (0.78 Å) and Zn2+ (0.82 Å) [19]: the more Co2+, the smaller the lattice parameter. Fig. 2(b) also shows that the lattice parameter of the Co-Zn ferrite decreases approximately linearly with increasing Co concentration, thus obeying Vegard's law [24].
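As a worked illustration of this relation (a sketch only: the Cu Kα wavelength λ = 1.5406 Å is standard, but the (311) peak position below is a hypothetical value chosen for the arithmetic, not one read from Fig. 2):

% d from Bragg's law, then the cubic lattice constant from the (311) plane
\begin{align*}
2d\sin\theta &= \lambda, \qquad \lambda_{\mathrm{Cu\,K\alpha}} = 1.5406\ \text{\AA} \\
d_{311} &= \frac{1.5406}{2\sin(17.7^{\circ})} \approx 2.533\ \text{\AA} \\
a &= d\sqrt{h^{2}+k^{2}+l^{2}} = 2.533\,\sqrt{9+1+1} \approx 8.40\ \text{\AA}
\end{align*}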


Fig.2 (a) X-ray diffraction patterns of CoxZn1-xFe2O4 (x = 0, 0.2, 0.4 and 0.6) samples sintered at 1000°C for 3 hrs and (b) lattice constant of CoxZn1-xFe2O4 (x = 0, 0.2, 0.4 and 0.6) samples with cobalt concentration

The scanning electron microscope (SEM) images of the Co-Zn ferrites sintered at 1000°C are shown in Fig. 3. In Figs. 3(a) to (d) it can be seen that the particles are well distributed and agglomerated; usually these agglomerates are formed by smaller particles. The grain size ranges from 95 to 142 nm as the Co concentration varies from 0 to 0.6. It is evident from Figs. 3(a) to (d) that the SEM images reveal dense microstructures.

Fig.3 SEM micrographs of CoxZn1-xFe2O4 (x = 0, 0.2, 0.4 and 0.6) samples

Energy dispersive X-ray spectroscopy (EDX) spectra of the samples sintered at 1000°C are shown in Figs. 4(a) to (c). The presence of Co, Zn and Fe in the samples is depicted in the spectra. The EDX analysis indicates that the wt% of cobalt and zinc in these samples are 9.55, 12.46, 17.41 and 9.32, 8.07, 4.67, respectively, which reveals the increasing concentration of cobalt and decreasing concentration of zinc in the Co-Zn ferrite samples.


Fig.4 EDX of CoxZn1-xFe2O4 (x = 0, 0.2, 0.4 and 0.6) samples.

Fig. 5 shows the field-dependent magnetization of the CoxZn1-xFe2O4 samples measured at room temperature in applied fields up to 20 kOe. The magnetization increases with increasing external magnetic field strength in the low-field region and attains its maximum value at a field of ~20 kOe. It can be seen that all magnetization curves saturate in the high-field region and that the hysteresis curves are S-shaped, which is characteristic of ferromagnetism.

Fig.5 Magnetic hysteresis loops for CoxZn1-xFe2O4 (x = 0, 0.2, 0.4 and 0.6) samples measured at room temperature.


Fig.6 Variation of saturation magnetization and coercive field with Co concentration

Fig. 6 shows the dependence of the saturation magnetization Ms and coercivity Hc of the Co-Zn ferrite on Co content. In the cubic spinel ferrites the magnetic order arises mainly from the superexchange interaction between the metal ions of sublattices A and B. The saturation magnetization first increases as the cobalt content goes from x = 0.0 to 0.4 and then decreases for x = 0.6. The increase in saturation magnetization is directly related to magnetic Co2+ ions replacing non-magnetic Zn2+ ions, which strengthens the superexchange interaction [25,26]. The decrease in the saturation magnetization for Co = 0.6 is due to the migration of Fe3+ ions from the B site to the A site: the magnetic moment of the A site increases and, as a result, the net magnetization MB - MA decreases. As seen from Fig. 6, with an increase in the Co2+ content the coercivity Hc first increases for the sample with x = 0.2, reaches a minimum value of Hc ~ 32 Oe for x = 0.4, and then increases again for x = 0.6. The increase in coercivity may be attributed to the increase in magnetocrystalline anisotropy with increasing Co content, while the minimum value of Hc for x = 0.4 may be attributed to its dominating minimum grain size (~95 nm). It can be emphasized here that the observed values of Ms ~ 71.2 emu/g and Hc ~ 32 Oe for the Co0.4Zn0.6Fe2O4 sample are a significant improvement over earlier reports [27-29]. The cation distribution formulae for the mixed Co-Zn ferrites are given in Table I.
Table I: Cation distribution in Co-Zn ferrites

Molecular formula     Cation distribution [A-site][B-site]
ZnFe2O4               [Zn2+][Fe3+2]O4
Zn0.8Co0.2Fe2O4       [Zn2+0.8 Fe3+0.2][Co2+0.2 Fe3+1.8]O4
Zn0.6Co0.4Fe2O4       [Zn2+0.6 Fe3+0.4][Co2+0.4 Fe3+1.6]O4
Zn0.4Co0.6Fe2O4       [Zn2+0.4 Fe3+0.6][Co2+0.6 Fe3+1.4]O4
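Under a collinear (Néel) two-sublattice picture, the Table I distributions give a quick upper-bound estimate of the moment per formula unit, assuming the usual free-ion moments μ(Zn2+) = 0, μ(Co2+) = 3 μB and μ(Fe3+) = 5 μB; this sketch neglects the canting effects and the x = 0 antiferromagnetic limit discussed above:

% Net moment for [Zn_{1-x} Fe_x]_A [Co_x Fe_{2-x}]_B O_4 with antiparallel sublattices
\begin{align*}
M_A &= 5x, \qquad M_B = 3x + 5(2 - x) = 10 - 2x \\
M &= M_B - M_A = 10 - 7x \quad [\mu_B/\text{f.u.}]
\end{align*}

For x = 0.4 this collinear limit gives 7.2 μB, well above the roughly 3 μB implied by Ms = 71.2 emu/g (molar mass ≈ 238.5 g/mol), consistent with the non-collinear cation distribution effects invoked above.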

Figs. 7(a) and (b) show the variation of the dielectric constant and dielectric loss (tan δ) with frequency at room temperature for the Co-substituted samples. Dispersion in the dielectric constant and loss tangent is observed in the low-frequency region for all samples, and the dielectric constant decreases with increasing frequency.


Fig.7 Variation in (a) dielectric constant (ε′) and (b) dielectric loss (tan δ) with frequency at room temperature

The dielectric dispersion in Co-Zn ferrites can be explained on the basis of interfacial polarization as predicted by the Maxwell-Wagner model. It is well known that the dielectric structure of a ferrite consists of conducting grains separated by poorly conducting grain boundaries. Electron hopping between Fe2+ and Fe3+ results in a pile-up of electrons at the grain boundaries and produces polarization in the ferrite. However, as the frequency of the externally applied electric field is increased, the electron hopping between Fe2+ and Fe3+ can no longer follow the applied field; as a result, the dielectric constant decreases and then becomes constant. The dielectric behavior is strongly affected by the Co content. The decrease in dielectric constant with increasing Co content for the samples with x = 0.2 and 0.6 may be due to the migration of Fe3+ ions from the octahedral to the tetrahedral site, which reduces the hopping. The increase in dielectric constant for the sample with x = 0.4, however, may be attributed to the formation of Fe2+ ions at the octahedral site, which increases the electron exchange between Fe2+ and Fe3+ and hence enhances the polarization. The dielectric loss in ferrites originates mainly from electron hopping and defect dipoles. Electron hopping contributes to the dielectric loss only in the low-frequency range; its response decreases with increasing frequency, and hence the dielectric loss decreases in the high-frequency range, as shown in Fig. 7(b). The charged defect dipoles contribute to the dielectric loss in the high-frequency range. Meanwhile, dielectric loss peaks can be seen in Fig. 7(b) for the samples with x = 0 and 0.4; for the other two samples the peaking behavior is not observed within the measured frequency range. A dielectric loss peak appears when the hopping frequency of the electrons between Fe2+ and Fe3+ ions is close to the frequency of the externally applied electric field. Furthermore, the loss peak in Fig. 7(b) moves to the high-frequency side with increasing Co content. This may be attributed to the fact that cobalt substitution prefers the octahedral site, which strengthens the dipole-dipole interaction and restricts the rotation of the dipoles.
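A compact way to state this frequency dependence is the Debye-type dispersion commonly used together with the Maxwell-Wagner picture (an illustrative form, not a fit to the present data):

\begin{equation*}
\varepsilon'(\omega) = \varepsilon_{\infty} + \frac{\varepsilon_{s} - \varepsilon_{\infty}}{1 + \omega^{2}\tau^{2}},
\qquad \tan\delta \ \text{peaks near}\ \omega\tau \approx 1
\end{equation*}

Here τ is the effective relaxation (hopping) time, so the loss maximum occurs when the applied frequency matches the Fe2+ ↔ Fe3+ hopping rate, exactly as described above.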


Fig.8 Temperature dependence of dielectric constant (ε′) of CoxZn1-xFe2O4 (x = 0, 0.2, 0.4 and 0.6) samples at different frequencies from 500 Hz to 1 MHz.

Fig. 9 Temperature dependence of dielectric loss (tan δ) of CoxZn1-xFe2O4 (x = 0, 0.2, 0.4 and 0.6) samples at different frequencies from 500 Hz to 1 MHz.

The temperature dependence of the dielectric constant and dielectric loss tangent measured at different frequencies from 500 Hz to 1 MHz is shown in Figs. 8 and 9. All samples exhibit frequency-dependent behavior, which is attributed to interfacial polarization in the ferrite. For all four samples it is noticed that, on increasing the temperature, the dielectric constant increases up to a specific maximum, which shifts to higher temperature with increasing frequency. As the temperature increases, more charge carriers are excited from their trapping centers and contribute to the polarization, which in turn increases the dielectric constant of the samples. The behavior of tan δ with Co content is similar to that of the dielectric constant. Thermally activated dielectric behavior and conductivity give rise to tan δ > 1 at lower frequencies and high temperatures [30].
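The shift of the maxima with frequency is what one expects for a thermally activated relaxation; in a standard (illustrative) Arrhenius description,

\begin{equation*}
\tau(T) = \tau_{0}\exp\!\left(\frac{E_{a}}{k_{B}T}\right), \qquad 2\pi f\,\tau(T_{\max}) \approx 1
\end{equation*}

so a higher measuring frequency f requires a shorter τ, i.e. a higher T_max, which is the observed shift of the dielectric maxima to higher temperature.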

IV. CONCLUSIONS

In conclusion, Co-substituted ZnFe2O4 systems have been successfully synthesized by a chemical solution method using metallo-organic precursors. The XRD patterns revealed the cubic spinel structure for all compositions, with a minor Fe2O3 impurity phase for the samples with Co = 0.2 and 0.6. The lattice parameter decreases with increasing Co content due to the smaller ionic radius of Co2+ compared with Zn2+. SEM images showed fine-grained microstructures with grain sizes in the range of 95 to 142 nm, and the compositional analysis of the samples was performed using EDX. As revealed by the observed results, the magnetic properties of the samples are mainly governed by Co2+ ions replacing non-magnetic Zn2+ ions as well as by the migration of Fe3+ ions from octahedral to tetrahedral sites. The cation distributions in the mixed ferrites are very complex; a Mössbauer spectral study of the specimens is required to understand the magnetic properties completely, and this investigation is in progress. The room-temperature dielectric constant and loss tangent decrease with increasing frequency, indicating normal dielectric behavior for all samples. The variation of the dielectric constant and loss tangent with temperature shows frequency-dependent characteristics, with electron hopping between the ions playing the dominant role in the dielectric behavior.

ACKNOWLEDGEMENT
One of the authors (Anshu Sharma) is very grateful to the UGC, New Delhi for providing financial support under a UGC-BSR fellowship.

REFERENCES
[1] Y.N. Xia et al., One dimensional nanostructures: synthesis, characterization and applications, Adv. Mater. 15, 353 (2003)
[2] X. Kou et al., Tunable ferromagnetic resonance in NiFe nanowires with strong magnetostatic interaction, Appl. Phys. Lett. 94, 112509 (2009)
[3] Q.A. Pankhurst, N.K.T. Thanh, S.K. Jones and J. Dobson, Progress in applications of magnetic nanoparticles in biomedicine, J. Phys. D: Appl. Phys. 42, 224001 (2009)
[4] C.C. Berry, Progress in functionalization of magnetic nanoparticles for applications in biomedicine, J. Phys. D: Appl. Phys. 42, 224003 (2009)
[5] S. Bhukal, T. Namgyal, S. Mor, S. Bansal and S. Singhal, Structural, electrical, optical and magnetic properties of chromium substituted Co-Zn nanoferrites Co0.6Zn0.4CrxFe2-xO4 (0 ≤ x ≤ 1.0) prepared via sol-gel auto-combustion method, J. Mol. Structure 1012, 162-167 (2012)
[6] J.H. Shim, S. Lee, J.H. Park, S.J. Hau, Y.H. Jeong and Y.W. Cho, Coexistence of ferromagnetic and antiferromagnetic ordering in Fe-inverted zinc ferrite investigated by NMR, Phys. Rev. B 73, 064404 (2006)
[7] T.M. Meaz, S.M. Attia and A.M. Abo El Ata, Effect of tetravalent titanium ions substitution on the dielectric properties of Co-Zn ferrites, J. Magn. Magn. Mater. 210, 189 (2000)
[8] L.R. Gonsalves, S.C. Mojumdar and V.M.S. Verenkar, Synthesis and characterization of Co0.8Zn0.2Fe2O4 nanoparticles, J. Therm. Anal. Calorim. 104, 869-873 (2011)
[9] M. Sertkil, Y. Koseoglu, A. Baykal, H. Kavas and A.C. Basaran, Synthesis and magnetic characterization of Zn0.6Ni0.4Fe2O4 nanoparticles via a polyethylene glycol-assisted hydrothermal route, J. Magn. Magn. Mater. 321, 157 (2009)
[10] F. Gozuak, Y. Koseoglu, A. Baykal and H. Kavas, Synthesis and characterization of CoxZn1-xFe2O4 magnetic nanoparticles via a PEG-assisted route, J. Magn. Magn. Mater. 321, 2170 (2009)
[11] H.Y. He, Comparison study on magnetic property of Co0.5Zn0.5Fe2O4 powders by template-assisted sol-gel and hydrothermal methods, J. Mater. Sci.: Mater. Electron. 23, 995-1000 (2012)
[12] V.L. Calero-Ddelc and C. Rinaldi, Synthesis and magnetic characterization of cobalt-substituted ferrite (CoxFe3-xO4) nanoparticles, J. Magn. Magn. Mater. 314, 60 (2007)
[13] R. Arulmurugam, B. Jeyadevan, G. Vaidyanathan and S. Sendhilnathan, Effect of zinc substitution on Co-Zn and Mn-Zn ferrite nanoparticles prepared by co-precipitation, J. Magn. Magn. Mater. 288, 470 (2005)
[14] Y. Koseoglu, A. Baykal, F. Gozuak and H. Kavas, Structural and magnetic properties of CoxZn1-xFe2O4 nanocrystals synthesized by microwave method, Polyhedron 28, 2887 (2009)
[15] I.H. Gul and A. Maqsood, Structural, magnetic and electrical properties of cobalt ferrites prepared by the sol-gel route, J. Alloys Compd. 465, 227 (2008)
[16] S.B. Waje, M. Hashim, W.D.W. Yusoff and Z. Abbas, Sintering temperature dependence of room temperature magnetic and dielectric properties of Co0.5Zn0.5Fe2O4 prepared using mechanically alloyed nanoparticles, J. Magn. Magn. Mater. 322, 686 (2010)
[17] J. Lopez, L.F. Gonzalez-Bahamon, J. Prado, J.C. Caicedo, G. Zambrano, M.E. Gomez, J. Esteve and P. Prieto, Study of magnetic and structural properties of ferrofluids based on cobalt-zinc ferrite nanoparticles, J. Magn. Magn. Mater. 324, 394 (2012)
[18] A. Hassadee, T. Jutarosaga and W. Onreabroy, Effect of zinc substitution on structural and magnetic properties of cobalt ferrite, Procedia Engineering 32, 597 (2012)
[19] I.H. Gul, A.Z. Abbasi, F. Amin, M. Anis-Ur-Rehman and A. Maqsood, Structural, magnetic and electrical properties of Co1-xZnxFe2O4 synthesized by co-precipitation method, J. Magn. Magn. Mater. 311, 494 (2007)
[20] M. Veverka, Z. Jirak, O. Kaman, K. Knizek, M. Marysko, E. Pollert, K. Zaveta, A. Lancok, M. Dlouha and S. Vratislav, Distribution of cations in nanosize and bulk Co-Zn ferrite, Nanotechnology 22, 345701 (2011)
[21] A. Ghasemi, V. Sepelak, S.E. Shirsath, X. Liu and A. Morisako, Mossbauer spectroscopy and magnetic characteristics of Zn1-xCoxFe2O4 (x = 0-1) nanoparticles, J. Appl. Phys. 109, 07A512 (2011)
[22] M.S. Tomar, S.P. Singh, O.P. Perez, R.P. Guzman, E. Calderon and C.R. Ramos, Synthesis and magnetic behavior of nanostructured ferrites for spintronics, Microelectronics Journal 36, 475 (2005)
[23] Y.P. Fu and C.S. Hsu, Microwave induced combustion synthesis of Li0.5Fe2.5-xMnxO4 powder and their characterization, J. Alloys Compd. 391, 185 (2005)
[24] A.R. Denton and N.W. Ashcroft, Vegard's law, Phys. Rev. A 43, 3161 (1991)
[25] T. Ozkaya et al., Synthesis of Fe3O4 nanoparticles at 100°C and its magnetic characterization, J. Alloys Compd. 472, 18 (2009)
[26] M. Sertkol et al., Microwave synthesis and characterization of Zn-doped nickel ferrite nanoparticles, J. Alloys Compd. 486, 325 (2009)
[27] S.U. Romero, O.P. Perez, O.N.C. Uwakwch, C. Osorio and H.A. Radovan, Tuning of magnetic properties in Co-Zn ferrite nanocrystals synthesized by a size controlled co-precipitation method, J. Appl. Phys. 109, 07B512 (2011)
[28] S. Singhal, T. Namgyal, S. Bansal and K. Chandra, Effect of Zn substitution on the magnetic properties of cobalt ferrite nano particles prepared via sol-gel route, J. Electromagnetic Analysis & Applications 2, 376 (2010)
[29] R. Rani, S.K. Sharma, K.R. Pirota, M. Knobel, S. Thakur and M. Singh, Effect of zinc concentration on the magnetic properties of cobalt-zinc nanoferrite, Ceramics International 38, 2389 (2012)
[30] C. Harnagea, L. Mitoseriu, V. Buscaglia, I. Pallecchi and P. Nanni, Magnetic and ferroelectric domain structures in BaTiO3-(Ni0.5Zn0.5)Fe2O4 multiferroic ceramics, J. Eur. Ceram. Soc. 27, 3947 (2007)

AUTHORS
Anshu Sharma is pursuing a Ph.D. at Himachal Pradesh University, Shimla. She received her M.Sc. and M.Phil. degrees in 2006 and 2007, respectively. She has attended three international and three national conferences/workshops and presented papers. Her research focuses on magnetic materials and composite materials.

Kusum Parmar obtained her M.Sc. degree in 2007 and M.Phil. degree in 2009 in Physics from Himachal Pradesh University, Shimla, and is now pursuing a Ph.D. at the same university. She has attended four national conferences and workshops. Her research interests are ferrite/ferroelectric composite materials and thin films.

R. K. Kotnala obtained his Ph.D. degree from the Indian Institute of Technology, Delhi in 1982. He is at present working as a Chief Scientist at the National Physical Laboratory, New Delhi, India, and is a Fellow of the IGU. His current fields of interest are multiferroics, magnetic materials, magnetic standards and sensor materials, including the nanomagnetism of materials.

N. S. Negi joined Himachal Pradesh University, Shimla in October 1997. He received his PhD degree from Himachal Pradesh University, Shimla. He has extensive experience in the deposition of ferroelectric and ferrite thin films. He currently works on ferroelectric and multiferroic thin films, high-k materials, diluted magnetic semiconductors and pyrophoric materials. He has published over 40 papers in international journals. Currently he is a Professor and Head, Department of Physics, H.P. University, Shimla.


APPLICATION OF METAHEURISTICS IN TRANSMISSION NETWORK EXPANSION PLANNING-AN OVERVIEW


Bharti Dewani1, M.B. Daigavane2, A.S. Zadgaonkar3
1 Department of Electrical and Electronics Engineering, Disha Institute of Management and Technology, Raipur, Chhattisgarh, India
2 Principal, Suresh Deshmukh College of Engg., Wardha, Maharashtra, India
3 Vice-Chancellor, Dr. C.V. Raman University, Bilaspur, Chhattisgarh, India

ABSTRACT
Within the electric power literature, the transmission expansion planning (TEP) problem refers to the problem of how to upgrade an electric power network to meet future demands. As this problem is a complex, non-linear and non-convex optimization problem, researchers have traditionally focused on approximate models of power flows. This paper deals with various planning tools for TEP based on their solution methods, their treatment of the planning horizon, and their consideration of the new competitive schemes in the power sector. Metaheuristics are by far the most popular: they define mechanisms for evolving a population of solutions through the search space so as to approach the ideal solution, with the fittest elements surviving into successive generations.

KEYWORDS: transmission expansion planning (TEP), metaheuristics, heuristics, ant colony algorithm (ACO), particle swarm optimization (PSO).

I. INTRODUCTION

The transmission system is one of the major parts of the electric power industry. It not only provides a linkage between generation and distribution, but also a non-discriminatory and reliable environment for suppliers and consumers. The purpose of a power transmission network is to transfer power from generation plants to load centers securely, efficiently, reliably and economically. Since any practical transmission system is ever expanding, TEP involves identifying where to add new circuits to meet the increased demand by transferring power from the old to the new network. In the last few years, research in the area of synthesis transmission planning models has experienced an expansion: many papers and reports about new models have been published in the technical literature, due mostly to the improvement in available computing power, new optimization algorithms, and the greater uncertainty level introduced by power sector deregulation. Several publications skillfully describe the general planning problem. Transmission system planners tend to use many methods to address the expansion problem; planners utilize automatic expansion models to determine an optimum expansion system by minimizing a mathematical objective function subject to a number of constraints [1-6]. To find an optimal solution of TEP over a planning horizon, extensive parameters are required, for instance the topology of the base year, candidate circuits, electricity demand and generation forecasts, investment constraints, etc. This consequently adds complexity to solving the TEP problem. Given the above, in-depth knowledge of problem formulation and computation
techniques for TEP is crucial; this paper therefore aims at presenting fundamental information on these issues. The paper is organized as follows. Section II deals with publications that classify the transmission planning problem (static and dynamic) and the various models used for the TEP problem. Section III reviews the techniques to solve the TEP problem. Section IV then discusses some of the features of the available tools for the development of transmission planning models. Finally, the conclusion is drawn in Section V.

II. CLASSIFICATION OF TEP PROBLEM

Based on the planning horizon, transmission expansion planning can be traditionally classified into two categories, namely static (single-stage) and dynamic (multi-stage) planning. In static planning, only a single time period is considered as the planning horizon. In contrast, dynamic planning considers the planning horizon by separating the period of study into multiple stages [1]. For static planning, the planner searches for the appropriate number of new circuits that should be added to each branch of the transmission system; in this case the planner is not interested in scheduling when the new lines should be constructed, and the total expansion investment is carried out at the beginning of the planning horizon [6]. Many research works regarding static TEP, solved using a variety of optimization techniques, are presented in [5, 8, 9, 10]. In contrast, time phases or multiple stages are considered in dynamic planning, where an optimal expansion schedule or strategy is sought for the entire planning period. Thus, multi-stage transmission expansion planning is a larger-scale and more complex problem, as it deals not only with the optimal quantity, placement and type of transmission expansion investments but also with the most suitable times to carry out such investments. Therefore, dynamic transmission expansion planning inevitably involves a great number of variables and constraints, which consequently require enormous computational effort to achieve an optimal solution, especially for large-scale real-world transmission systems. Some of the dynamic models that have been developed are presented in [6, 11, 12, 13].

III. SURVEY OF VARIOUS TECHNIQUES TO SOLVE TEP PROBLEM

Over the past few decades, many optimization techniques have been proposed to solve the transmission expansion planning problem in regulated power systems. These techniques can be generally classified into mathematical, heuristic and meta-heuristic optimization techniques. A review of these methods is given in this section.

1. Mathematical Optimization Techniques

Mathematical optimization methods search for an optimal expansion plan by using a calculation procedure that solves a mathematical formulation of the planning problem. In the problem formulation, transmission expansion planning is converted into an optimization problem with an objective function subject to a set of constraints; a compact statement of one such model is sketched below. There have been a number of applications of mathematical optimization methods to the transmission expansion planning problem, for instance linear programming [10], nonlinear programming [12], dynamic programming [13], branch and bound [9], mixed-integer programming [14] and Benders decomposition [15]. In 1970, Garver proposed a linear programming method to solve the transmission expansion planning problem [10]. This original method was applied to the long-term planning of electrical power systems and produced a feasible transmission network with near-minimum circuit miles, using as input any existing network plus a load forecast and generation schedule. The two main steps of the method, in which the planning problem was formulated as load flow estimation and new circuits were selected based on the system overloads, were presented in [10]. In 1984, an interactive method was proposed and applied to optimize transmission expansion planning by Ekwue and Cory [11]. The method was based upon a single-stage optimization procedure using sensitivity analysis and the adjoint network approach to transmit power from a new generating station to a loaded AC power system.
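For concreteness, the transportation model introduced by Garver [10] can be sketched as follows, in the notation common to the cited follow-up literature (e.g., Romero et al.); S is the branch-node incidence matrix, and n0ij and nij are the existing and added circuits in branch i-j:

\begin{align*}
\min\ v &= \sum_{(i,j)} c_{ij}\, n_{ij} \\
\text{s.t.}\quad & S f + g = d && \text{(nodal power balance)} \\
& |f_{ij}| \le (n^{0}_{ij} + n_{ij})\,\bar{f}_{ij} && \text{(circuit capacity)} \\
& 0 \le n_{ij} \le \bar{n}_{ij}, \qquad n_{ij}\ \text{integer}
\end{align*}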

Discrete dynamic optimizing (DDO) was proposed to solve the transmission planning problem by Dusonchet and El-Abiad [13]. The basic idea of this method was to combine the deterministic search procedure of dynamic programming with discrete optimizing, a probabilistic search coupled with a heuristic stopping criterion. In 2003, Alguacil et al. [14] proposed a mixed-integer linear programming approach to solve static transmission expansion planning that includes line losses; the proposed mixed-integer linear formulation offers an accurate optimal solution. Haffner et al. [9] presented a new specialized branch and bound algorithm to solve the transmission network expansion planning problem. Optimality was obtained at a cost, however: the use of a transportation model for representing the transmission network. The expansion problem then became an integer linear program (ILP), which was solved by the proposed branch and bound method. A new Benders decomposition approach was applied to solve real-world power transmission network design problems by Binato et al. [15]. This approach was characterized by the use of a mixed linear (0-1) disjunctive model, which ensures the optimality of the solution found by using additional, iteratively evaluated constraints besides the traditional Benders cuts.

2. Heuristic and Meta-heuristic Techniques

Among the soft computing components, instead of EA alone (which represents only one part of the search and optimization methods used), heuristic algorithms and even metaheuristics should be considered. The term heuristics comes from the Greek word heuriskein, whose meaning is related to the concept of finding something and is linked to Archimedes' famous and supposed exclamation. On this basis, a large number of heuristic procedures have been developed to solve specific optimization problems with great success, and the best of these have been extracted and used in other problems or in more extensive contexts. This has contributed to the scientific development of this field of research and to the extension of the application of its results. As a result, metaheuristics have emerged, a term which appeared for the first time in an article by Fred Glover in 1986. The term metaheuristics derives from the combination of the word heuristics with the prefix meta (meaning beyond or of a higher level), and although there is no formal definition for the term, the following two proposals give a clear representation of the general notion:
a) I. H. Osman and G. Laporte [18]: "An iterative generation process which guides a subordinate heuristic by combining intelligently different concepts for exploring and exploiting the search space".
b) S. Voss et al. [19]: "an iterative master process that guides and modifies the operations of subordinate heuristics to efficiently produce high quality solutions".
It is therefore clear that metaheuristics are more broad-brush than heuristics. In the sections which follow, we will focus on the concept of metaheuristics, and will start by pointing out that, in the terms we have defined, certain metaheuristics will always be better than others in terms of their performance when it comes to solving problems. There are so many and such a variety of metaheuristics available that it is practically impossible to agree on one universally accepted way of classifying them.
Nevertheless, the hierarchy on which there is the most consensus considers three (or four) foremost groups:
1) metaheuristics for evolutionary procedures, based on sets of solutions which evolve according to natural evolution principles;
2) metaheuristics for relaxation methods, problem-solving methods using adaptations of the original model which are easier to solve;
3) metaheuristics for neighbourhood searches, which explore the solution space and exploit neighbourhood structures associated with these solutions;
4) other types of intermediate metaheuristics between the ones mentioned above, or derived in some way from them, which we will not consider because of their great variability (and to avoid dispersion).
In addition to mathematical optimization methods, heuristic and meta-heuristic methods have become the current alternative for solving the transmission expansion planning problem. There have been many
applications of heuristic and meta-heuristic optimization methods to the transmission expansion planning problem, for example heuristic algorithms [5, 19], tabu search [20], simulated annealing [21], genetic algorithms [6, 22, 23, 24], artificial neural networks [25], particle swarm optimization [31] and hybrid artificial intelligence techniques [25]. These methods are discussed below.

The constructive heuristic algorithm (CHA) is the most widely used heuristic algorithm in transmission expansion planning. A constructive heuristic algorithm is an iterative process that searches for a good quality solution in a step-by-step manner. Romero et al. [3] presented and analysed heuristic algorithms for the transportation model in static and multistage transmission expansion planning. A constructive heuristic algorithm for the transportation model (TM) of Garver's work [10] was extensively analysed, and excellent results were obtained in [3].

Tabu search (TS) is an iterative improvement procedure that starts from some initial feasible solution and attempts to determine a better solution in the manner of a greatest-descent neighbourhood search algorithm [2]. The basic components of TS are the moves, the tabu list and the aspiration level (criterion).

The simulated annealing (SA) approach, based on thermodynamics, was originally inspired by the formation of crystals in solids during cooling [2]. Simulated annealing has been successfully applied to a number of engineering optimization problems, including power system optimization problems. Romero et al. [21] proposed a simulated annealing approach for solving the long-term transmission system expansion planning problem.

An expert system is a knowledge-based or rule-based system which uses knowledge and an inference procedure to solve problems. The state of the field of expert systems and knowledge engineering in transmission planning was reviewed by Galiana et al. [16].

The genetic algorithm (GA) is a global search approach based on the mechanics of natural selection and genetics. GA differs from conventional optimization techniques in that it uses the concepts of population genetics to guide the optimization search: GA searches from population to population instead of from point to point. In 1998, Gallego et al. [17] presented an extended genetic algorithm for solving the optimal transmission network expansion planning problem. Two main improvements of the GA, an initial population obtained by conventional optimization-based methods and a mutation approach inspired by the simulated annealing technique, were introduced in [17].

The ant colony search (ACS) system was initially introduced by Dorigo in 1992 [18]. The ACS technique was originally inspired by the behaviour of real ant colonies and has been applied to function and combinatorial optimization problems. Gomez et al. [19] presented an ant colony system algorithm for the planning of primary distribution circuits; the planning problem of electrical power distribution networks, stated as a mixed nonlinear integer optimization problem, was solved using the ant colony system algorithm.

Particle swarm optimization (PSO), using an analogy with the swarm behaviour of natural creatures, was started in the early 1990s. Kennedy and Eberhart developed PSO based on the analogy of bird swarms and fish schooling [20]; it achieves an efficient search through remembrance and feedback mechanisms. By imitating the behaviour of biological swarms, PSO is well suited to parallel calculation and performs well on optimization problems.
A new discrete particle swarm optimization method was applied to transmission network expansion planning (TNEP) in [21]; a minimal sketch of a binary PSO of this kind is given below. Al-Saba and El-Amin [20] proposed the application of artificial intelligence (AI) tools, such as genetic algorithms, tabu search and artificial neural networks (ANNs), together with linear and quadratic programming models, to solve the transmission expansion problem. The intelligent tool starts from a random state and proceeds to allocate the calculated cost recursively until the negotiation point is reached.
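As an illustration of how such a discrete PSO can be applied to TEP, the following self-contained Java sketch optimizes a toy candidate-line selection. The line costs, capacities and penalty-based fitness are invented for illustration only and stand in for a proper load-flow-based evaluation; the binary update rule follows the Kennedy-Eberhart sigmoid scheme.

import java.util.Random;

public class BinaryPsoTep {
    static final double[] LINE_COST = {30, 25, 40, 20, 35};   // hypothetical costs
    static final double[] LINE_CAP  = {80, 60, 100, 50, 90};  // hypothetical capacities
    static final double DEMAND = 200, PENALTY = 10;
    static final Random RND = new Random(42);

    // Investment cost plus a penalty proportional to unserved demand.
    static double cost(boolean[] build) {
        double invest = 0, capacity = 0;
        for (int k = 0; k < build.length; k++)
            if (build[k]) { invest += LINE_COST[k]; capacity += LINE_CAP[k]; }
        return invest + PENALTY * Math.max(0, DEMAND - capacity);
    }

    public static void main(String[] args) {
        int particles = 20, dims = LINE_COST.length, iters = 200;
        double w = 0.7, c1 = 1.5, c2 = 1.5;
        boolean[][] x = new boolean[particles][dims];
        double[][] v = new double[particles][dims];
        boolean[][] pBest = new boolean[particles][dims];
        double[] pBestCost = new double[particles];
        boolean[] gBest = new boolean[dims];
        double gBestCost = Double.MAX_VALUE;

        // Random initial swarm; track personal and global bests.
        for (int i = 0; i < particles; i++) {
            for (int k = 0; k < dims; k++) x[i][k] = RND.nextBoolean();
            pBest[i] = x[i].clone();
            pBestCost[i] = cost(x[i]);
            if (pBestCost[i] < gBestCost) { gBestCost = pBestCost[i]; gBest = x[i].clone(); }
        }
        for (int t = 0; t < iters; t++) {
            for (int i = 0; i < particles; i++) {
                for (int k = 0; k < dims; k++) {
                    // Velocity update toward personal and global bests; the
                    // sigmoid maps velocity to a probability of building line k.
                    v[i][k] = w * v[i][k]
                            + c1 * RND.nextDouble() * ((pBest[i][k] ? 1 : 0) - (x[i][k] ? 1 : 0))
                            + c2 * RND.nextDouble() * ((gBest[k] ? 1 : 0) - (x[i][k] ? 1 : 0));
                    x[i][k] = RND.nextDouble() < 1.0 / (1.0 + Math.exp(-v[i][k]));
                }
                double c = cost(x[i]);
                if (c < pBestCost[i]) { pBestCost[i] = c; pBest[i] = x[i].clone(); }
                if (c < gBestCost)    { gBestCost = c;   gBest = x[i].clone(); }
            }
        }
        System.out.println("best expansion cost = " + gBestCost);
    }
}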

IV. TOOLS FOR DEVELOPING PLANNING MODELS

The main options available nowadays to develop transmission planning (optimization) models are [22], [23]:

A. General Purpose Programming Languages

In this case, the planning model is developed using a general purpose programming language (like Fortran, C, etc.), and commonly the algorithm calls an optimization dynamic library (*.dll). Using this option makes sense when the execution time is critical, when the model must run very often (multiple
scenarios), when made-to-measure interfaces are needed, or when the model has to be integrated into another application. That is usually the case for planning models for real-world power systems. Some of the generic features required for the development of real-world transmission planning models are:
highly optimized code, efficient mathematics, and robustness that allow maximal speed of execution;
easy interaction with optimization packages and other external tools;
availability of comprehensive diagnostic messages.
As programmers who are equally comfortable working with several computer languages, and with no intention of depreciating the features of C or any other language for numerical calculations, the authors want to clarify why Fortran is still a good option for high-performance scientific and engineering applications. In addition to Fortran and C, there are other powerful and free languages that could also be considered: functional programming languages such as Haskell [24], concurrent programming languages such as Erlang (www.erlang.org), and constraint programming languages such as Mozart [25].

B. Languages or Environments for Numerical/Symbolic Calculations

This option includes spreadsheets (e.g., Excel [26]), environments for technical computing (e.g., MATLAB [27], Scilab [28], etc.), and environments for symbolic computation (e.g., MAPLE [29], Mathematica [30], Fermat [31], etc.). The environments for numerical or symbolic computation, for instance MATLAB, were not specially designed to solve optimization problems, but they make it easy to deal with matrices and vectors. All of these alternatives can be used for quick prototype development, since they have great graphic visualization features, but it is very hard to use them to solve very large optimization problems such as transmission planning for real-world power systems.

V. RESULT

Although metaheuristics differ in the sense that some of them are population-based (EC, ACO) and others are trajectory methods (SA, TS, ILS, VNS, GRASP, PSO), and although they are based on different philosophies, the mechanisms used to efficiently explore a search space are all based on intensification and diversification. Nevertheless, it is possible to identify "sub-tasks" in the search process where some metaheuristics perform better than others. This has to be examined more closely in the future in order to be able to produce hybrid metaheuristics that perform considerably better than their pure parents. In fact, we can find this phenomenon in many facets of life, not just in the world of algorithms: mixing and hybridizing is often better than purity.

VI. CONCLUSION AND FUTURE SCOPE

In this paper, we have presented a classified list of major publications on transmission expansion planning (synthesis models); this list is by no means complete. We have also presented and compared the most important present-day metaheuristic methods for TEP. In Section III we outlined the basic metaheuristics as they are described in the literature and proposed a conceptual comparison of the different metaheuristics based on the way they implement the two main concepts for guiding the search process: intensification and diversification. This comparison is founded on the I&D frame, where algorithmic components can be characterized by the criteria they depend upon (objective function, guiding functions and randomization) and by their effect on the search process. Transmission planning researchers have worked and set their interest mostly on static planning models; the dynamic and pseudo-dynamic planning models are still underdeveloped and have some limitations for their application to real power systems.

REFERENCES
[1]. G. Latorre, R. D. Cruz, J. M. Areiza and A. Villegas, Classification of publications and models on transmission expansion planning, IEEE Trans. Power Syst., vol. 18, no. 2, pp. 938-946, May 2003.

[2]. Y. H. Song and M. R. Irving, Optimization techniques for electrical power systems: Part 2 Heuristic optimization methods, Power Engineering Journal, vol. 15, no. 3, pp. 151-160, Jun. 2001.
[3]. K. Y. Lee and M. A. El-Sharkawi [Ed.], Modern heuristic optimization techniques: Theory and applications to power systems, Wiley-IEEE Press, 2008.
[4]. L. L. Lai, Intelligent System Applications in Power Engineering: Evolutionary Programming and Neural Networks, Wiley, 1998.
[5]. R. Romero, C. Rocha, J. R. S. Mantovani and I. G. Sanchez, Constructive heuristic algorithm for the DC model in network transmission expansion planning, IEE Proc. Gener. Transm. Distrib., vol. 152, no. 2, pp. 277-282, Mar. 2005.
[6]. A. H. Escobar, R. A. Gallego and R. Romero, Multistage and coordinated planning of the expansion of transmission systems, IEEE Trans. Power Syst., vol. 19, no. 2, pp. 735-744, May 2004.
[7]. R. Romero, A. Monticelli, A. Garcia and S. Haffner, Test systems and mathematical models for transmission network expansion planning, IEE Proc. Gener. Transm. Distrib., vol. 149, no. 1, pp. 27-36, Jan. 2002.
[8]. A. O. Ekwue and B. J. Cory, Transmission system expansion planning by interactive methods, IEEE Trans. Power App. Syst., vol. PAS-103, no. 7, pp. 1583-1591, Jul. 1984.
[9]. S. Haffner, A. Monticelli, A. Garcia, J. Mantovani and R. Romero, Branch and bound algorithm for transmission system expansion planning using transportation model, IEE Proc. Gener. Transm. Distrib., vol. 147, no. 3, pp. 149-156, May 2000.
[10]. L. L. Garver, Transmission network estimation using linear programming, IEEE Trans. Power App. Syst., vol. PAS-89, no. 7, pp. 1688-1697, Sep./Oct. 1970.
[11]. R. Romero, C. Rocha, M. Mantovani and J. R. S. Mantovani, Analysis of heuristic algorithms for the transportation model in static and multistage planning in network expansion systems, IEE Proc. Gener. Transm. Distrib., vol. 150, no. 5, pp. 521-526, Sep. 2003.
[12]. H. K. Youssef and R. Hackam, New transmission planning model, IEEE Trans. Power Syst., vol. 4, pp. 9-18, Feb. 1989.
[13]. Y. P. Dusonchet and A. H. El-Abiad, Transmission planning using discrete dynamic optimization, IEEE Trans. Power App. Syst., vol. PAS-92, pp. 1358-1371, Jul. 1973.
[14]. L. Bahiense, G. C. Oliveira, M. Pereira and S. Granville, A mixed integer disjunctive model for transmission network expansion, IEEE Trans. Power Syst., vol. 16, pp. 560-565, Aug. 2001.
[15]. S. Binato, M. V. F. Pereira and S. Granville, A new Benders decomposition approach to solve power transmission network design problems, IEEE Trans. Power Syst., vol. 16, no. 2, pp. 235-240, May 2001.
[16]. L. Bahiense, G. C. Oliveira, M. Pereira and S. Granville, A mixed integer disjunctive model for transmission network expansion, IEEE Trans. Power Syst., vol. 16, pp. 560-565, Aug. 2001.
[17]. R. A. Gallego, A. Monticelli and R. Romero, Transmission system expansion planning by an extended genetic algorithm, IEE Proc. Gener. Transm. Distrib., vol. 145, no. 3, pp. 329-335, May 1998.
[18]. M. Dorigo, Optimization, Learning and Natural Algorithms, PhD thesis, Politecnico di Milano, Italy, 1992.
[19]. J. F. Gomez, H. M. Khodr, P. M. De Oliveira, L. Ocque, J. M. Yusta, R. Villasana and A. J. Urdaneta, Ant colony system algorithm for the planning of primary distribution circuits, IEEE Trans. Power Systems, vol. 19, no. 2, pp. 996-1004, May 2004.
[20]. J. Kennedy and R. Eberhart, Particle swarm optimization, in Proc. IEEE International Conference on Neural Networks (ICNN 1995), Perth, Australia, vol. 4, pp. 1942-1948, 27 Nov. - 1 Dec. 1995.
[21]. Y. X. Jin, H. Z. Cheng, J. Y. Yan and L. Zhang, New discrete method for particle swarm optimization and its application in transmission network expansion planning, Electric Power Systems Research, vol. 77, pp. 227-233, 2007.
[22]. R. Sharda and G. Rampal, Algebraic modeling languages on PCs, OR/MS Today, vol. 22, pp. 58-63, June 1995.
[23]. A. Ramos, Languages for model development (in Spanish), in CREG Course on Energy Management Models, Buenos Aires, Argentina, 2000.
[24]. www.haskell.org
[25]. www.mozart-oz.org
[26]. www.microsoft.com
[27]. http://www.mathworks.com
[28]. http://www-rocq.inria.fr/scilab
[29]. http://www.maplesoft.com
[30]. http://www.wolfram.com
[31]. http://www.bway.net/~lewis/


ACKNOWLEDGEMENT
The authors are very thankful to all the informants, doctors and the people who have been directly or indirectly involved in writing this paper.

AUTHORS
Bharti Dewani received her B.E. (Electrical) degree from NIT, Raipur, India in 2007 and her M.E. (Power System Engg.) from SSCET, Bhilai in 2010. She has been working as a Senior Lecturer in the department of Electrical & Electronics Engineering at DIMAT, Raipur since 2007. She is currently pursuing a Ph.D. from Dr. C.V. Raman University. Her fields of interest are power system restructuring and power system optimization.

Manoj B. Daigavane obtained the B.E. degree in Power Electronics Engineering from Nagpur University, India in 1988. He received the M.S. degree in Electronics and Control Engineering from Birla Institute of Technology and Science, Pilani (Raj.), India in 1994. He also obtained the M.E. degree in Power Electronics Engineering from Rajeev Gandhi University of Technology, Bhopal (M.P.), India in 2001. He received the Ph.D. degree in Electrical Engineering from RTM Nagpur University, India in 2009. Presently, he is Principal of S. D. College of Engineering, Wardha, Maharashtra (India). His main areas of interest are resonant converters, power quality issues, DSP applications and power electronics for motor drives. He is a Member of the Institution of Engineers (India) and a Life Member of the Indian Society for Technical Education.

A. S. Zadgaonkar, Ph.D. (Instru.), Ph.D. (Materials), D.Litt. (Speech Recog.), is currently Vice-Chancellor of Dr. C.V. Raman University, Bilaspur, Chhattisgarh, India. He has forty years of teaching and administrative experience. He has published more than 470 papers in international and national journals and conferences. He has guided more than 10 Ph.D. candidates and written 3 books. He has received more than 13 awards and 10 research grants. He is a member of 15 societies.


AN EFFICIENT VARIABLE SPEED STAND ALONE WIND ENERGY CONVERSION SYSTEM & EFFICIENT CONTROL TECHNIQUES FOR VARIABLE WIND APPLICATIONS
R. Jagatheesan, K. Manikandan
U.G. Scholars, Park College of Engineering & Technology, Coimbatore, India

ABSTRACT
This paper provides an efficient control technique for the operation of a direct-drive synchronous generator-based stand-alone variable-speed wind energy conversion system. The control strategy for the generator-side converter with maximum power extraction is presented. This control provides a constant output voltage and frequency that can be delivered to variable loads. The main attention is on DC-link voltage control, which deals with chopper control for various load conditions; in addition, a battery storage system with a converter and inverter is used to deliver continuous power during wind fluctuations. The PI controller in the switch-mode rectifier can be replaced with a vector-control technique to improve the output voltage level. The simulation results show that this control strategy regulates voltage and frequency well under suddenly varying loads. A dynamic representation of the DC bus and a small-signal analysis of the system are presented. The dynamic controller shows very good performance.

KEYWORDS: PMSM, boost converter, inverter, driver circuit, dynamic controller and PIC/DSP.

I. INTRODUCTION

In this paper, efficient control techniques for a variable-speed wind energy conversion system that gives a continuous supply to the load are designed and implemented. Variable-speed wind turbines have many advantages over fixed-speed generation, such as increased energy capture, operation at the maximum power point, improved efficiency, and better power quality [1]. The reliability of the variable-speed wind turbine can be improved significantly by using a direct-drive permanent magnet synchronous generator (PMSG). The PMSG has received much attention in wind-energy applications because of its property of self-excitation, which allows operation at a high power factor and high efficiency [2]. Optimum power/torque tracking is a popular control strategy, as it helps to achieve optimum wind energy utilization [3], [4]. The switch-mode rectifier has also been investigated for small-scale variable-speed wind turbines [5], [6]. It is very difficult to reach the maximum voltage level using a PI controller alone; in order to obtain the maximum output, PWM control can be used. For a stand-alone system, the output voltage of the load-side converter has to be controlled in terms of amplitude and frequency. Previous publications related to PMSG-based variable-speed wind turbines are mostly concentrated on grid-connected systems [6]-[8]; much attention has not been paid to stand-alone systems. Many countries are affluent in renewable energy resources; however, these resources are often located in remote areas where the power grid is not available. Local small-scale stand-alone distributed

generation systems can utilize these renewable energy resources when grid connection is not feasible. In this paper, a control strategy is developed to control the load voltage in stand-alone mode. As there is no grid in a stand-alone system, the output voltage has to be controlled in terms of amplitude and frequency. The load-side pulse width modulation (PWM) inverter uses a relatively complex vector-control scheme to control the amplitude and frequency of the inverter output voltage. The stand-alone control is featured with an output voltage and frequency controller capable of handling variable loads.

II. BLOCK DIAGRAM

Fig 1. Block diagram of the project

The generator converts the variable-speed mechanical power produced by the wind turbine into electrical power. The power produced by the generator is AC power of variable frequency and voltage. This AC power is converted into DC power with the help of an uncontrolled rectifier; the resulting DC power has a variable voltage. This variable voltage is boosted to the rated level with the help of a boost converter. The boosted DC power is converted into fixed-frequency AC power and delivered to the load. Between the load and the inverter, a storage system with a converter and inverter is used to store energy. This storage system stores energy when the load is below the maximum level, and it delivers power to the load when the boost converter is unable to boost the voltage. A microcontroller is used to control the boost converter and the inverter to obtain fixed frequency and voltage.

III. MATHEMATICAL MODELLING OF WECS

A. Model of Wind Turbine with PMSG

Wind turbines cannot fully capture wind energy. The components of the wind turbine have been modelled by the following equations [8-10]. The output aerodynamic power of the wind turbine is expressed as:

$P_m = \frac{1}{2}\rho A C_p(\lambda, \beta) v^3$

where $\rho$ is the air density (typically 1.225 kg/m³), A is the area swept by the rotor blades (in m²), $C_p$ is the coefficient of power conversion and v is the wind speed (in m/s). The tip-speed ratio is defined as:

$\lambda = \frac{\omega_m R}{v}$
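As a quick numerical check of these two relations (a minimal sketch using the typical air density quoted above; the rotor radius, wind speed and Cp value are arbitrary illustrative choices, not the authors' turbine data):

```python
import math

RHO = 1.225  # air density (kg/m^3), typical value quoted in the text

def aerodynamic_power(cp: float, radius_m: float, wind_speed: float) -> float:
    """P = 0.5 * rho * A * Cp * v^3, with A = pi * R^2 the swept area."""
    area = math.pi * radius_m ** 2
    return 0.5 * RHO * area * cp * wind_speed ** 3

def tip_speed_ratio(omega_m: float, radius_m: float, wind_speed: float) -> float:
    """lambda = omega_m * R / v, with the rotor speed in rad/s."""
    return omega_m * radius_m / wind_speed

# Illustrative values: 2 m blades, 9 m/s wind, Cp = 0.36.
print(aerodynamic_power(cp=0.36, radius_m=2.0, wind_speed=9.0))     # ~2.0 kW
print(tip_speed_ratio(omega_m=43.2, radius_m=2.0, wind_speed=9.0))  # = 9.6
```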


Fig 2. Characteristics of $C_p$ vs $\lambda$ for various values of the pitch angle $\beta$

where $\omega_m$ and R are the rotor angular velocity (in rad/sec) and the rotor radius (in m), respectively. The wind-turbine mechanical torque output $T_m$ is given as:

$T_m = \frac{P_m}{\omega_m}$

The power coefficient $C_p$ is a nonlinear function of the tip-speed ratio $\lambda$ and the blade pitch angle $\beta$ (in degrees). If the swept area of the blades and the air density are constant, the value of $C_p$ is a function of $\lambda$ and it is maximum at the particular value $\lambda_{opt}$. Hence, to fully utilize the wind energy, $\lambda$ should be maintained at $\lambda_{opt}$, which is determined from the blade design. Then:

$P_{m,max} = \frac{1}{2}\rho A C_{p,max} \left(\frac{R}{\lambda_{opt}}\right)^3 \omega_m^3$

A generic equation is used to model the power-coefficient characteristics $C_p(\lambda, \beta)$ of the modelled turbine, based on the characteristics described in [2], [7]-[9] and [11].

The characteristic function $C_p(\lambda, \beta)$ vs $\lambda$, for various values of the pitch angle $\beta$, is illustrated in Fig. 2. The maximum value of $C_p(\lambda, \beta)$, that is $C_{p,max} = 0.36$, is achieved for $\beta = 2°$ and $\lambda = 9.6$. This particular value $\lambda_{opt}$ results in the point of optimal efficiency where the maximum power is captured from the wind by the wind turbine. For each wind speed, there exists a specific point in the wind-generator power characteristic, the maximum power point, where the output power is maximized. Thus, the control of the WECS load results in variable-speed operation of the turbine rotor,

Fig 3. Wind-generator power curves at various wind speeds

so that the maximum power is extracted continuously from the wind (MPPT control), as illustrated in Fig. 3.

B. PMSG Model
The dynamic model of the PMSG can be described in the d-q reference frame as follows [1], [10]:

$v_d = R_g i_d + L_d \frac{di_d}{dt} - \omega_e L_q i_q$

$v_q = R_g i_q + L_q \frac{di_q}{dt} + \omega_e L_d i_d + \omega_e \psi_f$


where $R_g$ is the stator resistance, $L_q$ and $L_d$ are the inductances of the generator on the q and d axes, $\psi_f$ is the permanent-magnet flux and $\omega_e$ is the electrical rotating speed of the generator, defined by

$\omega_e = P_n \omega_m$

where $P_n$ is the number of pole pairs of the generator and $\omega_m$ is the mechanical angular speed. In order to complete the mathematical model of the PMSG, the expression for the electromagnetic torque can be described as [11]:

$T_e = \frac{3}{2} P_n \left[ (L_d - L_q)\, i_d i_q + \psi_f\, i_q \right]$

If $i_d = 0$, the electromagnetic torque is expressed as:

$T_e = \frac{3}{2} P_n \psi_f\, i_q$

C. Wind-Turbine Characteristics


The amount of power captured by the wind turbine (the power delivered by the rotor) is given by

$P_t = \frac{1}{2}\rho A C_p(\lambda, \beta) v^3$

where $\rho$ is the air density (kilograms per cubic meter), v is the wind speed in meters per second, A is the blades' swept area, and $C_p$ is the turbine-rotor power coefficient, which is a function of the tip-speed ratio ($\lambda$) and pitch angle ($\beta$); $\omega_m$ is the rotational speed of the turbine rotor in mechanical radians per second, and R is the radius of the turbine. If the wind speed varies, the rotor speed should be adjusted to follow the change. The target optimum torque can be given by

$T_{opt} = K_{opt}\, \omega_m^2, \qquad K_{opt} = \frac{1}{2}\rho \pi R^5 \frac{C_{p,max}}{\lambda_{opt}^3}$

The mechanical rotor power generated by the turbine, as a function of the rotor speed for different wind speeds, is shown in Fig. 4.

Fig 4. Mechanical power generated by the turbine as a function of the rotor speed for different wind speeds

The optimum power is also shown in this figure. The optimum power curve (Popt) shows how maximum energy can be captured from the fluctuating wind. The function of the controller is to keep the turbine operating on this curve as the wind velocity varies. It is observed from this figure that there is always a matching rotor speed which produces optimum power for any wind speed. If the controller can properly follow the optimum curve, the wind turbine will produce maximum power at any speed within the allowable range.
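A minimal sketch of this optimum-curve-following idea (all numeric constants are illustrative assumptions, not the authors' turbine data): commanding the optimal torque for the measured rotor speed keeps the operating point on the Popt curve as the wind varies.

```python
import math

# Illustrative turbine constants (assumed for this sketch).
RHO, R = 1.225, 2.0              # air density (kg/m^3), rotor radius (m)
CP_MAX, LAMBDA_OPT = 0.36, 9.6

# K_opt = 0.5 * rho * pi * R^5 * Cp_max / lambda_opt^3
K_OPT = 0.5 * RHO * math.pi * R ** 5 * CP_MAX / LAMBDA_OPT ** 3

def optimal_torque(omega_m: float) -> float:
    """Target torque on the optimum power curve: T_opt = K_opt * omega^2."""
    return K_OPT * omega_m ** 2

# At the matched operating point for v = 9 m/s (lambda = lambda_opt),
# omega = lambda_opt * v / R = 43.2 rad/s and T_opt * omega ~ 2 kW,
# i.e. the commanded power equals the optimum power for that wind speed.
omega = LAMBDA_OPT * 9.0 / R
print(optimal_torque(omega) * omega)
```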


IV. IC PIC16F877A MICROCONTROLLER

PIC is a family of Harvard-architecture microcontrollers made by Microchip Technology. It is used for control purposes: it converts the given signal into a digital signal and sends the appropriate signal to the receiver side. This PIC microcontroller is used to control all parts of the circuit and to fire all IGBTs and the relay. The main advantage of the CMOS and RISC combination is low power consumption, resulting in a very small chip size with a small pin count. CMOS also has better noise immunity than other fabrication techniques.

A. Features
1. Pin-out compatible to the PIC16C73B/74B/76/77
2. Interrupt capability (up to 14 sources)
3. Eight-level deep hardware stack
4. Direct, indirect and relative addressing modes
5. Power-on Reset (POR)
6. Power-up Timer (PWRT) and Oscillator Start-up Timer (OST)
7. Watchdog Timer with its own on-chip RC oscillator

Fig 5. Architecture of IC PIC16F877A

V. BOOST CONVERTER

A. Boost Converter Control Strategy

Fig 6. The power circuit is a DC-DC boost converter; in the command circuit, the analogue controller is replaced with a fuzzy one. The output of the fuzzy controller is vc.

In the average current control method, input voltage sensing is required to obtain a sinusoidal reference, an analogue multiplier combines this reference with the output information, and an error amplifier in the current loop extracts the difference between the input current and the reference to generate the control signal for modulating the input current. There is a great deal of sophisticated research on boost converter dynamics. Most PFC circuits are based on the boost converter because of its

input inductor, which reduces the total harmonic distortion and avoids transient impulses from the power net; the semiconductor device voltage stays below the output voltage; the zero potential of the switch Q's source side makes it easy to drive Q; and its structure is simple. Therefore, unity power factor and high efficiency can be obtained with a DC-DC boost converter. In this design, one inductor and an IGBT are used to boost the voltage: when the DC voltage is less than the rated level, the converter boosts it up. The IGBT charges and discharges the inductor and is controlled by the microcontroller.
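A rough sketch of how the duty cycle might be trimmed to hold the boosted voltage at the rated level (a quasi-static, idealized model with an assumed integral gain; this is not the paper's actual firmware): in continuous conduction the ideal boost relation Vout = Vin / (1 - D) links the duty cycle D to the voltage gain, and slowly integrating the voltage error drives D to the required value.

```python
def boost_output_voltage(v_in: float, duty: float) -> float:
    """Ideal CCM boost converter: Vout = Vin / (1 - D), for 0 <= D < 1."""
    return v_in / (1.0 - duty)

def regulate(v_in=200.0, v_ref=400.0, ki=0.01, dt=1e-4, steps=20000):
    """Integral controller trimming the duty cycle toward the reference."""
    duty = 0.3
    for _ in range(steps):
        v_out = boost_output_voltage(v_in, duty)
        duty += ki * (v_ref - v_out) * dt        # integrate the voltage error
        duty = min(max(duty, 0.0), 0.9)          # clamp the duty cycle
    return duty, boost_output_voltage(v_in, duty)

# Example: hold 400 V from a 200 V input; D converges toward 0.5.
duty, v_out = regulate()
print(round(duty, 3), round(v_out, 1))
```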

VI. CONTROL OF LOAD-SIDE INVERTER

The vector-control scheme is used to perform the control of the grid-side converter: it controls the DC-link voltage and the active and reactive power delivered to the grid, performs grid synchronization, and ensures high quality of the injected power [2]. The objective of the supply-side converter is to regulate the voltage and frequency. The control schemes have inner loops in which different reference frames are used to perform the current control. In the first case, the currents are controlled in the synchronously rotating reference frame using PI controllers. The DC-voltage PI controller maintains the DC voltage at the reference value. PI controllers are used to regulate the output voltage and currents in the inner control loops, with the DC-voltage controller in the outer loop. This classical control structure is also known as dq control. It transforms the grid voltages and currents from the abc to the dq reference frame; in this way the variables become DC values, which can be controlled more easily. This structure uses PI controllers since they have good performance for controlling DC variables.

Fig 7. Vector-control technique
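A compact sketch of the abc-to-dq transformation this dq-control structure relies on (the amplitude-invariant Park transform is assumed here; the PI loops themselves are omitted):

```python
import math

def abc_to_dq(a: float, b: float, c: float, theta: float):
    """Amplitude-invariant Park transform of a three-phase quantity."""
    k = 2.0 / 3.0
    d = k * (a * math.cos(theta)
             + b * math.cos(theta - 2 * math.pi / 3)
             + c * math.cos(theta + 2 * math.pi / 3))
    q = -k * (a * math.sin(theta)
              + b * math.sin(theta - 2 * math.pi / 3)
              + c * math.sin(theta + 2 * math.pi / 3))
    return d, q

# A balanced 50 Hz set aligned with theta maps to constant d and zero q,
# i.e. the AC quantities become DC values for the PI controllers.
t, w = 0.0123, 2 * math.pi * 50
va = 100 * math.cos(w * t)
vb = 100 * math.cos(w * t - 2 * math.pi / 3)
vc = 100 * math.cos(w * t + 2 * math.pi / 3)
print(abc_to_dq(va, vb, vc, theta=w * t))   # ~(100.0, 0.0)
```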

VII. CONTROL OF SWITCH-MODE RECTIFIER WITH MAXIMUM POWER EXTRACTION

The structure of the proposed control strategy of the switch-mode rectifier is shown in Fig. 8. The control objective is to control the duty cycle of the switch S to extract maximum power from the variable-speed wind turbine and transfer the power to the load. The control algorithm includes the following steps.


Fig 8. Control strategy of the switch-mode rectifier

Fig 9. Wind and generator speed analysis

1) Measure the generator speed $\omega_g$.
2) Determine the reference torque (Fig. 8) using the following equation:

$T_{ref} = K_{opt}\, \omega_g^2$

3) This torque reference is then used to calculate the DC current reference by measuring the rectifier output voltage $V_d$, as given by

$I_{d,ref} = \frac{T_{ref}\, \omega_g}{V_d}$

4) The error between the reference DC current and the measured DC current is used to vary the duty cycle of the switch, regulating the output of the switch-mode rectifier and the generator torque through a proportional-integral (PI) controller. The generator torque is controlled on the optimum torque curve, as shown in Fig. 9. Finally, the generator reaches the point C where the accelerating torque is zero. A similar situation occurs when the wind velocity decreases. In the proposed method, the wind speed is not required to be monitored; therefore, it is a simple output-maximization control method without a wind-speed sensor (anemometer).
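Read as pseudocode, the four steps amount to a small per-sample control routine; the following sketch is a hypothetical rendering (the class name, K_opt value and PI gains are assumptions, not the authors' implementation):

```python
class SwitchModeRectifierMPPT:
    """Steps 1-4 above, executed once per control period."""

    def __init__(self, k_opt: float, kp: float, ki: float, dt: float):
        self.k_opt, self.kp, self.ki, self.dt = k_opt, kp, ki, dt
        self.integral = 0.0

    def step(self, omega_g: float, v_d: float, i_d_meas: float) -> float:
        t_ref = self.k_opt * omega_g ** 2           # step 2: optimum torque
        i_d_ref = t_ref * omega_g / max(v_d, 1e-6)  # step 3: DC current ref
        error = i_d_ref - i_d_meas                  # step 4: PI on current error
        self.integral += error * self.dt
        duty = self.kp * error + self.ki * self.integral
        return min(max(duty, 0.0), 1.0)             # clamped duty cycle

# One illustrative sample: omega_g = 40 rad/s, Vd = 300 V, Id = 2 A.
ctrl = SwitchModeRectifierMPPT(k_opt=0.025, kp=0.05, ki=2.0, dt=1e-4)
print(ctrl.step(omega_g=40.0, v_d=300.0, i_d_meas=2.0))
```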

VIII. RESULTS AND DISCUSSION

The model of the PMSG-based variable-speed wind-turbine system of Fig. 1 is built using the Matlab/Simpower dynamic system simulation software. The simulation model is developed based on a Kollmorgen 1-kW domestic permanent-magnet synchronous machine. It is seen that the controller can regulate the load voltage and frequency quite well at constant load and under varying load conditions.

A. Overall Circuit Diagram & Results Analysis Using PSIM Software

Fig 10. Simulation circuit for the PMSG-based stand-alone variable-speed wind turbine using PSIM

B. Output Waveforms

Fig 11. Result of the VSI motor drive system

Fig 12. Result of the generator output voltage

Fig 13. Inverter-side output voltage

Fig 14. Measurement of voltage & current

C. Screenshot

D. Future Scope
The PI controller in the switch-mode power supply can be replaced with the perturbation and observation (P&O) method to improve the output voltage level. At present, the inverter is controlled by the voltage and frequency method; this can be enhanced in the future by other inverter control strategies, based on base current and DC-link voltage. The harmonics introduced when connecting to the grid can be eliminated by filters.
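For reference, a minimal sketch of the perturb-and-observe idea mentioned above (the step size, iteration count and toy power curve are assumptions): the controller perturbs its command, observes the change in power, and keeps stepping in whichever direction increases it.

```python
def perturb_and_observe(measure_power, step=0.01, iters=200, u0=0.5):
    """Generic P&O hill climbing on a scalar command u (e.g., a duty cycle)."""
    u, direction = u0, +1
    p_prev = measure_power(u)
    for _ in range(iters):
        u = min(max(u + direction * step, 0.0), 1.0)   # perturb
        p = measure_power(u)                           # observe
        if p < p_prev:
            direction = -direction                     # wrong way: reverse
        p_prev = p
    return u

# Toy power curve with a maximum at u = 0.62 (assumed shape);
# the search settles into a small oscillation around that point.
power = lambda u: 1.0 - (u - 0.62) ** 2
print(round(perturb_and_observe(power), 2))
```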


IX. CONCLUSION

A control strategy for a direct-drive stand-alone variable-speed wind turbine with a PMSG has been presented in this paper. A simple control strategy for the generator-side converter to extract maximum power is discussed and implemented using Simpower dynamic-system simulation software. The controller is capable of maximizing the output of the variable-speed wind turbine under fluctuating wind. The load-side PWM inverter is controlled using a vector-control scheme to maintain the amplitude and frequency of the inverter output voltage. It is seen that the controller can maintain the load voltage and frequency quite well at constant load and under varying load conditions. The generating system with the proposed control strategy is suitable for a small-scale stand-alone variable-speed wind-turbine installation for remote-area power supply. The simulation results prove that the output voltage and frequency are regulated under sudden load variations and typical wind movement, and that the controller works very well, showing very good dynamic and steady-state performance.

REFERENCES
[1] S. Müller, M. Deicke, and R. W. De Doncker, Doubly fed induction generator systems for wind turbines, IEEE Ind. Appl. Mag., vol. 8, no. 3, pp. 26-33, May 2002.
[2] T. F. Chan and L. L. Lai, Permanent-magnet machines for distributed generation: A review, in Proc. IEEE Power Eng. Annu. Meeting, 2007.
[3] M. De Broe, S. Drouilhet, and V. Gevorgian, A peak power tracker for small wind turbines in battery charging applications, IEEE Trans. Energy Convers., vol. 14, no. 4, Dec. 2005.
[4] M. Chinchilla, S. Arnaltes, and J. C. Burgos, Control of permanent magnet generators applied to variable speed wind-energy systems connected to the grid, IEEE Trans. Energy Convers., vol. 21, no. 1, pp. 130-135, Mar. 2006.
[5] W. Lixin, Fuzzy System and Fuzzy Control, Beijing: Tsinghua University, 2003.
[6] Y. Liyong, L. Yinghong, C. Yaai, and L. Zhengxi, A Novel Fuzzy Logic Controller for Indirect Vector Control Induction Motor Drive, Proceedings of the 7th World Congress on Intelligent Control and Automation, Chongqing, China, pp. 24-28, June 2008.
[7] Gautam Poddar, Aby Joseph, and A. K. Unnikrishnan, Sensorless Variable-Speed Wind Power Generator With Unity-Power-Factor Operation, IEEE Trans. Ind. Electron., vol. 50, pp. 1007-1015, Oct. 2003.
[8] T. Tafticht, K. Agbossou, A. Cheriti, and M. L. Doumbia, Output Power Maximization of a Permanent Magnet Synchronous Generator Based Stand-alone Wind Turbine, IEEE Industrial Electronics, vol. 3, pp. 2412-2416, July 2006.
[9] M. G. Molina and P. E. Mercado, A New Control Strategy of Variable Speed Wind Turbine Generator for Three-Phase Grid-Connected Applications, Transmission and Distribution Conference and Exposition: Latin America, 2008 IEEE/PES, pp. 1-8, Aug. 2008.
[10] J. S. Thongam, P. Bouchard, H. Ezzaidi, and M. Ouhrouche, Wind speed sensorless maximum power point tracking control of variable speed wind energy conversion systems, Electric Machines and Drives Conference, 2009 (IEMDC '09), IEEE International, pp. 1832-1837, May 2009.
[11] K. Raiambal and C. Chellamuthu, Modeling and Simulation of Grid Connected Wind Electric Generating System, TENCON '02, Proceedings of the 2002 IEEE Region 10 Conference on Computers, Communications, Control and Power Engineering, vol. 3, pp. 1847-1852, October 2002.
[12] Alejandro Rolan, Alvaro Luna, Gerardo Vazquez, Daniel Aguilar and Gustavo Azevedo, Modeling of a Variable Speed Wind Turbine with a Permanent Magnet Synchronous Generator, IEEE Industrial Electronics, 2009 (ISIE 2009), IEEE International Symposium on, pp. 734-739, July 2009.
[13] Ming Yin, Gengyin Li, Ming Zhou and Chengyong Zhao, Modelling of the Wind Turbine with a Permanent Magnet Synchronous Generator for Integration, Power Engineering Society General Meeting, 2007, IEEE, pp. 1-6, June 2007.

AUTHORS BIOGRAPHY
R. JAGATHEESAN received his B.E. degree in Electrical and Electronics Engineering from Park College of Engineering & Technology, Coimbatore, India in 2011, and received his Diploma in Electrical and Electronics Engineering from Dhanalakshmi Srinivasan Polytechnic College, Perambalur, India in 2008. He is now pursuing an M.E. degree in Power Systems Engineering at Jayaram College of Engineering & Technology, Trichy, India (2011 to 2013). His areas of interest are restructured power systems, power quality, renewable

energy systems, FACTS devices and their control, and power electronics, drives and control.

K. MANIKANDAN received his B.E. degree in Electrical and Electronics Engineering from Park College of Engineering & Technology, Coimbatore, India in 2011, and received his Diploma in Electrical and Electronics Engineering from K.S. Rangasamy Polytechnic College, Erode, India in 2007. He is now working as a Lecturer at Sree Vaikundar Polytechnic College, Nagercoil, India. His areas of interest are power electronics and drives, power quality, electrical machines, drives and control, and renewable energy systems.


FUZZY LIKE PID CONTROLLER TUNING BY MULTIOBJECTIVE GENETIC ALGORITHM FOR LOAD FREQUENCY CONTROL IN NONLINEAR ELECTRIC POWER SYSTEMS
M. A. Tammam1, M. A. S. Aboelela2, M. A. Moustafa2, A. E. A. Seif2 1 Invensys Process Systems, Cairo, Egypt 2 Department of Electrical Power and Machines, Faculty of Engineering, Cairo University, Egypt

ABSTRACT
This paper studies load frequency control in single-area and two-area power systems with a fuzzy like PID controller. In this study, a multi-objective genetic algorithm is used to determine the parameters of the fuzzy like PID controller according to the system dynamics. The proposed controller has been compared with conventional PID controllers tuned by the Ziegler-Nichols method and by the Particle Swarm Optimization technique. The overshoots and settling times with the proposed Genetic-PID controller are superior to those of the conventional PID controllers. The effectiveness of the proposed schemes is confirmed via an extensive study using single-area and two-area load frequency control examples through the application of MATLAB-Simulink software.

KEYWORDS: Load Frequency Control, Electric Power System, Fuzzy Logic, Multi-Objective Genetic Algorithm.

I. INTRODUCTION

Load Frequency Control (LFC), as a major function of Automatic Generation Control (AGC), is one of the important control problems in electric power system design and operation. It is becoming more significant today because of the increasing size, changing structure, emerging new uncertainties, environmental constraints and complexity of power systems. A large frequency deviation can damage equipment, corrupt load performance, cause overloading of the transmission lines and interfere with system protection schemes, ultimately leading to an unstable condition for the electric power system. Maintaining the frequency and the power interchanges with neighboring control areas at the scheduled values are the two main primary objectives of power system LFC [1]. Many control strategies for load frequency control in electric power systems have been proposed by researchers over the past decades. This extensive research is due to the fact that LFC constitutes an important function of power system operation, where the main objective is to regulate the output power of each generator at prescribed levels while keeping the frequency fluctuations within prespecified limits. A unified tuning of a PID load frequency controller for power systems via internal model control has been proposed [2]; the tuning method is based on the two-degree-of-freedom (TDF) internal model control (IMC) design method and a PID approximation procedure. A new discrete-time sliding mode controller for load-frequency control in the control areas of a power system has been presented [3]. In that paper, full-state feedback is applied for LFC not only in control areas with thermal power plants but also in control areas with hydro power plants, in spite of their non-minimum-phase behaviors. To enable full-state feedback, a state estimation method based on fast

sampling of measured output variables has been applied. The applications of artificial neural networks, genetic algorithms and optimal control to LFC have been reported in [4-7]. An adaptive decentralized load frequency control of multi-area power systems has been presented in [7]. The application of robust control and adaptive methods for the load frequency control problem has been presented in [8-10]. Furthermore, the application of some evolutionary techniques on LFC has been reported for single-area and multi-area power systems in the literature [11-18]. As stated in some literature [19], some control strategies have been suggested based on conventional linear control theory. These controllers may be inappropriate in some operating conditions, due to the complexity of electric power systems, such as nonlinear load characteristics and variable operating points. Nowadays, LFC systems are faced with new uncertainties in the electricity market. To meet these uncertainties and to support the control process, an open communication infrastructure is important. In conventional LFC schemes, dedicated communication channels are used to transmit the measurements to the control centre and then to the generator unit. The communication delays are considered significant uncertainties in LFC, due to the complexity of the power system, and can cause system instability; they also degrade system performance. Thus, the analysis of the LFC model in the presence of time delays is most important. Nowadays, many researchers concentrate on LFC modeling/synthesis in the presence of time delays [20-24]; they mainly focus on the network delay models and the communication network requirements. The incorporation of power system nonlinearities in LFC strategies has been described by some researchers [25]. In this study, a multi-objective genetic algorithm is used to determine the parameters of the fuzzy like PID controller according to the system dynamics. By adjusting the maximum and minimum values of the PID gains and the output gain, the outputs of the system (voltage, frequency) can be improved. In this simulation study, single-area and two-area nonlinear electric power systems are chosen, and load frequency control of these systems is performed by a genetic-based fuzzy like PID controller. This paper is organized as follows: Section II gives an overview of the genetic algorithm (GA). Section III provides the multi-objective optimization technique using the GA. Section IV introduces the fuzzy-like PID controller structure. Section V presents the nonlinear load frequency modeling technique. Simulation results are given in Section VI. The main conclusions are introduced in Section VII. In Section VIII, possible future work is suggested. The references are listed at the end of the paper.

II. OVERVIEW ON GENETIC ALGORITHM

The Genetic Algorithm (GA) is an optimization and search technique based on the principles of genetics and Darwinian selection. The GA allows a population composed of many individuals to evolve, under specified selection rules, to a state that maximizes the fitness (i.e., minimizes the cost function); many versions of evolutionary programming have been tried with varying degrees of success. Some of the advantages of a GA include [26-27]:
- optimization with continuous or discrete variables;
- derivative information is not required;
- simultaneous searching from a wide sampling of the cost surface;
- optimization of variables with extremely complex cost surfaces (GAs can jump out of a local minimum);
- the variables can be encoded so that the optimization is done with the encoded variables;
- the ability to work with numerically generated data, experimental data, or analytical functions.
These advantages are significant and produce surprising results where traditional optimization approaches fail. There are many variations of genetic algorithms, but the basic form is the simple genetic algorithm (SGA). This algorithm works with a population of candidate solutions represented as strings. The initial population consists of randomly generated individuals. The fitness of each individual in the current population is computed, and the population is then transformed in stages to yield a new current population for the next iteration. The transformation is usually done in three stages by applying the following genetic operators: (1) selection, (2) crossover, and (3) mutation.

In the first stage, the selection operator is applied as many times as there are individuals in the population; every individual is replicated with a probability proportional to its relative fitness in the population. In the next stage, the crossover operator is applied: two individuals (parents) are chosen and combined to produce two new individuals. The combination is done by choosing at random a cutting point at which each parent is divided into two parts; these are exchanged to form the two offspring, which replace their parents in the population. In the final stage, the mutation operator changes the value at a randomly chosen location in an individual. The algorithm terminates after a fixed number of iterations, and the best individual generated during the run is taken as the solution.
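A minimal sketch of the SGA loop just described, with binary strings, roulette-wheel selection, one-point crossover and bit-flip mutation (the population size and rates are illustrative assumptions):

```python
import random

def sga(fitness, n_bits=16, pop_size=30, generations=100, pc=0.8, pm=0.02):
    """Simple GA: selection -> one-point crossover -> mutation."""
    pop = [[random.randint(0, 1) for _ in range(n_bits)] for _ in range(pop_size)]
    best = max(pop, key=fitness)
    for _ in range(generations):
        fits = [fitness(ind) for ind in pop]
        # Stage 1: fitness-proportional (roulette-wheel) selection.
        mating = random.choices(pop, weights=fits, k=pop_size)
        nxt = []
        for i in range(0, pop_size, 2):
            p1, p2 = mating[i][:], mating[i + 1][:]
            # Stage 2: one-point crossover with probability pc.
            if random.random() < pc:
                cut = random.randint(1, n_bits - 1)
                p1, p2 = p1[:cut] + p2[cut:], p2[:cut] + p1[cut:]
            nxt += [p1, p2]
        # Stage 3: bit-flip mutation with probability pm per gene.
        for ind in nxt:
            for j in range(n_bits):
                if random.random() < pm:
                    ind[j] ^= 1
        pop = nxt
        best = max(pop + [best], key=fitness)
    return best

# Toy fitness: number of ones (the "OneMax" problem).
print(sum(sga(fitness=sum)))   # approaches 16
```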

III. MULTI-OBJECTIVE GA

In many real-life problems, the objectives under consideration conflict with each other, and optimizing a particular solution with respect to a single objective can result in unacceptable results with respect to the other objectives [26]. A reasonable solution to a multi-objective problem is to investigate a set of solutions, each of which satisfies the objectives at an acceptable level without being dominated by any other solution [27]. Being a population-based approach, GAs are well suited to solving multi-objective optimization problems. A generic single-objective GA can be modified to find a set of multiple non-dominated solutions in a single run. The ability of the GA to simultaneously search different regions of a solution space makes it possible to find a diverse set of solutions for difficult problems with non-convex, discontinuous and multi-modal solution spaces. The crossover operator of the GA may exploit structures of good solutions with respect to different objectives to create new non-dominated solutions [26]. The goal of MOO is to find as many of these solutions as possible. If a reallocation of resources cannot improve one cost without raising another cost, then the solution is Pareto optimal. A Pareto GA returns a population with many members on the Pareto front, and the population is ordered based on dominance. Several different algorithms have been proposed and successfully applied to various problems, such as [28-30]: the Vector-Evaluated GA (VEGA), the Multi-Objective GA (MOGA), the Non-Dominated Sorting GA (NSGA) and the Non-Dominated Sorting GA II (NSGA-II), which is used in the proposed research.
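The Pareto-dominance test underlying this ordering is compact enough to state in code; the sketch below (an illustration, not the NSGA-II implementation used later in the paper) extracts the non-dominated front from a set of objective vectors to be minimized:

```python
def dominates(a, b):
    """a Pareto-dominates b (minimization): no worse in every objective
    and strictly better in at least one."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def pareto_front(points):
    """Return the non-dominated subset of a list of objective vectors."""
    return [p for p in points
            if not any(dominates(q, p) for q in points if q is not p)]

# Objectives: hypothetical (IAE, overshoot) pairs for four candidate tunings.
candidates = [(0.9, 0.04), (1.2, 0.02), (0.8, 0.07), (1.1, 0.05)]
print(pareto_front(candidates))   # (1.1, 0.05) is dominated and dropped
```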

IV. FUZZY LIKE PID CONTROLLER STRUCTURE

Fuzzy logic (FL) was first proposed by Lotfi A. Zadeh (1965) [31] and is based on the concept of fuzzy sets. Fuzzy set theory provides a means for representing uncertainty. In general, probability theory is the primary tool for analyzing uncertainty, and it assumes that the uncertainty is a random process. However, not all uncertainty is random, and fuzzy set theory is used to model the kind of uncertainty associated with imprecision, vagueness and lack of information. In this work, the development of the fuzzy logic approach is limited to the design and structure of the controller. The input variables were the error (e), the error derivative (de) and the error integral (se); the output variable was the increment of the control action, as shown in Figure 1.

Figure 1: Fuzzy Like PID Controller

Three fuzzy sets are defined for each input variable (N - negative, Z - zero, P - positive). Five fuzzy sets are defined for the output variable (LN - large negative, N - negative, Z - zero, P - positive, LP - large positive), with 27 rules, as shown in Table 1. The membership functions for the input and output variables are triangular. The complete set of control rules is shown in Table 1; each of the control rules represents the desired controller response to a particular situation.
Table 1: Rule base for the fuzzy like PID controller - 27 rules mapping the N/Z/P sets of e, de and se to the output sets {LN, N, Z, P, LP}

The min-max inference engine is used; the defuzzification method used is the centre of area; the inputs and output are normalized to the [-1, 1] universe. The optimal values of the fuzzy like PID controller parameters (the three input gains and the output gain) in Figure 1 are found using genetic algorithm multi-objective optimization [32-33].
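To make min-max inference and centre-of-area defuzzification concrete, here is a reduced single-input sketch (the membership functions and three rules are invented for illustration; the controller in Figure 1 uses three inputs and the 27 rules of Table 1):

```python
def tri(x, a, b, c):
    """Triangular membership function with feet a, c and peak b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

# Fuzzy sets on the normalized [-1, 1] universe (illustrative shapes).
IN  = {"N": (-1.0, -1.0, 0.0), "Z": (-1.0, 0.0, 1.0), "P": (0.0, 1.0, 1.0)}
OUT = {"N": (-1.0, -1.0, 0.0), "Z": (-1.0, 0.0, 1.0), "P": (0.0, 1.0, 1.0)}
RULES = [("N", "P"), ("Z", "Z"), ("P", "N")]   # e.g. "if e is N then u is P"

def infer(x, n_pts=201):
    """Mamdani min-max inference + centre-of-area defuzzification."""
    num = den = 0.0
    for i in range(n_pts):
        u = -1.0 + 2.0 * i / (n_pts - 1)
        # max over rules of min(firing strength, clipped output membership)
        mu = max(min(tri(x, *IN[a]), tri(u, *OUT[b])) for a, b in RULES)
        num += u * mu
        den += mu
    return num / den if den else 0.0

print(round(infer(-0.5), 3))   # negative error -> positive control action
```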

V. NONLINEAR LOAD FREQUENCY CONTROL MODEL

A non-reheat type nonlinear electric power system, represented by a block diagram of a closed-loop controlled system model, is shown in Figure 2 for the single-area and in Figure 3 for the two-area electric power system; the model parameters are the system frequency (Hz), the regulation constant (Hz/unit), the speed-governor time constant (sec), the turbine time constant (sec), the inertia constant (s) and the area parameter (MW/Hz) [34-35]. The model includes the effect of the Generation Rate Constraint (GRC) and limits on the position of the governor valve, which are caused by the mechanical and thermodynamic constraints in practical steam turbine systems. A typical value of 0.01 p.u./min has been included in the model, as stated in [36].

A. Single Area Nonlinear Electric Power System

The system can be modeled in the following form:

$\dot{x}(t) = A\,x(t) + B\,u(t) + F\,\Delta P_d(t) \qquad (1)$

where A, B and F are the system, input and disturbance matrices, and $x(t)$, $u(t)$ and $\Delta P_d(t)$ are the state, control-signal and load-change disturbance vectors, respectively. The system output, on which the Integral Absolute Error (IAE) objective function is evaluated, can be given as:

$y(t) = C\,x(t) = \Delta f(t) \qquad (2)$

The control signal for the fuzzy like PID controller can be given as:

$u(t) = K_P\,e(t) + K_I \int e(t)\,dt + K_D\,\frac{de(t)}{dt} \qquad (3)$

The percentage overshoot and the settling time are two more objective functions that have been added to the IAE performance index to define the multi-objective genetic algorithm problem.
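These three objectives are straightforward to evaluate from a simulated frequency-deviation trace; the following is a hypothetical helper of the kind a GA fitness evaluation could call (the function name and the settling band are assumptions):

```python
import math

def objectives(t, df, band=0.02):
    """Return (IAE, peak overshoot, settling time) for a deviation trace.

    `t` and `df` are equal-length samples of the frequency deviation;
    the +/-`band` settling threshold (in Hz) is an assumed value.
    """
    iae = sum(abs(df[i]) * (t[i + 1] - t[i]) for i in range(len(t) - 1))
    overshoot = max(abs(x) for x in df)
    settling = t[0]
    for i in range(len(df) - 1, -1, -1):      # last time the band is violated
        if abs(df[i]) > band:
            settling = t[min(i + 1, len(t) - 1)]
            break
    return iae, overshoot, settling

# Toy decaying-oscillation response sampled every 10 ms.
t = [0.01 * k for k in range(1000)]
df = [0.03 * math.exp(-0.8 * x) * math.cos(3 * x) for x in t]
print(objectives(t, df))
```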


Figure 2: Non-Linearized Single Area Power System Simulink Model with Multi-Objective Genetic Algorithm-Tuned Fuzzy like PID Controller

The nominal system parameters are: $K_g = 1$, $T_g = 0.08$ s, $K_t = 1$, $T_t = 0.3$ s, $K_p = 120$, $T_p = 20$ s, and $R = 2.4$ Hz/p.u.

B. Two-Area Nonlinear Electric Power System

An interconnected power system is divided into control areas connected by a tie line. In each control area, all generators are supposed to constitute a coherent group. The tie-line power flow and the frequency of each area are affected by the load changes. Therefore, it can be considered that each area needs its system frequency and tie-line power flow to be controlled. The Area Control Error (ACE) signal is used as the plant output of each power-generating area; driving the ACEs in all areas to zero will result in zeros for all frequency and tie-line power errors in the system. So it can be defined as:

$ACE_i = \Delta P_{tie,i} + B_i\,\Delta F_i \qquad (4)$

where $B_i$ is the frequency response characteristic for area i, defined as $B_i = D_i + 1/R_i$. The system can be modeled in the following form:

$\dot{x}(t) = A\,x(t) + B\,u(t) + F\,\Delta P_d(t) \qquad (5)$

where A is the system matrix, B and F are the input and disturbance matrices, and $x(t)$, $u(t)$ and $\Delta P_d(t)$ are the state, control-signal and load-change disturbance vectors, respectively; $u(t)$ contains $u_1(t)$ and $u_2(t)$, the control signals in area 1 and area 2, respectively. The system output, which depends on the Area Control Errors, can be given as:

$y(t) = \begin{bmatrix} ACE_1 \\ ACE_2 \end{bmatrix} \qquad (6)$

The control signal for the fuzzy like PID controller in each area can be given as:

$u_i(t) = K_P\,e_i(t) + K_I \int e_i(t)\,dt + K_D\,\frac{de_i(t)}{dt} \qquad (7)$

To simplify the study, the two interconnected areas were considered identical, so the optimal parameters were chosen to be equal for both areas. The percentage overshoot and the settling time are two more objective functions that have been added to the IAE performance index to define the multi-objective genetic algorithm problem.


Figure 3: Non-Linearized Two-Area Power System Simulink Model with Multi-Objective Genetic Algorithm-Tuned Fuzzy like PID Controller

The nominal system parameters (identical for both areas) are: $R = 2.4$ Hz/p.u., $T_g = 0.08$ s, $B = 0.425$ p.u. MW/Hz, $T_t = 0.3$ s, $K_g = K_t = 1$, $T_p = 20$ s, $K_p = 100$, and tie-line coefficient $T_{12} = 0.05$.

VI. SIMULATION RESULTS

The simulation setup needs only the incorporation of the single-area and two-area models of Figures 2 and 3 in the Simulink tool of MATLAB; running the simulation gives the results described below.
A. Single Area Nonlinear Electric Power System

By using the Simulink model shown in Figure 2 with the multi-objective genetic algorithm technique, in conjunction with equations (1)-(3), optimal controller parameters were obtained, as shown in Table 2. Figure 4 shows the time-domain performance of the nonlinear electric power system under the proposed multi-objective genetic algorithm based fuzzy like PID controller with a step load change of 0.01 p.u. In the simulation, the genetic algorithm was run for 1000 generations with a population size of 100.

Table 2: Single Area Fuzzy like PID Controller Parameters using Multi-Objective Genetic Algorithm Technique (the four controller gain values): 3.358, 0.557, 0.984, 0.821

Figure 4: Single Area Nonlinear Electric Power System Response with Multi-Objective Genetic Algorithm Tuned Fuzzy like PID

Table 3: Response Characteristics Using Genetic Algorithm-Tuned Fuzzy like PID Technique in Non-Linearized Single Area Electric Power System

Overshoot (Hz): 0.0289    Settling Time (Sec): 2.2142

B. Two-Area Nonlinear Electric Power System

By using the Simulink model shown in Figure 3 with the multi-objective genetic algorithm technique, in conjunction with equations (4)-(7), optimal controller parameters were obtained, as shown in Table 4. Figures 5-a, 5-b and 5-c show the time-domain performance of the frequency deviation in the first area, the frequency deviation in the second area, and the tie-line power deviation, respectively, under the proposed multi-objective genetic algorithm based fuzzy like PID controller with a step load change of 0.01 p.u. In the simulation, the genetic algorithm was run for 100 generations with a population size of 100.
Table 4: Two-Area Fuzzy like PID Controller Parameters using Multi-Objective Genetic Algorithm Technique (identical for both areas; the four controller gain values): 4.216, 0.368, 1.833, 3.827


Figure 5: Two-Area Nonlinear Electric Power System Response with Multi-Objective Genetic Algorithm Tuned Fuzzy like PID: a) First Area Frequency, b) Second Area Frequency, c) Tie Line Power

Table 5: Response Characteristics Using Genetic Algorithm-Tuned Fuzzy like PID Technique in Non-Linearized Two-Area Electric Power System

                               Overshoot    Settling Time (Sec)
First Area Frequency (Hz)      0.0202       3.0257
Second Area Frequency (Hz)     0.0128       4.3364
Tie Line Power (p.u.)          0.0050       4.4746


VII. CONCLUSION

In this study, a multi-objective genetic algorithm based fuzzy like PID technique has been applied to the automatic load frequency control of nonlinear single-area and two-area electric power systems. For this purpose, first, a fuzzy like PID controller has been proposed; then the PID gains and the output gain of the controller have been added to the model; and finally, a tuning mechanism for the fuzzy like PID controller parameters is obtained. The single-area and two-area power systems have been simulated using MATLAB/Simulink software on a standard personal computer. It has been shown that the proposed control algorithm is effective and provides significant improvement in system performance in both the transient and the steady-state responses. Therefore, the proposed multi-objective genetic based fuzzy like PID controller is recommended for generating good-quality and reliable electric energy. In addition, the proposed controller is very simple and easy to implement, since it does not require much information about the system parameters.

VIII. FUTURE WORK

The work presented in this paper can be extended in several directions. Some possible areas of extension are given below.

a) Improvement on the Model of the Power Systems

The studies performed in the previous sections on LFC dynamic performance have been based upon:
- a linearized model analysis;
- a nonlinear model analysis with a saturated steam or hydro control valve.
The described LFC model so far does not consider the effects of all the physical constraints; an important physical constraint, and a possible point of research in LFC, is the rate of change of power generation due to the limitation of thermal and mechanical movements. LFC studies that do not take into account the delays caused by the crossover elements in a thermal unit, or the behavior of the penstocks in a hydraulic installation, in addition to the sampling interval of the data acquisition system, result in a situation where frequency and tie-line power could be returned to their scheduled values within 1 s. For the LFC problem, some of the plant limits, such as generation rate constraints and dead bands, are disregarded in this paper. However, in reality, they exist in power systems. In the future, the plant limits should be included in the model of the power system to make the model more practical; accordingly, the LFC will be modified so as to successfully apply it to the new model. It should be noted that most of the control strategies proposed so far for the solution of the LFC problem have not been implemented, due to system operational constraints associated with thermal power plants. The main reason is the non-availability of the required power, and also the persistence of the system frequency and tie-line deviations for a long duration in the case of small load disturbances. On the other hand, electromechanical oscillations in a power system can be effectively damped by fast-acting energy storage devices, because additional energy storage capacity is provided as a supplement to the kinetic energy storage. The energy storage devices share the sudden changes in the power required by the load. Thus, in a power system, the instantaneous mismatch between supply and demand of real power under sudden load changes can be reduced by the addition of active power sources with fast response, such as BES, SMES and CES devices. Another competitive point of research is LFC in a deregulated environment. Nowadays, the electric power industry is in transition to a competitive energy market. In the new structure, GENCOs may not participate in the LFC task, and DISCOs have the liberty to control any available GENCOs in their own or other areas. On the other hand, the real-world power system contains different kinds of uncertainties and disturbances, and the coming deregulation significantly increases the severity of this problem. Under these conditions, a classical controller is certainly not suitable for the LFC problem.

b) Improvement on the Load Frequency Controller

From the point of view of control, among all categories of LFC strategies, robust control and AI-based methods have shown an ability to give better performance in dealing with system nonlinearities, modeling uncertainties and area load disturbances under different operating conditions. The main capability of robust control approaches is that they alleviate the impossibility of controller design based on a more complete model of the system, one that also considers uncertainties and physical constraints. The salient feature of the AI technique is that it provides a model-free description of the control system and does not require an accurate model of the plant. A continuation of this work could use a different kind of controller, other than the Genetic Algorithm based PID controller or the Genetic Algorithm based Fuzzy Like PID controller, such as a Genetic Algorithm based ANFIS controller, so that the GA can be used to optimize the membership functions of the ANFIS controller. Comparison between control strategies can be made with the aid of different kinds of power system uncertainties and models, in order to arrive at the most suitable controller for a real-world two-area power system. The authors suggest, for future work, the extension of the proposed algorithm to multi-area power systems including different renewable generation resources, such as wind and solar systems. Also, the investigation of the algorithm's sensitivity to system parameters would be considered in future research studies.

REFERENCES
[1]. Bevrani, H. (2009). Robust Power System Frequency Control. Brisbane, Australia: Springer Science.
[2]. Tan, W. (2010). Unified tuning of PID load frequency controller for power systems via IMC. IEEE Transactions on Power Systems, 25(1), pp. 341-350.
[3]. Vrdoljak, K., N. Peric and I. Petrovic (2009). Sliding mode based load-frequency control in power systems. Electric Power Systems Research, 80, pp. 514-527.
[4]. Shayeghi, H., H. A. Shayanfar and O. P. Malik (2007). Robust decentralized neural networks based LFC in a deregulated power system. Electric Power Systems Research, 77, pp. 241-251.
[5]. Kassem, Ahmed M. (2010). Neural Predictive Controller of a Two Area Load Frequency Control for Interconnected Power System. Ain Shams Engineering Journal.
[6]. M. LY, K. (2008). Load Frequency Control in a Single Area Power System by Artificial Neural Network (ANN). University of Pitesti, Electronics and Computers Science, Scientific Bulletin, No. 8, Vol. 2, ISSN 1453-1119.
[7]. Liu, F., Y. H. Song, J. Ma, S. Mai and Q. Lu (2003). Optimal load frequency control in restructured power systems. IEE Proceedings Generation, Transmission and Distribution, 150(1), pp. 87-95.
[8]. Rerkpreedapong, D., A. Hasanovic and A. Feliachi (2003). Robust load frequency control using genetic algorithms and linear matrix inequalities. IEEE Transactions on Power Systems, 18(2), pp. 855-861.
[9]. Zribi, M., M. Al-Rashed and M. Alrifai (2005). Adaptive decentralized load frequency control of multi-area power systems. Electrical Power and Energy Systems, 27, pp. 575-583.
[10]. Taher, S. A. and R. Hematti (2008). Robust decentralized load frequency control using multi-variable QFT method in deregulated power systems. American Journal of Applied Sciences, 5(7), pp. 818-828.
[11]. M. Peer Mohamed, E. A. Mohamed Ali and I. Bala Kumar (2012). BFOA Based Tuning of PID Controller for a Load Frequency Control in Four Area Power System. IJART, Vol. 2, Issue 3, pp. 133-138.
[12]. K. RamaSudha, V. S. Vakula and Vijaya Shanthi (2010). PSO Based Design of Robust Controller for Two Area Load Frequency Controller with Non-Linearities. International Journal of Engineering Science and Technology, Vol. 2(5), pp. 1311-1324.
[13]. Jenica Ileana Corcau and Eleonor Stoenescu (2007). Fuzzy Logic Controller as a Power System Stabilizer. International Journal of Circuits, Systems and Signal Processing, Issue 3, Volume 1, pp. 266-273.
[14]. Kocaarslan, I. and E. Cam (2005). Fuzzy logic controller in interconnected electrical power systems for load-frequency control. Electrical Power and Energy Systems, 27, pp. 542-549.
[15]. B. Venkata Prasanth and S. V. Jayaram Kumar (2005). Robust Fuzzy Load Frequency Controller for a Two Area Interconnected Power System. Journal of Theoretical and Applied Information Technology.
Chia-Feng Juang and Chun-Feng Lu (2005). Power System Load Frequency Control by Genetic Fuzzy Gain Scheduling Controller. Journal of the Chinese Institute of Engineers, Vol. 28, No. 6, pp. 1013-1018.

[16]. Mehdi Nikzad, R. Hemmati, S. A. Farahani and S. M. Boroujeni (2010). Comparison of Artificial Intelligence Methods for Load Frequency Control Problem. Australian Journal of Basic and Applied Sciences, pp. 4910-4921.
[17]. Muhammad S. Yousuf, Hussain N. Al-Duwaish and Zakariya M. Al-Hamouz (2010). PSO Based Single and Two Interconnected Area Predictive Automatic Generation Control. WSEAS Transactions on Systems and Control, Issue 8, Volume 5, pp. 677-690.
[18]. Yildiz, C., Yilmaz, A. and Bayrak, M. (2006). Genetic Algorithm Based PI Controller for Load Frequency Control in Power Systems. Proceedings of the 5th International Symposium on Intelligent Manufacturing Systems, pp. 1202-1210, Kahramanmaras, Turkey.
[19]. H. Shayeghi, H. A. Shayanfar and A. Jalili (2009). Load Frequency Control Strategies: A State of the Art Survey for the Researcher. Energy Conversion and Management Journal, pp. 344-353.
[20]. Hassan Bevrani and Takashi Hiyama (2009). On Load-Frequency Regulation with Time Delays: Design and Real-Time Implementation. IEEE Transactions on Energy Conversion, Vol. 24, No. 1.
[21]. L. Jiang, W. Yao, J. Cheng and Q. H. Wu (2009). Delay-dependent Stability for Load Frequency Control with Constant and Time-Varying Delays. IEEE transactions.
[22]. S. Bhowmik, K. Tomsovic and A. Bose (2004). Communication models for third party load frequency control. IEEE Trans. Power Syst., vol. 19, no. 1, pp. 543-548.
[23]. X. Yu and K. Tomsovic (2004). Application of linear matrix inequalities for load frequency control with communication delays. IEEE Trans. Power Syst. Technol., vol. 1, pp. 139-143.
[24]. Kenji Okada, Go Shirai and Ryuichi Yokoyama (1987). LFC incorporating time delay. IEEE, Vol. 75.
[25]. S. Sumathi and A. Soundarrajan (2009). Effect of Non-linearities in Fuzzy Based Load Frequency Control. International Journal of Electronic Engineering Research, Volume 1(1), pp. 37-51.
[26]. R. L. Haupt (2004). Practical Genetic Algorithms, Second Edition. Hoboken, New Jersey: A John Wiley and Sons, Inc., Publication.
[27]. Darrell Whitley (2005). A Genetic Algorithm Tutorial. Colorado, USA: Computer Science Department, Colorado State University.
[28]. Abdullah Konak, D. W. Coit and Alice E. Smith (2006). Multi-objective optimization using genetic algorithms: A tutorial. Reliability Engineering and System Safety, 91, pp. 992-1007.
[29]. Ivo F. Sbalzarini, Sibylle Muller and Petros Koumoutsakos (2000). Multi-Objective Optimization Using Evolutionary Algorithms. Center for Turbulence Research, Proceedings of the Summer Program.
[30]. Kalyanmoy Deb, Samir Agrawal, Amrit Pratap and T. Meyarivan (2000). A Fast Elitist Non-Dominated Sorting Genetic Algorithm for Multi-Objective Optimization: NSGA-II. Kanpur, India: Indian Institute of Technology Kanpur (KanGAL Report No. 200001).
[31]. L. A. Zadeh (1965). Fuzzy Sets. Journal of Information Control, Vol. 8, pp. 338-353.
[32]. Burns, S. (2001). Advanced Control Engineering. Plymouth, UK: A Division of Reed Educational and Professional Publishing Ltd., ISBN 0750651008.
[33]. R. C. Dorf and R. H. Bishop (2000). Modern Control Systems. Prentice Hall.
[34]. Elgerd, O. I. (1983). Electric Energy Systems Theory. London: McGraw-Hill Book Company.
[35]. P. Kundur (1994). Power System Stability and Control. New York: McGraw-Hill.
[36]. M. A. Tammam (2011). Multi Objective Genetic Algorithm Controllers Tuning for Load Frequency Control in Electric Power Systems. M.Sc. thesis, Cairo University.

BIOGRAPHIES
M. A. Tammam is a TÜV certified functional safety Principal Project Engineer working at Invensys Operations Management, specialized in safety, ESD, F&G, BMS and critical control applications. Mohamed Mahmoud worked in process control for oil & gas, nuclear and power plants in different countries such as Egypt, the United States of America, the United Kingdom, Singapore, the United Arab Emirates, Saudi Arabia, and Oman. Mohamed Mahmoud was born in Egypt in 1982, and received his B.S. degree in Electrical Engineering (Computer & Systems Engineering Department) from Ain Shams University, Egypt, and his M.S. degree in Electrical Engineering (Electrical Power & Machines Engineering Department) from Cairo University, Egypt, in 2004 and 2011 respectively.

M. A. S. Aboelela graduated from the electrical engineering department (Power and Machines section) of the Faculty of Engineering at Cairo University with Distinction and an honours degree in 1977. He received his M.Sc. degree in automatic control from Cairo University in 1981 and his Ph.D. in computer aided system engineering from the State University of Ghent, Belgium, in 1989. He was involved in the MIT/CU technological planning program from 1978 to 1984. He has been appointed as demonstrator, assistant professor, lecturer, associate professor and professor, all at Cairo University, where he is currently enrolled. He is currently a visiting professor at Ilorin University, Nigeria. He has given consultancy in information technology and computer science mainly for CAP Saudi Arabia, SDA Engineering Canada, Jeraisy Computer and Communication Services and other institutions. He is currently working as a professor of automatic and process control at the Faculty of Engineering, Cairo University, and spent one year as a visiting professor at Ilorin University, Kwara State, Nigeria. His interests are Artificial Intelligence, Automatic Control Systems, Stochastic Modeling and Simulation, Databases, Decision Support Systems, Management Information Systems, and the Application of Computer Technology in Industry. He has published more than 50 scientific articles in journals and conference proceedings.

M. A. Moustafa (S. Member IEEE) received the B.Sc., M.Sc. and Ph.D. in 1977, 1982, and 1988 respectively, in Electrical Engineering, from Cairo University, Egypt. Since 1977 he has been on the teaching staff of the Faculty of Electrical Engineering at Cairo University, Egypt. During his Ph.D. research program he visited the Department of Electrical Engineering, BUGH Wuppertal, Germany, funded by DAAD, for the academic years 1984 to 1987. During the academic years 1989 to 1992 he was a Visiting Scholar in the Department of Electrical Engineering at The University of Calgary, Canada, funded partially by CIDA. Since 1993 he has been appointed Associate Professor at the Department of Electrical Engineering, Cairo University. He is currently a Professor of control of power systems at Cairo University, and is working as Vice Dean for ITC Alamieria (funded via EDF, Egyptian Ministry Cabinet), Cairo, Egypt. His research activities include control systems, fuzzy logic, ANN, and artificial intelligence techniques in protection, control, and safety of power systems.

A. E. A. Seif received the B.Sc. and M.Sc. in 1972 and 1975 respectively, in Electrical Engineering, from Cairo University, Egypt, and the Ph.D. from Paul Sabatier University, Toulouse, France, in 1978. Since 1972 he has been on the teaching staff of the Faculty of Electrical Engineering at Cairo University, Egypt. He was appointed Assistant Professor in 1979, Associate Professor in 1988, and Professor in 1993 at the Department of Electrical Power Engineering, Cairo University. He is currently a Professor of Control Systems at Cairo University. He worked as a Telecontrol and SCADA expert at the Electricity Corporation (Design Dept.) in KSA from 1994 till 2000. His research activities include control systems, fuzzy logic, robotics, ANN, and artificial intelligence techniques in control and power systems.


ECONOMIC LOAD DISPATCH USING SIMPLE AND REFINED GENETIC ALGORITHM


Lily Chopra1 and Raghuwinder Kaur2
1 Sant Baba Bhag Singh Institute of Engineering & Technology, Jalandhar, India
2 Adesh Institute of Engineering & Technology, Faridkot, India

ABSTRACT
In the present era it is important to economize generation cost while satisfying operational constraints, and economic load dispatch is an important tool for solving this problem. This paper presents the simple genetic algorithm (SGA) and refined genetic algorithm (RGA) methods applied to the economic dispatch problem, which accounts for the minimization of cost along with operational constraints. The lambda iteration method requires exact adjustment of lambda and does not give the global optimum solution. The results prove that the GA-based techniques give the global optimum solution, and that by varying the probabilities of crossover and mutation the computer usage time can be drastically reduced in RGA. Elitism is a technique which saves early solutions by ensuring the survival of the fittest string, and so improves the performance capability of the Genetic Algorithm.

KEYWORDS: Economic Dispatch, Simple genetic algorithm, Refined genetic algorithm, Lambda Iterative technique.

I. INTRODUCTION

Among the major economy-security functions in power system operation, economic dispatch ranks the highest. It is defined as the process of allocating generation levels to the generating units in the mix, so that the system load may be supplied most economically, under all unit and system equality and inequality constraints [2]. It is also defined as setting the production level of each plant so that the total cost of generation and transmission is minimum for a prescribed schedule of load. The economic load dispatch problem is therefore to reduce the fuel cost to a minimum so that the system operates economically. Various analysis techniques can be used to solve economic load dispatch, such as the lambda iteration method, the gradient search method, the reduced gradient method with linear constraints, and the Newton method. Most of these techniques suffer from drawbacks such as a difficult approach, long convergence time and lack of reliability [18]. Among these techniques the lambda iteration method is the fastest: incremental cost curves for all units are plotted and the operating point is found where all units have minimum fuel cost while the specified demand is met. However, this technique is difficult in approach and, due to the complexity and non-monotonicity of the problem, it may be unable to give the global optimum solution. To overcome these problems and to find the global optimum solution for the economic dispatch problem with minimum cost, artificial intelligence techniques can be used; in this paper a Genetic Algorithm is used [4]. Genetic Algorithms (GAs) are global optimization techniques based on the operations observed in natural selection and genetics. GAs, unlike strict mathematical methods, have the apparent ability to adapt to the non-linearities and discontinuities commonly found in power systems. The simple genetic algorithm (SGA) and the refined genetic algorithm (RGA) are two broad categories of GA algorithms [1]. They operate on string structures, typically a concatenated list of binary digits representing a coding of the parameters for a given problem. Many such structures are considered simultaneously, with the fittest of these structures receiving exponentially increasing opportunities to pass on genetically important material to successive generations of string structures. In this way, GAs search from many points in the search space at once, and yet continually narrow the focus of the search to the areas of the observed best performance. The Simple Genetic Algorithm gives a global optimum solution when the population size is large, but increasing the population size also increases the computational time; to reduce computational time and to increase the efficiency of the genetic algorithm, a new technique, the Refined Genetic Algorithm (RGA), is proposed [19]. Most of the RGA subroutines mimic the subroutines in the SGA program [8]; however, the crossover and mutation operators differ between the programs, and the other differences are the variable probabilities of the crossover and mutation operators and the elitism technique [11]. RGA thus provides an accurate and feasible solution for the economic load dispatch problem with minimum fuel cost. The paper is divided into three sections: the first section gives the introduction, the second discusses the problem formulation, and the paper is concluded in the third section.
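To make the lambda iteration step concrete, the following is a minimal Python sketch of the idea for quadratic cost units, with transmission losses neglected; the bisection bracket, tolerance and function names are illustrative assumptions, not the authors' implementation.

```python
# Minimal sketch of lambda iteration for quadratic costs (losses neglected).
# For F_i = a_i*P_i^2 + b_i*P_i + c_i, the equal incremental cost condition
# gives P_i = (lam - b_i) / (2*a_i), clipped to unit limits; lam is bisected
# until total generation meets the demand PD.

def lambda_iteration(a, b, P_min, P_max, PD, tol=1e-6):
    lo, hi = 0.0, 1000.0  # assumed bracket on the incremental cost (Rs/MWh)
    while hi - lo > tol:
        lam = 0.5 * (lo + hi)
        P = [min(max((lam - bi) / (2.0 * ai), pmin), pmax)
             for ai, bi, pmin, pmax in zip(a, b, P_min, P_max)]
        if sum(P) < PD:
            lo = lam  # generation falls short of demand: raise lambda
        else:
            hi = lam  # generation exceeds demand: lower lambda
    return P

# Cost coefficients and limits of the three-unit system given in Section 2.3
a = [0.03546, 0.02111, 0.01799]
b = [38.30553, 36.32782, 38.27041]
print(lambda_iteration(a, b, [35, 130, 125], [210, 325, 315], PD=400.0))
```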

II. PROBLEM FORMULATION

The economic dispatch problem is to minimize the objective function, i.e. the fuel cost, while satisfying several equality and inequality constraints. Generally the problem is formulated as follows.

A. Objectives
The main objective of the Economic Dispatch problem is to minimize the fuel cost subject to the operational constraints. Minimization of fuel cost: the generator cost curves are represented by quadratic functions, and the total fuel cost F in Rs/h can be expressed as
F = \sum_{i=1}^{N} \left( a_i P_i^2 + b_i P_i + c_i \right) \ \text{Rs/h} \qquad (1)
where P_i is the generated power of the ith unit in MW, a_i, b_i, c_i are the cost coefficients of the ith generating unit, and N is the total number of generators [5].

B. Constraints
The two constraints to be satisfied while achieving the objectives of the ECD problem, the generation capacity constraint and the power balance constraint, can be formulated as:

1) Generation Capacity Constraint: for stable operation, the real power output of each generator is restricted by lower and upper limits as follows:

P_i^{min} \le P_i \le P_i^{max}, \qquad i = 1, 2, \ldots, N

where P_i^{min}, P_i^{max} are the minimum and maximum limits of power generation of the ith generator.

2) Power Balance Constraint: the total electric power generation must cover the total electric power demand P_D and the real power loss P_L in the transmission lines [3]:
P_D - \sum_{i=1}^{N} P_i + P_L = 0 \qquad (2)
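As a small numerical illustration of this formulation, the sketch below evaluates the fuel cost of Eq. (1) and checks the constraints of Eq. (2) for the three-unit test data given later in Section 2.3; the helper names are illustrative, and the loss P_L is passed in rather than computed from the loss coefficient matrix.

```python
import numpy as np

# Cost coefficients a_i, b_i, c_i and limits for the three-unit system (Section 2.3)
a = np.array([0.03546, 0.02111, 0.01799])
b = np.array([38.30553, 36.32782, 38.27041])
c = np.array([1243.53110, 1658.56960, 1356.65920])
P_min = np.array([35.0, 130.0, 125.0])   # MW lower limits
P_max = np.array([210.0, 325.0, 315.0])  # MW upper limits

def fuel_cost(P):
    """Total fuel cost F in Rs/h from Eq. (1)."""
    return float(np.sum(a * P**2 + b * P + c))

def balance_residual(P, PD, PL):
    """Left-hand side of Eq. (2); zero when generation covers demand plus losses."""
    return PD - float(np.sum(P)) + PL

def within_limits(P):
    """Generation capacity constraint: P_i^min <= P_i <= P_i^max for every unit."""
    return bool(np.all((P >= P_min) & (P <= P_max)))

P = np.array([97.99, 211.54, 194.12])  # SGA solution reported for the 400 MW case
print(fuel_cost(P), within_limits(P), balance_residual(P, PD=400.0, PL=7.58))
```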

2.1 Implementation of SGA to ECD Problem


The main steps to implement SGA to solve the Economic Dispatch problem are as follows:

A. Encoding and Decoding

In order to implement GAs for finding the solution of a given optimization problem, the variables are first coded in some structure. The strings are coded by binary representations having 0s and 1s. A string in GAs corresponds to a chromosome, and the bits in a string refer to genes in natural genetics [10]. For power dispatch problems, firstly a population of 20 strings, each of 16 bits, is generated. Then each string in the population is decoded using the following equation:
Z_j = \sum_{i=1}^{L} 2^{\,i-1} b_{ji}, \qquad j = 1, 2, \ldots, PS \qquad (3)

where L is the length of the string, b_{ji} is the ith bit in the jth string, Z_j is the equivalent decimal integer of the jth binary string in the population, and PS is the population size. From the decoded value of the jth string in the population, the value of the Lagrange multiplier \lambda_j can be found within the minimum \lambda_{min} and maximum \lambda_{max} limits as under:
\lambda_j = \lambda_{min} + (\lambda_{max} - \lambda_{min}) \cdot Z_j / (2^L - 1), \qquad j = 1, 2, \ldots, PS \qquad (4)
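A minimal Python sketch of the decoding step of Eqs. (3)-(4); the LSB-first bit ordering and the lambda limits used here are assumptions for illustration.

```python
# Decode a binary chromosome into a lambda value per Eqs. (3)-(4):
# Z_j = sum_{i=1..L} 2^(i-1) * b_ji, then
# lambda_j = lam_min + (lam_max - lam_min) * Z_j / (2^L - 1).

def decode(bits, lam_min, lam_max):
    L = len(bits)
    Zj = sum(bit << i for i, bit in enumerate(bits))        # Eq. (3), LSB first
    return lam_min + (lam_max - lam_min) * Zj / (2**L - 1)  # Eq. (4)

bits = [1, 0, 1, 1] + [0] * 12                   # an arbitrary 16-bit string
print(decode(bits, lam_min=30.0, lam_max=60.0))  # illustrative lambda limits
```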

B. Fitness function

Implementation of the power dispatch problem in GAs is realized within the fitness function [6]. Since the proposed approach uses the equal incremental cost criterion as its basis, the constraint can be written in the form of an error as:
\varepsilon_j = P_D + P_L - \sum_{i=1}^{N} P_i^{\,j}, \qquad j = 1, 2, \ldots, PS \qquad (5)

In order to emphasize the best chromosomes, the fitness function is normalized into the range between 0 and 1. This form of fitness function is used because the objective function, the fuel cost, is to be minimized [16]. The fitness function adopted is:

FF_j = 1 / \left[ 1 + \varepsilon_j / (P_D + P_L) \right], \qquad j = 1, 2, \ldots, PS \qquad (6)

C. Reproduction
For the subsequent genetic operation, roulette wheel selection is used. One-point crossover is done in SGA. The probability of crossover is 0.5 and the probability of mutation is 0.01 in SGA, and these probabilities remain constant for the entire run of the program. A crossover probability of 0.5 means that crossover is performed on only 50 percent of the strings [12]; in this case, as 20 strings are taken, crossover is performed on only 10 strings [7]. In one-point crossover a random crossover site is selected; if the crossover site is 3, then from the third bit onwards the bits of the parents are interchanged to produce offspring.
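The sketch below combines the fitness function of Eq. (6) with roulette wheel selection; taking the absolute value of the balance error and the stand-in population are assumptions for illustration, not the authors' code.

```python
import random

def fitness(eps_j, PD, PL):
    """Eq. (6): FF_j = 1 / (1 + eps_j / (PD + PL)); abs() assumed so an
    error of either sign is penalised."""
    return 1.0 / (1.0 + abs(eps_j) / (PD + PL))

def roulette_select(population, fits):
    """Roulette wheel selection: pick a string with probability
    proportional to its fitness."""
    pick, acc = random.uniform(0.0, sum(fits)), 0.0
    for individual, f in zip(population, fits):
        acc += f
        if acc >= pick:
            return individual
    return population[-1]

pop = ["string-%d" % k for k in range(20)]         # 20 strings, as in the SGA setup
errs = [random.uniform(-50.0, 50.0) for _ in pop]  # stand-in balance errors (MW)
fits = [fitness(e, PD=400.0, PL=7.6) for e in errs]
print(roulette_select(pop, fits))
```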

2.2 Implementation of RGA to ECD Problem


Most of the RGA subroutines mimic the subroutines in the SGA program; however, the reproduction operators, crossover and mutation, differ between the programs [9]. While implementing RGA for power dispatch problems, firstly a population of 100 strings, each of 16 bits, is generated, and each string in the population is then decoded. The reproduction operators for RGA are given below:

A. Crossover
Uniform crossover is done in RGA. The probability of crossover varies from 0.7 to 0.6: for every generation the probability of crossover is exponentially decreased, with a lower limit of 0.6. These limits are set so that the probabilities do not exceed specified standards.

B. Mutation
The probability of mutation varies from 0.001 to 0.1: for every generation the probability of mutation is exponentially increased. A random bit generator is called for each bit; the probability of the random bit is compared with the probability of mutation, and if the random bit has a lower probability than the mutation probability, that bit is altered; otherwise it remains the same [14]. This process is repeated for all the strings.

C. Elitism
To reduce the computational time of RGA, elitism is used along with RGA. Elitism compares the results of the most recent population to the elite population. It then combines the two populations and determines the best results from both, in order of decreasing fitness value [13]. This combination of the fittest strings becomes the elite population. The process continues for each generation so that accuracy and convergence capability are maintained in RGA.
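The following compact sketch shows the three RGA ingredients described above; the exponential decay and growth rates of the crossover and mutation probabilities are assumed, since the text specifies only their limits.

```python
import math, random

def crossover_prob(gen, rate=0.05):
    """Crossover probability decayed exponentially from 0.7 to its 0.6 floor."""
    return max(0.6, 0.7 * math.exp(-rate * gen))

def mutation_prob(gen, rate=0.05):
    """Mutation probability grown exponentially from 0.001 to its 0.1 cap."""
    return min(0.1, 0.001 * math.exp(rate * gen))

def uniform_crossover(p1, p2, pc):
    """Uniform crossover: each offspring bit is taken from either parent."""
    if random.random() > pc:
        return p1[:], p2[:]
    c1 = [x if random.random() < 0.5 else y for x, y in zip(p1, p2)]
    c2 = [y if random.random() < 0.5 else x for x, y in zip(p1, p2)]
    return c1, c2

def elitist_update(population, elite, fitness_of, size):
    """Elitism: merge current and elite populations and keep the fittest."""
    merged = sorted(population + elite, key=fitness_of, reverse=True)
    return merged[:size]
```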

2.3 Numerical Example and Results


In order to demonstrate the efficiency and robustness of the proposed genetic algorithm, a 3-generator system is considered. The cost equations of the three units in Rs/h are:

F1 = 0.03546 P1^2 + 38.30553 P1 + 1243.53110
F2 = 0.02111 P2^2 + 36.32782 P2 + 1658.56960
F3 = 0.01799 P3^2 + 38.27041 P3 + 1356.65920

The unit operating ranges in MW are:

35 <= P1 <= 210
130 <= P2 <= 325
125 <= P3 <= 315

The loss coefficient matrix is:

Bmn = [ 0.000071  0.000030  0.000025
        0.000030  0.000069  0.000032
        0.000025  0.000032  0.000080 ]
Figure 1: Single line diagram of the test system (generators P1, P2 and P3 feeding the load through a bus bar, with transmission losses)

Table 1: Comparison of results obtained from the conventional method, SGA and RGA

Power Demand (MW) | Method              | P1 (MW) | P2 (MW) | P3 (MW) | PL (MW) | Fuel Cost (Rs/hr)
400               | Conventional Method |  81.69  | 174.94  | 151.17  |  7.59   | 20821.75
400               | SGA                 |  97.99  | 211.54  | 194.12  |  7.58   | 20809.34
400               | RGA                 |  97.92  | 211.43  | 193.99  |  7.25   | 20797.98
500               | Conventional Method | 105.59  | 212.70  | 193.43  | 11.91   | 25456.35
500               | SGA                 | 124.73  | 256.48  | 246.82  | 11.1    | 25423.39
500               | RGA                 | 124.65  | 256.35  | 246.68  | 10.9    | 25410.24
600               | Conventional Method | 129.96  |  51.03  | 236.18  | 17.3    | 30327.58
600               | SGA                 | 152.43  | 303.03  | 301.42  | 17.2    | 30318.86
600               | RGA                 | 152.22  | 302.69  | 301.02  | 16.5    | 30282.31

Figure 2: Comparison of total cost obtained from conventional method, SGA and RGA for 400 MW power demand

Figure 3: Comparison of total cost obtained from conventional method, SGA and RGA for 500 MW power demand


Figure 4: Comparison of total cost obtained from conventional method, SGA and RGA for 600 MW power demand

A comparison between SGA, RGA and the conventional lambda iteration method (Table 1) has been realized. The above figures show that the total cost for the various demands is lower for the solutions obtained by SGA and RGA. The reliability of these methods is also better than that of the conventional method [15]. The feasibility of the proposed methods lies in the high quality of the solution, stable convergence and good computational efficiency [20].

III. CONCLUSION

The global solution of the ECD problem is found by using the SGA and RGA techniques, and the results are compared with the conventional lambda search method. The results prove that the GA-based approaches provide a better global optimal solution than the conventional method. By varying the probabilities of mutation and crossover, computer processing time can be drastically reduced in the RGA method. Elitism is another effective tool to improve the performance capability of genetic algorithms: because elitism stores the fittest strings from each population, the programs are able to quickly find and keep the best solutions to the problem, and when the program converges it produces a natural stopping criterion. The computer usage time can be drastically reduced with the implementation of elitism along with RGA.

REFERENCES
[1] A. El-kieb, H. Ma and J. L. Hard, "Environmentally Constrained Economic Dispatch using the Lagrangian Relaxation Method", IEEE, vol. 9, no. 4, November 1994.
[2] R. Yokoyama, S. H. Bae, T. Morita, and H. Sasaki, "Multi-objective generation dispatch based on probability security criteria", IEEE Trans. Power Syst., vol. 3, no. 1, pp. 317-324, 1988.
[3] C. Palanichamy and K. Srikrishna, "Economic Thermal Power Dispatch with Emission Constraints", JIE, vol. 72, April 1991.
[4] D. E. Goldberg and J. H. Holland, Genetic Algorithms in Search, Optimization and Machine Learning, Addison Wesley, 1992.
[5] G. B. Sheble and K. Brittig, "Refined Genetic Algorithm: Economic Dispatch Example", IEEE Transactions on Power Systems, vol. 10, no. 1, November 1995, pp. 117-124.
[6] A. J. Wood and B. F. Woolenburg, Power Generation Operation and Control, John Wiley and Sons, 1984.
[7] M. Sudhakaran and S. M. R. Slochanal, "Integrating Genetic Algorithm and Tabu Search for Emission and Economic Problem", IE Journal, vol. 86, June 2005.
[8] D. P. Kothari and J. S. Dhillon, Power System Optimization, Prentice Hall of India, 2004.
[9] Ji-Yuan Fan and Lan Zhang, "Real-Time Economic Dispatch with Line Flow and Emission Constraints Using Quadratic Programming", IEEE Transactions on Power Systems, vol. 13, no. 2, May 1998.

[10] M. A. Abido, "Multiobjective Evolutionary Algorithms for Electric Power Dispatch Problem", IEEE Transactions on Evolutionary Computation, vol. 10, no. 3, June 2006.
[11] Kalyanmoy Deb, Optimization for Engineering Design: Algorithms and Examples, Prentice Hall of India, 2002.
[12] Kolcun, M., Benc, R., Szathmary, P., "Genetic Algorithms in Power Systems", Proc. of 8th Scientific Conference Electro-Power Engineering '97, TU in Kosice, Stara Lesna, 1996, pp. 165-170.
[13] Heitkoetter, J., Beasley, D., "The Hitch-Hiker's Guide to Evolutionary Computation: A List of Frequently Asked Questions (FAQ)", USENET: comp.ai.genetic, available via anonymous FTP from rtfm.mit.edu:/pub/usenet/news.answers/ai-faq/genetic/, 1996, about 100 pp.
[14] Sheble, G. B., Brittig, K., "Refined genetic algorithm - economic dispatch example", IEEE Transactions on Power Systems, vol. 10, issue 1, 1995, pp. 117-124.
[15] Song, Y. H., Li, F., Morgan, R., Cheng, D. T. Y., "Effective implementation of genetic algorithms on power economic dispatch", IPEC '95, Proc. of the International Power Engineering Conference, vol. 1, Nanyang Technol. Univ., Singapore, 1994, pp. 268-274.
[16] Yee Ming Chen and Wen-Shiang Wang, "A particle swarm approach to solve the environmental/economic dispatch problem", International Journal of Industrial Engineering Computations, 1 (2010), pp. 157-172.
[17] D. P. Kothari and K. P. Singh Parmar, "A Novel Approach for Eco-friendly and Economic Power Dispatch using MATLAB", IEEE Conference PEDES, 2006, New Delhi, India.
[18] Talaq, J. H., El-Hawary, M. E., "A summary of environmental/economic dispatch algorithms", IEEE Trans. Power Syst., 1994, 9(3), pp. 1508-1516.
[19] Helsin, J. S., Hobbs, B. F., "A multiobjective production costing model for analyzing emission dispatching and fuel switching", IEEE Trans. Power Syst., 1989, 4(3), pp. 836-842.
[20] Dhillion, J. S., Parti, S. C., Kothari, D. P., "Stochastic economic emission load dispatch", Electr. Power Syst. Res., 1993, 26, pp. 179-186.

BIOGRAPHY
Lily Chopra is presently working as an Assistant Professor at S.B.B.S.I.E.T., Padhiana, Jalandhar. She completed her B.Tech at A.I.E.T., Faridkot in 2005 and her M.Tech at G.N.E., Ludhiana in 2009.

Raghuvinder Kaur is presently working as a Senior Lecturer at A.I.E.T., Faridkot. She completed her B.Tech at A.I.E.T., Faridkot in 2005 and her M.Tech at G.N.E., Ludhiana in 2009.


EXPERIMENTAL INVESTIGATIONS ON THE PERFORMANCE AND EMISSION CHARACTERISTICS OF DIESEL ENGINE USING PREHEATED PONGAMIA METHYL ESTER AS FUEL
Dinesha P1, Mohanan P2

1 Department of Mechanical Engineering, KVGCE, Sullia, Karnataka, India.
2 Department of Mechanical Engineering, National Institute of Technology Karnataka, Surathkal, India.

ABSTRACT
Biodiesel is a renewable fuel which can reduce the use of petroleum-based fuels and possibly lower the overall greenhouse gas emissions of internal combustion engines. Therefore, to reduce emissions, researchers have focused their interest on biodiesel as an alternative fuel for diesel engines. Investigations have shown that the B20 blend has good performance and emission characteristics in CI engines. A further increase of the biodiesel fraction in the blends increases the viscosity and decreases performance; to increase the fraction of biodiesel in blends, the viscosity must be reduced by preheating. In the present work, an experimental investigation is carried out on a four-stroke single cylinder CI engine to find the performance and emission characteristics of a preheated B40 blend of pongamia biodiesel compared with the B20 blend. The B40 blend is preheated to 60, 75, 90 and 110 °C using waste exhaust gas heat in a shell and tube heat exchanger. The transesterification process is used to produce the biodiesel required for the present research from raw pongamia oil. Experiments were done using the B40 biodiesel blend at different preheating temperatures and different loadings. A significant improvement in the performance and emission characteristics of the preheated B40 blend is obtained: the B40 blend preheated to 110 °C showed a maximum 8.97% increase in brake thermal efficiency over the B20 blend at 75% load, and the highest reductions in UBHC emission and smoke opacity, 78.12% and 73.54% respectively, were obtained over the B20 blend for the B40 blend preheated to 110 °C at 75% load. Thus preheating a higher biodiesel blend at higher temperature sharply improves the viscosity and other properties, and improves performance and emissions.

KEYWORDS: Transesterification, Preheating, Biodiesel blends, Heat exchanger, Waste heat from exhaust gas.

I. INTRODUCTION

Industrial development and the economy of any country mainly depend on its energy resources. The majority of the world's energy needs are supplied through petrochemical sources, coal, natural gas, hydroelectricity and nuclear energy. Diesel and gasoline fuels play a very important role in meeting the energy requirements of various applications across the world. The high energy demand in the industrialized world and the widespread use of fossil fuels are leading to fast depletion of fossil fuel resources as well as environmental degradation. The depletion of the world's petroleum reserves and increasing environmental concerns necessitate a continued search for, and sustainable development of, alternative energy sources that are environmentally friendly [1-2]. Biomass sources, particularly vegetable oils, have attracted much attention as an alternative energy source. They are renewable, non-toxic and can be produced locally from agricultural and plant resources. Their utilization is not associated with adverse effects on the environment because they emit fewer harmful emissions and greenhouse gases [3]. Biodiesel, a form of biomass particularly produced from vegetable oils, has recently been considered the best candidate for diesel fuel substitution [4-6]. Biodiesel is a clean renewable fuel, simple to use, biodegradable, nontoxic, and essentially free of sulphur and aromatics [7]. It can be used in any compression ignition engine without the need for modification, and its usage allows a balance to be sought between agriculture, economic development and the environment [8]. Rudolph Diesel, the inventor of the diesel engine, used peanut oil as the fuel for its demonstration. However, it is only in recent years that systematic efforts have been made to utilize vegetable oils as fuels in engines. Biodiesel, also known as fatty acid methyl ester (FAME), is produced from the transesterification of vegetable oils or animal fats [9]. Biodiesel is quite similar to petroleum-derived diesel in its main characteristics such as cetane number, energy content, viscosity and phase changes. It has a reasonable cetane number and hence possesses less knocking tendency. Biodiesel contains no petroleum products, but it is compatible with conventional diesel and can be blended in any proportion with fossil-based diesel to create a stable biodiesel blend. Therefore, biodiesel has become one of the most common biofuels in the world [10]. Chemically, biodiesel is a mixture of methyl esters with long-chain fatty acids and is typically made from nontoxic, biological resources such as vegetable oils, animal fats, or even used cooking oils; vegetable oils include edible and non-edible oils. Many standardized procedures are available for the production of biodiesel fuel oil. There are four primary ways to produce biodiesel: direct use and blending of raw oils, micro-emulsions, thermal cracking, and transesterification of vegetable oils and animal fats. The most commonly used method for converting oils to biodiesel is transesterification [11-12].

1.1 Biodiesel from pongamia oil


As biodiesel is mainly produced from vegetable oils, pongamia oil is a very good source of biodiesel [13]. The oil of Pongamia pinnata is a non-edible oil of Indian origin; karanja and honge are other Indian names for pongamia oil. The tree is found mainly in the native Western Ghats in India, northern Australia, Fiji and some regions of Eastern Asia. The oil contains primarily eight fatty acids, viz. palmitic, stearic, oleic, linoleic, lignoceric, eicosenoic, arachidic and behenic. Pongamia oil has high viscosity and poor combustion characteristics, which cause poor atomization, fuel injector blockage, excessive engine deposits and engine oil contamination. In India the prohibitive cost of edible oils prevents their use in biodiesel preparation, but non-edible oils are affordable for biodiesel production.

1.2 Preheating of biodiesel-diesel blends


Although vegetable oils have some physical fuel properties similar to diesel fuel in terms of energy density, cetane number, heat of vaporization and stoichiometric air/fuel ratio, the use of neat vegetable oils or their blends as fuel in diesel engines leads to problems such as poor fuel atomization and low volatility, originating mainly from their high viscosity, high molecular weight and density. It is reported that these problems may cause important engine failures such as piston ring sticking, injector coking, formation of carbon deposits and rapid deterioration of the lubricating oil after the use of vegetable oils over a long period of time [14]. Transesterification is a widely applied, convenient and most promising method for reducing the viscosity and density of vegetable oils. Despite the transesterification process, which decreases the viscosity of vegetable oil, biodiesel still has higher viscosity and density when compared with diesel fuel. The viscosity of a fuel has important effects on fuel droplet formation, atomization, vaporization and the fuel-air mixing process, thus influencing the exhaust emissions and performance parameters of the engine. It has also been revealed that the use of biodiesel leads to a slight reduction in the engine brake power and torque, and a slight increase in the fuel consumption and brake specific fuel consumption compared to diesel fuel; these changes can be attributed to the lower heating value of biodiesel. The higher viscosity of biodiesel compared to diesel limits the use of neat biodiesel and biodiesel blends in the I.C. engine: it affects combustion and proper mixing of fuel with air in the combustion chamber, and inhibits proper atomization, fuel vaporization and combustion. Due to the high viscosity, the fuel droplet size is bigger and the droplets may not burn completely; when these droplets mix with the hot gases in the later part of the power stroke, an oxidation reaction occurs but may not have enough time to undergo complete combustion. Many investigations have shown that the performance and emission characteristics of the B20 biodiesel blend are similar to those of diesel fuel. If the biodiesel proportion is further increased, the density and viscosity of the blends increase. The higher viscosity of these blends can be reduced by adopting suitable techniques like preheating. Because of the heating process, the viscosity and density of biodiesel decrease and the volatility improves, leading to a favourable effect on fuel atomization and combustion characteristics. It improves the oxidation of biodiesel in the cylinder, and CO emissions arising from incomplete combustion decrease [14-18].

1.3 Preheating of biodiesel blends: Related research work


Murat Karabektas et al. [19] carried out experiments at full load conditions on a single cylinder, four-stroke, direct injection diesel engine. Before being supplied to the engine, cottonseed oil methyl ester (COME) was preheated to four different temperatures, namely 30, 60, 90 and 120 °C. The test data were used to evaluate the brake power and brake thermal efficiency (BTE) together with the CO and NOx emissions. The results revealed that preheating COME up to 90 °C has favourable effects on the BTE and CO emissions but causes higher NOx emissions; moreover, the brake power increases slightly with preheating temperature up to 90 °C. The authors suggest that COME preheated up to 90 °C can be used as a substitute for diesel fuel without any significant modification, at the expense of increased NOx emissions. M. Pugazhvadivu and K. Jeyachandran [20] determined the performance and exhaust emission characteristics of a single cylinder diesel engine using diesel, waste frying oil (without preheating) and waste frying oil preheated to two different inlet temperatures (75 and 135 °C). The engine performance was improved and the CO and smoke emissions were reduced using preheated waste frying oil. It was concluded from the experimental investigation that waste frying oil preheated to 135 °C could be used as a diesel fuel substitute for short-term engine operation. In the present work the B40 blend is selected as the fuel and preheated to reduce its viscosity.

II. EXPERIMENTAL SETUP

2.1. Computerized Engine Test rig


The engine tests are conducted on a computerized single cylinder, four-stroke, naturally aspirated, open chamber (direct injection), water-cooled diesel engine test rig as shown in Fig. 1. The specifications of the diesel engine used for the experiments are given in Table 1. It is directly coupled to an eddy current dynamometer. The engine and the dynamometer are interfaced to a control panel, which is connected to a computer. The test rig is provided with the necessary equipment and instruments for accurate combustion pressure and crank angle measurements. These signals are interfaced to the computer through an analog-to-digital converter (ADC) card, PCI-1050, which is mounted on the motherboard of the computer.
Table 1: Test Engine Specification

Engine            : Four-stroke, single cylinder, constant speed, water cooled CI engine
Make              : Kirloskar
Model & BHP       : TV1, 5.2 kW @ 1500 RPM
Compression Ratio : 17.5:1
Dynamometer Type  : Eddy current, with loading unit
Load Measurement  : Strain gauge load cell
Interfacing       : ADC card PCI-1050


Figure 1. Schematic diagram of the experimental test rig

Legend: T1 - inlet engine water temperature; T2 - outlet engine jacket water temperature; Tf - fuel temperature at outlet of the heat exchanger; Ta, Tb - inlet and outlet temperatures of the exhaust gas in the heat exchanger; F1 - fuel flow (differential pressure unit); PT - pressure transducer; N - RPM decoder; EGA - exhaust gas analyzer (5 gas); SM - smoke meter

Figure 2 Heat Exchanger

III. RESULTS AND DISCUSSION

A wide range of experiments was carried out at different load conditions to examine the effect of preheating on performance parameters like BSEC and brake thermal efficiency, and an analysis was carried out on the emission parameters NOx, CO, HC and smoke. The tests were carried out with the B40 blend preheated at 60 °C, 75 °C, 90 °C and 110 °C, under loading conditions of no load and 25%, 50%, 75% and 100% of full load.

3.1. Brake thermal efficiency (BTE)


From Fig. 3 it is observed that the B40 blend has higher brake thermal efficiency at a 110 °C preheating temperature compared to the other temperatures and the B20 blend, and that BTE shows an increasing trend with increasing preheat temperature. A significant improvement in BTE is observed only after 50% loading. The increase in BTE is larger when the preheating temperature is raised from 75 °C to 90 °C than when it is raised from 90 °C to 110 °C. The maximum BTE was achieved at 110 °C at 75% loading. The increasing BTE can be attributed to the good combustion characteristics of the fuel because of its decreased viscosity and improved volatility by means of the preheating process. As the preheating temperature increases from 60 °C to 110 °C, the viscosity of the blend decreases sharply and the volatility increases, which has a favourable effect on the atomization and vaporization of the fuel. The maximum thermal efficiency was found to be 38.83% at 110 °C at 75% loading, followed by 37.87% at the full load condition; 8.97% and 6.97% increases in brake thermal efficiency were obtained over the B20 blend at 75% and full load conditions respectively [19, 20].

Figure 3 Variation of BTE for B40 blend at different preheating temperature and B20

3.2. Brake specific energy consumption (BSEC)


From Fig. 4 it is observed that all temperatures show decreasing values of BSEC with increasing load. The 90 °C and 110 °C temperatures show a sharp decrease in BSEC compared to the other temperatures and the B20 blend, and a significant reduction in BSEC is obtained only after 50% loading. This may be attributed to higher preheating temperatures resulting in a better spray and improved atomization during injection, thereby improving combustion. The BSEC for 90 °C at 75% and full load is 9.66 MJ/kW-hr and 9.27 MJ/kW-hr respectively, while for 110 °C at 75% and full load it is 9.95 MJ/kW-hr and 9.51 MJ/kW-hr respectively. For the 110 °C preheated B40 blend, decreases in BSEC of 2.79 and 2.14 MJ/kW-hr were obtained over the B20 blend at 75% and full load. BSEC values are observed to be higher at low load conditions: at low load the exhaust temperature is lower, so it cannot preheat the inlet fuel as effectively as at higher load, which does not favour combustion and leads to increased BSEC at low load, while much lower BSEC values are observed at higher loads at all temperatures.

3.3. Unburned hydrocarbon (UBHC)


From Fig. 5 it is observed that B40 has lower unburned hydrocarbon emissions at 110 °C for all loading conditions compared to the other preheating temperatures and the B20 blend. The B20 blend shows the maximum UBHC emission compared to all the preheated B40 blends, and UBHC emission shows a decreasing trend with increasing preheat temperature. The decrease in UBHC emission is larger when the preheating temperature is raised from 75 °C to 90 °C than when it is raised from 90 °C to 110 °C.

Figure 4 Variation of BSEC for B40 blend at different preheating temperature and B20

The maximum reduction of UBHC over the B20 blend is 78.12%, for the B40 blend at 110 °C. UBHC emissions generally result from incomplete combustion of the fuel. Preheating the B40 blend prior to injection decreases the viscosity of the B40 biodiesel blend and improves the mixing of fuel with air, which has a favourable effect on combustion; hence UBHC emissions are lower at the 90 °C and 110 °C temperatures. The cetane number of an ester-based fuel is higher than that of diesel, so it exhibits a shorter delay period and better combustion, leading to low HC emission. The intrinsic oxygen contained in the PPME is also responsible for the reduction in HC emission. The B40 blend emitted more UBHC at 60 °C than at the other preheating temperatures.

Figure 5 Variation of UBHC of B40 blend for different preheating temperature and B20

3.4. Nitrogen oxides (NOx)


The oxides of nitrogen in the exhaust emissions contain nitric oxide (NO), nitrogen dioxide (NO2), nitrous oxide and many other oxides of nitrogen. The formation of NOx is highly dependent on the temperature in the combustion chamber and on the oxygen concentration available for the reaction. From Fig. 6 it is observed that NOx emissions increase at higher preheating temperatures and show an increasing trend as load increases. B40 showed higher NOx emission at 90 °C and 110 °C from 75% to full load conditions, while B20 showed the lowest NOx emission. The maximum NOx emissions were 1390 ppm and 1680 ppm for the 110 °C preheated B40 blend at 75% and full load. The increase in NOx emission becomes significant after 50% loading. The higher NOx emission at higher temperature can be attributed to various reasons, such as improved fuel spray characteristics, better combustion of biodiesel due to its oxygen content, and higher temperature in the cylinder as a result of preheating [19]. Due to the low exhaust gas temperature, the NOx emissions are lower at 25% and 50% loading for all preheating temperatures.

Figure 6 Variation of NOx for B40 blend for different preheating temperature and B20

3.5. Smoke opacity


It is observed from Fig. 7 that smoke opacity values tend to be higher for the B20 blend compared to the preheated blends for all loading conditions. For 90 °C and 110 °C, smoke values are marginally lower up to 75% loading and show a higher trend at full load. A 73.14% reduction in smoke opacity was obtained for the 110 °C preheated B40 blend over the B20 blend at 75% load, and B20 showed higher smoke opacity than the other preheating temperatures. The decrease in smoke opacity is larger when the preheating temperature is raised from 75 °C to 90 °C than when it is raised from 90 °C to 110 °C. As mentioned earlier, at high temperatures B40 becomes much less viscous, resulting in better atomization and vaporization and more complete combustion of the injected fuel [20]; this results in reduced smoke emissions. However, smoke emission increases particularly at higher load due to higher fuel consumption caused by the lower calorific value of the biodiesel fuel.

Figure 7 Variation of smoke opacity for B40 blend for different preheating temperature and B20


3.6. Carbon monoxide (CO)
It is observed from Fig. 8 that CO emission is lower at the 90 °C and 110 °C temperatures for all load conditions, while the B20 blend and the B40 blend at 60 °C emitted more CO at all load conditions. A significant improvement in CO emission was obtained after 50% loading. The decrease in CO emission is larger when the preheating temperature is raised from 75 °C to 90 °C than when it is raised from 90 °C to 110 °C. The high oxygen content and reduced viscosity of the B40 blend due to preheating favour complete combustion of the fuel and reduce CO emission. For the same loading condition, a much larger reduction in CO emission level is obtained at higher temperature than at lower temperature [19]. As can be clearly seen from the graph, 62.5% and 66.67% reductions in CO emission are obtained for the 110 °C preheated B40 blend over the B20 blend at 75% load and full load.

Figure 8 Variation of CO for B40 blend for different preheating temperature and B20

IV. CONCLUSIONS

From the above results it is concluded that preheating improves the performance and emission characteristics of the B40 blend over B20, and that the improvement increases with preheating temperature; the significant improvement in performance and emission values is obtained after 50% load. The maximum brake thermal efficiency for B40 is found to be 38.83% at 110 °C at 75% loading, followed by 37.87% at the full load condition; 8.97% and 6.97% increases in brake thermal efficiency are obtained over the B20 blend at 75% and full load conditions. The BSEC for 110 °C at 75% and full load is 9.95 MJ/kW-hr and 9.51 MJ/kW-hr respectively; for the 110 °C preheated B40 blend, decreases in BSEC of 2.79 and 2.14 MJ/kW-hr are obtained over the B20 blend at 75% and full load. The maximum reduction of UBHC over the B20 blend is 78.12%, for the B40 blend at 110 °C. The maximum NOx emissions are 1390 ppm and 1680 ppm for the 110 °C preheated B40 blend at 75% and full load. A 73.14% reduction in smoke opacity is obtained for the 110 °C preheated B40 blend over the B20 blend at 75% load, and 62.5% and 66.67% reductions in CO emission are obtained for the 110 °C preheated B40 blend over the B20 blend at 75% load and full load.

REFERENCES
[1] Magin Lapuerta & Octavio Armas, (2008) "Effect of biodiesel fuels on diesel engine emissions", Progress in Energy and Combustion Science, 34, pp 198-223.
[2] Surendra R. Kalbande & Subhash D. Vikhe, (2008) "Jatropha and karanj bio-fuel: an alternate fuel for diesel engine", ARPN Journal of Engineering and Applied Sciences, 3(1).

[3] Ramadhas A.S., S. Jayaraj & C. Muraleedharan, (2004) "Use of vegetable oils as I.C. engine fuels - A review", Renewable Energy, 29, pp 727-742.
[4] P.K. Devan & N.V. Mahalakshmi, (2009) "A study of the performance, emission and combustion characteristics of a compression ignition engine using methyl ester of paradise oil-eucalyptus oil blends", Applied Energy, 86, pp 675-680.
[5] Roberto G. Pereira, Cesar D. Oliveira, Jorge L. Oliveira, Paulo Cesar P. Oliveira, Carlos E. Fellows & Oscar E. Piamba, (2007) "Exhaust emissions and electric energy generation in a stationary engine using blends of diesel and soybean biodiesel", Renewable Energy, 32, pp 2453-2460.
[6] M.C.G. Albuquerque, Y.L. Machado, A.E.B. Torres, D.C.S. Azevedo, C.L. Cavalcante, Jr., L.R. Firmiano & E.J.S. Parente, Jr., (2009) "Technical Note - Properties of biodiesel oils formulated using different biomass sources and their blends", Renewable Energy, 34, pp 857-859.
[7] Narayan C.M., (2002) "Vegetable oil as engine fuels - prospect and retrospect", Proceedings on Recent Trends in Automotive Fuels, Nagpur, India.
[8] Gerhard Knothe, Robert O. Dunn & Marvin O. Bagby, "Biodiesel: the use of vegetable oils and their derivatives as alternative diesel fuels", Oil Chemical Research, National Center for Agricultural Utilization Research, Agricultural Research Service, U.S. Department of Agriculture, Peoria, IL 61604.
[9] Avinash Kumar Agarwal, (2006) "Bio-fuels (alcohols and biodiesel) applications as fuels for internal combustion engines", Energy and Combustion Science, pp 1-39.
[10] B. Baiju, M.K. Naik & L.M. Das, (2009) "A comparative evaluation of compression ignition engine characteristics using methyl and ethyl esters of Karanja oil", Renewable Energy, 34, pp 1616-1621.
[11] A. Murugesan, C. Umarani, T.R. Chinnusamy, M. Krishnan, R. Subramanian & N. Neduzchezhain, (2009) "Production and analysis of bio-diesel from non-edible oils - A review", Renewable and Sustainable Energy Reviews, 13, pp 825-834.
[12] J.M. Marchetti, V.U. Miguel & A.F. Errazu, (2007) "Possible methods for biodiesel production", Renewable and Sustainable Energy Reviews, 11, pp 1300-1311.
[13] Malaya Naika, L.C. Meher, S.N. Naik and L.M. Das, (2008) "Production of biodiesel from high free fatty acid Karanja (Pongamia pinnata) oil", Biomass and Bioenergy, 32, pp 354-357.
[14] Deepak Agarwal & Avinash Kumar Agarwal, (2007) "Performance and emissions characteristics of Jatropha oil (preheated and blends) in a direct injection compression ignition engine", Applied Thermal Engineering, 27, pp 2314-2323.
[15] Hanbey Hazar & Hüseyin Aydin, (2010) "Performance and emission evaluation of a CI engine fueled with preheated raw rapeseed oil (RRO)-diesel blends", Applied Energy, 87, pp 786-790.
[16] M. Senthil Kumar, A. Kerihuel, J. Bellettre & M. Tazerout, (2005) "Experimental investigations on the use of preheated animal fat as fuel in a compression ignition engine", Renewable Energy, 30, pp 1443-1456.
[17] Murat Karabektas, Gokhan Ergen & Murat Hosoz, (2008) "The effects of preheated cottonseed oil methyl ester on the performance and exhaust emissions of a diesel engine", Applied Thermal Engineering, 28, pp 2136-2143.
[18] M. Pugazhvadivu & K. Jeyachandran, (2005) "Investigations on the performance and exhaust emissions of a diesel engine using preheated waste frying oil as fuel", Renewable Energy, 30, pp 2189-2202.
[19] Murat Karabektas, Gokhan Ergen & Murat Hosoz, (2008) "The effects of preheated cottonseed oil methyl ester on the performance and exhaust emissions of a diesel engine", Applied Thermal Engineering, 28, pp 2136-2143.
[20] M. Pugazhvadivu & K. Jeyachandran, (2005) "Investigations on the performance and exhaust emissions of a diesel engine using preheated waste frying oil as fuel", Renewable Energy, 30, pp 2189-2202.

AUTHORS
Dinesha P. obtained his Bachelor's degree in Mechanical Engineering from UVCE, Bangalore, India, and his M.Tech degree in Energy Systems & Engineering from VTU, Belgaum, India. At present he is pursuing a Ph.D. in internal combustion engines and alternative fuels at the National Institute of Technology Karnataka, Surathkal. He is also serving as an Assistant Professor in the Department of Mechanical Engineering, KVG College of Engineering, Sullia, India. His research interests include IC engines and combustion, alternative fuels, pollution control, and renewable energy sources. He is a member of many professional societies.

P. Mohanan obtained a B.Sc. (Engg.) in Mechanical Engineering from Kerala University, an M.Sc. (Engg.) in Heat Power Engineering from Kerala University, and a Ph.D. in I.C. Engines from IIT Delhi. Currently he is working as Professor of Mechanical Engineering, National Institute of Technology Karnataka, Surathkal, India. He was the Head of the Department of Mechanical Engineering, National Institute of Technology Karnataka, Surathkal, India, during 23-12-2003 to 22-01-2007. He has published more than 76 research papers in national and international journals and conferences. He is currently an executive member of the Combustion Institute (Indian section), besides being a member of many other professional societies. He has guided many M.Tech. and Ph.D. students. His research interests include internal combustion engines, alternative fuels, heat transfer, environmental pollution and control, automobile pollution, and renewable energy sources.


ANALYTICAL MODEL OF SURFACE POTENTIAL AND THRESHOLD VOLTAGE OF BIAXIAL STRAINED SILICON NMOSFET INCLUDING QME
Shiromani Balmukund Rahi1 and Garima Joshi2
1 Student, M.Tech (Microelectronics); 2 Assistant Professor (ECE); ECE Department, UIET, Panjab University, Chandigarh, India

ABSTRACT
In this paper a physics-based analytical model for the threshold voltage of a nanoscale biaxial strained nMOSFET is presented. The maximum depletion depth and surface potential in the biaxial strained-Si nMOSFET are determined, taking into account both the quantum mechanical effects (QME) and the effect of strain on the inversion charge sheet. The results show that a significant decrease in threshold voltage occurs with an increase in the germanium content of the silicon germanium layer. The results have been compared with published data, and the effect of variation of the channel doping concentration has been examined.

KEYWORDS: Mobility, QME, MOSFET, strained-silicon (sS), biaxial, threshold voltage, surface potential, depletion depth.

I. INTRODUCTION

Strained silicon technology has enhanced the performance of the planar MOSFET structure, which is reaching its scaling limits. To extend Moore's law for nanoscale MOSFETs, new materials and new innovations in MOSFET structures are being implemented and explored by researchers. From 130 nm down to 32 nm, the semiconductor industry has used strained silicon technology to increase the carrier mobility in the active region of the MOSFET by introducing strain in the silicon channel [1]. Strain can be applied as either biaxial or uniaxial. To develop physical insight and understand the characteristics of the strained silicon MOSFET, its model equations are required, and the modeling of the electrical characteristics has been carried out by various researchers [7]. Nanoscale planar MOSFET structures are affected by SCEs and QMEs, which are included in [3], and to explain the biaxial physical phenomena the model equations for the strained silicon MOSFET must include the combined effects of strain, SCEs, and QMEs [10, 13]. The inversion charge sheet is one of the important parameters which helps in a good understanding of the threshold voltage of MOSFETs. The first step in this direction is to understand the inversion charge sheet in the channel in the strong inversion region, by modeling the maximum depletion depth, the surface potential and hence the threshold voltage of the biaxial strained silicon nMOSFET. Therefore, in this paper, a biaxial nMOSFET as shown in Figure 1(i) has been studied. The paper is organized as follows: Section II gives the details of QMEs in the strained silicon MOSFET, Section III gives the analytical modeling of the threshold voltage, and the discussion of results and conclusion is given in Section IV.


II. QMES IN STRAINED SILICON MOSFET

A layer of crystalline SiGe alloy, which has a higher lattice constant than Si, is grown over the substrate. Over this, an epitaxial layer of Si is grown, taking the same crystallographic orientation as the SiGe layer.

Figure 1(i): Device structure of the biaxial strained-silicon nMOSFET [3]; (ii): Schematic diagram of the interpretation of the effect of biaxial tensile strain on the inversion charge sheet [5]

The strain develops in the upper layer due to the mismatch of the lattice constants of the two layers, producing a strained silicon layer. This process yields high speeds without scaling down the devices. The strain alters the band structure in the channel; this provides a lower effective mass, suppresses intervalley scattering, and results in enhancement of the carrier mobility and the device on-current. Energy quantization in nanoscale MOSFETs causes shifts in the inversion charge sheet, which influences the surface potential as well as the threshold voltage of MOSFETs. Each energy level of silicon is composed of six equal energy states in three dimensions. The conduction band splitting due to QME is shown in Figure 1(ii). When biaxial stress is applied, the Δ2 states and Δ4 states split into lower and higher energy states respectively. This band alteration gives an alternate lower-energy site, Δ2, for electrons to reside in. The difference in the energy levels causes repopulation of the electrons into the lower-energy Δ2 states. The effective mass of electrons in the Δ2 valley is less than that in the Δ4 valley: the effective mass of electrons in the lower-energy states is reduced from 0.33m0 in unstrained silicon to 0.19m0 in strained silicon structures, as shown in Figure 2 [5]. Due to this, the electron mobility increases. Biaxial tensile strain enhances electron mobility through Δ2 valley population enhancement and the resulting decrease in effective mass [11]. Biaxial tensile strain also increases the occupancy of electrons in the Δ2 valleys, which exhibit a much thinner inversion layer than electrons in the Δ4 valleys, and thus decreases the distance between the electrons and the electron scattering centres located at the SiO2/Si interface.

III. ANALYTICAL MODELING APPROACH

The classical definition is used for the determination of the depletion depth as well as the surface potential of the biaxial strained-Si nMOSFET, i.e. the inversion layer electron concentration at the interface becomes equal to the bulk hole concentration. In the conduction band (in an nMOSFET), the strain induces a sub-band energy splitting ΔE ≈ 0.67 meV for each 0.10 increment in x, between the perpendicular Δ2 and parallel Δ4 sub-bands [8]. The 2-D energy splitting for the strained Si nMOSFET is shown in Figure 2. The inversion charges follow a two-dimensional distribution over the sub-band energies, and the total inversion charge Q_inv is divided into two parts, where Q_inv1 and Q_inv2 correspond to the inversion charge sheet densities associated with valley one and valley two, respectively [4].


Figure 2: 2-D energy quantization model for strained Si and interpretation of the strain effect

Q_inv = Q_inv1 + Q_inv2 (1)

E_{1,1} and E_{1,2} are the first energy levels for valley-1 and valley-2 respectively. The quantization effect splits the continuous energy band into discrete energy levels, and the applied strain shifts the E_{1,2} valley by an amount ΔE_c ≈ 0.63x (eV); the shifted energy band is given by E'_{1,2} = -ΔE_c + E_{1,2}. The modified energy level due to QME in strained silicon can be written as (2), where φ_{S,sS}, φ_{B,sS} and E_{g,sS} are the surface potential, bulk potential and energy band gap of the strained silicon MOSFET respectively; all these parameters are functions of the germanium mole fraction x. The inversion charge Q_inv2 is given by (3), where N_c2 is the 2-D state charge sheet density and g_2 is the degeneracy of the lower-mass energy sub-band of valley-2, defined as in (4). Using (3) and (4) in (2) gives (5). The surface potential for the biaxial strained silicon nMOSFET is defined as in (6).

dsScl is the depletion depth in classical model. Since the differ between the classical depletion and quantum mechanical depletion is small compared with the depletion depth itself and combining both quantum mechanical concept as well as strain and assuming that (7) Using (9) in (7), we obtain (8) By using (7) and (8), (5) can be rewritten as (9) (10) Here C is becomes equal to (11) Na is doping concentration and total inversion charge sheet shifted due to both combining QME and strain effect in a MOSFET can be determine by solving by above (12) which shows observe that inversion charge sheet is function of germanium mole fraction. Equation 12 dqmsS is the shifted

In the biaxial strained-silicon MOSFET the Si1-xGex layer and the strained-silicon layer are much thinner than the depletion region, so the total depletion width is assumed to lie in the silicon substrate; the maximum depletion width WdmsS, reached at the onset of strong inversion, is then defined by (13). The bulk potential for the strained-silicon MOSFET is defined as φB,sS = (kT/q) ln(Na/ni,sS) (14), where ni,sS is the intrinsic carrier concentration of strained silicon. The quantum mechanical effect slightly increases the surface potential; the corrected surface potential including the QM effect is written as (15). The quantum mechanical effect also changes the flat-band potential, and the corrected or modified expression is presented in (16).
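The classical pieces of (13)-(14) can be evaluated with the short sketch below. It assumes the standard textbook forms φB = (kT/q)·ln(Na/ni) and Wdm = sqrt(4·εSi·φB/(q·Na)), and uses the unstrained intrinsic concentration in place of ni,sS, so the numbers are indicative only.

import math

Q = 1.602e-19                 # electron charge [C]
EPS_SI = 11.7 * 8.854e-12     # permittivity of silicon [F/m]
KT_Q = 0.02585                # kT/q at 300 K [V]
NI = 1.45e16                  # intrinsic concentration of unstrained Si [m^-3]

def bulk_potential(na):
    """phi_B = (kT/q) ln(Na/ni), the classical bulk potential."""
    return KT_Q * math.log(na / NI)

def max_depletion_depth(na):
    """Wdm = sqrt(4 eps_Si phi_B / (q Na)) at the onset of strong inversion."""
    return math.sqrt(4.0 * EPS_SI * bulk_potential(na) / (Q * na))

na = 1e18 * 1e6               # 1e18 cm^-3 expressed in m^-3
print(f"phi_B = {bulk_potential(na):.3f} V, Wdm = {max_depletion_depth(na)*1e9:.1f} nm")

For Na = 10^18 cm^-3 this gives a Wdm of a few tens of nanometres, consistent with the range plotted in figure 4.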
εox and εSi are the permittivities of SiO2 and Si, respectively. The threshold voltage for the strained-Si channel MOSFET can then be expressed as the sum of the flat-band voltage, the surface potential and the oxide potential, Vth,sS = VFB,sS + φs,sS + Vox (17).

The oxide potential is the second important factor in the threshold calculation. It can be calculated from the depletion charge through the body-effect coefficient, with Cox the oxide capacitance per unit area in inversion and εs the average permittivity of the strained-Si and Si1-xGex layers [7, 12]. As mentioned in section 2, the physical oxide thickness is slightly increased when the QM effect is considered; this increased value is the effective oxide thickness, and its modified expression is written as in [5, 6], where Δdm is the change in depletion depth due to the QME.
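A hedged sketch of how (17) is assembled is shown below: the threshold voltage is the flat-band voltage plus the surface potential plus the depletion-charge drop across the oxide. The flat-band value and the QM corrections to the surface potential and the effective oxide thickness are illustrative placeholders, not the model's own closed forms.

import math

Q = 1.602e-19
EPS_SI = 11.7 * 8.854e-12     # silicon permittivity [F/m]
EPS_OX = 3.9 * 8.854e-12      # SiO2 permittivity [F/m]
KT_Q = 0.02585                # kT/q at 300 K [V]
NI = 1.45e16                  # intrinsic concentration [m^-3]

def vth(na_cm3, tox_nm, v_fb=-0.9, d_phi_qm=0.0, d_tox_qm_nm=0.0):
    """Vth = V_FB + phi_s + q*Na*Wdm/Cox; v_fb and the two QM shifts
    are hypothetical placeholders, not the paper's expressions."""
    na = na_cm3 * 1e6
    phi_b = KT_Q * math.log(na / NI)
    phi_s = 2.0 * phi_b + d_phi_qm                    # surface potential
    wdm = math.sqrt(4.0 * EPS_SI * phi_b / (Q * na))  # max depletion depth
    cox = EPS_OX / ((tox_nm + d_tox_qm_nm) * 1e-9)    # effective oxide cap.
    return v_fb + phi_s + Q * na * wdm / cox

print(f"classical: Vth ~ {vth(1e18, 1.2):.2f} V")
print(f"with QME : Vth ~ {vth(1e18, 1.2, d_phi_qm=0.05, d_tox_qm_nm=0.3):.2f} V")

The placeholder QM shifts raise Vth, illustrating why the effect matters at nanoscale oxide thicknesses.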

IV. RESULTS AND DISCUSSION

In the modeled threshold voltage of strained-Si MOSFETs, the Ge mole fraction x is taken in the range 0.05 to 0.4, beyond which the strain is likely to relax [11].

[Figure 3 plot: threshold voltage [V] (0.25-0.7) versus germanium fraction x (0.05-0.4), with curves for the modeled Vth and published data; parameters Na = 10^18 cm^-3, t_ox = 1.2 nm, t_si = 11 nm, T = 300 K.]

Figure 3(i): Variation of threshold voltage with Ge mole fraction and benchmarking against published data. (ii): Comparison of threshold voltage in unstrained, unstrained with QME, and strained silicon MOSFETs.

Figure 3(i) displays the variation of threshold voltage with Ge mole fraction x, benchmarked against published data. The results obtained from the analytical model are in close agreement with the published data. Figure 3(ii) compares the proposed model with the unstrained case, without and with the QM effect, as a function of x. The threshold voltage decreases for higher values of x: at the same doping concentration, the strained-Si MOSFET has a lower threshold voltage than the unstrained device. The reason is that as x increases, the conduction and valence band offsets also rise [3, 5], decreasing the surface potential φs,sS, and the drop in φs,sS lowers the threshold voltage. The strained-Si technique therefore counteracts QMEs: by increasing the Ge content in the SiGe layer, the threshold-voltage increase caused by QMEs in nanoscale MOSFETs can be mitigated. Figures 4(i) and (ii) show the variation of the maximum depletion depth Wdm with doping concentration and with Ge mole fraction x, comparing unstrained and strained Si MOSFETs for channel doping in the range 10^16 to 10^18 cm^-3. Wdm decreases with increasing Na and with increasing x; it plays a significant role in determining the surface potential and hence the threshold voltage. Figure 5 shows the variation of surface potential with channel doping concentration. Energy quantization causes a slight increase in the surface potential of the planar MOSFET, while applying biaxial stress in the nanoscale MOSFET narrows the bandgap. The smaller bandgap increases the intrinsic carrier concentration of strained silicon, so biaxial strain reduces the inversion-charge-sheet shift, and the surface potential of the biaxial strained nMOSFET is reduced at the same doping concentration.
[Figure 4 plots: (i) depletion depth Wdm [nm] (0-1400) versus channel doping Na (10^14-10^18 cm^-3) for unstrained and strained silicon; (ii) Wdm [nm] (20-140) versus germanium fraction x (0.05-0.4) for Na = 10^16, 10^17 and 10^18 cm^-3.]

Figure 4(i): Variation of depletion depth with doping concentration. (ii): Variation of depletion depth with Ge mole fraction x.


V. CONCLUSIONS

An analytical model for the surface potential, depletion depth and threshold voltage of biaxial strained-Si nMOSFETs including the quantum mechanical effect has been presented. The quantum mechanical effect influences the surface potential as well as the threshold voltage of nanoscale MOSFETs. The modeling results show that the threshold-voltage increase caused by the quantum mechanical effect in nanoscale MOSFETs can be mitigated by introducing strain. The threshold voltage, which depends on the surface potential, can be controlled through the process parameters of strained-Si MOSFET fabrication. The threshold model developed here can be used to derive the I-V characteristics of strained-silicon MOSFETs and can help in a clearer understanding of device performance.
[Figure 5 plot: surface potential [V] (0.4-1.1) versus doping concentration (10^14-10^18 cm^-3) at T = 300 K, x = 0.2; curves for the unstrained-Si surface potential with QME, the unstrained-Si surface potential, and the strained-Si surface potential.]

Figure 5: Dependence of surface potential on channel doping concentration (Na).

REFERENCES
[1]. Samia Nawer Rahman, Hasan Mohammad Faraby, Md. Manzur Rahman, Md. Quamrul Huda and Anisul Haque, "Inversion Layer Properties of <110> Uniaxially Strained Silicon n-Channel MOSFETs", IEEE, Vol. 5, No. 1, December 2008.
[2]. Yi Zhao, Mitsuru Takenaka and Shinichi Takagi, "Comprehensive Understanding of Coulomb Scattering Mobility in Biaxially Strained-Si p-MOSFETs", IEEE Trans. Electron Devices, Vol. 56, No. 5, May 2009.
[3]. Bratati Mukhopadhyay, Abhijit Biswas, P. K. Basu, G. Eneman, P. Verheyen, E. Simoen and C. Claeys, "Modeling of Threshold Voltage and Subthreshold Slope of Strained-Si MOSFETs Including Quantum Effects", IOP Semiconductor Science and Technology, Vol. 29, 2008.
[4]. Garima Joshi and Amit Choudhary, "Analysis of Short Channel Effects in Nanoscale MOSFETs", International Journal of Nanoscience.
[5]. Scott A. Hareland, S. Jallepalli, Wei-Kai Shih, Haihong Wang, Gloria L. Chindalore, Al F. Tasch and C. M. Maziar, "A Physically Based Model for Quantization Effects in Hole Inversion Layers", IEEE, Vol. 45, No. 1, January 1998.
[6]. Jin He, Mansun Chan, Xing Zhang and Yangyuan Wang, "An Analytical Model to Account for Quantum-Mechanical Effects of MOSFETs Using a Parabolic Potential Well Approximation", IEEE Trans. Electron Devices, Vol. 53, No. 9, September 2006.
[7]. Ji-Song Lim, Scott E. Thompson and Jerry G. Fossum, "Comparison of Threshold-Voltage Shifts for Uniaxial and Biaxial Tensile-Stressed n-MOSFETs", IEEE, Vol. 25, No. 11, November 2004.
[8]. Weimin Zhang and Jerry G. Fossum, "On the Threshold Voltage of Strained-Si/Si1-xGex MOSFETs", IEEE, Vol. 52, No. 2, February 2005.
[9]. Karthik Chandrasekaran, Xing Zhou, Siau Ben Chiah, Guan Huei See and Subhash C. Rustagi, "Implicit Analytical Surface/Interface Potential Solutions for Modeling Strained-Si MOSFETs", IEEE, Vol. 53, No. 12, December 2006.
[10]. K. Rim, J. L. Hoyt and J. F. Gibbons, "Fabrication and Analysis of Deep Submicron Strained-Si n-MOSFETs", IEEE Transactions on Electron Devices, Vol. 47, No. 7, pp. 1406-1415, 2000.
[11]. C. K. Maiti et al., Strained Silicon Heterostructure Field Effect Transistors, Taylor and Francis, New York, 2007.

[12]. Hasan M. Nayfeh et al., "A Physically Based Analytical Model for the Threshold Voltage of Strained-Si n-MOSFETs", IEEE Transactions on Electron Devices, Vol. 51, No. 12, pp. 2069-2072, December 2004.
[13]. Amit Chaudhry, J. N. Roy and Garima Joshi, "Nanoscale Strained-Si MOSFET Physics and Modeling Approaches: A Review", Journal of Semiconductors, Volume 33, Number 10.

AUTHORS
Shiromani Balmukund Rahi received the B.Sc. degree in PCM in 2002 and the M.Sc. in Electronics from DDU Gorakhpur University in 2005. He completed the M.Tech. (Microelectronics) at Panjab University, Chandigarh, India. His current research interest is the modeling of MOSFETs at the nanoscale.

Garima Joshi received the M.E. (Electronics and Communication Engineering) from UIET, Panjab University, Chandigarh, India. Her research area includes modeling and simulation of nanoscale MOSFETs. She is currently working as Assistant Professor (ECE) and pursuing a Ph.D. at UIET, Panjab University. She has 10 publications in international journals and conference proceedings.
